Tuesday, November 24, 2009
For the Wicket review I tested out the Greendepot system.
A. Review the System
I was able to download and build the system without any errors.
B. Review system usage
When I ran tests on the system I noticed that I could input a date and have the application recognize it; however, no data would be printed out. Also, while testing invalid values (strings or malformed dates), I found that all of them crashed the program. On the plus side, the interface is simple and self-explanatory.
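A defensive parse would have prevented those crashes. Here is a minimal sketch of the idea (this is not Greendepot's actual code; the class and method names are made up):

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

/** Hypothetical helper showing how invalid date input can be rejected gracefully. */
public final class DateInput {
  private DateInput() {
    // Utility class; no instances.
  }

  /** Returns the parsed date, or null if the input is not a valid yyyy-MM-dd string. */
  public static Date parseOrNull(String input) {
    SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd");
    format.setLenient(false); // reject impossible dates such as 2009-13-40
    try {
      return format.parse(input);
    }
    catch (ParseException exception) {
      return null; // the caller can print a usage message instead of crashing
    }
  }
}

With a guard like this, a string or a malformed date produces a usage message rather than a crash.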
C. Review the JavaDocs
All of the package, class, and method summaries provide high-level descriptions with a self-contained first sentence, and the relationships between packages, classes, and methods are well described. However, the system summary could be more descriptive than "a greendepot client". Otherwise, all of the JavaDocs conform to the standards put forth in chapter 4.
D. Review the names
In GreenDepot.java there is a variable called "x". This variable should be renamed to something meaningful. Other than that, all of the names in the JavaDocs as well as the actual code conform to the standards set in chapter 3 of The Elements of Java Style.
E. Review the testing
There were no tests included with this system besides the HelloWorld test from the example system.
F. Review the package design
The design of the package is good and reflects a logical structure for the system. I found it interesting that the .wicket suffix was left out of the package name, which is simply edu.hawaii.greendepot.
G. Review the class design
Each class is well organized and accomplishes one well-defined task. However, I think that some of the accessor methods could be made private, as they aren't called outside of the class that contains them.
H. Review the method design
The methods are all short and easy to understand. However, in the getCarbonList method of CarbonCalculator.java, I think the code that creates a timestamp should be in its own method. That way it can be called by other classes, and getCarbonList accomplishes only one task instead of two.
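For example, the timestamp-creation code could be pulled out into a small utility along these lines (a sketch only; I am assuming the XMLGregorianCalendar timestamps that WattDepot uses, and the names here are hypothetical):

import java.util.GregorianCalendar;
import javax.xml.datatype.DatatypeConfigurationException;
import javax.xml.datatype.DatatypeFactory;
import javax.xml.datatype.XMLGregorianCalendar;

/** Hypothetical helper extracted from getCarbonList so other classes can reuse it. */
public final class Timestamps {
  private Timestamps() {
    // Utility class; no instances.
  }

  /** Creates a timestamp for the given year, month (1-12), and day. */
  public static XMLGregorianCalendar makeTimestamp(int year, int month, int day)
      throws DatatypeConfigurationException {
    GregorianCalendar calendar = new GregorianCalendar(year, month - 1, day);
    return DatatypeFactory.newInstance().newXMLGregorianCalendar(calendar);
  }
}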
I. Check for common look and feel
The look and feel of the code is consistent. However, to improve the code I would add more comments; there are very few now, which makes the code a little harder to understand.
J. Review the documentation
The project home page does a good job of stating the goals of the project as well as what the system accomplishes. However, there is no screenshot of the system available.
The UserGuide provides a great step-by-step walkthrough for downloading and running the system. As a small note, I would bold the section titles so they are easier to find and the page doesn't look like one big paragraph.
The DevelopersGuide does a great job of explaining the system and how to extend it. It does not include instructions for building the system, but those can be found in the UserGuide pages.
K. Review the Software ICU data
The ICU seems to be gathering data consistently and reliably. Currently the overall health of the project is good, with the only negative aspect being the churn rate. From the data we can see that the majority of the work took place recently, given the spike in the DevTime and Build fields at the end. Also, with the exception of the huge spike in the churn rate, we can see that the source code has stayed at a medium-to-high quality level.
L. Review the issue management
The issues page was never used during the development of this system.
M. Review continuous integration
There are no long gaps between commits, and with the exception of the first bad build, all failed builds were fixed extremely quickly. There is a continuous integration job that runs on each commit and executes all of the appropriate Ant tasks, as well as a daily build job that runs once a day and executes the correct Ant tasks.
Summary
I think that the greendepot system is an okay system. The coding is sound, although it could use more comments, and the lack of output, error handling, and testing is a downside. However, based on the ICU and the continuous integration data, the group seems to be working well and producing good-quality code, and I think the next version of this system has the potential to be a well-functioning web application.
Monday, November 23, 2009
Wicket Web Development
I found that Wicket was hard to learn and actually quite complicated. However, the teamwork helped me get a better grasp of the Wicket framework and how it relates to the Java code.
The good:
I thought that the basic Wicket syntax, such as the HTML tagging and the setup of the corresponding Wicket components in the Java code, was easy to set up and clearly defined. The pairing between HTML and Java is sketched below. Also, the WattDepot commands to get the data were easy to figure out, since we had previous experience writing code to retrieve WattDepot data.
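For anyone unfamiliar with that pairing, the idea is that a wicket:id attribute in the HTML template is matched to a component added in the Java code. A minimal sketch (the names are illustrative, not from our system):

import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.markup.html.basic.Label;

/**
 * Goes with a template such as HomePage.html containing:
 * <span wicket:id="message">placeholder text</span>
 */
public class HomePage extends WebPage {
  public HomePage() {
    // The component id "message" must match the wicket:id in the HTML.
    add(new Label("message", "Hello from Wicket"));
  }
}

When the page renders, Wicket replaces the placeholder text inside the span with whatever the Label holds.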
The bad:
Other than the basic setup of the tags and basic objects, I found Wicket very hard to use. Throughout the project I constantly found myself thinking that JavaScript could do the same thing in less code and in a way that was easier to understand (or at least that's what I think). In the end, however, we were able to reason the framework out and complete the system.
System design:
I think that our system is very solid. We fulfilled all of the requirements, though we would have liked to display pictures corresponding to the carbon intensity level instead of merely listing the level. I also believe that the system can be upgraded easily.
Teamwork:
I thought that our team worked very well: we divided the tasks up and everyone did their part. If one of us got stuck, we could ask the other members for help and get greater insight into what we might have been doing wrong. Often the other members would stop, help look at the code, and figure out a solution together.
ICU:
Here is a link to the ICU data for the past 7 days. The coverage is about average, but the complexity and coupling are good. However, the churn rate is really bad, which means that we were constantly changing our source code, adding and deleting lines. The rest of the fields remain average. So overall the development for this project went well, though it seems we tended to refactor our code very often.
Monday, November 16, 2009
CLI Ver. 2.0
This assignment allowed us to look back at our version 1.0 code and take into account the reviews given by our classmates. From there we were able to implement the appropriate changes as well as use a revised library. Along with this revised library came 3 additional methods that were added to the end of our code. The newest distribution of the Eono system can be found here.
Version 2.0:
In this version we extended our code to better meet the requirements. The first step was implementing a different interface: originally the interface had many method signatures, but in this design we made only one method signature (doCommand) available and moved each individual command into its own class, as sketched below. Lastly, we implemented extensive testing, as opposed to our last build, where we only used the tests supplied with the example system.
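In outline, the design looked something like this. This is a simplified sketch: doCommand is the one method signature our interface exposes, but the surrounding names are illustrative rather than our actual identifiers.

// Command.java: the single method signature exposed by the interface.
public interface Command {
  /** Executes this command with the given arguments. */
  void doCommand(String[] args) throws Exception;
}

// HelpCommand.java: each command lives in its own class, so adding
// a new command means adding a new class rather than a new signature.
public class HelpCommand implements Command {
  @Override
  public void doCommand(String[] args) {
    System.out.println("Available commands: help, quit, ...");
  }
}

The interface can then dispatch by looking the command name up in a map from names to Command instances.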
Group meetings:
We met as often as we felt necessary, which came to a couple of times a week. In addition, we worked individually and collaborated over instant messenger. We also divided the work equally, and I believe that both of us carried our own weight.
Software ICU:
The software ICU statistics suggest that our project is not healthy, but at the same time not sick; we seem to be in between the two, leaning more towards the healthy side.
Here are the statistics for Eono
According to the screencast, our coverage as well as our churn rate were unhealthy, but our complexity, coupling, DevTime, commit, and build were all healthy. Also, since we created our test cases last, our test column value is unhealthy.
Questions:
All questions were answered by adding an additional test class and then changing that test class to meet the requirements of each question. The test class will not be added to the final distribution of the system.
1. 2009-11-02T19:00:00.000-10:00 : 9.835E8
2. 2009-11-02T04:00:00.000-10:00 : 4.97E8
The above answers were found by starting at 2009-11-01T00:00:00.000-10:00 and ending at the end of the day. The time was incremented by 1 hour in a loop, and the largest (or smallest) value seen so far was held in a variable and returned at the end; a sketch of this scan appears after the answers below.
3. highest: 2009-11-02 (monday) : 14764.0 MWh
4. lowest: 2009-11-07 (saturday) : 14089.0 MWh
The above was found by calculating the energy generated by the grid each day; the lowest and highest values were kept and returned at the end.
5. highest: 2009-11-04 (wednesday) : 2.9959472E7 lbs
6. lowest: 2009-11-07 (saturday) : 2.2908808E7 lbs
The above was found by calculating the carbon generated by the grid each day; the lowest and highest values were kept and returned at the end.
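For reference, here is a sketch of the hour-by-hour scan used for questions 1 and 2. The CarbonQuery interface is a stand-in for the actual WattDepot client call, and the names are hypothetical:

import javax.xml.datatype.DatatypeFactory;
import javax.xml.datatype.Duration;
import javax.xml.datatype.XMLGregorianCalendar;

/** Hypothetical sketch of the hourly scan behind answers 1 and 2. */
public class HourScan {

  /** Stand-in for the client method that returns the carbon emitted between two timestamps. */
  public interface CarbonQuery {
    double carbonBetween(XMLGregorianCalendar start, XMLGregorianCalendar end) throws Exception;
  }

  /** Returns the start of the hour with the highest value; flip the comparison for the lowest. */
  public static XMLGregorianCalendar peakHour(CarbonQuery query, String dayStart, int hours)
      throws Exception {
    DatatypeFactory factory = DatatypeFactory.newInstance();
    Duration oneHour = factory.newDuration("PT1H");
    XMLGregorianCalendar cursor = factory.newXMLGregorianCalendar(dayStart);
    XMLGregorianCalendar best = null;
    double bestValue = Double.NEGATIVE_INFINITY;
    for (int i = 0; i < hours; i++) {
      XMLGregorianCalendar end = (XMLGregorianCalendar) cursor.clone();
      end.add(oneHour); // advance the window by one hour
      double value = query.carbonBetween(cursor, end);
      if (value > bestValue) {
        bestValue = value;
        best = (XMLGregorianCalendar) cursor.clone();
      }
      cursor = end;
    }
    return best;
  }
}

Calling peakHour with a dayStart of 2009-11-01T00:00:00.000-10:00 and hours of 24 walks one full day; the daily scans for questions 3 through 6 work the same way with a one-day step.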
Tuesday, November 10, 2009
Review of reviewing
After reviewing the Umi and Umikumakahi systems I realized a couple of things.
The Good:
Reviewing other people's code can give you great insight into your own current project. I know that my group had problems implementing the interface in the correct manner, but I got some ideas on how to better design my system around the interface after looking at others' code. Also, the responses from the reviewers of my system gave me more insight into how to improve it.
The Bad:
Reviewing these systems thoroughly was very time consuming. This can be attributed to the great depth of detail that the reviews cover. Everything from the initial package to the system structure is carefully analyzed, which allows for very helpful feedback on all aspects of the system.
Lessons Learned:
After reviewing the Umikumakahi system I realized the importance of comments in the code. That system has no comments in the code, making it nearly impossible to understand in a reasonable amount of time. Occasionally the thought of "if I don't comment here, it won't matter much since this is a minor line of code" crosses my mind. But after reviewing this system and seeing how hard it is to understand, I have told myself to comment at every location that might be hard for an outside reviewer to understand.
Sunday, November 8, 2009
Systems Review
Over this past weekend I reviewed the Umi and the Umikumakahi systems. Both systems gave me better insight into how to implement my own system. The first system I reviewed was Umi. Overall the system was very solid and I liked the way the implementation was set up.
A. Review the build
I was able to build the system using ant -f verify.build.xml without any problems.
B. Review the system usage:
I was able to execute all of the commands with the exception of the 2.8 command, which I called with:
list power generated SIM_WAIAU_8 day 2009-11-15 sampling-interval 30 statistic average
which yielded a command-invalid error. On the subject of invalid commands, I found that the system had very good error handling and that I was unable to crash it. However, one downside was that I could pass invalid input to commands that take no arguments. For example, the command list sources SIM_KAHE_1 yielded the same results as the plain list command. The same is true for quit (e.g., quit test) and the help command. A simple argument check, sketched below, would catch this.
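A guard like the following (a hypothetical sketch, not the system's code) would reject the extra arguments:

/** Hypothetical check showing how a zero-argument command can reject extra input. */
public class ArgumentCheck {
  /** Returns true if the argument list is empty; otherwise prints a usage hint. */
  public static boolean requireNoArguments(String commandName, String[] args) {
    if (args.length > 0) {
      System.out.println("The " + commandName + " command takes no arguments.");
      return false;
    }
    return true;
  }
}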
I also found that running the list power generated command with a nonexistent source caused the system to freeze, and I had to force quit the application. An example of this is:
list power generated SIM_OAHU_GRID_1 timestamp 2009-11-15T12:13:00.000-10:00
C. Review the JavaDocs
Upon reviewing the JavaDocs I found that the system was well documented. All of the class, method, and package descriptions were well defined in one sentence, and the effects of the methods (such as the exceptions thrown) were also documented. However, I found that there were no sample code examples and no use of HTML tags in the JavaDoc comments in the source code.
D. Review the names
After looking at the JavaDocs and the source code itself, I found that the system used good, logical names for its variables, classes, methods, and packages.
E. Review the testing
There was no additional testing besides the tests that were already given with the example system.
F. Review the package design
I found that the name and design of the package of this system made perfect sense.
G. Review the class design
While reviewing the source code I found that each method accomplished a single well-defined task, and that the classes, methods, and instance variables were all appropriately named. Also, methods were made private as often as possible.
H. Review the method design
I found that the methods of this system were well designed, with the exception of the listPowerSample method, which was quite long. In this method, a block of code stores items in an ArrayList; that block could be separated out into its own method that returns the ArrayList, along the lines of the sketch below. Lastly, I found that the methods had very few side effects, as the variables they used were usually instantiated in the method itself.
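To make the idea concrete, the extraction could look something like this. This is a hypothetical sketch: I only know that the block fills an ArrayList, so the element type and the every-nth-value interpretation (after the system's sampling-interval option) are guesses.

import java.util.ArrayList;
import java.util.List;

/** Sketch of the class that listPowerSample could delegate its list-building to. */
public class PowerSampleHelper {
  /** Collects every nth value into a new ArrayList, mirroring the block that filled the list. */
  private static ArrayList<Double> sampleEveryNth(List<Double> rawValues, int n) {
    ArrayList<Double> samples = new ArrayList<Double>();
    for (int i = 0; i < rawValues.size(); i += n) {
      samples.add(rawValues.get(i));
    }
    return samples;
  }
}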
I. Check for common look and feel
I found that the code seemed very uniform, with no major differences between authors.
-----------------------------------------------------------------------------------------------------------------
This next review is for Umikumakahi. I found that their system had good structure, but other than that it caused me a lot of problems, especially since there were no comments in the code besides the JavaDoc comments.
A. Review the build
I was able to build the system with ant -f verify.build.xml without any problems.
B. Review the system usage
I was unable to run the system on my computer. I tried to use Eclipse, but the error "Some projects cannot be imported because they already exist in the workspace" prevented me from importing the project, even after I deleted my own project from the workspace. In an attempt to test it with another IDE, I downloaded and imported the system into NetBeans. Once imported, however, NetBeans reported that the system had 100 errors, which prevented me from testing it. Screenshots of my NetBeans setup can be found here:
NetBeans IDE
100 Errors
C. Review the JavaDocs
The methods, classes, and packages all have high-level descriptions and are summed up well in one sentence. The relationships between classes and packages are also well noted. However, like the Umi system, there are no code examples, and no HTML tags are used inside the source code's JavaDoc comments.
D. Review the names
The names of the classes, packages, and methods were well chosen and conform to the standards defined in Java Style chapter 3. However, some instance names simply repeated the name of the class being instantiated; for example, the Console instance was named console.
E. Review the testing
After running ant -f emma.build.xml, I found that nearly all of the code was exercised, with the exception of the WattDepotChartCommand and CommandLineInterface classes. The tests were well thought out, and the expected test results were set to match those sent out by email.
F. Review the package design
This system implements two packages, and I found that the packages were logically broken down and do a good job of separating the commands from the interface itself.
G. Review the class design
Each of the methods in this system does one "well" defined task. I say "well" because each command has its own method, so it is easy to tell which method you are looking at; however, since there are no comments in the code, I often found the methods hard to follow. Also, no private methods were used in this system, though the instance variables were all named and used correctly.
H. Review the method design
The method design is good in that each method implements only one task. However, without any comments in the code, I'm not sure whether some of the longer methods could be shortened by creating a method shared between classes. The methods have no side effects on each other's variables, since each command has its own class, which keeps the variables independent.
I. Check for common look and feel
I found that the coding in this system was uniform, with no discrepancies in coding format between authors.
Wednesday, November 4, 2009
Energetic data
Two is better than one:
In this assignment we worked in pairs on a command line interface that retrieves specified data from a server and prints out the results. I thought that working in pairs was a good experience, as we were each able to offer different perspectives when solving an error or figuring out the best way to implement code. The tasks were initially divided in half, with my partner taking the odd-numbered methods and me taking the evens. Towards the end (methods 9 and 10), my partner worked on ten while I worked on the interface, so the distribution of tasks ended up quite even. Through this even distribution we were able to finish all of the given tasks and set up an interface for our classes. As I said before, I liked working in pairs and thought it was a great learning experience.
Problems:
As with all programming, there are bound to be problems, and probably lots of them. In our case we ran into a problem where our project in Eclipse wasn't the project in the SVN folder, so we found ourselves constantly copying over code and files to keep SVN up to date. The other problems we ran into were a variety of PMD and FindBugs errors, which were easily resolved once the problem was identified. But probably our biggest hurdle was that, after we attempted to build an interface, my partner's SVN stopped working. This was a major setback, as I was now the only one who could commit changes, so towards the end all of her code was emailed to me, put into the appropriate file, and finally committed. Lastly, a problem that bugged us a lot was how to implement a certain method; after discussing it, though, we would find an agreeable approach and one of us would put the idea into code.