
ide-it's People

Contributors

alyssaricketts, johnbaruw, wahleric


ide-it's Issues

Proj 7 Customer Feedback

Great job! Overall your report is complete and well written. The only real feedback I have is to improve your Initial Results section by describing the importance of the tests, and what they tell you about how you can improve your project. You show the graph, but never really discuss what that specific graph is telling you, and how you can use it to improve your product. You do a good job of having tests that show your project is improving over time, but you don't seem to have tests (such as a usability test) that show weaknesses in your plugin.

(1/1pt) overall organization is sensible
Yes, your organization makes sense.

(1/1pt) introduction motivates the problem and describes the approach
Yes, I found the introduction compelling, and you describe your approach at a high level.

(1/1pt) the technical details of the approach are well-described
Yes, all of your Architecture section is clear and comprehensive.

(1/1pt) the report contains a clear description of a reproducible methodology
Yes, you have mock user tests that can be run through runTestCases.sh, and you link to your repo where instructions can be found on how to reproduce these results. However, I think having a real usability test would be extremely helpful, because mock tests will only notify you of issues that you've already foreseen, whereas a user test could reveal something new about your project.

(1/1pt) results are presented clearly and described in text
You discuss how your tests work, and you have a graph showing that the tests you've written are working, but you never describe what this graph tells you, or how it can help you improve your product. It would be beneficial to discuss the importance of the tests with regard to product improvement.

(1/1pt) the report describes related work and contains academic references
Yes, you describe related work in the "Related Works" section, and you have academic references for both that and sources used in your "Motivation" section.

Customer Feedback Proposal

As a customer, the proposal answers most of the questions I could have. The problem to be solved is clear as are the inadequacies of current solutions. It is also clear that this team will evaluate their approach through unit and suite testing, as well as experiments. Perhaps a valuable test group for experimentation could be programmers who have never used Eclipse before. This team has a schedule, although some of the points on it may be difficult to measure (though not impossible). An example is "Identify the extension points we need to integrate the plugin". This team also included ample citations to multiple sources.
We also, however, see a few points in your proposal that need to be addressed and strengthened. The first paragraph of your proposal is very drawn out and does not tell the customer what your tool is. You should give one or two examples of why you want to create IDE-IT, but then get straight to the point of what IDE-IT is and what it will do for the customer. This will draw the customer in, instead of requiring them to read half a page before they know what the proposal is for.

Another important point is that you do not give many concrete examples of how IDE-IT will work. You describe that the tool will track user input and display a tip once a specific sequence of input is registered, but this is vague to someone looking to use your plugin. It would be helpful to give an example of a sequence of user input that would trigger a specific feature to be discovered. It is also important to specify how much input data you will need from the user before a tip is displayed. Will they get a tip the first time they do not use a feature, or the third or fourth time?

Lastly, you mention in the opening paragraph that some IDE tools go unused because they are nontrivial to learn. You should be clear about what your tips will say when they are displayed: if you present a tip for a nontrivial feature but do not explain how to use it, the user will be just as likely to skip over it as they are with other tip-suggestion tools.

Update Import Evaluation Functions

Following the discussions at the all-team meeting, the following changes should be considered for both Remove Unused Imports and Auto Add Imports.

They should only be suggested on document saves.

Ideas -

  1. We can pull all IMarkers from the entire project. IMarkers are only updated on project saves/builds. However, this would trigger notifications on documents that are not currently open (unless we special-case only open documents).

  2. We can use document annotations and only trigger notifications on a document save. This will keep the notifications scoped to only open documents.
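As a sketch of the special-casing mentioned in idea 1, the project-wide marker list could be filtered down to open documents before any notification fires. The Marker and MarkerFilter types below are hypothetical stand-ins; the real implementation would work with Eclipse's IMarker and workbench-editor APIs.

```java
import java.util.*;
import java.util.stream.*;

// Hypothetical stand-in for an Eclipse IMarker: a file path plus a line.
class Marker {
    final String filePath;
    final int line;
    Marker(String filePath, int line) { this.filePath = filePath; this.line = line; }
}

class MarkerFilter {
    // Keep only markers whose file is currently open in an editor,
    // so notifications never fire for closed documents.
    static List<Marker> forOpenDocuments(List<Marker> projectMarkers, Set<String> openFilePaths) {
        return projectMarkers.stream()
                .filter(m -> openFilePaths.contains(m.filePath))
                .collect(Collectors.toList());
    }
}
```

With this filter in place, idea 1's project-wide marker pull and idea 2's open-document scoping produce the same set of candidate notifications.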

Refine trailing white space removal

The remove-trailing-whitespace evaluation function triggers whenever whitespace is removed from the end of a line.

We should look into ways to avoid triggering the evaluation when the user is just editing a line of code naturally.

Idea: keep track of the previously edited line; if whitespace is removed on a different line, it's reasonable to trigger the evaluation.
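The idea above can be sketched as a small tracker. The class and method names here are hypothetical; the real version would be driven by Eclipse document-change events rather than plain line numbers.

```java
// Heuristic: only treat trailing-whitespace removal as deliberate cleanup
// when it happens on a line other than the one the user last edited.
class TrailingWhitespaceHeuristic {
    private int lastEditedLine = -1;  // -1 means no edit seen yet

    // Call for ordinary edits so we know which line the user is on.
    void onEdit(int line) {
        lastEditedLine = line;
    }

    // Returns true when the evaluation function should trigger.
    boolean onWhitespaceRemoved(int line) {
        boolean trigger = lastEditedLine != -1 && line != lastEditedLine;
        lastEditedLine = line;
        return trigger;
    }
}
```

Under this heuristic, deleting whitespace at the end of the line being typed on is ignored, while cleaning up a different line counts as intentional.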

Customer feedback, Architecture

(1/1 pt) The document builds on the original project proposal
Your proposal does build off of the previous version.
(1/1 pt) There is an architecture diagram
There is an architecture diagram, but it could be refined. In the diagram you have input from the keyboard and mouse going to KeyPressTracker.java and MouseTracker.java respectively, but the boxes for these classes are not connected to anything else in the diagram. Do they pass along information, or are they just standalone features?
(1/1 pt) The architecture diagram is explained in English
You do explain the architecture diagram in English, but it is a little vague on how each component works and what the lines between boxes represent.
(1/1 pt) The system's interface is documented. (For graphical interfaces, a UI mockup should be included. For command-line interfaces, the document should explain how a user interacts with it. For components of larger systems, the document should explain how the component interacts with the system around it.)
This is the backend part of the team, so there is no graphical user interface. You do describe that other plugins will be able to use IDE-IT to display tips, but it is vague on how they will be able to do this and what the interface will be for incorporating their tips. It would be good to discuss exactly how third parties will be able to add features to IDE-IT.
(1/1 pt) The document contains discussion of tools and APIs that the team plans to use
The APIs you plan to use are discussed.

Project 8a Customer Feedback

Customer Feedback Summary:
Your repository is in great shape, your code and results are well-documented, and your results matched what you said they should be. Great work!

Customer Feedback Rubric:
(4 pts) You can follow the instructions to reproduce the results in the paper: 4
Building was successful with the instructions you provided. After changing to the specified directory, we ran your bash script. Your bash script produced the .html file that you specified and gave us information on your tests, including error rates, skips, failures, and success rates.

(2 pts) These instructions are fully automated: 2
The results didn’t require much thought to produce. All we had to do was move into a directory, run one command to build, and run one command to produce results.

(4 pts) The code and experimental results are well documented: 4
The code was well documented, even the test files. The results were located exactly where you said they’d be, and provide a lot of insight into what test cases you run, and things like error rates, total tests, failures, skips, success rates, time taken, etc. It’s also good that you mention in your paper that your evaluative techniques are based on what your team deems to be proper behavior (when it’s appropriate to notify) because there is some subjectivity to the tests you use.

Customer feedback, User Manual

(1pt) The document builds on the previous submission

  • The report seems similar or identical to the report from the week before

(1pt) There is a user manual and I could find it easily on the repo

  • The user manual is easy to find on the first page of the GitHub repository.

(1pt) The user manual is self contained and well documented

  • The user manual is not self-contained. A user who wants to create a front-end interface for your tool only needs the Usage section; the Implementation Details section and the architecture diagram are for the audience of your proposal, not the user manual.
  • As an API document, the usage of each parameter and the possible return values should be specified more clearly. Currently the document offers little more than abstractions.

(1pt) The user manual contains all instructions required to use the tool

  • The user manual looks like it contains everything a new user would need to integrate it with their front-end plugin.

(1pt) The report contains clear implementation details

  • The report does contain an architecture diagram detailing the responsibilities of each module of the system and the implementation details are clear and understandable.

(1pt) The team's plans are clear and measurable

  • The team’s plans are clear, but not all of them are easily measurable (or they don’t state their threshold for success). Items such as “Include additional feature evaluations” and “Should have an almost fully functional plugin” don’t provide a way to determine whether these goals were met or not.

Suggestions:
The only input that the backend passes to the frontend is the FeatureID String. This allows the frontend to display the relevant tip at the correct time, which is good for offering the user a new tip in the moment, but you are not passing any other information that would let the frontend show the user why the tip is being displayed. We think it would be useful for the frontend to have the ability to highlight, or somehow show, what the user did and on which lines of code, so the user knows exactly what they were doing that the tip can replace.
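One way to act on this suggestion would be a richer notification payload than a bare FeatureID String. The FeatureSuggestion class below is purely illustrative (the actual backend does not define it); it shows the kind of extra context a frontend would need to highlight the triggering lines.

```java
// Hypothetical payload for a backend-to-frontend notification: the
// feature ID plus the span of lines whose edits triggered the suggestion.
class FeatureSuggestion {
    final String featureID;  // e.g. an ID a frontend maps to tip text
    final int startLine;     // first line of the triggering edit
    final int endLine;       // last line of the triggering edit

    FeatureSuggestion(String featureID, int startLine, int endLine) {
        this.featureID = featureID;
        this.startLine = startLine;
        this.endLine = endLine;
    }
}
```

A frontend receiving this could both show the tip and highlight lines startLine through endLine to explain why the tip appeared.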

Another note: you have a "Future Plans" section, but everything in it is discussed in more detail in the Usage section, so we do not think the "Future Plans" section is necessary. The user manual should read as a manual for your finished project, so include everything you plan to have in the finished product.

Project 5 Customer Feedback

Overall Score: 7/7
Great job! It was super straightforward to download and build your plugin. One comment is that after the "Building the Plugin" section, you say that the project needed to be cloned in an Eclipse workspace. It would be helpful if you specified exactly where it should be cloned, and also mentioned this earlier. By the time I read it, I'd already followed the earlier instructions and cloned it in a random place on my machine.

(1pt) I could find the instructions to build the project easily from the repository
Yes - in the README

(1pt) I could reproduce the build on a machine matching the prerequisites mentioned in the instructions.
Yes - I didn't run into any problems

(1pt) The instructions were clear and easy to use.
Yes, although someone who doesn't have experience with git might have a hard time with the instructions. Perhaps give the commands (e.g. git clone [repo]).

(1pt) The continuous integration setup makes sense to me, and I could find the build history.
Yes

(1pt) The continuous integration history includes at least one failing test.
Yes

(1pt) The continuous integration history includes at least one passing test.
Yes

Project 6 Customer Feedback

Overall you guys did a good job on this! Some of the instructions in your user manual need to be updated, and I didn't seem to have permission to run your tests. You discuss the results of your tests, but not what you learned from them, which I think would be a valuable addition. I also think that having a "real" user test would be more valuable. It looks like you're testing various keystrokes to verify that your plugin suggests the correct hotkey/solution, which is great for a test suite, but I think next time you'd gain a lot more from seeing a real person interact with your plugin. They might do something in a way you didn't anticipate.

(0.5/1 pt) The report or the README contains instructions to reproduce the initial results.
I wasn't able to install the plugin based on your instructions. When I ran "mvn clean install" I got the following error: "The goal you specified requires a project to execute but there is no POM in this directory (/Users/retina/Documents/University of Washington/CSE 403/IDE-IT). Please verify you invoked Maven from the correct directory." I was however able to install it by running "mvn clean install" from your backend_plugin folder, so I'm guessing you just forgot to update the user manual.

(1/3 pts) The initial results are reproducible.
I'm able to run "mvn -Dtest=*Negative surefire-report:report" successfully. However, if I try to run runTestCases.sh (./runTestCases.sh), I get the following error: "-bash: ./runTestCases.sh: Permission denied" (likely the script just isn't marked executable; "chmod +x runTestCases.sh" should fix it). Additionally, when I try to run "python feature-stats.py", I get the following error:

Traceback (most recent call last):
  File "feature-stats.py", line 1, in <module>
    from git import Repo
ModuleNotFoundError: No module named 'git'

(The 'git' module is provided by the GitPython package, so this likely just needs "pip install gitpython".)

(1.5/2 pts) The report has a discussion of the results.
Yes, you show and discuss the results of your user test. One area for improvement is that you don't talk about what your initial results tell you about the status of your project, or how they influence your design. For example, do the results show any bugs that you need to fix, or weaknesses in your design?

(1/1 pt) The report expands on the methodology and is complete.
Yes.

(1/1 pt) The schedule is updated and there is a reasonable discussion on why it changed.
Yes - It looks like you're checking off completed tasks and updating your schedule regularly.

(2/2 pts) The user manual is updated.
Yes, the user manual has been updated, but some of the instructions are out of date.

Proj06 Customer Feedback

(1/1) The report contains instructions to reproduce the results.

(0/3) I tried running both the "mvn -DTest..." command and the "python feature-stats.py" command. The mvn command failed to compile, citing "The method getDisplay() from the type IWorkbench refers to the missing type Display". I also could not successfully pip install "git" to run the python file (Could not find a version that satisfies the requirement git). Overall, I was not able to reproduce the results.

(1/2) The report discusses the results, but doesn't have much to say about what they mean for the project, its progress, and whether there are any insights to be gleaned from them.

(1/1) The report looks pretty similar to before, but given the nature of the initial results I feel this is reasonable, as it looks like the methodology hasn't changed.

(1/1) The schedule is updated to the current week. Mainly new checkmarks were added to mark the group's progress.

(2/2) The user manual contains instructions on how to reproduce the initial result.

The main thing I have to say is that the initial results look kind of bare-bones. There is nothing to compare them against, so there is no way to tell whether the tool is helpful for the purpose it serves. Relative to the use cases this team has manually defined, there is a point of reference, but it is difficult to tell whether these really cover the most common use cases.

Draft Final Report Feedback

(1pt) overall organization is sensible
The organization is good and makes logical sense. The sections that require the reader to have knowledge of other sections are placed later on and in a good order that keeps the reader engaged.
(1pt) introduction motivates the problem and describes the approach
The introduction covers the motivation for the problem and describes the general approach for your solution which is good to get the reader intrigued, and then you go into more detail later on.
(1pt) the technical details of the approach are well-described
The technical details are well described.
(1pt) the report contains a clear description of a reproducible methodology
The report contains a clear description of the evaluation methodology, and it could be reproduced.
(1pt) results are presented clearly and described in text
The results are presented clearly and are described. I also like that you go into detail about the research and tests you plan to conduct in the future.
(1pt) the report describes related work and contains academic references
You do a good job of addressing related work and describing why your solution is unique. There are also many academic references.

Other Notes:
In your evaluation you say that your main goal in testing is to reduce the number of false negatives, but the graph shown represents when your tool is performing correctly. This is still a good graph to include, but you should also have a visual for your false-negative data, since that is your main focus.

You also mention that some changes could be hard to track due to users using shortcuts or other plugins. Does this mean they use the shortcuts and other plugins instead of the Eclipse features you would suggest, or that their use of other shortcuts and plugins interferes with your suggestions? A bit more detail on this point would be good.

It could also be good to give a sentence or two on the limitations of your tool. Are there any Eclipse features for which you would not be able to detect that a user needs them, or could you make a suggester for every feature, given enough time?

Evaluation Function Interface

Currently, there is no connection between the BlockCommentEvaluator and the RemoveImportEvaluator. There should be some sort of interface to represent what these two have in common and to share some of the same functionality. Care should be taken when designing this interface, because different evaluators will likely care about different features of the document/Eclipse.
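A minimal sketch of such a shared interface follows. The method names and the String-based evaluate signature are assumptions for illustration; the real evaluators would be fed Eclipse document-change events, and the example trigger condition below (two consecutive "//" lines) is only a stand-in for whatever the actual BlockCommentEvaluator checks.

```java
// Common contract every evaluation function could implement.
interface EvaluationFunction {
    // Unique ID passed to front ends when the evaluation triggers.
    String featureID();

    // Called on each document change; returns true when the feature
    // should be suggested to the user.
    boolean evaluate(String documentText, int changedLine);
}

// Illustrative implementation: suggests the block-comment feature when
// the changed line and the line above it both start with "//".
class BlockCommentEvaluator implements EvaluationFunction {
    public String featureID() { return "blockComment"; }

    public boolean evaluate(String documentText, int changedLine) {
        String[] lines = documentText.split("\n", -1);
        if (changedLine < 1 || changedLine >= lines.length) return false;
        return lines[changedLine].trim().startsWith("//")
                && lines[changedLine - 1].trim().startsWith("//");
    }
}
```

With a shared interface like this, the plugin core can hold a single list of EvaluationFunction instances and dispatch each document change to all of them, while each evaluator keeps its own trigger logic.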

Proj05 Customer Feedback

(1/1pt) I could find the instructions to build the project easily from the repository

I was able to find the instructions easily in your repository. One thing you could add to make the instructions even easier to find would be a table of contents at the top of your README.md file. This would allow your users to know exactly where to look for any details they need without needing to scroll until they find it. This also tells your user right away all the information they will be able to find by reading your README.md.

(1/1pt) I could reproduce the build on a machine matching the prerequisites mentioned in the instructions.

I was able to reproduce the build with a machine matching the prerequisites mentioned in the instructions.

(1/1pt) The instructions were clear and easy to use.

The instructions were clear and easy to use. I followed the instructions exactly and the build worked as expected. One suggestion would be to include the exact commands to build your project in the README. Including them makes it much simpler for the user, as they could just copy and paste the commands and be done.

(1/1pt) The continuous integration setup makes sense to me, and I could find the build history.

The CI setup makes sense. The build history is very easy to find since you have included a Travis CI badge at the top of your README.

(1/1pt) The continuous integration history includes at least one failing test.

The continuous integration history includes multiple failing tests.

(1/1pt) The continuous integration history includes at least one passing test.

The continuous integration history includes multiple passing tests.

Other Notes: Reading over your proposal, I think you could add quotes or statistics from some of the sources you cite. For example, in your first paragraph you cite survey results showing that developers only use a small number of IDE features, and your point could be made stronger if the reader knew the exact results of the study. You also mention that your team is prioritizing five feature suggestions, but you do not go into depth on why you chose these specific features. It would be good to mention why you chose these features to implement over others. For example, do these five features show a broad range of how your tool can detect when a user could be using a feature that they are not currently using?
