mvillis / measure-mate
Simple tool to track maturity assessments
License: MIT License
Record actions for the assessment (ie, not for a particular attribute) and display on the summary tab
Not sure which one at this stage.
Will figure it out and add to the repo.
At the moment when you click the launch button, no data is stored about the template that was created or the tags that were selected.
As part of completing this card we have to use the data from the launch form and create an Assessment object in the database.
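The shape of that step could look something like the sketch below: validate the launch-form payload, then hand back the fields needed to create the Assessment row. All names here (`template`, `tags`, the returned field names) are hypothetical, not the repo's actual model or serializer.

```python
def build_assessment(payload):
    """Validate the launch-form payload and return the fields for a new
    Assessment record. Field names are illustrative only; a real
    implementation would go through the Django model / API serializer."""
    template_id = payload.get("template")
    tag_ids = payload.get("tags", [])
    if template_id is None:
        raise ValueError("a template must be selected")
    if not tag_ids:
        raise ValueError("at least one tag must be selected")
    return {"template_id": template_id, "tag_ids": list(tag_ids)}
```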
Update the assessment form to include a spot where the person selecting a given rating for an attribute can supply a series of comments.
As per linter:
client/js/components/assessment/ratingList.js
48:48 error JSX props should not use .bind() react/jsx-no-bind
50:17 error JSX props should not use .bind() react/jsx-no-bind
64:48 error JSX props should not use .bind() react/jsx-no-bind
71:38 error JSX props should not use .bind() react/jsx-no-bind
✖ 4 problems (4 errors, 0 warnings)
There are some tips here on what we should do:
https://github.com/yannickcr/eslint-plugin-react/blob/master/docs/rules/jsx-no-bind.md
Looks like the general gist is to extract the looped code into a separate component.
Currently getting an application error:
An error occurred in the application and your page could not be served. Please try again in a few moments.
More info on django-bower install here:
https://django-bower.readthedocs.org/en/latest/installation.html
Dependencies include:
* react
* react-select
* query
* bootstrap
* react-input-autosize
Incorporate the target ratings from the previous assessment for the team.
Ideas:
Would be cool for people in a given team to do an assessment on their own and then have measure mate show them an aggregation of everyone's results.
To aggregate it could be just on tags to keep it simple to start with.
For a given attribute you could show the distribution of measurements and then a summary of the comments (maybe aggregated as a word cloud?)
At this stage this view would be read only.
A future card may be added to allow teams to set an "official" value based on feedback.
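The aggregation idea above could start as simply as counting, per attribute, how many team members chose each rating. A rough sketch (the `(attribute, rating)` pair format is a simplified stand-in for real measurement records):

```python
from collections import Counter, defaultdict

def aggregate_measurements(measurements):
    """Group individual team members' measurements by attribute and count
    how many people chose each rating. `measurements` is a list of
    (attribute, rating) pairs -- a simplified stand-in for real records."""
    dist = defaultdict(Counter)
    for attribute, rating in measurements:
        dist[attribute][rating] += 1
    return {attr: dict(counts) for attr, counts in dist.items()}
```

The per-attribute counts map directly onto a distribution chart; comment aggregation (word cloud or otherwise) would sit alongside.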
At some point you need confidence that an assessment has been finished and shouldn't be changed.
This looks like an easier way to do this:
https://github.com/quickleft/react-loader
Should look at completing #18 before doing this
At present a comment is tied to a given measurement. A measurement is typically tied to a given rating (this is not enforced in the model, but when displaying a comment we have to know which tab (attribute) to display it against; this is inferred from the rating stored in the assessment).
In general being able to talk about a given attribute and have that conversation persisted independently of a rating being selected is reasonable. Being able to make overall comments on the assessment also seems quite reasonable. As does being able to have a more natural conversation which could include more than one piece of text.
...so too is being able to reply, like, etc., but this is getting a bit too crazy for the moment :-)
Once we have a concept of a user in the system you could also add them.
This would support all three situations:
With this proposal I think the UI of the app would also evolve. This is an opportunity to asynchronously load comments as they are visible to the user.
You would need to consider whether you want a separate view for overall assessment comments versus those that relate to just a given attribute. I feel that if you're not careful in this space you may cause additional confusion for the user.
You would also need to be mindful that comments can often be quite messy, and you often want to create an official-looking conclusion for a given assessment / attribute. Maybe an additional capability is still needed.
Thoughts?
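One way to picture the decoupled-comment proposal above is a comment that always belongs to an assessment but only optionally targets an attribute. A sketch (field names are illustrative, not the repo's actual schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Comment:
    """Sketch of a comment decoupled from measurements: it always belongs
    to an assessment, optionally targets one attribute, and carries its
    own author/text so a thread can hold many entries."""
    assessment_id: int
    text: str
    author: str
    attribute_id: Optional[int] = None  # None => overall assessment comment

def overall_comments(comments):
    # Comments with no attribute apply to the assessment as a whole.
    return [c for c in comments if c.attribute_id is None]
```

The same nullable link covers both the per-attribute and overall cases without tying a comment to a rating.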
Things we know do not work on Windows:
To complete this card the readme must include guidance for folks on Windows.
The preference is to provide the guidance inline, with the OSX/Linux and Windows variants side by side using the table syntax:
For example:
OSX/Linux | Windows
---|---
$ source .venv/bin/activate | $ .venv\Scripts\activate
Currently the form just redirects to a generic assessment view.
localhost:8000/assessment/
Complete this card and upon creation of the assessment the form will redirect to the actual assessment object:
localhost:8000/assessment/1/
At this stage the view just needs to be moved across. There is no need to populate data against the assessment. This will come in additional cards.
As more and more assessments are created, users will want to see a list of those created historically.
This would just be a simple visualisation of the results from an assessment like below:
Standup @@@@@
Retro @@
Planning @@@@
User feedback has indicated it would be very handy to be able to tick off individual points to help you understand which rating you fall under.
Any missing ticks could form part of a recommendation.
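Deriving recommendations from the unticked points could be as simple as a set difference. A sketch (the checklist-point representation is an assumption, not the current model):

```python
def missing_points(rating_points, ticked):
    """Given the checklist points that define a rating and the points the
    team ticked off, return the unticked ones -- candidates for a
    recommendation. Purely illustrative helper."""
    return [p for p in rating_points if p not in set(ticked)]
```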
The entire data model is currently writeable from the API endpoints.
The template config data, including attributes, and ratings, should only be changeable from the admin interface, and each assessment should only be changeable by members of the "team".
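The intended rule boils down to a predicate like the one below. This is a plain-Python sketch of the policy, not the actual DRF permission classes; in the real app it would live in the API layer (e.g. as a REST framework permission), and the "team" check depends on the team concept being in place.

```python
def can_modify(user, obj_kind, team_members):
    """Sketch of the access rule: template config ('template',
    'attribute', 'rating') is admin-only; an 'assessment' is writable
    only by members of its team; everything else is read-only."""
    if obj_kind in ("template", "attribute", "rating"):
        return user.get("is_admin", False)
    if obj_kind == "assessment":
        return user.get("name") in team_members
    return False
```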
When you've filled in both the current and target value for a given attribute, update the tab for that attribute to show a tick that it is complete.
This allows you to instantly see what has and hasn't been done.
It also provides a nice visual cue that you've completed a given form.
Pressing the Next or Previous button should scroll to the top.
Maybe only on mobile?
Currently running 1.8. Should move to the latest version.
Just export all measurements for a given template (with a link to the assessment for grouping) to an xslx.
In the future (as a separate card) you will be able to filter these results.
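The core of the export is flattening measurements into rows that keep the assessment id for grouping; writing those rows out (e.g. with openpyxl) is then mechanical. The header and field names below are a guess at the shape, not the actual model:

```python
def export_rows(measurements):
    """Flatten measurement records into spreadsheet rows, keeping the
    assessment id so rows can be grouped per assessment later. Rows in
    this form can be appended straight to an xlsx worksheet."""
    header = ["assessment_id", "attribute", "rating"]
    rows = [[m["assessment_id"], m["attribute"], m["rating"]] for m in measurements]
    return [header] + rows
```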
When a user selects a rating for an attribute in an assessment, we need to store that "click" in the database.
To complete this card we must:
(1) Create a measurement object via the API when a user clicks a row
(2) Update the cell so that the user knows it was clicked
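One detail worth pinning down for step (1): clicking a different rating for the same attribute should replace the earlier click, i.e. at most one measurement per (assessment, attribute). A sketch, with a dict standing in for the API/database:

```python
def record_click(store, assessment_id, attribute, rating):
    """Upsert one measurement per (assessment, attribute): clicking a new
    rating for the same attribute replaces the old one. `store` is a dict
    standing in for the API/database in this sketch."""
    store[(assessment_id, attribute)] = rating
    return store
```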
To implement this feature we need to:
Yeah... so there are currently zero tests around the JavaScript of the application.
Python coverage is now in the high nineties. JavaScript is getting pretty jealous!
This could be as simple as an additional tab or a modal that explains.
At this stage we have no understanding how experienced a given user is so in completing this card be mindful as to not annoy pro users too much.
Allow setting the CSS class for the attribute descriptions, similar to rating descriptions.
(This is a short-term hack that should be deprecated once #56 is implemented)
Add unit tests for the decorators in headers.py to fix the test coverage regression from #101
Add a router to the front end to allow linking to specific parts/tabs/pages of the app (for example, link directly to the summary tab from the list page)
Link up the launch button on the home page of Measure Mate so that it redirects to the assessment page.
When you create a new assessment, if you add new tags they are not handled.
Need to fix!
Only impacts small width devices.
Will add screenshot shortly.
Allow recording of more than one action against particular attributes and displaying them on the summary tab.
Add a ranking value to the attributes so that they can be ordered similar to the ratings
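With a rank field in place, ordering is a one-liner wherever attributes are listed. The field name `rank` is assumed here, chosen to mirror the ratings:

```python
def order_by_rank(attributes):
    """Sort attributes by a numeric 'rank' field, mirroring how ratings
    are already ordered (field name assumed, not taken from the model)."""
    return sorted(attributes, key=lambda a: a["rank"])
```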
Currently it just scrolls on forever... this will only get us so far.
Team --> The bunch of people doing an assessment together - responsible for the assessment.
Maybe this is a group of labels?
If the user has "Display intranet sites in Compatibility View" selected then it takes precedence over the tag.
If the app finds itself running in compatibility mode, it should display a warning and explain how to disable it.
Should be able to use code like that in: http://stackoverflow.com/questions/3103890/determining-why-a-page-is-being-renderred-in-compatibility-mode/29288153#29288153
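The linked answer detects this client-side; as a supplementary server-side heuristic (an assumption, not from the linked answer), the User-Agent string gives it away, because IE in Compatibility View reports an old MSIE version alongside a modern Trident engine token:

```python
import re

def looks_like_compat_mode(user_agent):
    """Heuristic: IE in Compatibility View reports an old MSIE version
    (e.g. 'MSIE 7.0') alongside a newer Trident token (e.g. 'Trident/7.0').
    Trident 4 shipped with IE8, so a reported MSIE version lower than
    trident_version + 4 suggests the engine is emulating an older mode."""
    msie = re.search(r"MSIE (\d+)", user_agent)
    trident = re.search(r"Trident/(\d+)", user_agent)
    if not (msie and trident):
        return False
    return int(msie.group(1)) < int(trident.group(1)) + 4
```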
Only the "owner" of an assessment should be able to update the assessment data.
"Owner" will be the team once #66 is implemented.
Add a recommendations field for each attribute
Display the rating from the previous assessment for the team on each attribute, and on the summary tab, including a trend arrow.
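The trend arrow reduces to comparing the previous and current ratings; the symbols below are illustrative, as is treating "no previous assessment" as no arrow:

```python
def trend_arrow(previous, current):
    """Return a trend indicator comparing the previous assessment's
    rating to the current one; no arrow when there is no history."""
    if previous is None:
        return ""
    if current > previous:
        return "↑"
    if current < previous:
        return "↓"
    return "→"
```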
Along with the current ranking, record a target value for each attribute.
Work Breakdown:
When the viewport is below a certain size, the Nav bar changes to a dropdown menu... and it doesn't show anything when you tap/click it.
At the moment, if you launch an assessment without selecting a template or tags, you get no feedback (the JavaScript quietly errors behind the scenes):
Uncaught TypeError: this.state.template.map is not a function
The system should not allow you to launch until the values have been selected.
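The validation amounts to collecting user-facing messages before allowing launch, instead of letting a TypeError escape. A sketch of the rule (message wording is a guess):

```python
def launch_errors(template, tags):
    """Collect user-facing validation messages for the launch form;
    an empty list means launching is allowed."""
    errors = []
    if not template:
        errors.append("Please select a template.")
    if not tags:
        errors.append("Please select at least one tag.")
    return errors
```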
This should help maintenance and enable unit testing of the Javascript
At the moment there are no tests around the models in the project.
We should add some!
Use the test in wordplay as some good examples:
https://github.com/mvillis/wordplay/tree/master/wordplay/tests/models
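The general shape would be something like the test below: one small, focused assertion per behaviour. This is a stdlib `unittest` sketch with hypothetical field names; the real tests would use `django.test.TestCase` against the actual models, in the style of the wordplay examples.

```python
import unittest

class RatingOrderingTest(unittest.TestCase):
    """Shape of a model test (hypothetical fields; real tests would use
    django.test.TestCase and the project's actual models)."""

    def test_ratings_sort_by_rank(self):
        ratings = [{"name": "Good", "rank": 2}, {"name": "Poor", "rank": 1}]
        ordered = sorted(ratings, key=lambda r: r["rank"])
        self.assertEqual([r["name"] for r in ordered], ["Poor", "Good"])
```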
Was thinking something like this:
Leverage bootstrap left tabs:
https://react-bootstrap.github.io/components.html#left-tabs
any thoughts @rloomans @mjohno ?
The desc(ription) fields should allow Markdown or similar to add formatting rather than the current hack of allowing specifying a CSS class
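In practice this would be a Markdown library (e.g. python-markdown on the server, an assumption here) rather than hand-rolled parsing, but the key safety property is worth noting: escape the HTML first, then apply formatting. A deliberately minimal sketch supporting only **bold**:

```python
import html
import re

def render_desc(text):
    """Minimal sketch of Markdown-ish rendering for description fields:
    escape raw HTML first (so descriptions can't inject markup), then
    convert **bold** spans. A real implementation would use a proper
    Markdown library."""
    escaped = html.escape(text)
    return re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", escaped)
```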
As per warning in the app at the moment:
You are using the in-browser JSX transformer. Be sure to precompile your JSX for production - http://facebook.github.io/react/docs/tooling-integration.html#jsx
react.js:19588 Warning: Each child in an array or iterator should have a unique "key" prop. Check the render method of AssessmentList. See https://fb.me/react-warning-keys for more information.
Not quite sure how to implement this (at least for the current pipeline). Maybe we can just run a command in heroku for now.