varianteffect / mavedb
MaveDB database web application
Home Page: https://www.mavedb.org
License: GNU Affero General Public License v3.0
The "Resources" section in the footer can probably get removed.
When I add contributors to an Experiment or Score Set and click "Save", the only way to go back to the previous page is to press "Cancel".
Ideally, after saving, the page would show a preview of the current contributor list (as it's rendered on a Score Set page) with a single "Return" button back to the Profile page.
I'm not sure what the difference is between usage and documentation. I think it makes more sense to have a single (potentially very long) documentation page and get rid of usage. If we want to create a step-by-step tutorial later, we can add a separate page back in if we want it to be separate.
Documentation, the about page, etc. should be handled in a similar way to licenses: static markdown files kept in the GitHub repository and pulled into the site using a manage.py command during setup, or whenever those documents are updated.
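The loader behind such a manage.py command could look like the sketch below. The docs/ directory layout and keying documents by file name are assumptions; the real command would wrap this in a Django BaseCommand and upsert the results into a model.

```python
from pathlib import Path

def load_markdown_docs(docs_dir):
    """Return a {document_name: markdown_text} mapping for *.md files.

    Sketch of the core of a hypothetical `manage.py loaddocs` command:
    each file's stem (e.g. docs/about.md -> "about") becomes the key
    the site would use to look the document up.
    """
    docs = {}
    for md_file in sorted(Path(docs_dir).glob("*.md")):
        docs[md_file.stem] = md_file.read_text(encoding="utf-8")
    return docs
```

Re-running the command after editing a file in the repository would simply overwrite the stored text, so updates need no migration.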
It would be nice if keywords had little boxes around them when they are displayed on the ScoreSet or Experiment page, rather than being space separated. The current implementation makes multi-word keywords run together a bit.
Currently there is no HGVS validation in place when a user uploads a new dataset. This could be implemented using regex validation against a subset of the HGVS standard, or using the hgvs Python library.
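A regex covering a small slice of the HGVS nomenclature might look like this sketch. It is deliberately incomplete (no intronic or UTR positions, no protein ranges, etc.), which is why the hgvs library is the more rigorous option.

```python
import re

# Covers simple coding/genomic substitutions, deletions, insertions and
# duplications, plus single-residue protein substitutions. A sketch of
# the "subset of the standard" approach, not full HGVS.
HGVS_SUBSET = re.compile(
    r"^(?:"
    r"[cgn]\.\d+(?:_\d+)?(?:[ACGT]>[ACGT]|del[ACGT]*|ins[ACGT]+|dup[ACGT]*)"
    r"|p\.[A-Z][a-z]{2}\d+(?:[A-Z][a-z]{2}|=|\*)"
    r")$"
)

def looks_like_hgvs(variant):
    """Cheap syntactic check; accepts only the subset described above."""
    return bool(HGVS_SUBSET.match(variant))
```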
@jweile What are the requirements for viewing a ScoreSet in MaveVis?
If someone has a UniProt ID and offset with their ScoreSet can we add a "View in MaveVis" or similar button to the ScoreSet page, and if so what is the format of the URL?
The button should launch in a new page/new tab by default.
Experiments and ScoreSets should be able to have multiple DOIs linked to them. The documentation and UI should clarify that these DOIs are for data rather than publications. Papers will be added by PMID (see #18).
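If multiple DOIs are accepted per record, a basic syntactic check could use the widely cited pattern for modern Crossref DOIs. This is a sketch; it will not catch every legacy DOI.

```python
import re

# Pattern recommended by Crossref for matching modern DOIs:
# "10." + 4-9 digit registrant code + "/" + suffix.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/[-._;()/:A-Za-z0-9]+$")

def is_valid_doi(doi):
    """Accept a bare DOI string (no "doi:" or URL prefix)."""
    return bool(DOI_PATTERN.match(doi.strip()))
```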
The login button/link only works on the main page, but not while viewing experiments or scoresets. Clicking on the ORCID button results in the following error message:
The requested URL /experiment/EXP000001A/accounts/login/ was not found on this server.
The message shown when a Score Set is created is unintuitive. It should clarify that "processing" means the scores and counts are still being entered into the database. Otherwise it looks like everything worked but there are no counts and scores.
ScoreSets should not display the SRA identifiers field.
There should be a user list and a page to view all (public) datasets associated with each user. Users should only appear on this list if they appear on at least one public dataset.
UniProt, offset, etc. are not filled out when choosing an existing target on creation of a new ScoreSet.
Add collapse all button to preview an abstract for all experiments/scoresets on a page
The GitHub README is woefully out of date. It needs to be updated with the current requirements, the new manage.py commands that have to be run on server setup, and a pointer to the CentOS 6 document.
Alternatively, the CentOS 6 markdown document could get merged into the GitHub README under a "setup on CentOS 6" section or similar.
There should be an obvious button to add a score set after adding an experiment.
To avoid the loaded connotation of authorship, I suggest renaming authors as "contributors" in the UI.
When trying to upload a scoreset with a csv file that uses quotes around strings I got an error saying that the HGVS column was missing. Removing the quotes around the strings in the first row works.
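This suggests the header row is being parsed with a naive comma split rather than a CSV parser. Python's csv module strips the quotes correctly, as in this sketch (column names are illustrative):

```python
import csv
import io

def read_header(upload_text):
    """Parse the first row of an uploaded CSV, honouring quoted fields."""
    reader = csv.reader(io.StringIO(upload_text))
    return next(reader)

quoted = '"hgvs","score","SE"\nc.1A>G,1.0,0.1\n'
# csv.reader yields unquoted field names...
print(read_header(quoted))
# ...whereas a naive split leaves the quotes in, so "hgvs" != hgvs:
print(quoted.splitlines()[0].split(","))
```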
The current accession prefixes (EXPS, EXP, SCS) should be replaced with a single MaveDB-specific prefix, since the type of entity is already encoded in the accession number format. This will make it easy to identify MAVEDB records in future publications.
I'm in favor of MAVE or MAVEDB as a prefix. Existing entities would be renamed, and I think the database is still small enough to get away with it without requiring potentially complicated backwards compatibility.
@jweile Comments on this idea or suggestions for a good prefix?
python manage.py createlicenses should generate some output ("Added N licenses" or similar) so that it's clear that it worked.
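The core of that change might look like the following sketch. In the real command this logic would live in handle() and write through self.stdout; the license list here is illustrative.

```python
# Hypothetical license fixtures; the real command defines its own set.
LICENSES = [
    ("CC BY-NC-SA 4.0", "https://creativecommons.org/licenses/by-nc-sa/4.0/"),
    ("CC0 1.0", "https://creativecommons.org/publicdomain/zero/1.0/"),
]

def create_licenses(existing):
    """Upsert licenses into `existing` (a stand-in for the licence table)
    and return the store plus a summary message to print, so the command
    never exits silently."""
    added = 0
    for name, url in LICENSES:
        if name not in existing:
            existing[name] = url
            added += 1
    return existing, f"Added {added} licenses ({len(existing)} total)"
```

Running the command a second time would then report "Added 0 licenses", which also makes it obvious the command is idempotent.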
Value-added tools will need access to experiment metadata, especially:
Add collapse buttons to preview an abstract for an experiment/scoreset
Contributors with the Editor role should not be able to publish Score Sets.
The "Access Denied" message displayed when trying to access a private entry refers to the controlling user by ORCID number rather than name. This should be replaced with the user's name as a link to their ORCID page.
Public vs. private ScoreSet and Experiment tiles should be color coded or otherwise more obvious.
When adding administrators or viewers to an existing entry, the autocompletion is done by ORCID rather than by name, which is much less intuitive.
After finishing the upload of a scoreset, when trying to navigate to the page of the new scoreset I get a "500 Server Error".
See http://ec2-13-210-169-246.ap-southeast-2.compute.amazonaws.com/scoreset/urn:mavedb:00000001-a-1/
Score sets should be licensed using one of the following two options:
Right now the Score Set licence is not shown on the Score Set page but this is critical information! It seems like it's only viewable through the API.
We could consider adding graphics corresponding to the licences next to the MaveVis and download buttons since CC has nice banner icons for us to use (see below), but a text field is fine for now.
https://mirrors.creativecommons.org/presskit/buttons/88x31/svg/by-nc-sa.svg
https://mirrors.creativecommons.org/presskit/buttons/88x31/svg/cc-zero.svg
The tabs when viewing an Experiment are titled "Information", "Scores Sets", and "Reference Maps". The second should be "Score Sets".
Target names double up in the search results even if they are identical.
While it is possible to link to the publication in the abstract field, an explicit PMID field that automatically obtains citation information and displays a clickable link to the paper would be useful.
Columns in the ScoreSet preview should be displayed in scientific notation with 3(?) significant figures.
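In Python, three significant figures in scientific notation is the .2e format spec (one digit before the decimal point, two after), as in this sketch of a preview formatter. The NA handling is an assumption.

```python
def format_score(value):
    """Format a preview cell in scientific notation, 3 significant figures."""
    if value is None or value != value:  # None, or NaN (NaN != NaN)
        return "NA"
    return f"{value:.2e}"

print(format_score(0.000123456))  # 1.23e-04
print(format_score(-12345.6))     # -1.23e+04
```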
The short description field only shows up in the table view, not when viewing it from the profile or on its own page.
There should be an edit button on the Score Set and Experiment pages for admin contributors.
It would be helpful for debugging, now and in the future, to have the GitHub branch and commit number of the build running on the server displayed in a small font in the footer of each page. This could be turned off in production, or just made very subtle.
When the data doesn't validate during score set submission, the keywords get cleared when the page reloads.
Labels on the data fields are "Scores data" and "Counts data". Should be "Score data" and "Count data".
The counts field is marked with an asterisk, indicating that it's required, but counts should be optional.
Standard boilerplate "* indicates a required field" does not appear on this page, may be missing from others as well.
The search result table misspells "Accession" as "Accesion".
Every time a new GET request is made on the Score Set page, the current page number of the table that is not being viewed gets reset. This should be fixed so that table rows can be compared easily.
Contributors are currently identified by their first and last names. However, ORCID supports a "published name" field that we should use if it's filled out.
Score sets should by default be populated with tags from the associated experiment.
Counts and scores should not be required to have the same number of rows.
The repository has a blank license file. We need to figure out whether to choose GPL (AGPL?), MIT, or something else.
Submit Experiment and Submit ScoreSet pages should have a "Cancel" button instead of just "Submit".
Right now, the user has to go to their profile page and click the menu to upload a new dataset. There should be a new option ("Upload"? "Contribute"?) between Profile and Logout that makes this easier to find.
Method descriptions, etc. need a preview button. This could open a new window or display below the active field, and would be very helpful for the math in particular.
Right now there are empty spots for "Terms and conditions" and "Privacy". We need to figure out what goes here. I'm assuming that this is dictated by Washington state law, since that is where the server is hosted.