repronim / brainverse

BrainVerse is an electronic laboratory notebook built as an open-source, cross-platform desktop application to help researchers manage, track and share information in a comprehensive format.


brainverse's Introduction

ReproMan


ReproMan aims to simplify the creation and management of computing environments in neuroimaging. While it concentrates on neuroimaging use cases, it is by no means limited to that field of science, and its tools should find utility in other fields as well.

Status

ReproMan is under rapid development. While the code base is still growing, the focus is increasingly shifting towards robust and safe operation with a sensible API. There has been no major public release yet, as organization and configuration are still subject to considerable reorganization and standardization.

See CONTRIBUTING.md if you are interested in internals and/or contributing to the project.

Installation

ReproMan requires Python 3 (>= 3.8).

Linux and macOS (Windows support is still TODO) - via pip

By default, installation via pip (pip install reproman) installs the core functionality of reproman, allowing you to manage datasets etc. Additional installation schemes are available, so you can request an enhanced installation via pip install 'reproman[SCHEME]', where SCHEME can be

  • tests to also install the dependencies used by reproman's battery of unit tests
  • full to install all possible dependencies, e.g. DataLad

Installation through pip does not provide some external dependencies (e.g. docker, singularity, etc.); for those, please refer to the next section.

Debian-based systems

On Debian-based systems we recommend enabling NeuroDebian, from which we will soon provide recent releases of ReproMan, along with backports of all necessary packages.

Dependencies

Python 3.8+ with header files, which may be needed to build some extensions that lack wheels. They are provided by python3-dev on Debian-based systems and python-devel on Red Hat systems.

Our setup.py and the corresponding packaging describe all necessary Python dependencies. On Debian-based systems we recommend enabling NeuroDebian, since we use it to provide backports of recent fixes to the external modules we depend upon. Additionally, if you would like to develop and run our test battery, see CONTRIBUTING.md regarding additional dependencies.

A typical workflow for reproman run

This example is heavily based on the "Typical workflow" example created for ///repronim/containers, which we refer you to for more about the YODA principles etc. In this reproman example we pursue exactly the same goal -- running MRIQC on a sample dataset -- but this time utilizing ReproMan's ability to run the computation remotely. DataLad and ///repronim/containers are still used for data and container logistics, while reproman establishes a small HTCondor cluster in the AWS cloud, runs the analysis, and fetches the results.

Step 1: Create the HTCondor AWS EC2 cluster

If this is the first time you are using ReproMan to interact with AWS cloud services, you should first provide ReproMan with the secret credentials needed to talk to AWS. To do so, edit its configuration file (~/.config/reproman/reproman.cfg on Linux, ~/Library/Application Support/reproman/reproman.cfg on macOS):

[aws]
access_key_id = ...
secret_access_key = ...

filling in the ...s with your keys. If reproman fails to find this information, it will exit with the error message Unable to locate credentials.

Disclaimer/Warning: Never share or post those secrets publicly.

Run the following (this needs to be done only once; it makes the resource available for reproman login or reproman run):

reproman create aws-hpc2 -t aws-condor -b size=2 -b instance_type=t2.medium

to create a new ReproMan resource: 2 AWS EC2 instances, with HTCondor installed (we use NITRC-CE instances).

Disclaimer/Warning: It is important to monitor your cloud resources in the cloud provider's dashboard(s) to ensure there are no runaway instances etc., to help avoid incurring heavy costs for cloud services.

Step 2: Create analysis DataLad dataset and run computation on aws-hpc2

The following script is an exact replica of the one from ///repronim/containers, except that the datalad containers-run command (which fetches data and runs the computation locally and serially) is replaced with reproman run, which publishes the dataset (without data) to the remote resource, fetches the data there, runs the computation via HTCondor in parallel across the 2 nodes, and then fetches the results back:

#!/bin/sh
(  # so it could be just copy pasted or used as a script
PS4='> '; set -xeu  # to see what we are doing and exit upon error
# Work in some temporary directory
cd "$(mktemp -d "${TMPDIR:-/tmp}/repro-XXXXXXX")"  # quoted to survive spaces in TMPDIR
# Create a dataset to contain mriqc output
datalad create -d ds000003-qc -c text2git
cd ds000003-qc
# Install our containers collection:
datalad install -d . ///repronim/containers
# (optionally) Freeze container of interest to the specific version desired
# to facilitate reproducibility of some older results
datalad run -m "Downgrade/Freeze mriqc container version" \
    containers/scripts/freeze_versions bids-mriqc=0.16.0
# Install input data:
datalad install -d . -s https://github.com/ReproNim/ds000003-demo sourcedata
# Setup git to ignore workdir to be used by pipelines
echo "workdir/" > .gitignore && datalad save -m "Ignore workdir" .gitignore
# Execute desired preprocessing in parallel across two subjects
# on remote AWS EC2 cluster, creating a provenance record
# in git history containing all condor submission scripts and logs, and
# fetching them locally
reproman run -r aws-hpc2 \
   --sub condor --orc datalad-pair \
   --jp "container=containers/bids-mriqc" --bp subj=02,13 --follow \
   --input 'sourcedata/sub-{p[subj]}' \
   --output . \
   '{inputs}' . participant group -w workdir --participant_label '{p[subj]}'
)

The ReproMan: Execute documentation section provides more information on the underlying principles behind the reproman run command.

Step 3: Remove resource

Once everything is computed and fetched, and you are satisfied with the results, use reproman delete aws-hpc2 to terminate the remote cluster in AWS, so that it does not incur unnecessary charges.

License

MIT/Expat

Disclaimer

ReproMan is in a beta stage -- the majority of the functionality is usable, but documentation and API enhancements are a work in progress. Please do not be shy about filing an issue or a pull request. See CONTRIBUTING.md for guidance.

brainverse's People

Contributors

derrydchen, djarecka, mgxd, sanuann, satra, smpadhy


brainverse's Issues

Finding alias for a new data element

While creating a new data element, it may be similar to an existing one, so a list of alias suggestions could be provided. If the new data element is then created, the list will be added to its alias field.
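One way to produce such suggestions is simple string similarity against existing element names. The sketch below uses a bigram Dice coefficient; `suggestAliases`, the threshold, and the helper names are illustrative assumptions, not part of the BrainVerse codebase.

```javascript
// Break a string into overlapping character bigrams (case-insensitive).
function bigrams(s) {
  const t = s.toLowerCase();
  const out = [];
  for (let i = 0; i < t.length - 1; i++) out.push(t.slice(i, i + 2));
  return out;
}

// Dice coefficient: 2 * |shared bigrams| / (|A| + |B|), in [0, 1].
function diceSimilarity(a, b) {
  const A = bigrams(a), B = bigrams(b);
  if (A.length === 0 || B.length === 0) return 0;
  const counts = new Map();
  for (const g of A) counts.set(g, (counts.get(g) || 0) + 1);
  let overlap = 0;
  for (const g of B) {
    const c = counts.get(g) || 0;
    if (c > 0) { overlap++; counts.set(g, c - 1); }
  }
  return (2 * overlap) / (A.length + B.length);
}

// Return existing element names similar enough to offer as alias suggestions.
function suggestAliases(newName, existingNames, threshold = 0.5) {
  return existingNames
    .map(name => ({ name, score: diceSimilarity(newName, name) }))
    .filter(x => x.score >= threshold)
    .sort((a, b) => b.score - a.score)
    .map(x => x.name);
}
```

A fuzzier matcher (e.g. Levenshtein distance) could be substituted without changing the interface.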

Add Field validation

Use the field's 'type' (e.g., string, number, or any other variety) to validate the input entered by the user.
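A minimal sketch of what such type-driven validation could look like; the validator map, the field shape ({ type, required, size }), and the message strings are assumptions for illustration, not BrainVerse's actual data model.

```javascript
// Per-type checks keyed by the data element's 'type' field.
const validators = {
  String: v => typeof v === 'string',
  Integer: v => /^-?\d+$/.test(String(v)),
  Float: v => /^-?\d+(\.\d+)?$/.test(String(v)),
  Date: v => !Number.isNaN(Date.parse(v)),
};

// Returns null when the value is valid, otherwise an error message.
function validateField(field, value) {
  if (value === '' || value == null) {
    return field.required ? 'This field is required' : null;
  }
  const check = validators[field.type];
  if (check && !check(value)) return `Expected a value of type ${field.type}`;
  if (field.size && String(value).length > field.size) {
    return `Value exceeds maximum size of ${field.size}`;
  }
  return null;
}
```

The same function could later grow checks for other attributes (e.g. valueRange), addressing the broader "Validation of data elements" issue below.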

Add logger to BrainverseApp

Explored the different logging packages that exist, e.g. winston, bunyan, npmlog, log4js.
For Express.js, winston or bunyan is recommended.
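Whichever package is chosen, the app mostly needs a leveled, structured logger behind a small interface. The plain-Node sketch below illustrates that interface (the `createLogger` name and JSON line format mirror what winston/bunyan provide, but this is an illustrative stand-in, not either library's API):

```javascript
// Numeric severity per level; messages above the configured threshold are dropped.
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };

// Minimal structured logger: one JSON object per line, injectable sink for testing.
function createLogger({ level = 'info', write = console.error } = {}) {
  const threshold = LEVELS[level];
  const log = lvl => (msg, meta = {}) => {
    if (LEVELS[lvl] <= threshold) {
      write(JSON.stringify({
        time: new Date().toISOString(),
        level: lvl,
        msg,
        ...meta,
      }));
    }
  };
  return { error: log('error'), warn: log('warn'), info: log('info'), debug: log('debug') };
}
```

Swapping this for winston later would mainly mean replacing `createLogger` while keeping call sites (`logger.info(...)` etc.) unchanged.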

Preview form display

The preview form could be shown as a modal that pops out. With the current implementation, it should align with the field-selection div container.

Parsing NDA Data Element fields

So far we have been parsing two fields - ValueRange and Notes. We need to parse/add the other field details (e.g. Translation, size) while allowing editing of a data element. In addition, for several forms, ValueRange and Notes are not being parsed properly.

NDA Updated Form Versioning

Each NDA data dictionary has a 'shortName' field which is unique and encodes some version information. With the NDA Editor allowing forms to be edited, the challenge is how to version the updated form and preserve its provenance information.
As a first step, we added a new term, 'derivedFrom', denoting the original form's shortName from which the new form is derived. The new shortName for the updated form is the old shortName followed by a random number. However, we need a more concrete scheme: as we start updating original NDA forms more collaboratively, versioning plays a crucial role.
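The interim scheme described above can be sketched as follows; the `deriveForm` function and form object shape are hypothetical names for illustration, not the actual BrainVerse code.

```javascript
// Derive an editable copy of an NDA form: record provenance in 'derivedFrom'
// and mint a new shortName by appending a random numeric suffix.
// (This is exactly the interim scheme the issue flags as too weak long-term:
// random suffixes carry no ordering or version semantics.)
function deriveForm(original) {
  const suffix = Math.floor(Math.random() * 1e6);
  return {
    ...original,
    shortName: `${original.shortName}${suffix}`,
    derivedFrom: original.shortName,
  };
}
```

A more concrete scheme might replace the random suffix with a monotonically increasing revision number or a content hash, so that concurrent edits of the same original remain distinguishable and orderable.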

[EP] 'Add Participant' to an experiment plan

  • Add a participant to a project plan.
  • Customize the plan for the participant.
  • Add an 'execute' button on a participant's plan board.
  • Check a checkbox when the task/instrument is done.

Checking the draft changes of data structure using NDA API

Even though we pull the data dictionary from NDA using the NDA REST API before every edit or reuse, per David's feedback there is a chance that by the time a pull request is submitted and accepted, draft changes might have been made to the form. We should check the hash of the form before creating a pull request, to make sure we are making changes against the original form. This is related to issue #75.

Improving the UI of the Editor interface

There is a lot of scrolling in the editor interface. We could improve on it.

  • Once the file is saved, a message should pop up confirming that the file has been saved.

Options parsing for select/radio buttons

The radio-button style and options are not showing as desired.
For example, in grit01, Grit Total Score does not display all 5 options. If we look at the term (shown below for reference), the 'valueRange' field has 5 values while 'notes' has 2. Though this is easy for a human to understand, the machine needs explicit parsing rules for such cases.
One solution might be to check whether the number of options specified in the 'valueRange' field matches the 'notes' field; if it does not, whichever field provides more options should be used, and the other field's information can go into the tooltip.

{
  "id": 2081184,
  "required": "Required",
  "condition": null,
  "aliases": [],
  "filterElement": "1",
  "position": 18,
  "dataElementId": 1289666,
  "name": "grit_total",
  "type": "Float",
  "size": null,
  "description": "Grit Total score",
  "valueRange": "1::5",
  "notes": "1=not at all gritty; 5=extremely gritty",
  "translations": []
}
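The reconciliation rule proposed above can be sketched like this. The parsing conventions ("a::b" ranges in valueRange, ";"-separated "value=label" pairs in notes) are inferred from the example term; the helper names are illustrative, not existing BrainVerse functions.

```javascript
// Expand a valueRange such as "1::5" into its individual values;
// otherwise treat it as a ";"-separated enumeration.
function parseValueRange(valueRange) {
  const range = valueRange.match(/^(-?\d+)::(-?\d+)$/);
  if (range) {
    const lo = Number(range[1]), hi = Number(range[2]);
    return Array.from({ length: hi - lo + 1 }, (_, i) => String(lo + i));
  }
  return valueRange.split(';').map(s => s.trim()).filter(Boolean);
}

// Parse notes such as "1=not at all gritty; 5=extremely gritty"
// into { value, label } pairs.
function parseNotes(notes) {
  return notes.split(';')
    .map(s => s.trim())
    .filter(s => s.includes('='))
    .map(s => {
      const [value, ...rest] = s.split('=');
      return { value: value.trim(), label: rest.join('=').trim() };
    });
}

// Rule from the issue: render whichever field yields more options,
// and move the other field's information into the tooltip.
function optionsFor(term) {
  const fromRange = parseValueRange(term.valueRange || '');
  const fromNotes = parseNotes(term.notes || '');
  if (fromRange.length >= fromNotes.length) {
    return { options: fromRange, tooltip: term.notes };
  }
  return {
    options: fromNotes.map(o => `${o.value}=${o.label}`),
    tooltip: term.valueRange,
  };
}
```

For the grit_total term above, this yields the five values 1..5 as radio options, with the two anchor labels from 'notes' shown as a tooltip.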

Acquisition Form save button

The Acquisition Form's 'save' button allows saving the form even when mandatory input fields have not been entered.
The save button should be disabled until form validation passes.
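The gating logic itself is small; a sketch under the assumption that fields are collected as { required, value } objects (the function and field names are hypothetical):

```javascript
// True only when every mandatory field has a non-empty value.
function canSave(fields) {
  return fields.every(f => !f.required || (f.value != null && f.value !== ''));
}

// In the renderer this would be re-evaluated on every 'input' event,
// e.g.: saveButton.disabled = !canSave(collectFields(form));
```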

Buttons Alignment in nda import

In the NDAImport form, the buttons (select all, clear, preview form, save, go to main page, fill up form) need to be rearranged; right now they stack up.

Update README

  • App configuration information
  • App Packaging info
  • Node installation
  • Accessing as Webapp

Validation of data elements

Currently we have minimal validation, based on the required flag and field type. We need to enhance validation based on other fields such as size.
We could also use a validation API (from David's demo) or a standalone package, if one is provided (similar to Hauke's RedCap importer), to improve this.

Create a single .ttl file for saving experiment plans

Currently, a new turtle file is created for each new experiment plan submitted. Each plan is represented as an RDF graph. We decided to serialize all plans to a single file and load/query the graph as required.

Beck Depression Inventory preview form failing

When one selects all fields of the Beck Depression Inventory and clicks on preview form, it fails to show the preview. However, if one selects only a few fields, the preview works fine.
@derry0822 any idea on this?

Add parser for an OWL/turtle

Given a NIDM experiment in OWL/Turtle/JSON-LD file format, the parser needs to parse the file to obtain the field names for the forms.

Unable to delete primary packaged folder

  • On macOS, authentication is required to delete the created package folder, and even then the folder is not deleted.
  • ADVISORY WARNING: so far the only way to delete it is from the console: rm -rf

Score calculation data element

Currently, while previewing the form, fields whose values need to be calculated from previous fields are shown without any indication that they are calculated, or of the dependency structure.
We need attributes to indicate the dependency structure. Per Hauke's presentation, we could store score calculations (or any transform) separately and then use a graphical workflow interface to specify a score calculation over a group of fields.
We need more discussion to settle on the model.
