
mavedb-api

API for MaveDB. MaveDB is a biological database for Multiplex Assays of Variant Effect (MAVE) datasets. The API powers the MaveDB website at mavedb.org and can also be called separately (see instructions below).

For more information about MaveDB or to cite MaveDB please refer to the MaveDB paper in Genome Biology.

Using mavedb-api

Using the library as an API client or validator for MaveDB data sets

Simply install the package using pip:

pip install mavedb

Or add mavedb to your Python project's dependencies.

Building and running mavedb-api

Prerequisites

  • Python 3.9 or later
  • pip
  • Poetry for building and publishing distributions. For details on installing poetry, consult its documentation.

Building distribution packages

To build the source distribution and wheel, run

poetry build

Poetry reads pyproject.toml to build the distributions. Note that it will output the build artifacts to ./dist by default.

The distribution can be uploaded to PyPI using Poetry as well. After building the package, simply invoke

poetry publish -r pypi -u <username> -p <password>

To build and publish the package in one go, just pass the --build flag to the publish command.

For use as a server, this distribution includes an optional set of dependencies, which are only installed if the package is installed with the server extra (e.g. poetry install --extras server).

Running a local version of the API server

First build the application's Docker image:

docker build --tag mavedb-api/mavedb-api .

Then start the application and its database:

docker-compose -f docker-compose-local.yml up -d

Omit -d (detached mode) if you want to run the application in your terminal session, for instance to see startup errors without having to inspect the Docker container's log.

To stop the application when it is running as a daemon, run

docker-compose -f docker-compose-local.yml down

docker-compose-local.yml configures four containers: one for the API server, one for the PostgreSQL database, one for the worker node, and one for the Redis cache, which acts as the job queue for the worker node. The worker node stores data in a Docker volume named mavedb-redis, and the database stores data in a Docker volume named mavedb-data. Both of these volumes persist after running docker-compose down.

Notes

  1. The mavedb-api container requires the following environment variables, which are configured in docker-compose-local.yml:

    • DB_HOST
    • DB_PORT
    • DB_DATABASE_NAME
    • DB_USERNAME
    • DB_PASSWORD
    • NCBI_API_KEY
    • REDIS_IP
    • REDIS_PORT

    The database username and password should be edited for production deployments. NCBI_API_KEY will be removed in the future. TODO: move these to an .env file.
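As a sketch of the .env file mentioned in the TODO above, it could hold the same variables (all values below are placeholders, not real settings):

```
DB_HOST=localhost
DB_PORT=5432
DB_DATABASE_NAME=mavedb
DB_USERNAME=changeme
DB_PASSWORD=changeme
NCBI_API_KEY=
REDIS_IP=localhost
REDIS_PORT=6379
```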

Running the API server in Docker for development

A similar procedure can be followed to run the API server in development mode on your local machine. There are a couple of differences:

  • Your local source code directory is mounted to the Docker container, instead of copying it into the container.
  • The Uvicorn web server is started with a --reload option, so that code changes will cause the application to be reloaded, and you will not have to restart the container.
  • The API uses HTTP, whereas in production it uses encrypted communication via HTTPS.

To start the Docker container for development, make sure that the mavedb-api directory is allowed to be shared with Docker. In Docker Desktop, this can be configured under Settings > Resources > File sharing.

To start the application, run

docker-compose -f docker-compose-dev.yml up --build -d

Docker integration can also be configured in IDEs like PyCharm.

Running the API server directly for development

Sometimes you may want to run the API server outside of Docker. There are two ways to do this:

Before using either of these methods, configure the environment variables described above.

  1. Run the server_main.py script. This script will create the FastAPI application, start an instance of Uvicorn, and pass the application to it.

     export PYTHONPATH=${PYTHONPATH}:"`pwd`/src"
     python src/mavedb/server_main.py

  2. Run Uvicorn and pass it the application. This method supports auto-reloading on code changes.

     export PYTHONPATH=${PYTHONPATH}:"`pwd`/src"
     uvicorn mavedb.server_main:app --reload

If you use PyCharm, the first method can be used in a Python run configuration, but the second method supports PyCharm's FastAPI run configuration.

Running the API server for production

We maintain deployment configuration options and steps within a private repository used for deploying this source code to the production MaveDB environment. The main difference between the production setup and these local setups is that the worker and api services are split into distinct environments, allowing them to scale up or down individually depending on need.


mavedb-api's Issues

Structured metadata for column descriptions

Currently score sets support arbitrarily-named data columns. It is recommended that uploaders describe these columns in the free-text methods section, but this is not portable or easily accessible to someone who is downloading data via API.

A useful future feature would be to scan the data table on upload and prompt the uploader to provide a brief description of each column. This can be as simple as "score for replicate 1" or "standard deviation" but it will give us something more structured to send along with the associated data.
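One lightweight way to implement the prompt step described above, sketched here with illustrative names (none of this is current API):

```python
def check_column_descriptions(header, descriptions):
    """Return the data columns that still need a description.

    `header` is the column list from the uploaded score table and
    `descriptions` maps column name -> the uploader's free-text description.
    The upload flow could call this and prompt the uploader for anything
    that is returned.
    """
    return [
        col for col in header
        if not descriptions.get(col, "").strip()
    ]

# Example using the brief descriptions suggested above:
missing = check_column_descriptions(
    ["hgvs_pro", "score", "err", "ambler"],
    {"score": "score for replicate 1", "err": "standard deviation"},
)
```

The returned list gives the UI something concrete to iterate over when prompting, and the collected descriptions can be stored alongside the score set as structured metadata.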

Target sequence nomenclature

The MaveDB API uses the now-outdated term wt_sequence instead of target_sequence inside the models.

This should be updated to use the more generic target_sequence throughout the codebase, since many of the target sequences in the database are not "wild type" in that they don't occur naturally.

Supporting preprints

Currently MaveDB only formats publication information for PubMed IDs, but many of our datasets are released as preprints first. The only way users can enter preprint information is as a DOI for extra data, which is far from ideal.

We should support bioRxiv and medRxiv preprints the same way that we support PubMed IDs (e.g., nice formatting and availability to search as described in #9).

This will require that we can validate a bioRxiv identifier (noting that they have had multiple versions over the years) and that we can retrieve the relevant information from the bioRxiv API.

Additionally, the MaveDB server should periodically check to see if a preprint has been published in a journal, and provide that information as a note on the dataset (but should not replace the reference).
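A cheap format check could run before hitting the bioRxiv API for full validation. The pattern below is a sketch based on the publicly documented identifier formats (older DOIs like 10.1101/123456, newer date-based DOIs like 10.1101/2019.12.11.872341, optionally with a version suffix); it should be verified against real records before being relied on:

```python
import re

# bioRxiv/medRxiv DOIs use the 10.1101 prefix. The two alternatives cover
# the older six-digit form and the newer date-based form; an optional
# trailing "v2"-style version marker is accepted.
_BIORXIV_DOI = re.compile(
    r"^10\.1101/(?:\d{6}|\d{4}\.\d{2}\.\d{2}\.\d{6})(?:v\d+)?$"
)

def looks_like_biorxiv_doi(identifier: str) -> bool:
    """Quick syntactic check; a match still needs confirming via the bioRxiv API."""
    return _BIORXIV_DOI.match(identifier.strip()) is not None
```

A match would then be confirmed (and its metadata fetched) through the bioRxiv API, keeping the regex purely as a first-pass filter.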

Handling differences between target sequences and reference sequences

MaveDB allows users to specify accession numbers from major genomic databases (Ensembl, RefSeq, UniProt) when depositing a target sequence. As we develop a validation framework for these accession numbers, it will be important to handle cases where a target sequence is similar but not identical to the reference sequence.

There are many cases where this is useful. For example, one of the TP53 datasets in MaveDB was performed on a non-reference allele (see: https://mavedb.org/#/experiment-sets/urn:mavedb:00000068). To address this, the target was entered as "TP53 (P72R)" (e.g. for https://mavedb.org/#/score-sets/urn:mavedb:00000068-a-1).

If we wanted to associate this target with a transcript from RefSeq we could:

  • State that there is no match
  • Choose the closest transcript
  • Choose the closest transcript and document any differences between the target and the transcript

Of these, it seems that the last option is clearly the best one.

We should be able to do this in the API by adding associated VRS objects that describe the differences between the given reference sequence and the target sequence. From there we can build the necessary UI elements to convey this information to the user concisely.

Storing and searching on publication data

Users should be able to search for MaveDB entries based on the authors on the associated publications. Currently the only people associated with a given record are the people who were involved in the upload, which has limited utility.

Other fields associated with the publications should also be available to search, such as title, abstract, year range, etc. We should store all the metadata from the PubMed/preprint records locally and make it available for search using the website search page or API.

Score sets with genomic coordinates and no target

To better support saturation genome editing (SGE) data and also support base editor/prime editing screens, we should add a new type of score set that doesn't require the user to upload a target sequence and instead includes fully-qualified genomic coordinates for the variants.

The variants should still use mavehgvs-style strings, but include a prefix that is either a fully-qualified chromosome or a transcript identifier from RefSeq, with version number (e.g. NC_000012.12 for chr12 or NM_007294.4 for the BRCA1 transcript sequence). Protein identifiers and datasets with only protein variants would not be permitted.

This will require implementing a genome-aware validation process on the server. We can use an existing software tool for this function.

We can implement this as either a special case of score set, or as a new type of score set entirely. It's worth thinking through the implications of each of these approaches.

It should also be possible to create a meta-analysis score set of this type that is based on one or more "normal" score sets that contain target sequences. I think this will have a lot of utility for clinically-focused re-annotation of existing MAVE data.

Error notifications

Add a mechanism for notifying team members when an error occurs.

Requirements

  1. Ensure that uncaught exceptions that otherwise reach FastAPI are logged with level error.
  2. Send a notification to Slack whenever a message is logged with level error or critical.

In some deployment scenarios, an external mechanism for sending notifications about log messages might be the preferred solution. But in our current production environment, direct support for Slack notifications will be helpful.
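Requirement 2 can be met with a standard logging.Handler set to the ERROR level. In the sketch below the transport is an injected callable rather than a hard-coded Slack call, since the webhook URL and payload format are deployment details not specified here; in production the callable would POST the formatted message to a Slack incoming webhook:

```python
import logging

class SlackNotificationHandler(logging.Handler):
    """Forward ERROR and CRITICAL log records to a notifier callable.

    The notifier is injected so tests (and non-Slack deployments) can supply
    their own transport; wiring it to a real Slack webhook is left to the
    deployment configuration.
    """

    def __init__(self, notify):
        # level=ERROR means WARNING and below are ignored by this handler.
        super().__init__(level=logging.ERROR)
        self.notify = notify

    def emit(self, record):
        try:
            self.notify(self.format(record))
        except Exception:
            self.handleError(record)

# Usage sketch: collect messages in a list instead of sending them to Slack.
sent = []
logger = logging.getLogger("mavedb.sketch")
logger.addHandler(SlackNotificationHandler(sent.append))
logger.error("uncaught exception in request handler")
logger.info("routine message")  # below ERROR, not forwarded
```

Attaching the handler to the root logger (rather than a named one) would satisfy requirement 1's goal of catching anything that reaches FastAPI's exception logging.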

Upload invalid score file problem

For example, if the uploaded score file is

hgvs_pro, score, err, ambler
p.Met1Ala, abc, 0.000294291714898736, 3.0
p.Met1Cys, 0.0030227552110695657, 0.000521690342044468, 3.0

the value abc is converted to nan (a float), but the next value in that column, 0.0030227552110695657, is left as a string and raises an error.
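One possible fix, sketched below (not the actual validator code), is to coerce the whole column in one pass so every null token and every unparseable value is handled the same way, leaving a uniformly float-typed column:

```python
import math

# Null tokens mirror the null_values_list constant used elsewhere in the repo.
NULL_TOKENS = {"", "na", "nan", "none", "n/a", "null", "nil", "-", "undefined"}

def coerce_score_column(values):
    """Parse a column of score strings consistently.

    Recognized null tokens and unparseable values both become NaN, so the
    column ends up all-float instead of the float/str mix described in the
    issue above.
    """
    out = []
    for raw in values:
        token = raw.strip().lower()
        if token in NULL_TOKENS:
            out.append(math.nan)
            continue
        try:
            out.append(float(raw))
        except ValueError:
            out.append(math.nan)
    return out

column = coerce_score_column(["abc", "0.0030227552110695657"])
```

Whether "abc" should silently become NaN or instead be reported back to the uploader as an error is a policy choice; either way, deciding it per-column rather than per-value avoids the mixed-type failure.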

Rename SraIdentifier

The Experiment field name SraIdentifier is used to store the accession number for raw read data. Originally, this only supported accessions from SRA, but it has since expanded and become more general.

Proposed new name is RawReadIdentifier, RawDataIdentifier or similar.

Improving usage statistics

To demonstrate the impact of MaveDB, we are often called upon to provide detailed usage statistics. However, our current ability to collect this important information is limited.

We don't want to track individual users, but we should be collecting the following aggregated information:

  • Number of page views
  • Number of unique users
  • Number of downloads for each dataset
  • Number of external API requests
  • Other things?

We'll want to collect these statistics by date and by country so that we can show that we have an international user base and also produce breakdowns over various time periods.

Creating dump files for devs

It would be useful to have a better process for making dump files for development.

Currently, team members typically use a full dump of the production database for local dev and testing work. This is straightforward, but has two problems:

  • We cannot share these dump files with external collaborators since they contain non-public datasets, as well as user-supplied email addresses
  • The dump files can be quite large and a subset of real data is sufficient for most development and testing work

To address this, we should develop a new process for generating a dump file that includes only a subset of public data that is suitable for distribution to collaborators outside the core team.

The dump file will have to be re-generated whenever the database schema changes, and should be versioned with the software version it's compatible with. It might make sense to re-generate the dev dump file with each mavedb-api release, but this is probably not necessary for minor releases that don't touch the database schema.

License text shouldn't be included in the API

Requesting a score set currently includes the full license text, which doesn't seem necessary especially since we're already providing a web link to the license. This field should be kept internally but excluded from the API response.

Should extra metadata be optional?

The extra_metadata field is required in score sets and possibly elsewhere. If this is just for the extra experimental metadata that users can upload just in case they have additional structured data, it should probably be optional. Is it used for any other purposes?

Running the tests requires psycopg2

Trying to run the tests requires psycopg2, but ideally this should be optional for developers who don't want to run a local server.

ImportError while loading conftest '/home/afrubin/Projects/MAVEs/mavedb-api/tests/conftest.py'.
tests/conftest.py:7: in <module>
    from mavedb.server_main import app
src/mavedb/server_main.py:15: in <module>
    from mavedb.models import *
src/mavedb/models/experiment.py:9: in <module>
    from mavedb.deps import JSONB
src/mavedb/deps.py:8: in <module>
    from mavedb.db.session import SessionLocal
src/mavedb/db/session.py:16: in <module>
    engine = create_engine(
venv/lib/python3.10/site-packages/sqlalchemy/util/deprecations.py:375: in warned
    return fn(*args, **kwargs)
venv/lib/python3.10/site-packages/sqlalchemy/engine/create.py:548: in create_engine
    dbapi = dialect_cls.dbapi(**dbapi_args)
venv/lib/python3.10/site-packages/sqlalchemy/dialects/postgresql/psycopg2.py:811: in dbapi
    import psycopg2
E   ModuleNotFoundError: No module named 'psycopg2'

Cleaning up old constants

The repository contains quite a few constant definitions held over from the Django MaveDB codebase that are either no longer necessary or best obtained from dependencies. Most of these can be tidied up and removed.

Examples include:

NA_STRING = "NA"
null_values_list = (
    "nan",
    "na",
    "none",
    "",
    "undefined",
    "n/a",
    "null",
    "nil",
    "-",
    None,
)

and the semi-duplicated

NA_VALUE = 'NA'
NULL_VALUES = (
    '',
    'na',
    'nan',
    'nil',
    'none',
    'null',
    'n/a',
    'undefined',
    NA_VALUE,
)

as well as biological information that's included in fqfa like:

aa_dict_key_1 = {
    "A": "Ala", "C": "Cys", "D": "Asp", "E": "Glu", "F": "Phe",
    "G": "Gly", "H": "His", "I": "Ile", "K": "Lys", "L": "Leu",
    "M": "Met", "N": "Asn", "P": "Pro", "Q": "Gln", "R": "Arg",
    "S": "Ser", "T": "Thr", "V": "Val", "W": "Trp", "Y": "Tyr",
    "*": "Ter",
}

Validation of Intronic Variants

Intronic variants are not well supported by HGVS validation, so are assumed to be valid and ingested as provided by the user. We should implement a system that calculates the accompanying genomic coordinates (perhaps with some sort of mapping service) for validation.

Inconsistent API endpoint capitalization

The valid API endpoints for experiment set and score set records are experimentSets and scoresets, respectively.

We should make the case of these consistent. My preference is for all lowercase but either is fine.

I think this new API endpoint is so new that we don't need to worry about making this change backwards-compatible.

API exposing internal id values

The API is returning the internal id field used in the database. We don't want people to rely on this being stable, so it should probably be removed from the response.

This seems to be the case for most (all?) models, including simple models like raw reads identifier and larger models like experiments.

Missing .env.prod File?

I am trying to use the mavedb-api for the first time (so there's a good chance I'm doing something wrong here), and am running into the below error:

(mavedb) bwittmann@bruce-ubuntu:~/GitRepos/mavedb-api$ sudo docker-compose -f docker-compose-prod.yml up
ERROR: Couldn't find env file: /home/bwittmann/GitRepos/mavedb-api/settings/.env.prod

Is the env.prod file supposed to come with the repository, or is this something that I should be configuring myself? The docker build command completed without issue before trying to run with docker-compose.

Uploading datasets with no keywords

It is currently possible to upload a new experiment with no 'keywords' field applied (the ExperimentCreate model permits this), but it causes an exception (TypeError: 'NoneType' object is not iterable) when the keywords are created here:

https://github.com/VariantEffect/mavedb-api/blob/45f9e7248a34b84b057776a98707adcc0c9a2ae9/src/mavedb/routers/experiments.py#LL146C1-L146C54

The server should check to make sure that there are keywords before iterating. This might get resolved as a side effect of overhauling the keyword system with the controlled vocabulary terms.
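A minimal sketch of the guard suggested above: iterate over `keywords or []` so a missing or None field behaves like an empty list. The function shape here is a stand-in, not the real router code:

```python
def collect_keywords(keywords):
    """Normalize an optional list of keyword strings.

    `keywords` may be None (the ExperimentCreate model permits omitting it);
    `keywords or []` makes the loop a no-op in that case instead of raising
    TypeError: 'NoneType' object is not iterable.
    """
    created = []
    for text in keywords or []:
        created.append(text.strip().lower())
    return created
```

The same one-line guard could be applied directly at the iteration site linked above.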

Replace reference genome field with TaxID organism

We should remove the reference genome field and replace it with an entry from the NCBI Taxonomy resource.

The relevant information to store in MaveDB is:

  • Taxonomy ID
  • Species name
  • Common name

We will want to prepopulate the database with relevant Taxonomy IDs and also allow users to enter a different valid Taxonomy ID if needed.

When a new Taxonomy ID is entered, the backend should fetch the details using an eutils API call and display the organism name in the form preview. We can create a new entry in the database once the dataset is submitted. This information can be retrieved using the NCBI Datasets REST API.

The field in the website UI should be able to autocomplete based on the current name, common name, or TaxID. The API should accept the current name, common name, or TaxID as exact matches (see the entry for human as an example - an API call should be able to provide "human" instead of "9606").

For synthetic sequences, we need to find out if Taxonomy has a suitable term for this, or if we need to define our own special term that's MaveDB-specific.

We can handle this issue in two steps:

  • Replace the Reference Genome model with a Target Organism model
  • Add support for adding new Target Organisms
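As a sketch of the eutils lookup step described above, the three fields could be pulled out of an esummary response like this. The JSON field names are assumptions based on the public eutils output for db=taxonomy and should be verified against a live response; the sample payload below is hand-written and abbreviated:

```python
import json

# Abbreviated, hand-written example of an esummary (retmode=json) response
# for db=taxonomy -- not captured from a real API call.
SAMPLE_ESUMMARY = json.dumps({
    "result": {
        "uids": ["9606"],
        "9606": {"scientificname": "Homo sapiens", "commonname": "human"},
    }
})

def parse_taxonomy_summary(payload: str, tax_id: str) -> dict:
    """Extract the three fields MaveDB wants to store from an esummary response."""
    record = json.loads(payload)["result"][tax_id]
    return {
        "tax_id": int(tax_id),
        "species_name": record["scientificname"],
        "common_name": record.get("commonname", ""),
    }

organism = parse_taxonomy_summary(SAMPLE_ESUMMARY, "9606")
```

The parsed record would then seed the form preview, with the database row created only once the dataset is submitted.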

Add contributor information

In the previous version of MaveDB, datasets could have multiple contributing users listed. This was important for managing access permissions for private datasets, but it was also a valuable record of who contributed to the curation of the various entries.

It seems like this information, as well as the ability to add additional contributors to a dataset, may have been lost in the transition from Django to FastAPI.

It's probably not necessary to re-implement the admin/editor/viewer roles, but uploaders should be able to add other MaveDB users to the records they create and have that information be displayed. Once a secondary user has been added, it's fine if they have full edit privileges over the dataset if that makes the implementation easier.

Once this feature is added, we'll need to repopulate the extra contributors from the last backup of the Django database if those tables weren't retained in the current version.

Validator raises error but score set is still saved in database

If the sequence and sequence type don't match, validate_and_standardize_dataframe_pair (called from the upload_score_set_variant_data function) raises an error while importing scores and counts, so the score and count files cannot be saved to the database and the website stays on the "create new score set" page. However, a score set without score_columns and count_columns is still saved to the database, and it appears in the unpublished score set list on the user's Dashboard.

Generating clinical variant IDs

MaveDB defines each variant with respect to the target sequence provided by the uploader. This is often a cDNA or a part of a gene or protein sequence. In order to share our variants with other resources, we will need to generate variant strings with respect to a reference genome.

The proposed solution for human target sequences is to generate an HGVS string relative to hg38 and use those HGVS strings to generate CAids from the ClinGen Allele Registry.

Variants in MaveDB will be individually addressable by CAid.

We will need to verify the target accession numbers for existing datasets and make sure that the target sequences match the reference. Datasets that do not have suitable accession numbers or that do not match the reference will be flagged for followup.

Cannot assign new experiment to an existing experiment set

It doesn't look like it's possible in the new version to specify an existing experiment set when creating a new experiment.

Previously, users could specify an experiment set URN when creating an experiment if they had one, otherwise a new experiment set would be automatically generated. Experiment sets are never created directly - only indirectly by creating a new experiment.

This feature needs to be re-added so that we can support upload of experiment sets with multiple experiments again.

Defining primary and secondary publications

Currently MaveDB supports any number of publications for each experiment or score set. The intention is that users can add related publications, such as those that describe a key reagent or cell line. However, this compromises our ability to return relevant search results for datasets published in a given paper.

The proposed solution is to swap from having a single set of publications to a single primary publication (where the dataset is described) and any number of secondary publications (related publications).

For existing datasets that have one publication specified, we can automatically assign this as the primary publication. The small number of datasets with multiple publications will need to be manually reviewed.

Dashboard Search function doesn't work.

After typing something in the Dashboard search box, the Docker log shows

File "/code/src/mavedb/lib/experiments.py", line 33, in search_experiments

    or_(

TypeError: or_ expected 2 arguments, got 9
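That error message matches the signature of Python's builtin operator.or_, which takes exactly two arguments, so one plausible cause is that operator.or_ was imported where the variadic sqlalchemy.or_ was intended (in which case the fix is simply importing or_ from sqlalchemy and calling it as or_(*clauses)). For illustration, a binary or_ can also be folded over any number of clauses:

```python
import operator
from functools import reduce

# Stand-ins for the 9 search clauses; real code would use SQLAlchemy
# expressions, with sqlalchemy.or_ instead of the builtin operator.or_.
conditions = [False, False, True, False]

# operator.or_(a, b) is strictly binary -- calling it with all clauses at
# once raises exactly "or_ expected 2 arguments, got N". Folding pairwise
# with reduce works for any clause count:
combined = reduce(operator.or_, conditions)
```

This is only a diagnosis sketch; confirming which or_ is imported at src/mavedb/lib/experiments.py line 33 would settle it.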

Score sets with multiple targets

Researchers are starting to do more experiments that mix variants from multiple target sequences into a single population. To support this, we'll need to make some changes to our data model and validation. Currently this can only be done by having a single experiment with a different score set for each target.

One possible solution is to allow multiple target_gene objects to be attached to a single score set. If multiple target genes are present, all variants in the score and count tables will be required to have an identifier specified (e.g. "TargetA:p.Asp12Ala" where TargetA is the identifier). This syntax is already supported by mavehgvs, so adding this feature to the validation should be fairly straightforward. We will need to have users define the identifier for each target gene, which would be a new field.

Suggestions for alternate ways to implement this are welcome.
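The prefix parsing described above could be sketched as follows (a simplification only; real validation would hand the variant portion to mavehgvs and check the prefix against the user-defined target identifiers):

```python
def split_target_variant(variant_string: str):
    """Split an optionally target-prefixed variant like "TargetA:p.Asp12Ala".

    Returns (target, variant); target is None when no prefix is present,
    which is the single-target case where bare variants remain valid.
    """
    target, sep, variant = variant_string.partition(":")
    if sep:
        return target, variant
    return None, variant_string
```

With multiple target genes attached to a score set, validation would require the target part to be non-None and to match one of the defined identifiers.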

Support for validation/control data

Control variants from low-throughput investigations before performing a MAVE are important for calibration and interpretation of MAVE data.

We should add the ability to upload low-throughput datasets (as an experiment + score set), link them explicitly to an uploaded MAVE, and assert that they are control data. This is potentially straightforward, since it's just a special kind of link between datasets that the user can add.

Having low-throughput control data and high-throughput data linked in the database would provide value for users and also some interesting prospects for visualization.

Keyword overhaul

Rationale

MaveDB currently uses a single keyword field that can be populated with user-specified keywords. However, many of the entries are blank or have keywords of little utility (e.g. target gene name). In order to improve MaveDB’s searchability and utility for modellers, we will replace the existing optional keyword system with a new system.

Instead of one keyword field with multiple keywords, each score set or experiment will have fixed keyword categories that each take a single option. Instead of free-text keywords specified by the user, the user will select from a controlled vocabulary of terms.

Implementation

The existing keyword field will be deprecated and no longer included in the forms or views, but the keyword data will be retained (at least for now).

New keyword fields will be created (exact names and fields to be finalized later):

  • Assay System
  • Variant Library Creation Method
  • Selection Type
  • Sequencing Modality
  • Variant Scoring Method

All but the last category are specific to experiments and the last is specific to score sets. However, to preserve future flexibility, these should be implemented at the DatasetModel class level rather than in the specific subclasses.

Although the forms will only support one keyword per entry initially, these should be implemented such that each DatasetModel can have many keywords.

Keywords should be defined in an editable JSON file, similar to the way that target reference genomes work now (see the ReferenceGenome class and its usage).

Deprecate requirements.txt

The Dockerfile still uses requirements.txt to define dependencies, but we should rely exclusively on pyproject.toml rather than trying to keep these files in sync.

Experiments should return their score set URNs

Right now requesting an experiment only returns the number of score sets but not their URNs or contents. Instead, the score set includes the full experiment object for its parent.

By contrast, experiment sets include the full experiment record for each of their child experiments.

The behavior of both of these should be harmonized in some way.

One option is to include all child records as part of the parent, which could make some experiment sets rather large (since they would also include all the score sets associated with the experiments).

Another option is to just include URNs for the parent and child records in each experiment set, experiment, and score set. This would allow API users to easily traverse the dataset without making each individual response too big.

@jstone-uw @EstelleDa would changing this have major implications for how https://github.com/VariantEffect/mavedb-ui functions?
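For reference, the URN-only option could look something like this in the response models (field names here are illustrative, not the current schema):

```python
from dataclasses import dataclass, field
from typing import List

# Each record carries only the URNs of its parent and children, so API
# users can traverse the hierarchy without any response nesting full objects.
@dataclass
class ExperimentStub:
    urn: str
    experiment_set_urn: str                                  # parent, by URN only
    score_set_urns: List[str] = field(default_factory=list)  # children, by URN only

exp = ExperimentStub(
    urn="urn:mavedb:00000068-a",
    experiment_set_urn="urn:mavedb:00000068",
    score_set_urns=["urn:mavedb:00000068-a-1"],
)
```

The same pattern applied to experiment sets and score sets would keep every response small while still making the whole dataset reachable by following URNs.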

Genetic background information

Users may want to specify genetic changes that are relevant to a MAVE (e.g. a haplotype with variation that's not at the mutagenized locus or a risk allele that's present in the cell line).

We can do this by extending our existing support for VRS objects, by allowing users to upload a set of VRS objects describing this variation to an experiment record.

This will also allow us to improve the utility of mapped variants, since we can provide VRS objects that include this genetic background variation along with the variant measured in the experiment.

Update publication data model

To enable future support of preprints (see #14), we should update the current data model and database table for publications that currently stores PubMed IDs as follows:

  • Rename the model (and similarly named models) from pubmed_identifier to publication_identifier
  • Change existing Experiment and Score Set publication list into two elements: one optional primary publication and any number of additional publications

When creating a new Score Set, the primary publication should be auto-completed based on the Experiment's primary publication. Additional publications should also be auto-completed, but we should put some kind of message in the UI asking the user to review them to make sure they are relevant for the Score Set. For example, a paper describing the cell line used for the functional assay might be cited in the Experiment but not the Score Set.

Problems validating correct variants

I'm trying to upload score files to a dataset and am getting a 400 error that the variants are inconsistent with the target sequence. However, the variants in the file and target sequence are in agreement and validate properly with mavehgvs's target sequence validation.

I'll be looking into this more and trying to figure out where the problem lies.

Outdated version of mavehgvs

The server is currently pinning an outdated version of mavehgvs (0.4.0; current is 0.6.0), which causes some valid variants to fail validation.

We can either bump this version or allow the server to pull the latest version of mavehgvs. The latter is probably OK since we control the mavehgvs release cycle, but I understand wanting to pin everything for stability.
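If pinning is kept, a Poetry-style constraint in pyproject.toml can track a minor series while still accepting patch releases (the exact bounds below are illustrative, not the repository's current settings):

```toml
[tool.poetry.dependencies]
mavehgvs = ">=0.6.0,<0.7.0"
```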

Replace metapub with eutils

Right now we're using metapub to handle fetching PubMed records from NCBI. However, this project doesn't seem to be very active anymore and includes numerous features that we don't need at all for our purposes. We should swap over to using eutils, which metapub uses under the hood, and write our own code for formatting references.

We will want to integrate eutils anyway, because this will allow us to access other NCBI data including RefSeq.

Another option that might be preferable is to query the Entrez APIs directly. @jstone-uw your thoughts on this?

Admin users

The server enforces several sensible constraints on who can contribute to various records (e.g., only the owner can create a new experiment under an existing experiment set). There are circumstances where an admin user would want to violate these constraints and we should have a mechanism for doing so.

However, since users are authenticated using ORCID, it doesn't make sense to me to have multiple accounts (and therefore multiple ORCID profiles) per person. It also seems dangerous and error-prone for a user to always have admin privileges - I'm adding data as a normal user most of the time, and these safeguards are helpful in preventing me from uploading something incorrectly.

My proposed solution is to handle the admin status of a user by issuing multiple API keys per user - a normal key and an admin key. Actions that require admin privileges would need to provide the admin key. This would be selectable as a toggle in the profile on the website, and copyable for using the API directly.

From the UI side, if the user is logged in as admin, they should get a thin, easily noticeable banner at the top of the page warning them that they have admin privileges enabled. Buttons that interact with the API could also change color to make sure that it's clear this change is being done as an admin.

From the API side, this might require changing the current code that gets a user, since I think it mostly just resolves by user id (ORCID number) and expects a single row. These instances would have to be wrapped with a new function, and the permissions model would have to be updated to accept certain changes from admin users.

The two added benefits of doing this as API keys are that changes will be associated with the admin user's primary account (since we track modifications by ORCID) and that it should be straightforward to add additional levels of admin privileges if we decide we want more fine-grained control in the future.

Please weigh in if you have thoughts on how to do this - I'm sure it's a solved problem that I just haven't encountered before.

In the meantime, I will add additional tests to make sure we're enforcing more things that normal users can't do, but that we want admin users to be allowed to do.
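A minimal sketch of the two-key scheme proposed above (the key store, field names, and values are made up for illustration; real keys would live in the database alongside the user record):

```python
# Both keys resolve to the same ORCID, so changes made with the admin key
# stay attributed to the user's primary account.
KEYS = {
    "key-normal-123": {"orcid": "0000-0001-2345-6789", "admin": False},
    "key-admin-456": {"orcid": "0000-0001-2345-6789", "admin": True},
}

def resolve_user(api_key: str):
    """Look up the user record for an API key; None if the key is unknown."""
    return KEYS.get(api_key)

def can_perform(api_key: str, requires_admin: bool) -> bool:
    """Admin-only actions succeed only with the admin key; normal actions
    succeed with either key."""
    user = resolve_user(api_key)
    if user is None:
        return False
    return user["admin"] or not requires_admin
```

In FastAPI terms this would likely become a dependency that replaces the current single-row user lookup, returning both the user and whether the presented key carries admin privileges.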
