
vampire's Introduction

VENUSAR

VENUSAR (Variant and Epigenetic anNotation for Underlying Significance and Regulation) is a suite of tools geared towards easy integration of multiple types of big data to draw reasonable and interesting biological conclusions. It is designed to be modular, so that each tool is useful in its own right but the tools can also be run together as a pipeline, and it aims to help scientists and clinicians better understand the differences between their data sets. Specifically, VENUSAR was built to help identify functionally significant epigenetic lesions in cancer caused by genetic mutations in non-coding regulatory elements.

There are a number of potential uses for VENUSAR, such as:

  • Scanning VCF files for variants that perturb or create new TF motifs.
  • Identifying differential ChIP-seq peaks between samples.
    • Corroborating these differences with gene expression.
    • Creating predicted gene regulatory networks.
  • Decomposing SNV motif frequencies with respect to their surrounding sequence to identify mutational signatures that may be attributed to specific generating processes.

These tasks can be done on an individual basis or as a pipeline.

Installation

None, cause it ain’t released yet.

New Stuff

Usage

$ venusar --help

vampire's People

Contributors

crumbs350, j-andrews7, eepfeifer


vampire's Issues

Add additional motif databases.

Including several of them in the package data would likely be a good idea. Currently we have the following in formats that are usable:

JASPAR CORE 201 VERTEBRATES
HOCOMOCOv10 HUMAN - I had to do some things with this to get it into a usable format. Namely, the HOCOMOCO motif IDs were removed and the TF names in the file were replaced with the actual gene names so that they could be corroborated with expression data, etc. This will be an issue we likely face with many databases.

Remove numpy dependence.

Change the use of numpy for methods like mean and std, which already have python equivalents in the statistics module. This should only be an issue in tfsites_activity.py.
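
For reference, a minimal sketch of the swap (the values are hypothetical; statistics.pstdev is the population standard deviation, which matches numpy's default, while statistics.stdev is the sample version):

    import statistics

    scores = [4.2, 5.1, 3.9, 6.0]  # hypothetical activity values

    # numpy.mean(scores) -> statistics.mean(scores)
    mean = statistics.mean(scores)

    # numpy.std(scores) uses the population standard deviation (ddof=0) by default,
    # which corresponds to statistics.pstdev; statistics.stdev is the sample version.
    std = statistics.pstdev(scores)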

activity.py bugs, tests & refactors

Bugs

  • Z-scores not calculated, leading to no samples meeting the threshold. Not sure what the issue is, haven't delved into it deeply.
  • Output is quite messy and doesn't look quite right. Extra commas in certain places, etc. Still need to round some of the large floats to a reasonable number of decimals.
  • Bed output does not filter out results where no sample meets the z-score threshold, so every Locus that overlaps a variant will be printed to output. Also, z-scores for every sample are printed for each individual variant sample, when only the z-score for that sample should be printed.

Tested

  • Runs with basic arguments (-i, -a, -th, -ov, -ob) and (probably) functions correctly.
  • Inclusion options appear to work correctly (-ib, -iv).
  • Calculates z-scores correctly, output values all match up with motifs and loci correctly, etc.
  • Validate output is proper VCF format - this tool should work for it.

Refactors/Enhancements

  • Move the Variant, Position, and Locus classes into another module so that they can be used as generics for other functions. Subclass them if additional attributes, methods, etc are needed for a given module (such as in this case).
  • Add option much like -fan for the VCF output file so that only variants that significantly affect a Locus in however many specified samples are printed to output.
  • Change z-score to modified z-score using Median Absolute Deviation (MAD).
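
A minimal sketch of the modified z-score mentioned above (the 0.6745 scaling constant and the function name are assumptions for illustration, not taken from the existing code):

    import statistics

    def modified_zscores(values):
        """Modified z-scores based on the median and median absolute deviation (MAD)."""
        med = statistics.median(values)
        mad = statistics.median(abs(v - med) for v in values)
        if mad == 0:
            return [0.0 for _ in values]  # degenerate case: no spread around the median
        # 0.6745 makes the MAD consistent with the standard deviation under normality
        return [0.6745 * (v - med) / mad for v in values]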

tf_expression.py bugs, tests & refactors

Bugs

  • Runs, but doesn't filter properly. Fixed.
  • It can't handle multiple motifs for a given TF, as it relies on getting an item's index from a list. Only the first index will be returned with how it's done currently, so multiple motifs for a given TF may not be filtered correctly. Fixed. Currently utilizes the name of the motif and performs a check for it to get other indices, but this is just another reason to address #26.
  • Motif INFO fields are printed to output in a different order than they are in the input file. Would be nice if they were in the same order.
  • Likely related to above, MOTIFN field sometimes gets split up, where a seemingly random number of motif names get chucked in a new field that has no name. The resulting file will then throw errors when run through activity.py.

Tested

  • Runs and appears to work correctly. Adjusting threshold (-th) also appears to work.
  • Validate output is proper VCF format - this tool should work for it.

Refactors/Enhancements

  • Pretty much no flexibility currently for motifs of TFs not found in the expression file. Just blanket removes them, so we should probably add an option to retain them if wanted.
  • Try to handle TF complexes (ATR::SP1, etc) and such, it will always just remove them as it stands. Maybe just check if either of the TFs meets the threshold and retain the complex if so.
  • Many motif databases use the TF name rather than the gene symbol (e.g. PU.1 rather than SPI1). Some of them have tables that allow for these to be easily converted to the actual gene symbol (HOMER, for example), but others may not. Need to manually check how prevalent this is and make sure users recognize this potential problem. The gene symbol is what's used to cross-reference the gene expression data, so it's necessary.

Change all current scripts to modules.

Current scripts are not proper modules. None of them have main functions, so they cannot be easily imported and run.

Main functions for each need to be created so that a core script can be written and subcommands set up using these modules as appropriate.
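
As a rough sketch of the target shape (the module, function, and argument names here are illustrative only):

    # activity.py (illustrative skeleton)
    def main(vcf_path, activity_path, threshold=0.0):
        """Core logic, importable from other modules or called by a CLI wrapper."""
        ...

    if __name__ == "__main__":
        import argparse
        parser = argparse.ArgumentParser()
        parser.add_argument("-i", dest="vcf_path", required=True)
        parser.add_argument("-a", dest="activity_path", required=True)
        parser.add_argument("-th", dest="threshold", type=float, default=0.0)
        args = parser.parse_args()
        main(args.vcf_path, args.activity_path, args.threshold)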

gene_expression.py bugs, tests & refactors

Bugs

None known.

Tested

  • Runs with basic arguments and functions correctly.
  • Calculates z-scores correctly, output values all match up with motifs and loci correctly, etc.
  • Validate output is proper VCF format - this tool should work for it.

Refactors/Enhancements

  • Move the Variant, Position, and Locus classes into another module so that they can be used as generics for other functions. Subclass them if additional attributes, methods, etc are needed for a given module (such as in this case).
  • Change z-score to robust z-score using Median Absolute Deviation (MAD).

Allow iterative or parallel processing for ChIP-seq data.

The current idea is to only utilize one type of ChIP-seq data to analyze "enhancer activity", but it would be useful if multiple data sets could be used in the same way. The challenge would be ensuring unique, yet easily identifiable INFO fields for each to print to output.

Computationally, the most efficient way to do this would be to allow users to provide multiple datasets at the command line for a common argument:

-d dataset1.bed dataset2.bed dataset3.bed or -d dataset1 -d dataset2 or such. Not sure which is more straightforward to implement in python. The backend processing would be the same for each, but would need to dynamically name INFO fields, perhaps based on a set piece of the filename. For example:

-d K27AC.data.bed FAIRE.data.bed would be split by '.' and use the first element as a prefix to the INFO field. K27ACZ: (z-scores for each sample with variant); FAIREZ: (z-scores for each sample with variant).
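
Either argparse form would work; a rough sketch of both, plus the filename-prefix idea (the INFO field names are only illustrative):

    import argparse
    import os

    parser = argparse.ArgumentParser()
    # Form 1: -d dataset1.bed dataset2.bed dataset3.bed
    parser.add_argument("-d", dest="datasets", nargs="+", default=[])
    # Form 2 (alternative): -d dataset1.bed -d dataset2.bed
    # parser.add_argument("-d", dest="datasets", action="append", default=[])

    args = parser.parse_args(["-d", "K27AC.data.bed", "FAIRE.data.bed"])

    for path in args.datasets:
        # Use the first '.'-delimited element of the filename as the INFO field prefix,
        # e.g. K27AC.data.bed -> K27ACZ
        prefix = os.path.basename(path).split(".")[0]
        print(prefix + "Z")  # K27ACZ, FAIREZ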

Add flexibility for gene identification when checking gene expression.

Some people may use RNA-seq data, so ideally, it should be possible to work on a transcript-by-transcript basis. At a minimum, allow Entrez/Ensembl, etc., gene IDs to be used as well.

Currently, everything is based on Gene Symbols, which are sometimes changed, may vary based on what annotations you use, and are overall just very finicky.

get_motifs uses the max length across all TFs, not per-TF lengths

get_motifs uses max_length to track the length of all TFs loaded from the motif file, so the wing length for a variant is too large for most TFs and they end up ignored. get_motifs should also set/return a validity flag by checking that the A, C, G, and T rows of each matrix have the same length.
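
A minimal sketch of that validity check, assuming each motif matrix is held as a dict mapping base to a list of counts (the function name is illustrative):

    def matrix_is_valid(matrix):
        """Return True if the A, C, G, and T rows of a motif matrix all have the same length."""
        lengths = {len(matrix[base]) for base in "ACGT"}
        return len(lengths) == 1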

'join' used as method name in motif.py

join is a built-in string method in Python (str.join), so it's almost certainly not a good idea to have a method with the same name. I can't find a case where it's actually used, but it should be removed or renamed as appropriate.

Offending method starts on line 79 of motif.py.

Handle multiple motifs for same TF.

Some motif databases have multiple motifs for the same transcription factor. Would like to check all of them, but need a way to differentiate them. An easy solution would be to just tack on a suffix in the motif database file when calculating thresholds.

For example:

124 AIRE
16 12 14 0 0 1 9
7 8 9 1 0 7 16
1 2 5 0 0 9 22
4 5 11 36 37 26 1

125 AIRE
counts

Would become:

124 AIRE
counts

125 AIRE.2
counts

Or something along those lines.
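
A small sketch of that renaming pass, assuming motifs arrive as (id, name) pairs in file order (the function name is illustrative):

    from collections import Counter

    def uniquify_names(motifs):
        """Append '.2', '.3', ... to repeated TF names so every motif name is unique."""
        seen = Counter()
        renamed = []
        for motif_id, name in motifs:
            seen[name] += 1
            new_name = name if seen[name] == 1 else "%s.%d" % (name, seen[name])
            renamed.append((motif_id, new_name))
        return renamed

    # uniquify_names([("124", "AIRE"), ("125", "AIRE")]) -> [("124", "AIRE"), ("125", "AIRE.2")]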

Bed output for motifs.py may be incorrect.

This might be affecting ChIP-seq peak checking as well, need to investigate.

In the bed output, motifs/scores are repeated for all peaks that are overlapping. Need to check how this is being done and fix. Also need to ensure it isn't impacting the VCF output.

Add gene expression checking.

Need to look at expression of genes within given distance of a variant. Give user option to set distance to search for genes on each side of variant. Calculate a z-score for each sample with a variant compared to those without.
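
A rough sketch of both pieces, assuming gene coordinates are already restricted to the variant's chromosome and expression is a dict of sample -> value (all names are illustrative):

    import statistics

    def genes_near_variant(variant_pos, genes, max_dist):
        """genes: iterable of (name, start, end); return names within max_dist of the variant."""
        return [name for name, start, end in genes
                if start - max_dist <= variant_pos <= end + max_dist]

    def expression_zscores(expression, variant_samples):
        """Z-score each variant-carrying sample against the samples without the variant."""
        background = [v for s, v in expression.items() if s not in variant_samples]
        mean = statistics.mean(background)
        std = statistics.pstdev(background)
        if std == 0:
            return {s: 0.0 for s in variant_samples}
        return {s: (expression[s] - mean) / std for s in variant_samples}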

Many motif databases provide PFMs rather than count matrices

Some of the motif databases (ENCODE, HOMER) provide actual position frequency matrices (PFMs) rather than count matrices. This isn't a big issue in itself, but we will need to build in checks for the modules that expect count matrices that are then converted into position frequency matrices and then into position weight matrices / position specific score matrices (PWMs/PSSMs, they are the same thing).

In particular, summarize.py and motifs.py do this. Options include:

  • Changing our motif format to simply use PFMs and converting the motif files to them during the conversion process (not difficult, and it reduces the need for a pseudocount option for certain programs).
  • Adding checks to summarize.py and motifs.py that look at the first motif in the file (if all elements of the line start with '0.', it's a PFM; if all elements of the line are integers, it's a count matrix). See the sketch below.

I'm leaning towards the former currently, as it's likely a cleaner solution.
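
A hedged sketch of that detection check, applied to a single matrix line (the function name is illustrative):

    def classify_matrix_line(line):
        """Heuristic from above: all fields starting with '0.' -> PFM, all-integer fields -> counts."""
        fields = line.split()
        if all(f.startswith("0.") for f in fields):
            return "pfm"  # note: a fully conserved column could start with '1.', so this is only rough
        if all(f.isdigit() for f in fields):
            return "counts"
        return "unknown"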

Motif clustering as a potential feature.

Many motifs are quite similar, leading to many hits for a given sequence. In addition, many of the motif databases conglomerate data from many sources, leading to numerous motifs for the same TF (see #26).

One way to handle this would be to cluster motifs based on similarity to each other (for which several different methods exist), and use the best (or most common) representation for actual scanning. Unsure how to implement, but these papers/tools address it to a degree:

http://www.benoslab.pitt.edu/stamp/
http://goldenlab.org/projects/gmacs/index.html
GMACS Paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4384390/

control and documented runs

A configuration input file that specifies the attributes/settings of a run; proposed file format: flat text or XML.
An output file that documents the run time/date, attributes used, output files, and primary conclusions; possible formats: R Markdown, JSON, or XML.

variant code duplication

gene_expression and activity both use the same or similar Variant class. sequence.py also has similar logic, which is used by motifs.py and others.
Need to merge these into a combined VCF-reading utility. See vcf.py for a start.

thresholds.py bugs, tests, and refactors

Bugs

None currently.

Tested

  • Runs with basic arguments (-m, -o, -pv, -ow) and functions properly.
  • Need to test other options more extensively (-a, -c, -g, -t, -pc).

Refactors/Enhancements

  • Pick a consistent database format or add the ability to detect the different formats appropriately. The Biopython motifs module can utilize several different formats, so maybe the latter option is the better one. The meme format is probably the best choice if going for a consistent one, as they already have databases of various types for many, many organisms available and they're already PWMs, cutting down the time it will take for us to determine the thresholds for each motif. Also see #9.
  • Need to wrap up global variables and standardize documentation. Use doc strings from activity.py for standard format.
  • Rename multiple motifs for the same TF or utilize the motif IDs somehow (currently ignored for the JASPAR data and all changed to '>1' for the HOCOMOCO database). Unique names are required now. Need to add a check to get_put_motifs in the motif.py module that will add a number to the end if necessary.
  • Switch to using p-values to determine thresholds. Utilize TFMPvalue R package.
    • Write python wrapper around TFM-Pvalue C++ package directly. Will remove dependencies on R and speed things up even further.

Include ENCODE data files.

Download and conglomerate TF ChIP-seq files from ENCODE for major cell lines. Include in package data.

Reference homotypic matches aren't output currently.

Homotypic matches for motifs in the reference sequence currently aren't output when using motifs.py.

Worst case, this can just be removed from output - we don't check that variants upstream/downstream create/perturb these matches anyway, so eh. Might be useful for indels, I suppose.

Better handling of compound variants.

Initial motif search in motifs.py takes variants in close proximity to each other into account, but they are largely ignored in later steps. Will want to try to address these in a more uniform fashion.

Additionally, I'm unsure if/how well this works since all samples are included in a single file now. Almost certainly needs a closer look.

Sequence longer than PWM bug.

In motifs.py, the score_motif function sometimes throws an error. I put in a noisy bypass for now, but I'm sure it's skipping motifs in the database because of it. Not sure what the issue is; need to debug.

Change annotation format to use VAMP as INFO field.

Output all other information as subfields delimited by |. Come up with a standard subfield format. This should make it easier to parse for filtering and easier to output.

See this page for an example of what I mean. Can deal with each motif on a more individual basis.

Implement VCF output for enhancer activity command.

activity.py currently does not print a new file for each VCF. It gives a singular output with a unique format based around the enhancer itself instead.

Will want to add this info to VCF output as well. Will be a pain.

motifs.py output order investigation

On initial review, I did not see where the dictionary error might be occurring.

Debugging the motifs.py code based on output shows the flagged RFX5 only occurs in 5 lines with the suggested value.

  • grep 'RFX5' output.motif.20170114.vcf | grep '5.4636' | wc -l
  • grep 'RFX5' output.motif.20170114.vcf | grep '5.4636' > temp_investigate.txt

Searching temp_investigate.txt for RFX5 shows only 3 lines with duplicate entries:

  • chr1 145039922
  • chr22 25005718
  • chr5 46363751

Suggest Debug Procedure Against Small Data Subset

  1. Create a motif file with only RFX5 and maybe one or two other motifs
  2. Create an input file with only the three lines referenced above and the header lines
  • Actually, I'd start with one offending line, then expand to include the other two
  3. Process and review the small subset

Position of variant within motif.

After analyzing results more closely, I've realized it'd be really nice to know the position(s) of the motif that are affected by the variant. Can usually figure it out if the motif isn't too degenerate, but would be easy enough to include.

Other motif scanning/comparison tools.

@Crumbs350, since you're relatively unfamiliar with the field, I'm going to link a bunch of other tools that do motif scanning/comparison and functional variant annotation, both for my own reference for writing and if you want to see common methods used.

Impact Assessment of Regulatory Variants

This paper is an extremely useful review of methods used to do the sort of thing we're attempting. It hits on things that many tools don't integrate, the pros and cons of various methods, etc. If you ignore everything else on this page, at least take a read through this.

This table from that paper also gives an excellent, succinct summary of many tools, some of which are listed below. Most of them will have more info (breakdown of analysis and methods) on their site somewhere.

FunSeq2 - Standalone/Web Server
Web server
Paper

RAVEN - Web Server
Web server

sTRAP - Standalone/Web Server
Web server
C++/R Packages

DEEPSEA - Standalone/Web Server
Web server
Source code

motifBreakR - R Package
Source code

GERV - Standalone
Source code, training sets, etc

deltaSVM - Standalone/R-Package
Paper
Source code, training sets, etc

There are many others (with links) in the table above.


Motif Scanning/Comparison Only

These tools really only focus on motif scanning or comparison and don't integrate variant information explicitly.

STAMP - Standalone/Web Server
Paper
Source code
Web server

RTFBSDB - R package
Paper
Source code

FIMO - Standalone/Web Server
This is part of a large software suite called MEME.

Paper
Web server
Source code

CIS-BP - Database/Web server
This is primarily a database of TF motifs, but it also has some TF tools.

Web server

Write check to get sample names from VCF header.

Currently we rely on the user using GATK's CombineVariants tool to combine VCFs together prior to using this suite. As is, we just use the set INFO field to determine which samples have the variant and which do not. This will work for our data, but it is not nearly robust enough for production due to the assumptions it makes. We really need to settle on a standard format for sample naming in column headers (splitting by '.' and using everything in the first element might be a good idea). That would still allow additional info in the column headers.
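
A small sketch of pulling sample names from the #CHROM header line with that '.'-split convention (the convention itself is still an assumption, not settled):

    def sample_names_from_header(chrom_line):
        """Parse sample names from the '#CHROM ...' header line of a VCF.

        Columns after FORMAT are samples; everything before the first '.' is taken
        as the sample name (e.g. 'SAMPLE1.K27AC.rep1' -> 'SAMPLE1').
        """
        columns = chrom_line.rstrip("\n").split("\t")
        return [col.split(".")[0] for col in columns[9:]]  # fixed VCF columns are indices 0-8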

Set up documentation pages.

Create a readthedocs site for the package and populate it with a basic guide. Use sphinx to generate API reference.

Remove OptionsList use.

Class for OptionsList needs to be removed - it will interfere down the line when trying to run each module as a separate sub-command.

Change the current code to serve as modules and use click to pass in arguments.
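
A minimal sketch of the click-based layout, assuming a top-level venusar group with each module exposed as a subcommand (all names and options here are illustrative):

    import click

    @click.group()
    def cli():
        """VENUSAR command-line entry point."""

    @cli.command()
    @click.option("-i", "input_vcf", required=True, type=click.Path(exists=True))
    @click.option("-a", "activity_file", required=True, type=click.Path(exists=True))
    @click.option("-th", "threshold", type=float, default=0.0)
    def activity(input_vcf, activity_file, threshold):
        """Run the enhancer activity module via its importable main()."""
        # from venusar import activity as activity_module
        # activity_module.main(input_vcf, activity_file, threshold)

    if __name__ == "__main__":
        cli()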

summarize.py bugs, tests, & refactors

Bugs

  • Trying to use multiprocessing raises RuntimeError: Concurrent access to R is not allowed. This is likely due to rpy2 creating a lock on the R instance, as it doesn't create new R instances or allow for concurrency. Just another reason to write a python wrapper around TFMPvalue directly.

Tested

  • Calculates p-values and distances correctly, output values all match up with motifs and loci correctly, etc.

Refactors/Enhancements

  • Move the Variant class into another module so that it can be used as a generic for other functions. Subclass it if additional attributes, methods, etc are needed for a given module (such as in this case).

TF name/gene symbol discordance

Many motif databases use the TF name rather than the gene symbol (e.g. PU.1 rather than SPI1). Some of them have tables that allow for these to be easily converted to the actual gene symbol (HOMER, for example), but others may not. Need to manually check how prevalent this is and make sure users recognize this potential problem. The gene symbol is what's used to cross-reference the gene expression data, so it's necessary.

For the HOCOMOCO set, I used such a table provided by the database to swap the UNIPROT protein IDs for the gene symbols via a script, and then manually checked each name individually to ensure it matched the gene symbols in our expression file.

Potential ways around this:

  • Curate the datasets and provide them as package data. Most straightforward method. Could still allow users to provide their own motif lists, but they likely wouldn't be able to use the tf_expression module with any amount of reliability unless they manually curated their gene expression and motif file themselves to match up.
  • Try to utilize the HGNC REST API to guess the gene symbol from whatever TF name is given. More involved and more assumptions made. I think the first approach is probably better.

Remove extraneous documentation.

Remove/clarify/reformat block comments in all files. Comments are moderately overkill right now; try to keep documentation in docstrings.

motifs.py bugs, tests & refactors

Bugs

  • Bed output file doesn't handle overlapping regions properly, just prints the first motif's output until an overlap with previous line no longer occurs. Such as lines 3-5 below:
chr1    12233913    12234377    TARDBP  CPEB1,0.4707,1.7086;BCL11A,9.8773,10.9563;EHF,9.3166,8.6945;ELF2,5.8123,6.5737;ETS2,7.5269,5.4783;IRF1,8.2849,4.4558;IRF3,6.421,2.9616;IRF4,9.367,9.1753;IRF5,7.7549,2.8007;IRF8,5.7816,4.6789;MAZ,3.8977,0.595;MNT,11.4016,9.5642;NR2F6,10.2863,10.2863;PRDM1,4.3469,-1.0544;RXRA,6.571,5.0389;SP1,5.3478,4.1506;SPI1,4.0425,4.6874;SPIB,9.6535,10.8093;SPIC,10.2752,-0.4592;WT1,6.6021,2.7608;ZNF148,6.662,4.7838;ZNF713,16.4239,16.4239
chr1    12336347    12336707    CTCF    BHLHE41,5.7013,6.0162;NHLH1,-5.6823,3.2276;MYOG,1.5368,5.9291;BCL11A,3.7113,6.9359;CTCFL,6.5574,9.8007;CTCF,7.8951,11.7162;EOMES,4.1982,7.1215;ETS2,5.962,7.5233;NHLH1,-3.3856,5.7203;IRF4,5.4594,9.1875;MYOG,3.3927,5.6769;SP3,5.7795,6.7673
chr1    14028964    14029428    TARDBP  KLF3,6.3293,4.8345;POU5F1B,2.0157,5.664;PBX1,5.5521,7.1685;POU2F2,1.6477,4.8154;POU3F4,3.4838,5.1457;RFX2,2.0759,-0.4307;RFX5,6.2369,7.0469;ZBTB7B,2.6463,0.8758;NANOG,4.8166,9.916;NR1I3,8.1888,1.1553;PAX1,4.1869,5.4813;PAX5,6.9013,6.7399;POU5F1,3.471,10.7001;SOX2,-0.9456,6.7054;SOX4,0.0444,6.1941;ZNF282,7.113,3.5338
chr1    14029021    14029705    PML KLF3,6.3293,4.8345;POU5F1B,2.0157,5.664;PBX1,5.5521,7.1685;POU2F2,1.6477,4.8154;POU3F4,3.4838,5.1457;RFX2,2.0759,-0.4307;RFX5,6.2369,7.0469;ZBTB7B,2.6463,0.8758;NANOG,4.8166,9.916;NR1I3,8.1888,1.1553;PAX1,4.1869,5.4813;PAX5,6.9013,6.7399;POU5F1,3.471,10.7001;SOX2,-0.9456,6.7054;SOX4,0.0444,6.1941;ZNF282,7.113,3.5338
chr1    14029401    14029592    PAX5    KLF3,6.3293,4.8345;POU5F1B,2.0157,5.664;PBX1,5.5521,7.1685;POU2F2,1.6477,4.8154;POU3F4,3.4838,5.1457;RFX2,2.0759,-0.4307;RFX5,6.2369,7.0469;ZBTB7B,2.6463,0.8758;NANOG,4.8166,9.916;NR1I3,8.1888,1.1553;PAX1,4.1869,5.4813;PAX5,6.9013,6.7399;POU5F1,3.471,10.7001;SOX2,-0.9456,6.7054;SOX4,0.0444,6.1941;ZNF282,7.113,3.5338
chr1    15738061    15738597    SIN3A   TFAP2D,7.0767,5.5819;EBF1,6.1442,5.687;ESRRA,7.9735,9.7136;ESR1,6.8764,5.9683;ESR1,7.5544,5.2967;ZFX,1.4692,5.4618;TFAP2A,6.9283,5.922;ESR2,6.6741,4.299;NR3C2,6.4958,6.8992;NR1H4,12.9202,4.1355;NR1I2,10.0822,1.2976;NR1I3,10.6472,2.2765;NR5A2,7.9451,-0.8719;NR6A1,4.5753,-4.3642;PURA,7.1546,8.0875;RARB,7.24,-0.4811;RARG,8.0175,-0.0647;RORC,5.3152,-0.8346;VDR,9.3969,4.2596

The way this is currently implemented is, frankly, just mind-bogglingly overcomplex. It should be pretty straightforward; there's no need for the mess it is now.

  • Reference homotypic matches aren't output currently. Really, this could probably just be removed as a whole unless we try to seamlessly address nearby variants as well, as it shouldn't be any different from the variant homotypic matches.
  • Multiallelic calls are an issue. Currently, not sure how (if?) they're handled properly. Easiest approach would be to require users to decompose their input files, perhaps using this tool. Otherwise, we'll have to re-think how to handle the output so that it's not a mess.

Tested

  • Runs with minimal options (-i, -m, -r, -o, -fm only).
    • Used the following command:
      python ./SCRIPTS/motifs.py -i tester.vcf -r /scratch/jandrews/Ref/hg19.fa -m HOCOMOCOv10.JASPAR_FORMAT.fpr_001.TF_IDS.txt -o tester.hocomoco_motifs.vcf -fm
      where tester.vcf was just the first 1000 lines of the large VCF file.
  • Creates fasta index file if needed.
  • Can utilize ChIP peaks to corroborate sites (-ci, -co args).
    • Check to ensure the bed output issues aren't also affecting genomic ranges used for checking if a variant lies in them.
  • Test other options (-bp, -pc, -th, -ws, -sk)
  • Run on full file.
    • Also check how it expects chromosomes to be sorted, I think it expects a lexicographic sort (1, 10, 11) for chromosomes currently and will silently fail if they aren't in that order.
  • Validate output is proper VCF format - this tool should work for it.

Refactors/Enhancements

  • I think more stats are needed. Reporting the log likelihood scores for the variant and reference is fine, but determining if there's a significant difference between them is left up to the user. At a minimum, we should implement an option so that the user can use the magnitude of change in score to help filter results. A stat for significance (p-value, e-value) would be ideal. See here for a potential way of estimating p-values.
  • Bed output in general. Right now, it doesn't yield much additional info about the variant itself. It only spits out the original entry and the variant/reference scores for each motif found in the given genomic range as shown above. Should at least report variant position and the samples it's called in.
  • Should really utilize the Python multiprocessing module to improve performance. There are multiple places it could be utilized, particularly since many end users will have access to computing clusters with high numbers of CPU cores available. Looks fairly straightforward to implement (see the sketch after this list).
  • Remove global variables, extra documentation, random debugging statements, etc. Remove OptionsList use. Really want the main function to be importable.
  • Consider adding motif IDs to output in addition to the motif names currently used. Would clarify those instances with two motifs for the same TF are in the output. Would have to make thresholds.py more intelligent or settle on a common motif database format first. See #33.
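
A hedged sketch of the multiprocessing idea for the per-variant scoring loop (the function and argument names are illustrative):

    from multiprocessing import Pool

    def score_variant(variant):
        """Placeholder for the per-variant motif scoring work."""
        ...

    def score_all(variants, processes=4):
        # Each variant is scored independently, so a plain Pool.map parallelises the loop.
        with Pool(processes=processes) as pool:
            return pool.map(score_variant, variants)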

tf_expression: bed file format

I got the tf_expression.py get_genes function to work, but I don't think the function is looking for a bed file per the definition of the bed file format in the following reference:
https://genome.ucsc.edu/FAQ/FAQformat.html#format1

The file it successfully reads is: ALL_ARRAYS_NORMALIZED_MAXPROBE_LOG2_COORDS.sorted.txt

Whose format is:
CHR START STOP GENE <sample 1> <sample 2> ...

The bed format does not seem to allow multiple scores, but rather expects 1 score and no more than 12 columns of data. Each column is a distinctly different piece of information rather than the same piece for different samples.

Add data files as necessary.

Need to add the motif files, GM ChIP file, etc, to a data folder and make sure they can be easily read/found by the program.

Print command to header.

Print the command used to the header. Needs to be done for motifs.py. Look at activity.py for the format.
