ServiceX - a data delivery service pilot for IRIS-HEP DOMA

License: BSD 3-Clause "New" or "Revised" License


servicex's Introduction

ServiceX - Data Delivery for the HEP Community


ServiceX is an on-demand service that delivers data straight from the grid to high energy physics analysts in an easy, flexible, and highly performant manner.

Features

  • Experiment-agnostic: Supports both the ATLAS and CMS collaborations at the LHC.
  • Custom filters and pre-processing: Easily filter events, request specific columns, and unpack compressed formats using the func-adl analysis description language.
  • Choose your format: ServiceX can deliver data in a variety of columnar formats, including streams of ROOT data, small ROOT files, HDF5, and Apache Arrow buffers.
  • No hassle: ServiceX uses Rucio to find and access data wherever it lives, so users don't have to worry about these details.
  • Simple and Pythonic: Using ServiceX takes only a few lines of code in any Python environment, such as a script or Jupyter notebook.

Getting Started

Check out our quick start guide for instructions on how to obtain credentials, install the ServiceX Python library, and make your first ServiceX transformation request.
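
The quick start guide is the authoritative reference for the client API. Purely to illustrate the "few lines of code" claim, here is a hypothetical sketch; the import path, ServiceXDataset, and get_data_pandas_df names are assumptions, not confirmed API.

```python
# Illustrative sketch only: the import path, ServiceXDataset, and
# get_data_pandas_df are assumptions about the Python library, not a
# verified API. See the quick start guide for real usage.
from servicex import ServiceXDataset

did = "some.rucio.dataset.identifier"          # placeholder Rucio DID
selection = "(valid func-adl selection here)"  # placeholder query

ds = ServiceXDataset(did)                      # hypothetical constructor
df = ds.get_data_pandas_df(selection)          # hypothetical method
print(df.head())
```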

Documentation


The ServiceX documentation is hosted on Read the Docs.

Self-Hosting

The Scalable Systems Laboratory (SSL) at IRIS-HEP maintains multiple instances of ServiceX to transform several input formats from different experiments.

In addition, ServiceX is an open-source project, and you are welcome to host your own deployment. Instructions on how to configure and deploy ServiceX can be found in our deployment guide.

Contributing

The ServiceX team welcomes community contributions. If you'd like to get involved, please check out our contributor guide.

License

ServiceX is distributed under a BSD 3-Clause License.

Acknowledgements

ServiceX is a component of the IRIS-HEP Intelligent Data Delivery Service and is supported by the National Science Foundation under Cooperative Agreement OAC-1836650. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

servicex's People

Contributors

andreweckart, bengalewsky, dependabot[bot], draghuram, eguiraud, fengpinghu, gordonwatts, holzman, ivukotic, masonproffitt, matthewfeickert, michael-d-johnson, mweinberg2718, oshadura, ponyisi, prajwalkkumar, shriram192, sthapa, sudo-panda, wookiee2187, zorache


servicex's Issues

Query the size of resulting request

Story

As an analyzer I want to query the size of a requested dataset so I can plan

Acceptance Criteria

  1. Submit token
  2. Receive an estimate of the total size (Megabytes, events, how much fetched, how much ready to stream, how much has been streamed)

Assumptions

  1. List assumptions behind this story

NanoAOD Transformer

Story

As a CMS analyzer I want to extract columns from NanoAOD files so I can reduce the size of my analysis

Assumptions

  1. DID Finder will just have some hard coded dataset ID to CMS file mappings
  2. Implemented in uproot
  3. Column names in the transform request will match nanoAOD naming

Acceptance Criteria

  1. DID Finder returns the hard-coded NanoAOD file paths
  2. Sample transform request that uses correct column names and one of the hard-coded dataset IDs
  3. PyArrow tables with the correct columns published to Kafka
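
A minimal sketch of criterion 3, publishing a PyArrow table to Kafka as an Arrow IPC stream. The kafka-python package is assumed, and the broker address and topic name are placeholders.

```python
# Sketch: serialize a PyArrow table and publish it to Kafka.
# Assumes the kafka-python package; broker address and topic are placeholders.
import pyarrow as pa
from kafka import KafkaProducer

table = pa.table({"Electrons_pt": [[41.2, 12.9], [55.0]]})  # toy columnar data

# Encode the table as an Arrow IPC stream in memory.
sink = pa.BufferOutputStream()
with pa.ipc.new_stream(sink, table.schema) as writer:
    writer.write_table(table)
payload = sink.getvalue().to_pybytes()

producer = KafkaProducer(bootstrap_servers="kafka:9092")
producer.send("servicex-request-id", payload)  # one topic per request is an assumption
producer.flush()
```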

Contributors Guide

Story

As a contributor to ServiceX I want to know how to contribute so I can help this project

Acceptance Criteria

  1. How will we know when this story is complete

Assumptions

  1. List assumptions behind this story

Downloadable HDF5 Files

Story

As a Machine Learning Developer I want to download an HDF5 file with my features so I can train my model on selected Datasets

Assumptions

  1. Assume doing this story after #34
  2. Need to figure out how to save awkward array or PyArrow table to h5
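
For assumption 2, one possible route (a sketch that assumes flat numeric columns) is to pull each column out of the PyArrow table as a NumPy array and write it as an HDF5 dataset with h5py.

```python
# Sketch: write each column of a (flat) PyArrow table as an HDF5 dataset.
# Jagged/nested columns would need a different layout (e.g. awkward's own
# packing); this only covers simple numeric columns.
import pyarrow as pa
import h5py

table = pa.table({"el_pt": [41.2, 12.9, 55.0], "el_eta": [0.3, -1.2, 2.1]})

with h5py.File("transform_output.h5", "w") as f:
    for name in table.column_names:
        f.create_dataset(name, data=table.column(name).to_numpy())
```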

Acceptance Criteria

  1. Given I have submitted a request that specifies HDF5 output, when I run the transform, then I should be able to download the file and verify it is valid HDF5

Generate trivial Transformer code from FuncADL Select

Story

As an analyzer I want the transformer to select columns based on my selection so I can generate n-tuples for my research

Acceptance Criteria

  1. Given a valid selection in the text language when I run the transformer then I should get the selected columns back
  2. Given a selection text and the text contains syntax errors when I run the transformer then I should receive an error

Assumptions

  1. The ADL code is passed into the ServiceX request as a JSON property
  2. Parser and code generator added to the transformer code
  3. The generated code looks mostly like the current code that reads attributes from the selected branches
  4. The rest of the transformer runs as previously

Create projection from ROOT file

Story

As an analyzer I want to create a projection of a ROOT file to reduce the size of my analysis

Acceptance Criteria

  1. C++ transformer creates projection with just the listed columns
  2. Output revised list of columns

Assumptions

  1. List assumptions behind this story

List Branches for Tree

Story

As an analyzer I want to preview all of the branches available in a tree for my DID so I can construct a correct transformation request

Assumptions

  1. Depends on story #59
  2. JSON Request includes DID and Tree name

Acceptance Criteria

  1. Given I have a DID and a tree name, when I submit a branches-for-tree request, then I should receive a JSON document listing all of the branches

Deployment Guide

Story

As a ServiceX user I want to be able to deploy the system on a Kubernetes cluster so I can try it out

Assumptions

  1. There is a public xAOD file that we can hardcode as a static file to serve
  2. User already has a kubernetes cluster

Acceptance Criteria

  1. Starting from a bare Kubernetes cluster, I can deploy ServiceX, submit a transformation request, and download the resulting PyArrow file

Service to Distribute X509 Proxy

Story

As an analyzer I want the service to use a proxy for a service X509 cert so the components can access experiment resources

Assumptions

  1. Service account credentials and keys deployed as part of helm chart
  2. A new service will be created to generate the proxy and update a common secret

Acceptance Criteria

  1. Given I have deployed ServiceX with ATLAS user credentials, the following services should be able to access ATLAS:
  • DID Finder
  • Preflight Check
  • Transformer

C++ EventLoop Transformer

As an analyzer I want to use the EventLoop framework to transform and filter my xAOD files so I can use an experiment-approved framework to get the data I need

Assumptions

  1. Depends on story #31 to generate code and create ConfigMap containing C++ and python code
  2. Will use transformer package that will be exported from ServiceX_Transform project
  3. Will only support upload of ROOT files to object store

Currently

The current two transformer containers are built to run in the func_adl server environment (func_adl_server).

Why Change

So they will successfully participate in the ServiceX environment.

Change To

This can't proceed until #24, #25, #26, and #27 have all been completed. The current RabbitMQ message passing will have to be altered to make more sense in the new scheme of things. In particular, the decision about where to split work into multiple jobs is made differently in ServiceX and in func_adl's backend.

This is an integration step.

C++ Code Generator Service

Story

As an analyzer I want ServiceX to generate C++ code from my submitted analysis description language so I can use the experiment approved framework without writing C++ code

Assumptions

  1. This will be a new Microservice with a REST interface
  2. The service will generate C++ and python code to represent the submitted func_adl selection
  3. The generated files will be encoded into a zip file which will be returned to the caller in the response body (see the sketch at the end of this issue)
  4. Based on the simplified LISPy language

Steps to Implement

  1. Create REST server in Code Generator Repo
  2. Model Dockerfiles on the one in ServiceX_App
  3. Add config setting for Code Generator endpoint to app.conf
  4. Modify the transform request handler to include a select property which will be a func_adl select statement
  5. In Preflight Check Handler invoke the service to translate Select statement to columns
  6. Save the resulting columns in the TransformRequest columns property

Acceptance Criteria

  1. Given I have a valid ADL select, when I run the transformer, then the generated code will be mounted into the transformers as a ConfigMap
  2. Given I have an ADL select and the select contains a syntax error, when I submit the transform, then I should receive an error
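
A rough sketch of step 1 and assumption 3: a Flask endpoint that accepts a selection, zips the generated sources in memory, and returns the zip in the response body. The route, the JSON field name, and the generate_code helper are hypothetical placeholders, not the actual code generator.

```python
# Sketch of a code-generator REST endpoint returning a zip in the response body.
# Route, JSON fields, and generate_code() are hypothetical placeholders.
import io
import zipfile
from flask import Flask, request, send_file

app = Flask(__name__)

def generate_code(selection: str) -> dict:
    """Hypothetical stand-in for the func_adl -> C++/Python code generator."""
    return {"transformer.cxx": "// generated C++\n", "runner.py": "# generated python\n"}

@app.route("/servicex/generated-code", methods=["POST"])
def generated_code():
    selection = request.json["code"]          # field name is an assumption
    files = generate_code(selection)

    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, contents in files.items():
            zf.writestr(name, contents)
    buf.seek(0)
    return send_file(buf, mimetype="application/zip")
```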

Downloadable flat ROOT File

Story

As an analyzer I want to be able to download a flat ROOT File for my request so I can continue my analysis with familiar tools

Assumptions

  1. Based on story #34 which has all of the machinery for the object store and generating downloadable files

Acceptance Criteria

  1. Given I have submitted a request that specifies a ROOT file as the result, when I run the transform, then I should be able to download a ROOT file and verify that it can be opened and inspected with uproot
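
Verifying the downloaded file might look roughly like this (a sketch using modern uproot 4+ syntax; the file, tree, and branch names are placeholders):

```python
# Sketch: open the downloaded flat ROOT file and confirm it is readable.
# File, tree, and branch names are placeholders.
import uproot

with uproot.open("downloaded_result.root") as f:
    print(f.keys())                       # list the objects in the file
    tree = f["servicex_tree"]             # placeholder tree name
    print(tree.num_entries)               # basic sanity check
    arrays = tree.arrays(["el_pt"], entry_stop=10)  # read a few events
    print(arrays["el_pt"])
```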

Read a ROOT File

Story

As an analyzer I want to be able to read ROOT files so I can extract data from them

Acceptance Criteria

  1. Launch a C++ ROOT job with a path and set of columns
  2. Output simple logging of branch names

Assumptions

  1. Runs inside a Docker container
  2. Launched as Kubernetes job
  3. Only one file at a time

Autoscale ServiceX

Story

As a ServiceX administrator I want critical processes to scale so I can meet demand

Acceptance Criteria

  1. How will we know when this story is complete

Assumptions

  1. List assumptions behind this story

Do not use a PVC between the C++ writer and runner containers

Currently

The transformer consists of two containers, since Python 3 and the ATLAS runtime do not get along well. The first container writes the C++ code, and the second compiles and runs it. To communicate between the two, everything is written to a Kubernetes PVC.

Why change

PVCs are generally a pain to deal with and add another point of configuration; don't use them unless there is a real reason. Given that writing these files takes on the order of hundreds of milliseconds, caching is really not needed.

Change to

Two options were considered:

  • Write a zip of the files to a database
  • Write a zip of the files into the message that is sent to the runner container.

The latter seems to be the simplest solution. The files are quite small, especially for something like ServiceX, where the queries are not building hundreds of histograms. I expect the zipped C++ to be well under 100 KB, and probably more like 10 KB.
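
For the embedded-zip option, a sketch of the producer side: the generated sources are zipped in memory, base64-encoded, and embedded in the JSON message published to RabbitMQ. The pika package is assumed, and the queue name and JSON fields are placeholders.

```python
# Sketch: embed a zip of generated source files directly in the RabbitMQ
# message sent to the runner container. Queue name and JSON fields are
# placeholders; pika is assumed for RabbitMQ access.
import base64
import io
import json
import zipfile
import pika

files = {"transformer.cxx": "// generated C++\n", "run.sh": "#!/bin/sh\n"}

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    for name, contents in files.items():
        zf.writestr(name, contents)

message = json.dumps({
    "request-id": "1234",                                   # placeholder
    "code-zip": base64.b64encode(buf.getvalue()).decode("ascii"),
})

conn = pika.BlockingConnection(pika.ConnectionParameters("rabbitmq"))
channel = conn.channel()
channel.queue_declare(queue="transformer-code")             # placeholder queue
channel.basic_publish(exchange="", routing_key="transformer-code", body=message)
conn.close()
```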

Schedule data delivery request

Story

As an analyzer I want my data delivery request to be scheduled so my request can be honored

Acceptance Criteria

  1. Data delivery request pulled from queue
  2. No-op transform started in kubernetes

Assumptions

  1. Queue implemented in redis
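
A sketch of the queue mechanics under that assumption, using redis-py; the queue name and message fields are placeholders.

```python
# Sketch: queue and dequeue delivery requests with Redis lists (redis-py).
# Queue name and message fields are placeholders.
import json
import redis

r = redis.Redis(host="redis", port=6379)

# Producer side: the REST app enqueues an accepted request.
r.rpush("servicex-requests", json.dumps({"request-id": "1234", "did": "some-did"}))

# Consumer side: the scheduler blocks until a request is available,
# then would launch the (no-op) transform job in Kubernetes.
_, raw = r.blpop("servicex-requests")
request = json.loads(raw)
print("scheduling transform for", request["request-id"])
```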

Secure access to REST Endpoint

Story

As an analyzer I want to securely connect to ServiceX so I can make use of the resources

Assumptions

  1. Will use a Globus Auth JWT (see the sketch after the acceptance criteria)
  2. We'll have a valid user database to whitelist specific users
  3. We'll need some sort of user interface to approve users that come along and request access
  4. The non-user facing endpoints are not exposed via ingress controller
  5. The user facing endpoints are exposed via ingress controller
  6. New endpoint for accessing my user record

Acceptance Criteria

  1. Given I have a valid Globus Auth token and I have been granted access to the system, when I submit a transform request, then it should be accepted
  2. Given I have an invalid Globus Auth token, when I submit a transform request, then it should be rejected
  3. Given I have a valid token but have not been granted access to the service, when I submit a transform request, then it will be rejected
  4. Given I have a valid token and I've been granted access to the service, when I attempt to connect to any of the non-user-facing endpoints, then I should receive an error
  5. Given I have a valid token, when I access the /user endpoint, then I receive a JSON document describing my user record
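
A hedged sketch of the token check (not the actual ServiceX_App implementation): a Flask decorator that validates a bearer JWT with PyJWT and consults a user whitelist. The signing key handling, claim names, and whitelist lookup are assumptions.

```python
# Sketch: protect a Flask endpoint with a bearer JWT. Signing key, claims,
# and the whitelist check are assumptions, not the real ServiceX implementation.
from functools import wraps
import jwt                      # PyJWT
from flask import Flask, request, jsonify

app = Flask(__name__)
SIGNING_KEY = "replace-with-real-key-or-secret"   # placeholder
APPROVED_USERS = {"alice@example.org"}            # placeholder whitelist

def require_token(view):
    @wraps(view)
    def wrapper(*args, **kwargs):
        header = request.headers.get("Authorization", "")
        if not header.startswith("Bearer "):
            return jsonify(error="missing token"), 401
        try:
            claims = jwt.decode(header[len("Bearer "):], SIGNING_KEY, algorithms=["HS256"])
        except jwt.PyJWTError:
            return jsonify(error="invalid token"), 401
        if claims.get("sub") not in APPROVED_USERS:
            return jsonify(error="user not approved"), 403
        return view(*args, **kwargs)
    return wrapper

@app.route("/user")
@require_token
def user_record():
    return jsonify(user="alice@example.org")   # placeholder user record
```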

The AST that selects the columns should be in text format

As an Analyzer I want to specify a list of columns in an extensible language representation so I can extract the columns I need for my analysis with a future path to more extensive transformations

Assumptions

Acceptance Criteria

  1. A README documenting how a selection is specified
  2. A real world example of selecting columns

Currently

The user builds a python3 Abstract Syntax Tree (AST). This AST is translated into C++ code. The AST contains information about the dataset, selection cuts, and columns to be eventually extracted. The AST is currently pickled, and then base64 encoded when it is sent to the server.

Why Change

There are a host of reasons. This approach was chosen because it was the simplest while we were building up an experimental version. The big ones driving this change now are:

  • Pickle is a known security hole
  • You can't figure out what is being written by looking at the message being sent to a server.
  • Extra data that Python needs, but a model like this does not, gets carried along.

Change To

We will use a text representation of the AST. For example, the Select operation might look like Select(source, (lambda a (code))) or similar.
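
To make the text form concrete, here is a toy sketch (not the real ServiceX/func_adl parser) that parses a fully parenthesized variant of that example into nested Python lists — the kind of structure a code generator could walk instead of unpickling a Python AST.

```python
# Toy sketch of parsing a LISP-like text AST such as
#   (Select source (lambda a (code)))
# into nested Python lists. Illustrative only, not the real parser.
def tokenize(text):
    return text.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    token = tokens.pop(0)
    if token == "(":
        node = []
        while tokens[0] != ")":
            node.append(parse(tokens))
        tokens.pop(0)  # discard ")"
        return node
    return token

expr = "(Select source (lambda a (code)))"
print(parse(tokenize(expr)))   # ['Select', 'source', ['lambda', 'a', ['code']]]
```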

Hello ServiceX

Story

As an analyzer I want a ServiceX service so I can obtain the data I need where I need it to perform analysis

Acceptance Criteria

  • Repository is set up
  • Skeletal directory structure
  • Travis CI build
  • README for the project
  • Domain name with SSL Cert
  • Heartbeat endpoint
  • Dockerfile
  • Kubernetes deployments
  • Helm chart

Assumptions

  1. Service implemented in Flask
  2. RAML for Heartbeat endpoint
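
A minimal sketch of the heartbeat endpoint under the Flask assumption (the route name is a placeholder):

```python
# Sketch: a Flask app with a heartbeat endpoint. The route name is a placeholder.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/healthz")
def heartbeat():
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```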

Cache N-tuples

Story

As an analyzer I want to cache n-tuples so I can easily re-run my analysis

Acceptance Criteria

  1. Projection written to cache

Assumptions

  1. AVRO records written to Kafka

Prometheus Instrumentation for Services

Story

As a ServiceX Administrator I want the services to publish metrics to Prometheus so I can monitor and tune the system correctly

Assumptions

  1. Use the prometheus client (see the sketch after the acceptance criteria)

Acceptance Criteria

  1. Given I log into Prometheus when I run a transform I can see the following metrics for the Transformer:
  • Number of events transformed
  • Number of bytes input
  • Number of bytes output
  • Request ID
  • Events/Second
  2. Given I log into Prometheus when I run a transform I can see the following metrics for the DID Finder:
  • Number of files found
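
A minimal sketch with the official prometheus_client package; the metric and label names here are placeholders, not the ones ServiceX actually exports.

```python
# Sketch: expose transformer metrics with prometheus_client.
# Metric and label names are placeholders, not ServiceX's real metric set.
from prometheus_client import Counter, Gauge, start_http_server

EVENTS = Counter("events_transformed_total", "Events transformed", ["request_id"])
BYTES_IN = Counter("bytes_input_total", "Bytes read", ["request_id"])
BYTES_OUT = Counter("bytes_output_total", "Bytes written", ["request_id"])
RATE = Gauge("events_per_second", "Recent transform rate", ["request_id"])

start_http_server(8000)   # scrape endpoint on :8000/metrics

def record_chunk(request_id, n_events, n_bytes_in, n_bytes_out, elapsed):
    EVENTS.labels(request_id).inc(n_events)
    BYTES_IN.labels(request_id).inc(n_bytes_in)
    BYTES_OUT.labels(request_id).inc(n_bytes_out)
    RATE.labels(request_id).set(n_events / max(elapsed, 1e-9))
```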

Create a 0.1 Release

We are ready for a 0.1 release. This will mean:

  • Tagging the current ElasticSearch-backed code
  • Deleting unused code
  • Making good defaults in the helm chart values.yaml that will allow trying out the system
  • Create a CHANGELOG
  • Tag with v0.1
  • Deploy helm package
  • Tag docker images

Stream into Spark

Story

As an analyzer I want to stream all of the events from my delivery request into spark so I can scale up my analysis

Acceptance Criteria

  1. Data is streamed into Spark dataframes

Assumptions

  1. List assumptions behind this story

Stream First n events into Pandas

Story

As an analyzer I want the n-tuples to be streamed into my analysis so I can perform my calculations

Acceptance Criteria

  1. Provide token
  2. Provide number of events to retrieve
  3. Dataframe is filled

Assumptions

  1. Streamed to Pandas Dataframe in my local environment
  2. Doesn't remember where I am. This always gives me the first n events
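
A sketch of the client side under these assumptions, where the transformer is taken to publish Arrow IPC stream messages to a per-request Kafka topic (kafka-python and pyarrow; the topic and broker names are placeholders):

```python
# Sketch: pull the first n events from a per-request Kafka topic into pandas.
# Assumes each message is an Arrow IPC stream; topic/broker are placeholders.
import pandas as pd
import pyarrow as pa
from kafka import KafkaConsumer

def first_n_events(topic, n, brokers="kafka:9092"):
    consumer = KafkaConsumer(topic, bootstrap_servers=brokers,
                             auto_offset_reset="earliest")
    frames, total = [], 0
    for message in consumer:
        table = pa.ipc.open_stream(message.value).read_all()
        frames.append(table.to_pandas())
        total += len(table)
        if total >= n:
            break
    consumer.close()
    return pd.concat(frames).head(n)

df = first_n_events("servicex-request-1234", 1000)
```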

Make sure the transformer can run on the DID specification generated by ServiceX's DID Finder

Currently

The transformer can run on anything that the EventLoop framework (Analysis) can take. This includes lots of options, including http, xroot, and, of course, file.

Why Change

The DID finder, given a dataset, generates a set of files to access via XCache. I'm nervous that my code might not support it, so it needs to be tested.

Change To

Get a valid XCache file that can be accessed and test. If it does not work, fix it.

Query status of delivery request

Story

As an analyzer I want to know the status of my request so I can plan my time

Acceptance Criteria

  1. Submit token
  2. Receive a status object

Assumptions

  1. List assumptions behind this story

Excessive Access Level Required for Helm Chart

Story

As an SSL administrator I want to be able to deploy ServiceX while granting minimal privileges so I can keep the cluster secure

Assumptions

  1. The ServiceX Helm chart creates a role for deploying transformer jobs into the cluster. This role only needs access to the ["jobs"] resource

Acceptance Criteria

  1. I am able to deploy the ServiceX Helm chart to River

Integration Tests

Story

As a developer I want integration tests for ServiceX so I can verify that a deployment works correctly

Assumptions

Acceptance Criteria

  1. Docker image with test
  2. Ability to optionally include it in the Helm Chart
  3. Test runs to conclusion and gives some indication that the service works

Stream next n events into Pandas

Story

As an analyzer I want all of the events from my request so I can complete my analysis

Acceptance Criteria

  1. Provide token
  2. Receive next n events

Assumptions

  1. List assumptions behind this story

Authenticate into ServiceX

Story

As an analyzer I want to authenticate into the service so I can securely access data I'm entitled to

Acceptance Criteria

  1. User completes log in process
  2. System can tell if user is authorized for ATLAS or CMS data

Assumptions

  1. Log on with Globus Auth
  2. Open question: how do we know whether the user is ATLAS or CMS?

Spike: Root Cause Analysis of Failed Transformer Jobs

Problem

In processing the 10TB Dataset, we encountered several ROOT files which appear to be corrupt.

Approach

Determine if any of these files are corrupt at the source and document a process for requesting corrections.

See if any of the problems are caused by corruption between XCache and the transformer that can simply be corrected by starting over with the file.

See if any of them are caused by corrupted downloads into XCache and if they can be corrected by flushing them from XCache and reloading them.
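
One simple triage step (a sketch, assuming the files are reachable at their XCache paths and that uproot with xrootd support is available) is to try opening and fully reading each suspect file and record which ones raise exceptions:

```python
# Sketch: flag ROOT files that cannot be opened or read, as a first triage step.
# The list of file paths is a placeholder.
import uproot

suspect_files = ["root://xcache//store/path/file1.root"]   # placeholder paths
bad = []
for path in suspect_files:
    try:
        with uproot.open(path) as f:
            for name in f.keys():
                f[name]          # force each object to be read/parsed
    except Exception as err:      # corrupt or unreachable file
        bad.append((path, str(err)))

for path, reason in bad:
    print(f"possibly corrupt: {path} ({reason})")
```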

Assumptions

This story is best tackled after we have the dead letter queue implemented in #23

Move current github repos containing the C++ transformer into new homes

Currently

All repos exist under the user @gordonwatts.

Why Change

Now that the code will be used by more than just me, it is time to move it.

Change To

  • Repo func_adl_cpp_writer will move to the ssl-hep repo. This contains the top-level driver that will communicate with the ServiceX app.
  • Repo func_adl_cpp_runner will move to the ssl-hep repo. This contains the top-level driver that will communicate with the ServiceX app.
  • Repo functional_adl will be split into three separate repos, which will live in the iris-hep namespace/organization:
    • func_adl - client code that is included by someone that wants to access data from ServiceX, and a second namespace package that gives access to lots of ast utilities. This is code likely useful for many backends.
    • func_adl_xAOD - Contains the code that converts the simplified ast into C++ code and script files appropriate for running on xAOD files in an analysis container produced by the experiment (e.g. runner, above). It also contains the single bit of front-end code that is used to submit a request over the wire. It will, of course, need to be adapted for ServiceX's API.

Remove hardcoded passwords from system

Problem

Most of the passwords for the dependent Helm-charted services are externalized into values.yaml, but there are still a few references to hardcoded passwords in the Python code.

Acceptance Criteria

When I change the passwords for each of the dependent services in the Helm chart's values.yaml, the system should continue to work

Log requests to data warehouse

Story

As a ServiceX administrator I want all requests to be logged so I can analyze usage patterns

Acceptance Criteria

  1. Each delivery request is logged
  2. Requests are timestamped

Assumptions

  1. List assumptions behind this story

Downloadable Parquet File Results

Story

As an analyzer I want to download my smaller transformed datasets directly so I can use familiar tools

Assumptions

  1. Will use Minio object store to hold the parquet files
  2. This feature can be triggered in the REST request.
  3. For now the only file option will be Parquet; future file formats will be needed
  4. The new bucket will be named after the request-id
  5. Each ROOT file will be transformed into a parquet file object in the bucket
  6. It will be easy to construct a URL for downloading the parquet files using http

Acceptance Criteria

  1. Given I have a valid transformation request that specifies Parquet persistence, when the transformation is complete, then I should see a bucket named with my request ID containing an object for each incoming ROOT file, and I can download the Parquet files and verify that they are valid
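
A sketch of the persistence step under the assumptions above, using pyarrow.parquet and the minio client; the endpoint, credentials, and object names are placeholders.

```python
# Sketch: write a transformed table to Parquet and store it in a Minio bucket
# named after the request id. Endpoint, credentials, and names are placeholders.
import pyarrow as pa
import pyarrow.parquet as pq
from minio import Minio

request_id = "1234-abcd"                       # placeholder request id
table = pa.table({"el_pt": [41.2, 12.9, 55.0]})

pq.write_table(table, "part0.parquet")

client = Minio("minio:9000", access_key="miniouser",
               secret_key="miniopass", secure=False)   # placeholder credentials
if not client.bucket_exists(request_id):
    client.make_bucket(request_id)
client.fput_object(request_id, "part0.parquet", "part0.parquet")

# A time-limited download URL for the analyzer.
print(client.presigned_get_object(request_id, "part0.parquet"))
```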

Submit data delivery Request

Story

As an analyzer I want to be able to submit a delivery request so I can obtain the data I want

Acceptance Criteria

  1. Python package
  2. Connect to ServiceX
  3. Create a data delivery request object with single DID (file or dataset)
  4. Add list of columns
  5. Submit request
  6. Receive token back

Assumptions

  1. Using Redis queue to queue requests

Build a Python based ROOT -> Arrow buffer transformer

Currently

The xAOD code writes out a simple ROOT file. Here, simple is defined as a ROOT file that is not only readable by a vanilla version of ROOT, but contains no object columns (not even TLorentzVectors). The ROOT file is copied to an xrootd server after it is generated.

Why Change

ServiceX uses Arrow buffers to communicate the output of the transformer to the next stage of the system (Kafka).

Change To

The code that generates the ROOT file runs very fast (less than one minute for an 8 GB input file, for example, when no calibrations are required). The fastest way to get this feature in is to write a small Python 2 program that reads the ROOT file with uproot, writes it out in appropriately sized chunks to an Arrow buffer, and ships the data to the appropriate place.

A future work item might be to write arrow buffers while running the C++ code, but that is more work as it will require writing and integrating the arrow buffer filling in C++.
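
In that spirit, a sketch of the chunked read-and-convert loop (written here against modern uproot and awkward rather than the Python 2 stack described above; the file, tree, and column names are placeholders):

```python
# Sketch: read a flat ROOT file in chunks with uproot and convert each chunk
# to an Arrow table. Uses modern uproot/awkward rather than the Python 2
# toolchain described above; names are placeholders.
import awkward as ak
import uproot

columns = ["el_pt", "el_eta"]

for chunk in uproot.iterate("simple_output.root:servicex_tree",
                            expressions=columns, step_size="50 MB"):
    arrow_table = ak.to_arrow_table(chunk)
    # ... ship arrow_table to Kafka / the object store here ...
    print(arrow_table.num_rows)
```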

Flat Root File Transformer

Story

As an Analyzer I want to extract columns out of a flat ROOT dataset so I can efficiently perform my analysis

Assumptions

  1. Flat ROOT files can be read using uproot

Acceptance Criteria

  1. Submit a transformation request with a nanoAOD dataset and a list of CMS columns. Receive the requested data in Kafka or Minio object store

Tasks

  • Chunk Uproot Awkward Arrays
  • Update Preflight check
  • Refactor common transformer code into package
  • Publish library of common transformer code on PyPI
  • Develop process to build flat root docker images for transformer

Kafka on River Cluster

Problem

Currently we are only running Kafka on the GKE cluster. We need to make sure it can easily and reproducibly be deployed on SSL infrastructure.

Assumptions

  1. Make use of local-volumes storage class
  2. Deploy three brokers
  3. Add DNS entries to slateci.net
  4. Deploy Kafka manager dashboard too

Acceptance Criteria

  1. Deploy Kafka to river from helm chart with values.yaml
  2. Log into Kafka manager dashboard
  3. Use Kafka shell script to create topic, publish a message and consume
  4. Documentation in Markdown in ServiceX repo

Analysis of Completed Transformation Jobs

Story

As a sponsor of ServiceX I want to be able to analyze metrics for completed transformation jobs so I can understand the performance and benefits of the Service

Acceptance Criteria

  1. There is a post-request service which provides a convenient, easy-to-consume summary of the execution, including:
  • files correctly processed (success)
  • failures
  • % success
  • bytes transformed
  • bytes written
  • files written
  • min, max, avg transformer pods invoked
  • cpu time used
  • elapsed time
  • cpu efficiency
  • min, average, and maximum I/O per transform and in aggregate

.. and such

Assumptions

  1. Data is stored in central ElasticSearch

Unit tests for ServiceX App

Story

As a ServiceX developer I want unit tests and a CI job for the app so I can be sure my changes are correct

Acceptance Criteria

  1. CI Job in Travis with
    a. Flake8 checks
    b. Unit tests
    c. Code Coverage report

Assumptions

  1. List assumptions behind this story

Retry ROOT files on Failure

  • Remove xcache and try again
  • Keep track of last event sent to Kafka
  • Improve reporting of root cause of failure

Dead Letter Queue for Error Causing Root Files

Story

As an analyzer I want bad ROOT files to be reported on a dead letter queue so I can complete my analysis and know which files were not completely included

Acceptance Criteria

  1. Given I have a job, when it is submitted then I should see a dead-letter queue created for the request_id
  2. Given I have a submitted job, when an exception is found in a ROOT file, then I should see an entry for the ROOT file in the dead letter queue and the offending file entry is removed from the transformer queue

Assumptions

Query Trees Available in Dataset

Story

As an analyzer I want to be able to view the list of trees available in a dataset so I can correctly form a transformation request

Assumptions

  1. Update the DID Finder to just return a single sample ROOT file reference
  2. New REST endpoint
  3. DID specified in the JSON body of the request
  4. Response is a JSON document with all of the tree names (no cycle IDs)

Acceptance Criteria

  1. Given I have a DID when I submit a tree list request then I should get back a JSON response with all of the tree names

Elastic Search Logging of Events and Errors

Story

As a ServiceX Administrator I want to be able to view and analyze events and error reports from the ServiceX transformers so I can manage my system

Assumptions

  1. All events and error reports go to an Elastic Search instance
  2. Elastic Search is optional in a ServiceX deployment
  3. Events are presented as JSON docs

Acceptance Criteria

  1. Transform Start messages are logged to ES with timestamps
  2. Transform complete events logged to ES
  3. Transform exceptions are logged to ES
  4. Ability to generate basic graphs and reports using Kibana
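
A sketch of the transformer side, assuming the official elasticsearch Python client with the 7.x-style index(body=...) call; the index name, document fields, and connection details are placeholders.

```python
# Sketch: log transformer lifecycle events as JSON documents to Elasticsearch.
# Index name, fields, and connection settings are placeholders.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://elasticsearch:9200"])

def log_event(request_id, status, detail=""):
    es.index(index="servicex-transformer-events", body={
        "request_id": request_id,
        "status": status,                 # e.g. "start", "complete", "exception"
        "detail": detail,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

log_event("1234-abcd", "start")
```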

Autoscale Transformer Jobs

Story

As an analyzer I want the number of pods allocated to my job to scale as work becomes available so I can efficiently perform my analysis

Assumptions

  1. Users will continue to provide num-workers as a property in the request. This will now be interpreted as the maximum number of workers.
  2. Job will start with one node and then scale up to max as workers get engaged
  3. Will need to convert the Job object to a Deployment in order to take advantage of the Kubernetes Autoscaler
  4. There will be a new value in the helm chart setting the absolute max value for workers. This will be enforced at transform submit time

Acceptance Criteria

  1. Given I have a job with our sample 100 GB dataset and a max number of workers set to 17, when I submit the job, then I should see only one worker.
  2. Given I have submitted our sample 100 GB dataset with a max number of workers set to 17, when the DID Finder has found multiple ROOT files, then the number of workers should increase but never go over 17 workers.
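
One way the worker cap from assumptions 1 and 4 might be enforced is sketched below with the kubernetes Python client: attach a HorizontalPodAutoscaler to the transformer Deployment, capped at the requested maximum. This is a hypothetical sketch, not the actual ServiceX_App code; names, namespace, and the CPU target are placeholders.

```python
# Sketch: cap transformer scaling with an HPA via the kubernetes Python client.
# Deployment name, namespace, and CPU target are placeholders.
from kubernetes import client, config

config.load_kube_config()   # or load_incluster_config() inside the cluster

max_workers = 17            # taken from the transform request
namespace = "servicex"

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="transformer-1234"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="transformer-1234"),
        min_replicas=1,
        max_replicas=max_workers,
        target_cpu_utilization_percentage=80,
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace=namespace, body=hpa)
```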

DID Finder For CMS

As a CMS analyzer I want to be able to transform CMS datasets so I can produce results

Assumptions

  1. This will connect to the CMS Rucio Server
  2. It will require an X509 proxy that works with the CMS virtual org

Acceptance Criteria

  1. Given I have CMS credentials and a transform that references a CMS DID, when I submit the request, then the DID Finder should report all of the ROOT files associated with that DID

Provide paths to files

Story

As an analyzer I want paths to locally cached files so I can begin transformation

Acceptance Criteria

  1. Scheduler picks next file to process

Assumptions

  1. Files served out of XCache
  2. Files prefetched into XCache using external command-line tools
  3. Only need one file at a time
