cms-enterprise / sbom-harbor

Repository for the SBOM Harbor.

Home Page: https://github.com/CMS-Enterprise/sbom-harbor

License: Other

Languages: Shell 0.32%, JavaScript 2.46%, Rust 96.79%, Mustache 0.39%, Dockerfile 0.04%
Topics: sbom, software-bill-of-materials, software-supply-chain, software-supply-chain-security

sbom-harbor's Introduction

Decommissioned!

⚠️ This project has been decommissioned and is no longer maintained or supported.



Overview

This project contains the Harbor application and enrichment providers that support it. Please refer to our book for a comprehensive explanation.

Status

Harbor is under active development. We are currently working at a 0.1.0 pre-release semver level. There is no guarantee of stable interfaces or backward compatibility at this time. We would be thrilled to have additional contributors and end-users, but we want to make sure you are aware of that before you decide to invest your time and resources.

Security

For more information about our Security, Vulnerability, and Responsible Disclosure Policies, see SECURITY.md.

Developer System Requirements

Environment

The following environment variables are referenced in code. When possible, defaults are provided that support the docker-compose configuration found in the sdk/devenv folder.

  • SNYK_TOKEN - A valid Snyk API token. Required if using the Snyk integrations.
  • HARBOR_FILE_STORE - Path specification for file storage. When using an S3StorageProvider this should be the bucket name with path prefix where you wish to store generated files. When using a FileSystemStorageProvider this should be a valid directory or volume on the host machine or container running the job.
  • DOCDB_CONFIG - DocumentDB connection configuration. If not set, tests will default to the configuration that supports the docker-compose.yaml environment specified in the sdk/devenv folder. The primary Harbor installation is backed by DocumentDB, but any MongoDB 5.0 compliant database should be usable. Dynamic configuration is not yet implemented, but pull requests are welcome if community members need this capability before we can get to it. The current DocumentDB config expects a JSON document with the following schema:
{
  "password":"<redacted>",
  "engine":"mongo",
  "port":27017,
  "dbInstanceIdentifier":"<documentdb-instance-identifier>",
  "host":"<documentdb-host-name>",
  "ssl":true,
  "username":"<redacted>"
}
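
As an illustration, here is a minimal sketch of deserializing this document with serde; the DocDbConfig struct and load_config function are hypothetical, not Harbor's actual types.

use serde::Deserialize;

// Hypothetical struct mirroring the DOCDB_CONFIG schema above.
#[derive(Deserialize)]
struct DocDbConfig {
    username: String,
    password: String,
    engine: String,
    host: String,
    port: u16,
    #[serde(rename = "dbInstanceIdentifier")]
    db_instance_identifier: String,
    ssl: bool,
}

fn load_config() -> Result<DocDbConfig, Box<dyn std::error::Error>> {
    // Read the JSON document from the DOCDB_CONFIG environment variable.
    let raw = std::env::var("DOCDB_CONFIG")?;
    Ok(serde_json::from_str(&raw)?)
}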

Secrets are programmatically pulled into the environment via direnv and the script in sdk/devenv/.envrc. In the terminal, when you cd into sdk/devenv, the direnv shell extension will automatically load the secrets into the necessary environment variables. Once you change to another directory, they will be automatically unloaded.

  1. Copy sdk/devenv/.env.example into sdk/devenv/.env
  2. Add values for the AWS profile and secret names
  3. cd sdk/devenv
  4. direnv allow .
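
For reference, a hypothetical sdk/devenv/.env is shown below; the variable names are illustrative assumptions, so use the keys found in .env.example.

AWS_PROFILE=<your-aws-profile>
HARBOR_SECRETS_NAME=<secrets-manager-secret-name>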

Getting Started as a Contributor

  1. Clone the repository and cd into its directory.

git clone git@github.com:cms-enterprise/sbom-harbor.git

cd sbom-harbor

  2. Install git pre-commit hooks.

pre-commit install

Project Documentation

Project documentation and additional developer guidance can be found on our GitHub Pages site.

Crate Documentation

The documentation for each crate can be generated from source using cargo or rustdoc. We plan to integrate the rustdoc output with the project documentation in time. However, that requires additional tooling that we haven't gotten to yet. That would make a great first contribution. If you are willing, a PR will definitely be considered.

To generate the crate documentation, clone the repository, and then run the following command from the root directory.

cargo doc --no-deps

Documentation for each crate will be generated and output to the target/doc subdirectory.

Building

To build all workspace targets on the local machine, run the following from the root directory.

cargo build

To build a single crate, run the following from the root directory.

cargo build -p <crate-name> # e.g. use harbor-api or harbor-cli as the final argument.

By default, this will produce a debug build in the target/debug directory. To produce a release binary, run the following.

cargo build --release

The release build can be found in the target/release directory.

Try it out

There are several use cases addressed by this repository. The following sections detail how to try out each one.

Local Development Environment

If you wish to run Harbor locally using the development environment found in the sdk/devenv directory, open a new terminal and run the following command.

cd sdk/devenv && docker compose up

SBOM Ingestion & Enrichment

Many teams at CMS have been onboarded to Snyk. That fact made a Snyk integration an appealing first target. Currently, Harbor supports ingesting SBOMs using the Snyk API. A generic GitHub ingestion provider is imminent. Similarly, an enrichment provider based on an Open Source vulnerability data provider is on the short-term roadmap. Stay tuned for updates on how to get started with purely Open Source tools.

Make sure all environment variables are set and then run the following command.

Note: this assumes you are running the command from the root directory of the repository and that you have run a release build as described above.

./target/release/harbor sbom -p snyk

Once you have ingested the SBOMs from the Snyk API, you can then use Harbor to call the Snyk API for all identified packages and store any known vulnerability findings for each package.

./target/release/harbor enrich -p snyk

If you wish to run the above commands against the local development environment provided in the sdk/devenv directory, add the --debug flag.

./target/release/harbor sbom --debug -p snyk

sbom-harbor's People

Contributors

dependabot[bot], derekstrickland, dynamike, jonshattuck, qtpeters, rileydakota, sbolel, talentedmrjones

sbom-harbor's Issues

Task commit can fail with invalid map keys

When a task runs and stores errors in the task ref map, the task commit update can fail if the map key contains special characters that DocumentDB considers invalid. This happens with purls all the time.

To Reproduce

This does not fail locally against Mongo in the devenv. To see the problem in action, review the daily task logs in the console.

Expected behavior

Task documents should get updated with summary statistics, errs, and references when tasks complete.
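
A minimal sketch of one possible fix, assuming the task ref map is keyed by purl strings; the escaping scheme and function name are hypothetical, not Harbor's actual implementation.

// Hypothetical key sanitizer: DocumentDB field names cannot contain dots or
// start with '$', both of which occur routinely in purls.
fn sanitize_map_key(purl: &str) -> String {
    purl.replace('$', "%24").replace('.', "%2E")
}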

Add a CPE Enrichment Provider

Target Audience
The Harbor system.

What’s the Value
The Harbor system itself needs a CPE ID to correlate vulnerability data across enrichment sources.

Details
The Harbor system needs a way to correlate CPEs to PURLs because different enrichment sources in the vulnerability ecosystem use one ID type or the other. For example, Ion Channel and NVD use CPE, whereas Snyk uses PURL.

Definition of Done
  • An enrichment provider runs each day and downloads public PURL-to-CPE data sources.
  • The provider then attempts to resolve the Harbor PURL index against the CPE index.
  • It reports back basic statistics (total number, percentage) on how many PURLs cannot be resolved to a CPE; a sketch follows below.
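
A sketch of the reporting step, using hypothetical in-memory indexes (Harbor's actual index types are not shown in this issue):

use std::collections::{HashMap, HashSet};

// Hypothetical indexes: the purls Harbor has seen, and a purl -> CPE mapping
// built from the downloaded public data sources.
fn resolution_stats(purls: &HashSet<String>, purl_to_cpe: &HashMap<String, String>) -> (usize, f64) {
    let unresolved = purls.iter().filter(|p| !purl_to_cpe.contains_key(*p)).count();
    let percentage = 100.0 * unresolved as f64 / purls.len().max(1) as f64;
    (unresolved, percentage)
}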

View Basic Vulnerability Metrics through SDL Queries

Target Audience

  • ACT Auditors
  • Harbor Team
  • ISSOs & ADO Teams
  • SaaSG Users

What’s the Value

  • Ability to identify the blast radius of identified Zero-Days
  • Identify and remediate systems that are affected by out-of-policy dependencies

Details

The goal is to have a set of standard queries that can answer baseline questions such as:

  • What is the blast radius of a dependency (e.g. affected Repositories or Vendor Products)?
  • What dependencies are currently in use that do not comply with policy?
  • What remediations are known to address out-of-policy dependencies?

Definition of Done

  • Must be able to run a blast radius query for a specific dependency
  • Must be able to identify dependencies with vulnerabilities that exceed a CVSS score threshold
  • Must be able to identify dependencies with vulnerabilities that match a specified severity
  • Must be able to identify dependencies with vulnerabilities that exceed an EPSS score threshold
  • Must be able to tie SBOM targets to a FISMA system when FISMA ID is available
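
As a sketch of what such queries could look like with the mongodb Rust driver; the field names are assumptions about the analytic schema, not the actual SDL layout.

use mongodb::bson::{doc, Document};

// Hypothetical filters for two of the baseline questions above.
fn baseline_filters() -> (Document, Document) {
    let cvss_threshold = doc! { "vulnerabilities.cvss.score": { "$gte": 7.0 } };
    let severity_match = doc! { "vulnerabilities.severity": "CRITICAL" };
    (cvss_threshold, severity_match)
}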

Create, View, and Edit Vendor Products

Target Audience

  • SaaSG team members

What’s the Value

Provides the SaaSG team the ability to achieve compliance with the EO.

Details

As a power user, I can view and manage a list of products associated with a vendor.

Definition of Done

  • Create a Product associated with a Vendor.
  • Edit the name or version
  • Delete a product.
  • Filter and sort the list of products for a vendor by name, version, last SBOM upload date.

Add Xrefs to GitHub Provider data

Currently there are no Xrefs in the GitHub Provider Packages. Now that we have the necessary models in place, we can wrap up the Provider data and put it in Prod.

Automate daily production check

Target Audience
SBOM Harbor team

What’s the Value
Knowing the state of production: it provides insight into whether scheduled processes ran, and into the volume and quality of data being ingested and produced.

Details
Schedule a task to query DocDB and the logs, and send a message to Slack.

Definition of Done
That @qtpeters can stop doing manual checks

Implement dagger for local dynamic secrets injection

Target Audience
SBOM Harbor engineers

Details

The codebase needs to access secrets, such as the Snyk API token, in order to run tests and develop locally. Instead of storing these in plain text, or supporting bespoke scripts for each platform, we can leverage dagger as a portable, declarative way to execute a bootstrapping pipeline that fills in environment variables.

Value Add
The ability to pull secrets from AWS Secrets Manager and securely inject them into the local runtime environment without storing them locally (plain text or otherwise). Also, the ability to define a database name as an env var so each engineer can develop against their own database in the dev DocumentDB instance, which will help remove the differences between DocDB and Mongo.

Definition of Done
Engineers can reliably, across platforms, run dagger do test and see their Rust code execute locally using secrets pulled from AWS Secrets Manager.

E2E Test Automation

Target Audience

  • The Harbor system
  • The Harbor team

What’s the Value

  • Provide confidence that code changes can safely be auto-promoted all the way to prod.
  • Identify and fix defects earlier in the cycle, when they are less expensive to address.
  • Achieve a high level of functional test coverage with a pragmatic view on cost.

Details

CI should prove that any code changes will be able to run against a schema that matches current production.

Definition of Done

  • CI uses the test GitHub org to limit the data volume required to run all pipelines and tests.
  • All tasks are run in a sequence that matches production.
  • All tasks that are currently run in a manual debug mode only, must be modified to run automatically as part of CI.
  • Once all tasks have been run, all automated tests are run, and must pass.
  • All tasks are run a second time, and data consistency checks are run to ensure that no duplication of data occurs.

Update Snyk Enrichment to filter out non-Snyk packages

Description

  • When the Snyk Enrichment provider was originally written it was the only source of Package data.
  • Now that the GitHub Provider is running against multiple GitHub orgs, the Snyk provider is attempting to process Package entries that are not relevant to it.
  • These end up in the logs as errors which is not ideal from an observability and monitoring perspective.
  • The Snyk Enrichment Task process loop needs to detect that the Package doesn't have a Snyk Xref, log an info-level message that the Package is being skipped for this reason, and then continue to the next package without ever calling the Snyk service (a sketch follows after this list).
  • We should NOT modify the Snyk service, since we want to maintain this error handling as a way to detect any future similar logic problems.
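
A sketch of that guard, using stand-in types (Package and Xref here are illustrative, not Harbor's actual models):

// Illustrative stand-ins for Harbor's Package/Xref models.
struct Xref { kind: String }
struct Package { purl: String, xrefs: Vec<Xref> }

fn process(packages: &[Package]) {
    for package in packages {
        // Skip packages that were not ingested via Snyk without ever calling
        // the Snyk service, logging at info level rather than error.
        if !package.xrefs.iter().any(|x| x.kind == "snyk") {
            println!("INFO: skipping package without Snyk Xref: {}", package.purl);
            continue;
        }
        // ... call the Snyk service for relevant packages ...
    }
}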

Add metadata map to Task type

Describe the bug
Currently, there is no way to differentiate the different tasks run by the same provider in DocDB. This leaves the nightly production report with no way to say which organization a job was executed on. The major issue right now is that the GitHub Provider runs on three separate organizations with no way to tell which is which.

To Reproduce
Run the daily report and observe that there are 3 harborSyft runs in it, without being able to tell which org they ran on.

Expected behavior
All task runs are differentiated so the report is more useful
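
One possible shape for the change, as a hedged sketch; the Task fields shown are illustrative, not the full type.

use std::collections::HashMap;

// Illustrative Task type: a free-form metadata map lets each provider record
// run context, e.g. which GitHub organization a harborSyft run targeted.
struct Task {
    id: String,
    provider: String,
    metadata: HashMap<String, String>,
}

fn tag_org(task: &mut Task, org: &str) {
    task.metadata.insert("github_org".into(), org.into());
}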

Ion Channel Research

  • Where are the missing metrics we expected to see?
  • Did I do something wrong (in general code review)?
  • What does org_package_count mean?
  • Will the SBOM endpoint include transitive dependencies?
  • Dig into some errors

Add Go support to the GitHub Provider

Target Audience

Users of the GitHub Provider

What’s the Value
Projects written in Go with a go.mod file will be processed with the correct Syft cataloger. Repos that contain other projects (such as JavaScript) alongside a Go project will also be processed correctly.

Details
This issue adds the ability for the GitHub Provider to look for go.mod files, move to the directory containing the file, and run Syft with the correct cataloger for Go.

Definition of Done
This ticket is finished when Go projects are handled correctly every time.

Add Open Telemetry instrumentation for New Relic

Target Audience
SBOM Harbor engineering team and customer

Details
New Relic is the metrics, tracing, and logging platform of choice, and OpenTelemetry is used to feed that data from the app.

Value Add
Operational observability helps engineers identify issues with running the application in production.

Definition of Done
As a first pass, metrics can be recorded in New Relic

Analytic pipeline not being cleared after each report is created

Describe the bug

The analytic pipeline is not being cleared after each analysis. There are many examples of this error message in the CloudWatch logs: "Pipeline stages exceeded maximum supported number: 500". The pipeline currently lives in the sdk/core/src/services/analytics/sboms/service.rs file

and is loaded up when the generate_detail() and get_primary_purls() functions are run. On the Pipeline struct, there is a clear() method.

This method is supposed to clear the pipeline after each report is created, and it is being called in the task in sdk/platform/src/persistence/mongodb/analytics.rs; however, it doesn't seem to be working.

To Reproduce
Run the detail analytic report and watch for the error messages described above.

Expected behavior
The pipeline is cleared after each report, and the error messages described above no longer appear.
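
For reference, a minimal sketch of the clear-after-each-report pattern the issue expects; the Pipeline shown is a stand-in, not the type in service.rs.

// Stand-in for the aggregation pipeline described above: stages accumulate
// across reports unless clear() empties the backing Vec.
struct Pipeline { stages: Vec<String> }

impl Pipeline {
    fn clear(&mut self) {
        self.stages.clear();
    }
}

fn run_reports(pipeline: &mut Pipeline, purls: &[String]) {
    for _purl in purls {
        // ... push stages and generate the detail report for this purl ...
        // Clearing after every report keeps the pipeline under DocumentDB's
        // 500-stage limit.
        pipeline.clear();
    }
}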

Automated Data Validation

Target Audience

  • Harbor engineers
  • Harbor stakeholders
  • SDL team

What’s the Value

  • Eliminate human toil and inaccuracies related to manually validating data output of automated jobs.
  • Provide stakeholder transparency relative to the current quality posture of the system.
  • Help operational team members more rapidly troubleshoot and identify inconsistencies between Harbor and any export targets.

Details

The solution should provide a way of validating the data accuracy, consistency, volume, and currency.

Definition of Done

  • Data integrity
    • For any given SBOM prove that:
      • Harbor and the SDL have the same number of dependencies.
      • If vulnerability data exists for a given dependency, Harbor and the SDL match.
      • epssScores match
      • CPE IDs match
      • CVE IDs match
  • Data volume
    • Total SBOM count matches
    • Total Package count matches by PackageKind
    • Total Vulnerability count matches
  • Data currency
    • Check timestamps to ensure the SDL is being updated daily.

NVD Enrichment Provider

Target Audience

Consumers of Vulnerabilities:

  • SaaS Governance
  • ACT
  • CSCRM

What’s the Value

  • Ability to directly pull NVD vulnerability data for dependencies that have vulnerabilities registered in the NVD.
  • Multiple existing enrichment providers have vulnerability APIs, but wrap the NVD data in their own proprietary schema. Having a native NVD enrichment provider will allow us to:
    • Standardize on the NVD schema.
    • Treat proprietary enrichment sources as add-on data.
    • Ship a core feature set that does not require organizations wishing to use Harbor to have a commercial vulnerability data subscription (true Open Source/Open Data).

Details

Most of the enrichment providers use the NVD for vulnerabilities, so Harbor will have its own integration directly with the NVD and use it in addition to, or in place of, existing vulnerability enrichments where it makes sense.

Definition of Done

  • The ability to correlate package URLs to CPE IDs, when possible.
  • Daily scheduled task that refreshes/syncs a copy of the NVD vulnerability data set.
  • Daily scheduled enrichment task that uses NVD as a data source for vulnerability data.
  • At the data store level, the Vulnerability collection should be standardized on the NVD schema.
    • This will require updates to the daily analytics export.
    • Commercial vulnerability data should be segmented by vendor and treated as ancillary data.
    • We will need to analyze and design how and/or if we want to include commercial vulnerability data in the daily exports.

Tasks

  • Develop construction provider to download NVD Vulnerabilities and create usable data set in DocumentDB

    • Create construction provider framework in CLI
    • Create NVD service in Core
      • Create functions to get NVD CVE Metadata and check the NVD CVE collection to see if the data is up to date
      • Create function to download raw NVD CVE data in archive (gz or zip) to local file system
      • Create functions to unzip the archives, parse the data and populate the CVE collection
    • Create NVD data construction Task provider to use the functionality in the NVD service to populate the CVE collection
  • Develop enrichment provider to evaluate dependencies against the NVD dataset

    • Create enrichment provider framework in CLI
    • Add functionality to support dependency evaluation into NVD Core Service
      • Create functions in NVD Service to look up CVEs by CPE and populate Vulnerability structs.
        • Add code to Analytic Service to find dependent Packages that have CPEs
        • Create functions that use the found CPEs to extract CVE data and massage it into Vulnerability structs
        • Create functions to add the Vulnerability data to the Vulnerability collection in DocDB
      • Create functions to look up CVEs by other parameters if no CPE exists. This functionality will be best effort
        • Add code to Analytic Service to find "unknown" CPEs.
        • Create functions that extract parameters from the dependent package to attempt to find matches in the NVD data set
        • Use existing functionality to add Vulnerabilities if we can identify applicable Vulnerabilities that match the package parameters.
      • Create NVD enrichment Task provider to use functionality in the NVD Service to evaluate dependent Packages for vulnerabilities using the NVD data set
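
As a sketch of the download-and-parse step, using reqwest (with the blocking feature), flate2, and serde_json; the feed URL format and function name are assumptions, so check the NVD documentation for actual endpoints.

use std::io::Read;
use flate2::read::GzDecoder;

// Hypothetical: fetch a gzipped NVD CVE feed, decompress it, and parse the JSON.
fn fetch_feed(url: &str) -> Result<serde_json::Value, Box<dyn std::error::Error>> {
    let bytes = reqwest::blocking::get(url)?.bytes()?;
    let mut decoder = GzDecoder::new(&bytes[..]);
    let mut json = String::new();
    decoder.read_to_string(&mut json)?;
    Ok(serde_json::from_str(&json)?)
}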

GitHub Provider Post processor for projects with separate modules

The observation was made that Snyk only generates a single SBOM for all of the pom.xml files, whereas the GH Provider generates one for each file. What needs to happen is that SBOMs are generated for every build target in a repo and then compared. Build targets whose SBOMs match except for the build target location should be treated as a single "project", and only one SBOM stored.

Create a DB migration for the TaskReport

Target Audience

  • Harbor engineers
  • Harbor open source contributors and operators.

What’s the Value

  • Maintains portability of Harbor to any infrastructure.

Details

  • Harbor has a DB migrations capability in the platform crate, but that has not yet been leveraged.
  • One member of the team is currently working on observability, and has defined a mongo/docdb view that gets them the data they want from the Task collection.
  • They are exposing the ability to query this view from the CLI so that they can hook the results into their automation.
  • In order to not gate forward progress, they plan to manually create this view; however, this means that a feature of Harbor now depends on a database object that is not guaranteed by running the source alone.
  • This provides an excellent use case to work on the database migrations capability so that we can ensure Harbor is portable to any developer or production environment.

Definition of Done

  • A database migration exists to ensure that the view in question is present
  • The CLI exposes a way to manually run database migrations
  • The main CLI & API binaries will shut down/log/alert if the database version does not match the expected version.

Create a purl to cpe enrichment provider

Target Audience

SBOM Harbor users

What’s the Value

Using the purl(s) included in each SBOM to derive the associated cpe(s) will allow SBOM Harbor to extract vulnerability information directly from NVD and allow the Harbor Team to develop an NVD vulnerability enrichment source.

Details

A CPE ID will only exist for dependencies that have a corresponding CPE ID in the NVD.

Definition of Done

This ticket is finished when:

  • A task exists to update the existing SBOM metadata in DocumentDB with the CPE ID(s) associated to the existing purl(s)
  • That task is running on schedule.
  • CPEs are queryable from the SDL, when they exist.

Correlate and dedupe SBOM overlap between Snyk and GitHub Providers

Target Audience

  • Harbor system
  • SDL consumers
  • Harbor engineers

What’s the Value

  • Ensures that we don't over-report vulnerabilities
  • Ensures that we don't store duplicate SBOMs with different names/ids that actually represent the same entity.

Details

As consumers of the data Harbor produces, we would like all distinct SBOM targets and related data to be:

  • Accurately correlated across data sources.
  • Uniquely identifiable and deduplicated.

Use Case

  • The ab2d repository exists in the CMSGov GitHub Organization.
  • It also exists in Snyk.
  • Both providers will detect the repo and attempt to ingest an SBOM for it using their specific methodology.
  • To date, we believe there is not an obvious shared unique attribute that will allow us to explicitly correlate the two.
  • We need a way to resolve that these two incoming data streams relate to the same SBOM target, and not create duplicate entries in the Package, Sbom, and Vulnerability collections.
  • Similarly, we should create S3 output only for a single resolved SBOM target.

Definition of Done

  • If an ingestion target is identified by both the Snyk and GitHub ingestion tasks (and any future tasks), unique Package, Sbom, and Vulnerability collection entries should be created.
  • The Primary Package and related Sbom entries should have a Xref to both the Snyk Project and the Harbor BuildTarget.
  • From the SDL perspective, all combinations of Primary Package.purl to Dependency.purl are unique.

Add support for both API token and JWT in upload endpoint

Summary

Combine authorizers and build in logic to differentiate between token types, granting authorization to either the User or the Team depending on the token type.

We will differentiate the token types by:

  1. Detecting the word "Bearer": Token is a JWT
  2. Missing the word "Bearer": Token is an API upload token
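
A sketch of that dispatch (the names are illustrative; Harbor's actual authorizer types are not shown here):

// Hypothetical credential classification: a header value starting with
// "Bearer " is treated as a JWT; anything else as an API upload token.
enum Credential<'a> {
    Jwt(&'a str),
    ApiToken(&'a str),
}

fn classify(authorization: &str) -> Credential<'_> {
    match authorization.strip_prefix("Bearer ") {
        Some(jwt) => Credential::Jwt(jwt),
        None => Credential::ApiToken(authorization),
    }
}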

FISMA ID is showing NULL in Analytic Report

Describe the bug
The FismaID enrichment task tags Primary Packages with a FISMA ID if one can be determined. This field should be exported to the Analytic report and available for query. It shows up as null for all records.

To Reproduce

Query the data for FISMA ID NOT NULL.

Expected behavior

Should get results.

EPSS Score is showing `null` in Analytic Report

Describe the bug

The EPSS enrichment task tags Vulnerabilities with an EPSS Score if one can be determined. This field should be exported to the Analytic report and available for query. It shows up as a JSON null for all records.

To Reproduce
Query the data for epssScore != 'null'

Expected behavior
Should receive results.

Pre-commit Hooks are missing from GitHub Actions checks

Description
When I submitted #169, I ran pre-commit run -a against the repo, and it blew up with a bunch of markdown changes. I think it would make sense to require pre-commit checks to pass before we can merge PRs into main.

Steps to reproduce the issue

  1. Clone repo
  2. Run pre-commit install
  3. Run pre-commit run -a
➜  sbom-harbor git:(mk-license-secruity-repo) pre-commit run -a 

fmt......................................................................Passed
cargo clippy.............................................................Passed
cargo check..............................................................Passed
check json...............................................................Passed
check yaml...............................................................Passed
check for case conflicts.................................................Passed
check for merge conflicts................................................Passed
check for broken symlinks............................(no files to check)Skipped
detect destroyed symlinks................................................Passed
fix utf-8 byte order marker..............................................Passed
fix end of files.........................................................Failed
- hook id: end-of-file-fixer
- exit code: 1
- files were modified by this hook

Fixing docs/src/overview/what-is-an-sbom.md
Fixing docs/src/contributing/design-guidelines.md
Fixing sdk/generators/templates/controllers/{{ project-name | snake_case }}.rs
Fixing docs/src/overview/continuous-monitoring.md
Fixing docs/src/overview/how-does-harbor-fit-in.md
Fixing docs/src/contributing/overview.md
Fixing docs/src/SDK/platform/overview.md
Fixing docs/src/SDK/overview.md
Fixing docs/src/SDK/platform/mongodb/service.md
Fixing adr/adr-template.md
Fixing sdk/README.md
Fixing sdk/core/src/services/sboms/scorecard/test_files/dropwizard.json
Fixing docs/src/SDK/platform/mongodb/migrations.md

trim trailing whitespace.................................................Failed
- hook id: trailing-whitespace
- exit code: 1
- files were modified by this hook

Fixing adr/0000-server-side-language.md
Fixing sdk/generators/harbor/.yarn/releases/yarn-3.6.1.cjs
Fixing docs/src/SDK/core/overview.md
Fixing docs/src/use-cases/overview.md
Fixing docs/src/overview/what-is-an-sbom.md
Fixing docs/src/overview/harbor-user-interface.md
Fixing docs/src/use-cases/vendor-management.md
Fixing docs/src/contributing/design-guidelines.md
Fixing docs/src/data-model/sboms.md
Fixing docs/src/introduction.md
Fixing docs/src/deployment/overview.md
Fixing docs/src/overview/ingestion.md
Fixing docs/src/data-model/overview.md
Fixing docs/src/SDK/platform/authz.md
Fixing docs/src/diagrams.md
Fixing docs/src/overview/continuous-monitoring.md
Fixing docs/src/SDK/core/providers.md
Fixing docs/src/SDK/platform/mongodb/store.md
Fixing docs/src/overview/how-does-harbor-fit-in.md
Fixing docs/src/contributing/overview.md
Fixing sdk/generators/templates/README.md
Fixing docs/src/SDK/core/entities.md
Fixing docs/src/SDK/platform/overview.md
Fixing docs/src/SDK/core/services.md
Fixing docs/src/SDK/core/tasks.md
Fixing docs/src/SDK/overview.md
Fixing docs/src/SDK/platform/mongodb/overview.md
Fixing docs/src/SDK/platform/mongodb/service.md
Fixing docs/src/overview/how-are-sboms-useful.md
Fixing docs/src/overview/how-does-harbor-work.md
Fixing sdk/README.md
Fixing sdk/platform/src/persistence/s3/mod.rs
Fixing docs/src/SDK/platform/mongodb/migrations.md

mixed line ending........................................................Passed
detect aws credentials...................................................Passed
detect private key.......................................................Passed
Detect hardcoded secrets.................................................Passed

Task count should be updated as soon as task items are loaded.

Describe the bug

If TaskProvider::complete fails, the Task record is never updated with the count of items.

  • TaskProviders share a lifecycle defined by the trait, with one main section of variable logic.
  • Specifically, they vary on how they query for the batch items to process and how they process each item.
  • The TaskProvider trait provides default logic for inserting a Task record when a Task starts, and for updating it with summary statistics and error logging when a Task ends.
  • Unfortunately, if the update at the end fails, the count field never gets updated.
  • This is useful information to have when debugging, and the Task should get updated with a count as soon as that information is available.

To Reproduce

Run any task locally and stop it before it completes.

Expected behavior

The Task document's count field should be updated in each task, as soon as the primary batch query successfully retrieves the set of items to process.
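
A sketch of the proposed ordering inside a hypothetical task lifecycle (the trait and method names are illustrative, not the actual TaskProvider API):

// Illustrative lifecycle: persist the item count immediately after the batch
// query succeeds, so a later failure in complete() cannot lose it.
trait TaskStore {
    fn update_count(&self, task_id: &str, count: usize);
}

fn run_task<S: TaskStore>(store: &S, task_id: &str, items: Vec<String>) {
    // Update the Task document's count as soon as the batch is loaded...
    store.update_count(task_id, items.len());
    for _item in items {
        // ...then process each item; a later failure no longer erases the count.
    }
}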
