opentdf / platform

OpenTDF Platform monorepo enabling the development and integration of _forever control_ of data into new and existing applications. The idea of forever control stems from the increasingly common security model known as zero trust.

License: BSD 3-Clause Clear License


platform's Introduction

OpenTDF


Note

It is advised to familiarize yourself with the terms and concepts used in the OpenTDF platform.

Documentation

Prerequisites for Project Consumers & Contributors

  • Go (see go.mod for specific version)
  • Container runtime
  • Compose - used to manage multi-container applications
  • Buf is used for managing protobuf files. Required for developing services.

On macOS, these can be installed with brew

brew install buf go

Optional tools

  • Air is used for hot-reload development
    • install with go install github.com/cosmtrek/air@latest
  • golangci-lint is used for ensuring good coding practices
    • install with brew install golangci-lint
  • grpcurl is used for testing gRPC services
    • install with brew install grpcurl
  • openssl is used for generating certificates
    • install with brew install openssl

Audience

There are two primary audiences for this project: consumers and contributors.

  1. Consuming: Consumers of the OpenTDF platform should begin their journey here.

  2. Contributing: To contribute to the OpenTDF platform, you'll need a bit more setup and should start here.

Additional info for Project Consumers & Contributors

For Consumers

The OpenTDF service is the main entry point for the OpenTDF platform. See service documentation for more information.

Quick Start

Warning

This quickstart guide is intended for development and testing purposes only. The OpenTDF platform team does not provide recommendations for production deployments.

To get started with the OpenTDF platform, make sure you are running the same Go version found in the go.mod file.

https://github.com/opentdf/platform/blob/main/service/go.mod#L3

Start the required infrastructure with compose-spec.

# Note this might be `podman compose` on some systems
docker compose -f docker-compose.yml up

Copy the configuration file from the example and update it with your own values.

cp opentdf-example.yaml opentdf.yaml

Provision default configurations.

# Provision keycloak with the default configuration.
go run ./service provision keycloak
# Generate the temporary keys for KAS
./.github/scripts/init-temp-keys.sh

Run the OpenTDF platform service.

go run ./service start

For Contributors

This section is focused on the development of the OpenTDF platform.

Libraries

Libraries ./lib are shared libraries that are used across the OpenTDF platform. These libraries are used to provide common functionality between the various sub-modules of the platform monorepo. Specifically, these libraries are shared between the services and the SDKs.

Services

Services ./services are the core building blocks of the OpenTDF platform. Generally, each service is one or more gRPC services that are scoped to a namespace. The platform takes a modular binary architecture approach, enabling multiple deployment models.

SDKs

SDKs ./sdk are the contracts which the platform uses to ensure that developers and services can interact with the platform. The SDKs contain a native Go SDK and generated Go service SDKs. A full list of SDKs can be found at github.com/opentdf.

How To Add a New Go Module

Within this repo, to define a new, distinct Go module (for example, to provide shared functionality between several existing modules, or to define new and unique functionality), follow these steps. For this example, we will call our new module lib/foo.

mkdir -p lib/foo
cd lib/foo
go mod init github.com/opentdf/platform/lib/foo
go work use .

In this folder, create your go code as usual.
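As a minimal sketch of what that could look like (the package contents below are purely illustrative, not part of the platform), a first file might be:

// lib/foo/foo.go
// Package foo is a placeholder for the new shared library's real functionality.
package foo

import "fmt"

// Greet is an example exported function; replace it with the module's actual API.
func Greet(name string) string {
    return fmt.Sprintf("hello, %s", name)
}

With go work use . in place, the module builds alongside the rest of the workspace via go build ./... from lib/foo.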

Add a README.md and a LICENSE File

A README is recommended to help orient users of your package. Remember, this will be published to https://pkg.go.dev/ as part of the module documentation.

Make sure to add a LICENSE file to your module to support automated license checks. Feel free to copy the existing (BSD-clear) LICENSE file for most new modules.

Updating the Makefile

  1. Add your module to the MODS variable:

    MODS=protocol/go sdk . examples lib/foo
  2. If required: if your project does not generate a built artifact, add a phony binary target to the .PHONY declaration.

    .PHONY: ...existing phony targets... lib/foo/foo
  3. Add your build target to the build phony target.

    build: ...existing targets... lib/foo/foo
  4. Add your build target and rule

    lib/foo/foo: $(shell find lib/foo)
     (cd lib/foo && go build ./...)

Updating the Docker Images

Add any required COPY directives to ./Dockerfile:

COPY lib/foo/ lib/foo/

Updating the Workflow Files

  1. Add your new go.mod directory to the go job's strategy matrix directory list in .github/workflows/checks.yaml.
  2. Add the module to the license job in the checks workflow as well, especially if you declare any dependencies.
  3. Do the same for any other workflows that should be running on your folder, such as vuln-check and lint.

Terms and Concepts

Common terms used in the OpenTDF platform.

Service is the core service of the OpenTDF platform as well as the sub-services that make up the platform. The main service follows a modular binary architecture, while the sub-services are gRPC services with HTTP gateways.

Policy is the set of rules that govern access to the platform.

OIDC is the OpenID Connect protocol used solely for authentication within the OpenTDF platform.

  • IdP - Identity Provider. This is the service that authenticates the user.
  • Keycloak is the turn-key OIDC provider used within the platform for proof-of-value, but should be replaced with a production-grade OIDC provider or deployment.

Attribute Based Access Control (ABAC) is the policy-based access control model used within the OpenTDF platform.

  • PEP - A Policy Enforcement Point. This is a service that enforces access control policies.
  • PDP - A Policy Decision Point. This is a service that makes access control decisions.

Entities are the main actors within the OpenTDF platform. These include people and systems.

  • Person Entity (PE) - A person entity is a person that is interacting with the platform.
  • Non Person Entity (NPE) - A non-person entity is a service or system that is interacting with the platform.

SDKs are the contracts which the platform uses to ensure that developers and services can interact with the platform.

  • SDK - The native Go OpenTDF SDK (other languages are outside the platform repo).
  • Service SDK - The SDK generated from the service proto definitions.
    • The proto definitions are maintained by each service.


platform's Issues

Refactor: write tests for the resource-mapping db interface

For resource mapping:

DB interface doesn't have tests implemented yet

Acceptance Criteria

  • write unit tests that test the SQL creation for each of the CRUD operations
  • write integration tests that test the DB commands for each of the CRUD operations
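As an illustration of the shape these unit tests could take (the query-builder function, table name, and SQL below are assumptions, not the repo's actual DB interface), a table-driven test might look like:

package db_test

import "testing"

// createResourceMappingSQL stands in for the real query builder under test;
// the actual method name, arguments, and generated SQL will differ.
func createResourceMappingSQL(attrValueID string) (string, []any) {
    return "INSERT INTO resource_mappings (attribute_value_id) VALUES ($1)",
        []any{attrValueID}
}

func TestCreateResourceMappingSQL(t *testing.T) {
    tests := []struct {
        name    string
        id      string
        wantSQL string
    }{
        {"basic insert", "attr-value-id", "INSERT INTO resource_mappings (attribute_value_id) VALUES ($1)"},
    }
    for _, tc := range tests {
        t.Run(tc.name, func(t *testing.T) {
            gotSQL, args := createResourceMappingSQL(tc.id)
            if gotSQL != tc.wantSQL {
                t.Errorf("sql: got %q, want %q", gotSQL, tc.wantSQL)
            }
            if len(args) != 1 || args[0] != tc.id {
                t.Errorf("args: got %v, want [%s]", args, tc.id)
            }
        })
    }
}

Integration tests would then run the same CRUD operations against a real Postgres instance (e.g. the one started via docker-compose) rather than asserting on generated SQL.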

bug: possible to create `attribute_value` with nonexistent `attribute_value.id` in `members[]`

The DB layer for AttributeValues is missing validation logic to ensure that an attribute value cannot be created with a members list containing nonexistent or invalid IDs. The current behavior is that an attribute value can be successfully created with a members list containing any string value.

If this is expected behavior or there is a reason not to validate attribute value id's when adding them to an attribute value's members list, please close this issue out.
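For reference, a sketch of the kind of check that is missing (the table and column names here are guesses based on the rest of this page, and the platform's real DB layer likely uses a different query builder):

package db

import (
    "context"
    "database/sql"
    "fmt"
)

// validateMemberIDs returns an error if any id in members does not reference an
// existing attribute value. Illustrative only; not the platform's actual code.
func validateMemberIDs(ctx context.Context, conn *sql.DB, members []string) error {
    for _, id := range members {
        var exists bool
        err := conn.QueryRowContext(ctx,
            "SELECT EXISTS (SELECT 1 FROM attribute_values WHERE id = $1)", id,
        ).Scan(&exists)
        if err != nil {
            return err
        }
        if !exists {
            return fmt.Errorf("member %q is not an existing attribute value id", id)
        }
    }
    return nil
}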

ability to define allowed namespaces

At the platform level we need a way to define what namespaces are allowed. The use case here is if one instance of the platform needs to import attributes from another platform instance.

This potentially sets us up for the concept of an attribute authority where we want to verify defined attributes.

RPC to dump policy config as json

As a PEP developer I need the ability to dump all policy config for a platform so I can function offline.

Acceptance Criteria

  • draft proto to capture the services, messages and rpc to enable dumping policy config
  • implement GRPC service and db handlers
  • implement integration tests
  • respect authorization policies

Define release schedule and automated deployment

To align with our 2024 Q1 OKR we need to define a release schedule and automate our deployments.

Warning

Acceptance criteria is a WIP

Acceptance Criteria

  • determine release process
  • set test quality gates
  • implement Github actions to automate deployment
    • bundle and publish code
    • build and publish container
    • build and publish docs

Refactor: Write tests for attribute values db interface

For attribute values:

DB interface doesn't have tests implemented yet

Acceptance Criteria

  • write unit tests that test the SQL creation for each of the CRUD operations
  • write integration tests that test the DB commands for each of the CRUD operations

DPOP and OAuth2 GRPC interceptor

Currently the platform doesn't have a way to enforce authn. Now that we are starting to talk about RBAC within the platform for controlling policy, we need to build out an interceptor/middleware that authenticates an access token and DPoP proof header.
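A rough sketch of the shape such an interceptor could take (token and DPoP verification are stubbed out here; a real implementation would validate the JWT signature and claims against the IdP and check the DPoP proof per RFC 9449):

package auth

import (
    "context"
    "strings"

    "google.golang.org/grpc"
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/metadata"
    "google.golang.org/grpc/status"
)

// verifyToken and verifyDPoP are placeholders for real JWT and DPoP-proof validation.
func verifyToken(raw string) error         { return nil }
func verifyDPoP(proof, token string) error { return nil }

// UnaryAuthInterceptor rejects requests that lack a valid bearer token and DPoP proof header.
func UnaryAuthInterceptor(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (any, error) {
    md, ok := metadata.FromIncomingContext(ctx)
    if !ok {
        return nil, status.Error(codes.Unauthenticated, "missing metadata")
    }
    authz := md.Get("authorization")
    if len(authz) == 0 || !strings.HasPrefix(authz[0], "Bearer ") {
        return nil, status.Error(codes.Unauthenticated, "missing bearer token")
    }
    token := strings.TrimPrefix(authz[0], "Bearer ")
    if err := verifyToken(token); err != nil {
        return nil, status.Error(codes.Unauthenticated, "invalid access token")
    }
    if proof := md.Get("dpop"); len(proof) == 0 || verifyDPoP(proof[0], token) != nil {
        return nil, status.Error(codes.Unauthenticated, "invalid dpop proof")
    }
    return handler(ctx, req)
}

It would then be attached with grpc.NewServer(grpc.UnaryInterceptor(UnaryAuthInterceptor)), with a matching stream interceptor for streaming RPCs.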

Providing the Resource Definition back on Create/Update

Problem

As an API consumer, there are times when not receiving a Resource Definition back in the response from a CREATE/UPDATE rpc is an impediment.

Background

For example, in a flow where I want to CREATE an Attribute, I currently receive nothing back in response. While a lack of error is helpful in indicating successful creation, I have no concept of the Id for the resources I just created (which is generated at the db-level) unless I make a subsequent ListAttributes rpc and look at the latest sequential attribute in the response.

In an ideal world, in whatever GUI/CLI/other-CRUD-manager I'm working with, I'd be able to iteratively CRUD a single attribute without needing to List the total of all resources of the given type (in this case Attributes) in between every other verb and rely on last in sequence as my work-in-progress.

Proposal

We update our proto definitions and db queries to always provide the new/newly-updated resource back in response to Create and Update rpc's.

Add state to the attributes, namespaces, and attribute values

While we haven't determined if attributes, namespaces, and attribute values should be deletable, we have determined that they need a status with the ability to "soft-delete" as a possible requirement.

Acceptance Criteria

  • define attribute status enum (i.e. UNSPECIFIED, ACTIVE, INACTIVE)
  • add status to the migrations for namespaces, attribute_definitions, and attribute_values
    • default value is an active status (see first bullet for actual value)
  • update protos to support status
  • add list filtering to return only the active status
  • investigate if additional index needs to be added for status
  • create issues to add selector message for list to enable fetching resources in a non-active status

Is there a way other than looking at `.proto` definitions to see what values are generated at the server/db level (and therefore not clientside concerns)?

There are some values that seem like they could always be sole concerns of the service/DB layer and not exposed to the clientside, like resourceVersion (auto-incrementing?) and fqn that are now part of at least attribute management calls.

If there are indeed values that are created or accessed purely within the services that shouldn't be passed in Creates/Updates by Clients, is there an easier way to tell than looking at what types are strictly required within the .proto definitions? It seems like there will also be edge cases for optional field zero values that we'll need to handle carefully.

Proposal: unify service create/update response behavior across services/rpc's

In a RESTful paradigm, we have a variety of "good" response status codes that are frequently utilized to shape Create, Update, and Delete behavior:

  1. 200 - okay
  2. 201 - created
  3. 204 - no content

Most often that means a creation is POSTed with a 201 response, an update is PUT/PATCHed with a 200 response, and a DELETE comes back with no content in a 204.

In the gRPC paradigm, the lack of varied "success" response codes can introduce confusion if services/rpc's do not respond consistently.

I think we should standardize behavior with responses that look like the following:

  1. CREATE - created resource
  2. UPDATE - resource after update
  3. DELETE - deleted resource

message FooValue {
    string id = 1;
    string bar = 2;
}

message CreateFooRequest {
    string bar = 1;
}
message CreateFooResponse {
    // created value
    FooValue foo = 1;
}

message UpdateFooRequest {
    string id = 1;
    string bar = 2;
}
message UpdateFooResponse {
    // updated value
    FooValue foo = 1;
}

message DeleteFooRequest {
    string id = 1;
}
message DeleteFooResponse {
    // deleted value
    FooValue foo = 1;
}

Some other options would be:

  1. a success response means the client should expect the rpc resulted in exactly the C/U/D result expected (not a big fan of this, personally) and all 3 responses contain the resource's id and nothing else
  2. any CREATE should result in a success response, and UPDATE should result in the resource before it was updated
  3. an UPDATE should return the old & new state of the resource
  4. a myriad of other options

The popular best practices guides like this and this do not argue for or against what to return from an rpc, just how to return response data.

Actions

  • decompose work into issues across Policy service

duplicate requirement of `attribute_id` on creating new attribute value

When creating a new attribute value it is required to define attribute_id twice.

{
    "attribute_id": "a7870156-fdb5-4c14-b33b-7c4006411f96",
    "value": {
        "attribute_id": "a7870156-fdb5-4c14-b33b-7c4006411f96",
        "value": "value1"
    }
}

We should be able to update the proto and reference the attribute_id within value in the path. Something similar to this

Resource descriptors feel a little too generalized

It's hard to determine when a property should be part of the resource definition vs a resource descriptor.

Currently, resource descriptors contain the following properties:

  • type: type of resource
  • id: unique resource identifier
  • version: resource version
  • name: resource name
  • namespace: resource namespace for partitioning resources
  • fqn: fully qualified name
  • labels: labels
  • description: long description of the resource
  • dependencies: resource dependencies

Attributes have the following props:

  • rule
  • name
  • values
  • group_by
  • descriptor

It is understandable that resources share common properties, but some of the properties feel like they need to be top-level definition properties. I would say anything required should be top-level.

Attributes require:

  • id // auto-generated
  • name
  • description
  • namespace
  • values
  • group_by

Refactor: write tests for the attributes db interface

For attributes:

DB interface doesn't have tests implemented yet

Acceptance Criteria

  • write unit tests that test the SQL creation for each of the CRUD operations
  • write integration tests that test the DB commands for each of the CRUD operations

Track policy config that is used in TDF operations

Disseminating policy configuration in a distributed system has some challenges with lag time of policy configuration change. To handle reconciliation it would be advisable to track which policy configuration is used for a TDF operation like encrypt and decrypt.

Some areas to track the policy configuration:

  • In the TDF manifest
  • In the Audit logs from PEPs and PDPs. Note many versions can exist in one audit event.

Nit: Use `hierarchy` instead of `hierarchical` for attribute rule type?

Attribute Rule Types "enum"

Client-side, we're using human-readable strings and accepting them as input. This is definitely a nit, but maybe we could use hierarchy because it's easier to spell and pronounce correctly compared to the adjective form hierarchical? While hierarchical may be the word that's truly grammatically correct, it seems like we could swap that for hierarchy instead with low effort. 🤷

Just a thought that can reasonably be ignored but felt worthwhile to share, regardless.

Determine naming and conventions around special RPCs

During the conversations around #63 and #62 we determined that we might need special RPCs to support destructive commands and to avoid adding destructive side effects into the basic CRUD RPCs.

Acceptance Criteria

  • determine naming of these destructive RPCs which will be used as release valves for decisions that were made early on
  • determine code convention as we build these RPCs into our services

Decision

There will be a single UnsafeService under Policy which will have all the unsafe operations. This will provide a more consolidated experience for devs and admins as well as better AuthZ support via middleware.

Policy API: Enable admins to perform unsafe mutations on data in their platform

Implement the unsafe mutation RPCs so that admins can have an audit trail when they need to make unsafe changes to their platform

Depends on

Acceptance Criteria

  • Create an UnsafeService which captures the unsafe update RPC to empower platform admins
    • e.g. UnsafeService.UnsafeUpdateAttributeName()
    • e.g. UnsafeService.UnsafeDeleteAttribute()
  • Every unsafe property should require passing a value that would provoke the user to question what they are doing or why the platform requires it

Create an attribute_fqn table to index FQNs to Attributes, Attribute Values, and Attribute Namespaces

The current PKs for our attribute tables (attribute, attribute_values, and attribute_namespaces) are a UUID type. This is beneficial for databases since storage and performance can be negatively impacted by using a text type as the PK.

Historically, attribute values have been identified by an FQN. It's unknown as of this time if this is needed in the future, but it is known that TDF manifests use the FQN to identify the attributes associated. This means that KAS will need to be able to look up attributes based on the FQN.

In order to solve the problem of adding a property to the various tables which may not be entirely necessary, we are proposing that the FQNs be stored in an adjacent table:

CREATE TABLE attribute_fqn
(
    fqn TEXT PRIMARY KEY,
    namespace_id UUID NOT NULL REFERENCES namespaces(id),
    attribute_id UUID REFERENCES attribute_definitions(id),
    attribute_value_id UUID REFERENCES attribute_value(id)
)
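To illustrate how KAS (or any other service) might resolve an FQN through this table, here is a sketch using database/sql (column names follow the proposed schema above; the real data layer may differ):

package db

import (
    "context"
    "database/sql"
)

// AttributeIDsByFQN resolves an FQN through the proposed attribute_fqn table to the
// namespace/definition/value IDs it indexes. Illustrative only.
func AttributeIDsByFQN(ctx context.Context, conn *sql.DB, fqn string) (nsID, attrID, valueID sql.NullString, err error) {
    err = conn.QueryRowContext(ctx,
        `SELECT namespace_id, attribute_id, attribute_value_id FROM attribute_fqn WHERE fqn = $1`,
        fqn,
    ).Scan(&nsID, &attrID, &valueID)
    return nsID, attrID, valueID, err
}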

namespace create response empty `name` field

When creating a namespace the response contains an empty name field. This should contain the new namespace name in the response.

{
    "namespace": {
        "id": "c85d126a-c2f2-4bb6-bc6d-a513015363cb",
        "name": ""
    }
}

Research how we could use a db driver approach and abstraction

Problem

Some customers might want DB support on existing DB servers and not spin up another resource

Solution

Implement a driver approach to abstract the postgres specific ops and enable optionality for supporting other database servers

Acceptance Criteria

  • timebox to 16 hrs
  • research what kinds of postgres specific ops we are using
  • research which versions of alternate db servers support functional parity
    • mysql
    • sqlserver
  • pseudocode the driver approach
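A rough sketch of what the driver abstraction could look like (the interface and method names are invented for illustration, not a proposed final API):

package db

import "context"

// Driver abstracts the database-specific operations the services rely on so that the
// Postgres implementation could be swapped for MySQL or SQL Server.
type Driver interface {
    // Placeholder returns the dialect's bind-variable syntax ($1 for Postgres, ? for MySQL).
    Placeholder(n int) string
    // Upsert performs insert-or-update, which differs per dialect
    // (ON CONFLICT vs ON DUPLICATE KEY vs MERGE).
    Upsert(ctx context.Context, table string, cols []string, vals []any) error
    // Migrate applies schema migrations written for this dialect.
    Migrate(ctx context.Context) error
}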

undefined `CreateValue` when creating attribute value

There are some name mismatches between the proto definitions and the implementation that don't allow us to create a new attribute value. In this case, we have CreateAttributeValue in the implementation but CreateValue in the proto definition.

usage of `ResourceGroup` and should it be removed

When looking at the proto definitions for resource encodings I am struggling to understand the use case of when something like ResourceGroup would be leveraged and how an AttributeGroup wouldn't suffice for something like this.

/*
  Example:
     value: NATO
     members: [USA, GBR, etc.]
*/
message ResourceGroup {
  common.ResourceDescriptor descriptor = 1;
  //group value
  string value = 2;
  //List of member values
  repeated string members = 3;
}

In my opinion, we should be driving encodings toward attributes; otherwise, it isn't clear to me how a resource like this should be leveraged, and it should maybe be removed.

Define Platform Wellknown Endpoint

When initializing an SDK, we will take the control plane endpoint as input but will need to derive things like a customer's IdP endpoint info or key access servers.

add health check

We need to implement the grpc health check functionality as described here

The healthz functionality should also be enabled on servermux.
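A minimal sketch of wiring this up with the standard grpc-go health service (how it plugs into the platform's actual server and mux setup is an assumption):

package main

import (
    "net/http"

    "google.golang.org/grpc"
    "google.golang.org/grpc/health"
    healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

// registerHealth wires the standard gRPC health service onto the server and exposes a
// simple /healthz handler on the HTTP mux.
func registerHealth(grpcServer *grpc.Server, mux *http.ServeMux) *health.Server {
    hs := health.NewServer()
    healthpb.RegisterHealthServer(grpcServer, hs)
    hs.SetServingStatus("", healthpb.HealthCheckResponse_SERVING)

    mux.HandleFunc("/healthz", func(w http.ResponseWriter, _ *http.Request) {
        w.WriteHeader(http.StatusOK)
        _, _ = w.Write([]byte("ok"))
    })
    return hs
}

Once registered, the gRPC side can be exercised with grpcurl -plaintext localhost:<port> grpc.health.v1.Health/Check.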

Generate openapi spec doc using the google.api.http plugin

We have tied the google.api.http plugin into the protobuf framework, but we haven't tried to generate a spec yet.

Acceptance Criteria

  • concern around the order of ops when a rpc defines /resource/{id} and another defines /resource/subresource
  • ensure we know limitations
  • test visualization with swagger or similar tools

Out of scope

  • we are generating service SDKs so we don't need to validate that a client can be generated from an OpenAPI spec

Policy API: Add soft delete feature

ADR: Soft deletes should cascade from namespaces -> attribute definitions -> attribute values

Taken from comment below #108 (comment)

Background

In our Policy Config table schema, we have a Foreign Key (FK) relationship from namespaces to attribute definitions, and another FK relationship from attribute definitions to attribute values. We have decided that due to the scenario above in the description of this issue, we want to rely on soft-deletes to avoid accidental or malicious creations of attributes/values in the place of their deleted counterparts.

If we were relying on hard deletes, we would be given certain benefits by the relational FK constraint when deleting so that we could either:

  1. cascade a delete from an attribute definition to its values, OR
  2. prevent deleting an attribute unless its associated values had been deleted first

These benefits of our schema and chosen DB would prevent unintended side effects and require thoughtful behavior on the part of platform admins. However, now that we are restricting hard deletes to dangerous/special rpc's and specific "superadmin-esque" functionalities for known dangerous mutations by adding active/inactive state to these three tables, we need to decide the cascading nature of soft deletes with inactive state.

Chosen Option: Rely on PostgreSQL triggers on UPDATEs to state to cascade down

Considered Options:

  1. Rely on PostgreSQL triggers on UPDATEs to state to cascade down
  2. Rely on the server's db layer to make DB queries that cascade the soft deletion down
  3. Allow INACTIVE namespaces with active attribute definitions/values, and INACTIVE definitions with ACTIVE namespaces and values

Option 1: Rely on PostgreSQL triggers on UPDATEs to state to cascade down

Postgres triggers allow us to define the cascade behavior as the platform maintainers. Keeping the functionality within Postgres and not the server has additional benefits.

  • 🟩 Good, because cascading behavior of inactive state makes the most sense when the user intention is to delete (which is still going to be a relatively dangerous mutation)
  • 🟩 Good, because keeping the cascade in the DB is always going to be more optimal than multiple queries
  • 🟩 Good, because we are indexing on the state column in the three tables for speed of lookup/update
  • 🟩 Good, because it has already been proven out with an integration test for repeatability in this branch
  • 🟩 Good, because this does not block any superadmin/dangerous/special deletion capability and will be fully distinct from any cascade/constraint handling there
  • 🟨 Neutral, because triggers are a Postgres feature, but we haven't made any firm decisions yet about what other SQL databases/versions we'll support or if we'll require customers to use the latest PostgreSQL
  • 🟥 Bad, because it's a less well-known feature of Postgres
  • 🟥 Bad, because we will only be able to ALWAYS cascade the INACTIVE UPDATE down the tree and will not get the foreign key constraint of a one-off deletion if that's what the user really intended. We'll need to make it clear to them what their change will do.

Option 2: Rely on the server's db layer to make DB queries that cascade the soft deletion down

The same as option 1, but with the cascading logic put into server-driven queries and not Postgres triggers.

  • 🟩 Good, because it does not tie us to any Postgres-specific feature and can be reused across SQL db's
  • 🟩 Good, because of all the other good benefits of option 1
  • 🟥 Bad, because performance: anything being soft deleted will mean multiple round trips
  • 🟥 Bad, because more room for bugs: anything being soft deleted will mean multiple queries
  • 🟥 Bad, because we can more easily end up in a bad state where the server fails or a secondary/tertiary query fails but the first succeeded

Option 3. Allow INACTIVE namespaces with active attribute definitions/values, and INACTIVE definitions with ACTIVE namespaces and values

  • 🟩 Good, because it gives maximum control to the user
  • 🟥 Bad, because that maximized control is actually more confusing
  • 🟥 Bad, because it is most likely to cause a bad state where access is not allowed for an unknown reason
  • 🟥 Bad, because it is unintuitive from an Engineering/maintenance perspective

As a platform maintainer, I want to make sure that data which is deleted is soft-deleted so that I can prevent dangerous side effects and restore accidental deletes.

There are situations where the side effect of a delete could result in a data leak if two admins are maintaining the platform. Example:

  • Admin A adds attribute demo.com/attr/Classification/value/TopTopSecret
    • Creates subject mapping with Deep Secret Spy
  • User A creates TDF SecretSpy-SecretSantas-MailingList.csv.tdf with demo.com/attr/Classification/value/TopTopSecret
  • Admin A deletes attribute demo.com/attr/Classification/value/TopTopSecret
  • Admin B adds attribute demo.com/attr/Classification/value/TopTopSecret
    • and creates subject mapping with Top Secret Toy Inventor of Tops
  • User B with Top Secret Toy Inventor of Tops subject attribute accesses SecretSpy-SecretSantas-MailingList.csv.tdf

The soft-delete feature will prevent the recreation of the attribute with the same name on the same namespace.

Acceptance Criteria

Refactor: write tests for the namespace db interface

For namespace:

DB interface doesn't have tests implemented yet

Acceptance Criteria

  • write unit tests that test the SQL creation for each of the CRUD operations
  • write integration tests that test the DB commands for each of the CRUD operations

Refactor: write tests for the subject mapping db interface

For subject mapping

DB interface doesn't have tests implemented yet

Acceptance Criteria

  • write unit tests that test the SQL creation for each of the CRUD operations
  • write integration tests that test the DB commands for each of the CRUD operations

Proposal: settle on "encoding" OR "mapping" and do not use both interchangeably

When working on the CLI, I ran into confusion with acse (access control subject encodings) and the interchangeable usage of SubjectMapping and SubjectEncoding within naming conventions. The package is acse but the types/method names are SubjectMapping and the service name is SubjectEncodingServiceServer.

To reduce confusion and complexity, if both names are indeed referring to the same concept (where a subjectAttributeName : subjectOperator : subjectAttributeValue as in the pseudocode person.name of jakedoublev : IN : engineers.personsList) then I propose settling on a single name instead of using both.

Merriam-Webster relevant definitions:

I personally prefer mapping, but universally using either one would be helpful compared to using both.
