recap-build / recap

Work with your web service, database, and streaming schemas in a single format.

Home Page: https://recap.build

License: MIT License

Python 99.94% Dockerfile 0.06%
data-catalog metadata data-discovery recap data-engineering data-integration data-pipelines etl

recap's Introduction

recap

What is Recap?

Recap reads and writes schemas from web services, databases, and schema registries in a standard format.

⭐️ If you like this project, please give it a star! It helps the project get more visibility.

Supported Formats

Recap supports reading and/or writing schemas in the following formats:

  • Avro
  • BigQuery
  • Confluent Schema Registry
  • Hive Metastore
  • JSON Schema
  • MySQL
  • PostgreSQL
  • Protobuf
  • Snowflake
  • SQLite

Install

Install Recap and all of its optional dependencies:

pip install 'recap-core[all]'

You can also select specific dependencies:

pip install 'recap-core[avro,kafka]'

See pyproject.toml for a list of optional dependencies.

Usage

CLI

Recap comes with a command line interface that can list and read schemas from external systems.

List the children of a URL:

recap ls postgresql://user:pass@host:port/testdb
[
  "pg_toast",
  "pg_catalog",
  "public",
  "information_schema"
]

Keep drilling down:

recap ls postgresql://user:pass@host:port/testdb/public
[
  "test_types"
]

Read the schema for the test_types table as a Recap struct:

recap schema postgresql://user:pass@host:port/testdb/public/test_types
{
  "type": "struct",
  "fields": [
    {
      "type": "int64",
      "name": "test_bigint",
      "optional": true
    }
  ]
}

Gateway

Recap comes with a stateless HTTP/JSON gateway that can list and read schemas from data catalogs and databases.

Start the server at http://localhost:8000:

recap serve

List the schemas in a PostgreSQL database:

curl http://localhost:8000/gateway/ls/postgresql://user:pass@host:port/testdb
["pg_toast","pg_catalog","public","information_schema"]

And read a schema:

curl http://localhost:8000/gateway/schema/postgresql://user:pass@host:port/testdb/public/test_types
{"type":"struct","fields":[{"type":"int64","name":"test_bigint","optional":true}]}

The gateway fetches schemas from external systems in real time and returns them as Recap schemas.
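
The same endpoints can be called from any HTTP client. Here is a minimal sketch using the third-party requests library (an assumption; any client works), pointing at the server and example database above:

import requests

# Same call as the curl example: read the test_types schema through the gateway
response = requests.get(
    "http://localhost:8000/gateway/schema/"
    "postgresql://user:pass@host:port/testdb/public/test_types"
)
response.raise_for_status()
recap_schema = response.json()  # {"type": "struct", "fields": [...]}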

An OpenAPI schema is available at http://localhost:8000/docs.

Registry

You can store schemas in Recap's schema registry.

Start the server at http://localhost:8000:

recap serve

Put a schema in the registry:

curl -X POST \
    -H "Content-Type: application/x-recap+json" \
    -d '{"type":"struct","fields":[{"type":"int64","name":"test_bigint","optional":true}]}' \
    http://localhost:8000/registry/some_schema

Get the schema (and version) from the registry:

curl http://localhost:8000/registry/some_schema
[{"type":"struct","fields":[{"type":"int64","name":"test_bigint","optional":true}]},1]

Put a new version of the schema in the registry:

curl -X POST \
    -H "Content-Type: application/x-recap+json" \
    -d '{"type":"struct","fields":[{"type":"int32","name":"test_int","optional":true}]}' \
    http://localhost:8000/registry/some_schema

List schema versions:

curl http://localhost:8000/registry/some_schema/versions
[1,2]

Get a specific version of the schema:

curl http://localhost:8000/registry/some_schema/versions/1
[{"type":"struct","fields":[{"type":"int64","name":"test_bigint","optional":true}]},1]

The registry uses fsspec to store schemas in a variety of filesystems like S3, GCS, ABS, and the local filesystem. See the registry docs for more details.

An OpenAPI schema is available at http://localhost:8000/docs.

API

Recap has recap.converters and recap.clients packages.

  • Converters convert schemas to and from Recap schemas.
  • Clients read schemas from external systems (databases, schema registries, and so on) and use converters to return Recap schemas.

Read a schema from PostgreSQL:

from recap.clients import create_client

with create_client("postgresql://user:pass@host:port/testdb") as c:
    struct = c.schema("testdb", "public", "test_types")

Convert the schema to Avro, Protobuf, and JSON schemas:

from recap.converters.avro import AvroConverter
from recap.converters.protobuf import ProtobufConverter
from recap.converters.json_schema import JSONSchemaConverter

avro_schema = AvroConverter().from_recap(struct)
protobuf_schema = ProtobufConverter().from_recap(struct)
json_schema = JSONSchemaConverter().from_recap(struct)

Transpile schemas from one format to another:

from recap.converters.json_schema import JSONSchemaConverter
from recap.converters.avro import AvroConverter

json_schema = """
{
    "type": "object",
    "$id": "https://recap.build/person.schema.json",
    "properties": {
        "name": {"type": "string"}
    }
}
"""

# Use Recap as an intermediate format to convert JSON schema to Avro
struct = JSONSchemaConverter().to_recap(json_schema)
avro_schema = AvroConverter().from_recap(struct)

Store schemas in Recap's schema registry:

from recap.storage.registry import RegistryStorage
from recap.types import StructType, IntType

storage = RegistryStorage("file:///tmp/recap-registry-storage")
version = storage.put(
    "postgresql://localhost:5432/testdb/public/test_table",
    StructType(fields=[IntType(32)])
)
storage.get("postgresql://localhost:5432/testdb/public/test_table")

# Get all versions of a schema
versions = storage.versions("postgresql://localhost:5432/testdb/public/test_table")

# List all schemas in the registry
schemas = storage.ls()

Docker

Recap's gateway and registry are also available as a Docker image:

docker run \
    -p 8000:8000 \
    -e RECAP_URLS='["postgresql://user:pass@localhost:5432/testdb"]' \
    ghcr.io/recap-build/recap:latest

See Recap's Docker documentation for more details.

Schema

See Recap's type spec for details on Recap's type system.

Documentation

Recap's documentation is available at recap.build.

recap's People

Contributors

adrianisk, alexdemeo, criccomini, gunnarmorling, gwukelic, jakthom, joshuacoris, khuara17, mjperrone, nahumsa


recap's Issues

Add REST plugin support

This is a companion ticket to #51. While #51 added support for CLI plugins, developers might want to add REST API plugins as well (similar to Airflow REST API plugins).

I'm not 100% convinced this feature is needed. I'm mostly concerned about the complexity of this feature vs. how many developers actually want it/how realistic the use cases for it are. Still, I'm logging it here.

Cache plugins

The CLI is starting to get a bit slow because it's calling recap.plugins.load_*_plugins over and over again. These methods should cache their results during their first invocation.
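
A minimal sketch of the memoization described above, assuming the loaders take no arguments (the function name is illustrative, not Recap's actual API):

import functools

@functools.lru_cache(maxsize=None)
def load_command_plugins() -> dict:
    """Hypothetical stand-in for a recap.plugins load_*_plugins function."""
    # The expensive plugin scan runs once; later calls return the cached result.
    return {}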

Create initial unit tests

Recap has no tests right now. 😭 I want to make sure Recap is useful before I go all-in on tests. Some basic unit tests for the crawlers, storage, and API should suffice.

Replace CatalogPath with URL

I've been thinking about replacing CatalogPath with a URL string. CatalogPaths are:

  1. Too verbose
  2. Not really intuitive to users

Everyone knows what a URL is.

This would be a pretty significant change that would affect the catalog, browser, server, and analyzer APIs.

    def analyze(self, url: str) -> BaseMetadataModel | None:

    def children(self, url: str) -> list[str] | None

    def read(
        self,
        url: str,
        time: datetime | None = None,
    ) -> dict[str, Any] | None:

For the server, I would probably change the paths to look like this:

/catalog/directory?url=postgresql://localhost/some_db
/catalog/metadata?url=postgresql://localhost/some_db/some_table
/catalog/metadata/schema?url=gs://some-bucket/some/file.json
/catalog/metadata/indexes?url=postgresql://localhost@localhost/some_db/some_table
/catalog/search?query=foo

@nahumsa @nehiljain What do you think?

Add live browsing and analyzing to CLI

The recap catalog command reads from the data catalog storage layer. It would be cool to add a --live switch that reads from live DBs instead. For example:

recap catalog list /databases/postgresql/instances/localhost/schemas --live

Would list the schemas directly from the PG DB, not from the catalog cache. I'm undecided about whether this should be a --live option in catalog, or whether it should be its own top-level command:

recap browse /databases/postgresql/instances/localhost/schemas

You could also do live analyzing:

recap analyze /databases/postgresql/instances/localhost/schemas/my_db/tables/some_table

One could also make the argument that analyzing should be done via recap crawl. If you want up to date info, crawl it, then query the catalog.

Lastly, the examples above use paths (/databases/postgresql/instances/localhost/schemas/my_db/tables/some_table). Perhaps we want to support URLs as well, like recap crawl does? e.g. recap browse postgresql://chrisriccomini@localhost/my_db. This is a bit sketchy since there's no standard path for DB URLs that includes schema, table, etc.

Rename `db` analyzers

I want to get more specific when defining analyzer keys. Right now, the analyzers/db/*.py analyzers use generic names like access and columns. I'm going to rename these keys to sqlalchemy.access, sqlalchemy.columns, and so on. The motivation is to allow different analyzers the opportunity to return the same types of metadata with different models. For example, a BigQuery-specific access analyzer might return READER, WRITER, and so on. It might also return users and groups. Giving such an analyzer a key like bigquery.access would be good. Then readers can decide which access model to use (or even fall back to less specific ones).

Remove psycopg2 dependency

psycopg2 isn't needed out of the box anymore. I'm going to remove it. I will update the documentation to make a note that psycopg2 will need to be installed in order for SQLAlchemy, DatabaseBrowser, and the table analyzers to work with PostgreSQL.

Replace `--exclude` with `--analyzer`

I'm finding --exclude cumbersome as I add more analyzers. I'd prefer to just provide a list of analyzers that I want to run.

I want the default (when no analyzers are specified) to exclude expensive or slow analyzers.

I also want a special --analyzer all command that runs everything.

Store type metadata separately in DatabaseCatalog

I keep going back and forth on this.

Type metadata is currently stored in a single metadata field in catalog. This makes search easier, but complicates writes (and maybe metadata version history).

I want to explore storing the metadata by type:

  • parent
  • name
  • type (Add this column)
  • metadata

And see how the ergonomics feel.
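
As a rough sketch, the proposed layout could look like this as a SQLAlchemy table definition (the column names come from the list above; the table name and column types are assumptions):

from sqlalchemy import JSON, Column, MetaData, String, Table

metadata_obj = MetaData()

catalog = Table(
    "catalog",                 # hypothetical table name
    metadata_obj,
    Column("parent", String),
    Column("name", String),
    Column("type", String),    # the new per-metadata-type column proposed above
    Column("metadata", JSON),  # one row per (parent, name, type) instead of one big blob
)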

Convert analyzers to entrypoint plugins

Analyzers are currently pluggable via a --analyzer CLI option (or via settings.toml):

recap refresh postgresql://chrisriccomini@localhost/sticker_space_dev \
    --analyzer=recap.crawlers.db.analyzers.TableColumnAnalyzer \
    --analyzer=recap.crawlers.db.analyzers.TableAccessAnalyzer

@ananthdurai pointed out that this style differs from the CLI entry-point approach implemented in #52. It is more user-friendly to have everything be standard. I will convert analyzers to be defined via entry points, just like CLI plugins.
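
As a rough illustration of the entry-point approach (the group name here is an assumption, not Recap's actual group):

from importlib.metadata import entry_points

# Discover analyzers registered under a hypothetical "recap.analyzers" entry-point group
for entry_point in entry_points(group="recap.analyzers"):
    analyzer_class = entry_point.load()
    print(entry_point.name, analyzer_class)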

I'm thinking the refresh command will change slightly, to something like:

recap refresh [URL]
recap refresh all [URL] # alias for the command above
recap refresh access [URL]
recap refresh schema [URL]
... and so on

I would also like to support multiple refreshes in one command (e.g. recap refresh access,schema [URL]).

A feature of this approach is that different refresh subcommands can specify different CLI params and arguments.

A drawback of this approach is that I'm not sure how all and multiple-subcommands-at-once would work for subcommands that REQUIRE configuration.

Add CLI extension support

Can an extension framework (like hooks) be added along with the REST and CLI interfaces? Something like Recap as a microkernel with a plugin-style system. One possible extension is generating a static website from the Recap metadata (like dbt docs), which could be reused by many.

Add pydocs to typing.py and typed.py

My most recent round of REST updates left the typing.py and typed.py files with a bunch of code that's not documented. It's magical and complicated, so it absolutely needs some notes. This ticket tracks adding pydocs to these classes.

Recap requires 3.10, but pyproject.toml says 3.9

Discussed in #87

Originally posted by mct0006 January 11, 2023
Hi there! I'm interested in test driving Recap. I installed it with pip and am running python 3.9.13. As far as I can tell from the documentation, this is all I need to do to get Recap set up on my machine, but it seems that I must be missing something, as all cli commands fail with the following error:

Traceback (most recent call last):
  File "/usr/local/anaconda3/bin/recap", line 5, in <module>
    from recap.cli import app
  File "/usr/local/anaconda3/lib/python3.9/site-packages/recap/cli.py", line 3, in <module>
    from .plugins import init_command_plugins
  File "/usr/local/anaconda3/lib/python3.9/site-packages/recap/plugins/__init__.py", line 6, in <module>
    from .catalogs.abstract import AbstractCatalog
  File "/usr/local/anaconda3/lib/python3.9/site-packages/recap/plugins/catalogs/__init__.py", line 1, in <module>
    from .abstract import AbstractCatalog
  File "/usr/local/anaconda3/lib/python3.9/site-packages/recap/plugins/catalogs/abstract.py", line 7, in <module>
    class AbstractCatalog(ABC):
  File "/usr/local/anaconda3/lib/python3.9/site-packages/recap/plugins/catalogs/abstract.py", line 46, in AbstractCatalog
    type: str | None = None,
TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'

It seems like I must be missing something important here but I'm at a loss as to what it is. I'd appreciate any help!

Update REST docs

#108 changed the REST API format, but didn't update the docs. The paths and examples need to get updated. Typed vs. untyped routers should also be described.

Add data models for metadata types

Metadata is currently stored as a JSON blob in Recap's catalog implementation. The JSON is not strongly typed with something like a Pydantic data model or Python data class. It would be nice to allow analyzer plugins to define their metadata types and schemas. Then, the Recap HTTP/JSON server could use the schemas in its JSON schema.
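
A rough sketch of what a plugin-defined metadata model might look like, assuming Pydantic (all names here are hypothetical):

from pydantic import BaseModel

class ColumnMetadata(BaseModel):
    """Hypothetical metadata model that an analyzer plugin could register."""
    name: str
    type: str
    nullable: bool = True

class TableMetadata(BaseModel):
    columns: list[ColumnMetadata]

# The server could surface TableMetadata.schema() (Pydantic v1) or
# TableMetadata.model_json_schema() (Pydantic v2) in its JSON schema.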

Add docstrings

The Recap API doesn't have any docstrings right now. I need to add some.

Usage of Code Linters and Formatters

As discussed in #149, it would be good for the development of the library to have a specific code linter (Pylint, flake8) and/or code formatter (black).

Would you be interested in setting up a linter and formatter for the project?

hello from Intake

I was pointed to your project by a colleague who saw your blog post in some news stream.

Let me introduce you to Intake https://intake.readthedocs.io/en/latest/

We seem to have a lot of overlapping interests and functionality. Intake has been around for some time and already supports many of the things you mention in your blog post:

  • a simple Python library which only does cataloging
  • many data formats and limited in-memory data types (dataframe, array, iterable, xarray, stream)
  • support for reading over any remote backend supported by fsspec (http, s3, gcs, azure, ftp, gdrive...)
  • a REST server and protocol
  • a CLI
  • a simple Python library which only does cataloging and loading and then gets out of the way
  • crawlers for remote data services with catalogs of their own like SQL
  • a simple YAML-based catalog format for building your own
  • streaming data
  • plugin systems for many of the components

Rather than reinventing all these things here, I mildly suggest you might consider making use of the ones already existing. I haven't yet got my head around what differentiates recap from Intake - do you think you can summarize? Maybe we can learn something from you, or we can collaborate to make something better for everyone.

Replace `analyze` and `browse` with `recap catalog --live`

I think it might be more intuitive to replace the recap analyze and recap browse commands with a --live switch in recap catalog [read|list] (or some similar name).

The only oddity is that analyze takes a specific analyzer plugin name right now, but read returns the entire metadata dictionary. I think two things should be done for this:

  1. Add an optional analyze plugin name option to read that filters out and returns only the specified plugin metadata.
  2. Write a LiveCatalog or PassthroughCatalog that implements only read and ls.

(2) is really interesting to me because it could be used for the crawler as well. The basic idea is that read/ls calls would pass through directly to the underlying infrastructure (via their browser and analyzer(s)). This would unify the live/catalog commands and also make the crawler code much simpler (since it'd just read/ls the PassthroughCatalog and then put the response into the catalog specified in settings.toml).
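
A very rough sketch of the PassthroughCatalog idea, with every name below being hypothetical:

from datetime import datetime
from typing import Any

class PassthroughCatalog:
    """Hypothetical catalog that forwards read/ls straight to a browser and its analyzers."""

    def __init__(self, browser, analyzers):
        self.browser = browser
        self.analyzers = analyzers

    def ls(self, url: str) -> list[str] | None:
        # Delegate listing to the live system instead of catalog storage
        return self.browser.children(url)

    def read(self, url: str, time: datetime | None = None) -> dict[str, Any] | None:
        # Merge whatever metadata each analyzer returns for this URL
        metadata: dict[str, Any] = {}
        for analyzer in self.analyzers:
            if result := analyzer.analyze(url):
                metadata[type(analyzer).__name__] = result
        return metadata or None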

Support sampling in Pandas `ProfileAnalyzer`

The Pandas ProfileAnalyzer currently analyzes entire files or tables. This is not ideal for larger datasets; users should be able to configure sampling.

I need to think about the best way to do this, since the config will be set for a URL (e.g. gs://foo-bar-bucket). It should be able to adapt to small, medium, and large files. I think allowing users to specify either is best; let the user pick:

sample_rows: int | None = None,
sample_percent: float | None = None,

If both are set, the profiler will use whichever setting results in a larger sample on a given dataset. For example:

sample_rows = 1000
sample_percent = .5

For dataset rows = 500, all data is analyzed. For rows = 1500, 1000 rows will be analyzed. For rows = 3000, 1500 rows will be used.
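
A minimal sketch of the "whichever is larger" rule above (function and parameter names are illustrative):

def effective_sample_size(
    total_rows: int,
    sample_rows: int | None = None,
    sample_percent: float | None = None,
) -> int:
    """Pick the larger configured sample, capped at the dataset size."""
    candidates = []
    if sample_rows is not None:
        candidates.append(sample_rows)
    if sample_percent is not None:
        candidates.append(int(total_rows * sample_percent))
    if not candidates:
        return total_rows  # no sampling configured; profile everything
    return min(total_rows, max(candidates))

# effective_sample_size(500, 1000, 0.5)   -> 500   (all data analyzed)
# effective_sample_size(1500, 1000, 0.5)  -> 1000
# effective_sample_size(3000, 1000, 0.5)  -> 1500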

No need to let the user define the sampling strategy.

This is a follow-on to #157.

Add `pyright` to `style` group

We've added isort, black, and pylint. I'm using pyright for static type checking in VSCode. I'd like to add that to the style dev-dependencies group, CI, and DEVELOPERS.md as well.

Add BigQuery lineage using Dataplex Lineage API

Seems like the BigQuery lineage API hasn't been implemented server-side yet:

from google.cloud.datacatalog.lineage import LineageClient, SearchLinksRequest, EntityReference
client = LineageClient()
er = EntityReference(fully_qualified_name='bigquery:some-project-1234.austin_311.311_service_requests')
sr = SearchLinksRequest(parent='some-project-1234', source=er)
client.search_links(request=sr)

Gives me:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/chrisriccomini/Code/recap/.venv/lib/python3.10/site-packages/google/cloud/datacatalog/lineage_v1/services/lineage/client.py", line 2168, in search_links
    response = rpc(
  File "/Users/chrisriccomini/Code/recap/.venv/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py", line 113, in __call__
    return wrapped_func(*args, **kwargs)
  File "/Users/chrisriccomini/Code/recap/.venv/lib/python3.10/site-packages/google/api_core/grpc_helpers.py", line 74, in error_remapped_callable
    raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.MethodNotImplemented: 501 Received http2 header with status: 404

Doesn't seem like the API has been implemented server side? Odd...

I'm not 100% sure the objects I made are correctly set up. I'm following:

https://cloud.google.com/python/docs/reference/lineage/latest/google.cloud.datacatalog.lineage_v1.services.lineage.LineageClient
https://cloud.google.com/python/docs/reference/lineage/latest/google.cloud.datacatalog.lineage_v1.types.EntityReference
https://cloud.google.com/python/docs/reference/lineage/latest/google.cloud.datacatalog.lineage_v1.types.SearchLinksRequest

Add a `browse` command plugin

It would be useful to analyze live data with:

recap browse <url> <path>

Like:

recap browse postgresql://username@localhost/some_db /schemas/public/tables

And get back the results in JSON form. The first two params after browse would always be url and path.

Snowflake crawl optimization

During initial exploration, the Snowflake crawler was indexing ~8 tables/minute.

Due to the speed of Snowflake's metadata layer, this could probably benefit from some combination of:

  • Source-specific crawl specialization
  • Options to "bulk crawl" via e.g. the SNOWFLAKE database
  • Easy invocation of parallel crawls

P.S. love the project direction

Add `pii` analyzer

Add a recap/analyzers/tokern/pii.py analyzer that looks for PII in supported databases.

A separate Google DLP detector might be useful at some point, but it's a bit much right now since it requires GCP. The Tokern library is easier for me to implement at the moment.

Add analyzer docs

Now that I have mkdocstrings set up, I'm noticing that the analyzers don't have good docstrings. I need to remedy that.

Replace `catalogs.duckdb` with `catalogs.db` using SQLAlchemy

Switch Recap to use a SQLAlchemy catalogs.db implementation that works with SQLAlchemy dialects. This will give Recap more flexibility, and let developers deploy with Postgres (or even Yugabyte et al) for a little more scale.

Recap should continue to use DuckDB as the default DB out of the box, but there's no reason we shouldn't support other DBs for catalog storage.

For the time being, I don't want to add Alembic (db migration) support. I'll just use SQLAlchemy to create DBs/tables. If the schema gets more complex, I will revisit the topic of Alembic. Right now it's so simple the added dependency isn't warranted.

Replace `db.location` with `catalog.info` analyzer

The db.location analyzer has always annoyed me a little bit. It's kind of a special case. I had the idea to have a recap.analyzers.catalog package with an info analyzer inside that contained:

class Info(BaseMetadataModel):
    path: str
    """
    Full catalog path.
    """

I thought I might also include some kind of insertion/updated timestamp, but I now think that should be done per-analyzer since analyzers can update independently of one another.

Make router plugins configurable

Recap's catalog.py router currently loads all available router plugins. This causes both the typed and untyped catalog routers to be loaded, which might not be desirable. Users might also wish to turn other routers on or off. Users should be able to specify which routers are active in an api.routers setting.

Expose metadata versioned history

Recap only exposes the metadata that was discovered during the last crawl. It would be really useful to be able to see how metadata evolved over time.

Technically, you can kind of do this now by using the FilesystemCatalog with a versioned object store like S3 or GCS, but Recap doesn't expose the older versions in any way.

I think the right way to implement this is to bake the versioning into the Recap API, and force catalogs to implement it as they see fit. For RDBMS, we'll have to keep all metadata as separate rows. For FilesystemCatalogs we might be able to leverage S3/GCS versioning if that scheme is used. Since we're using fsspec, I'm not 100% sure it's possible to do that in a generic way. Something to think about.

exclude tag in 0.4.0 is throwing unexpected keyword error

/Users/slin/.pyenv/versions/3.10.9/envs/recap-ply/lib/python3.10/site-packages/recap/crawler.py:179 in create_crawler

    176     **config,
    177 ) -> Generator['Crawler', None, None]:
    178     with create_browser(url=url, **config) as browser:
  ❱ 179         yield Crawler(browser, catalog, **config)
    180

locals:
    browser = <recap.browsers.analyzing.AnalyzingBrowser object at 0x110cb9390>
    catalog = <recap.catalogs.db.DatabaseCatalog object at 0x10c733760>
    config  = {'excludes': <BoxList: ['sqlalchemy.profile', 'pandas.profile']>, 'recursive': True}
    url     = 'bigquery://xxx'
TypeError: Crawler.__init__() got an unexpected keyword argument 'excludes'

Getting the above error from running recap crawl bigquery://xxx --exclude sqlalchemy.profile --exclude pandas.profile

Add logging

Recap doesn't have any logging right now. Kinda a bummer. Should add some, huh. I'll start with the API, DB crawler, and storage implementations.
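
A minimal sketch of the intended style, assuming standard library logging (the function shown is hypothetical):

import logging

logger = logging.getLogger(__name__)  # one module-level logger per Recap module

def crawl(url: str) -> None:
    """Hypothetical crawler entry point showing where logging would go."""
    logger.info("Crawling %s", url)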

Add an `analyze` command plugin

It would be useful to analyze live data with:

recap analyze <plugin> <url> <path>

Like:

recap analyze db.column postgresql://username@localhost/some_db /schema/public/tables/some_table

And get back the results in JSON form. The first two params after db.column would always be url and path; subsequent params could be forwarded as **configs to the create_analyzer method.

Crawler deletes filtered directories

If the user specifies filters when crawling, the crawler will delete any directories that it comes across in the catalog that still exist, but were filtered out. I think the crawler shouldn't delete directories from the catalog if a filter is specified.
