cert-manager / cert-manager

Automatically provision and manage TLS certificates in Kubernetes

Home Page: https://cert-manager.io

License: Apache License 2.0

Makefile 2.57% Go 95.91% Shell 1.24% Mustache 0.17% Dockerfile 0.10%
certificate crd hacktoberfest kubernetes letsencrypt tls

cert-manager's Introduction

cert-manager project logo

Build Status Go Report Card
Artifact Hub Scorecard score CLOMonitor

cert-manager

cert-manager adds certificates and certificate issuers as resource types in Kubernetes clusters, and simplifies the process of obtaining, renewing and using those certificates.

It supports issuing certificates from a variety of sources, including Let's Encrypt (ACME), HashiCorp Vault, and Venafi TPP / TLS Protect Cloud, as well as local in-cluster issuance.

cert-manager also ensures certificates remain valid and up to date, attempting to renew certificates at an appropriate time before expiry to reduce the risk of outages and remove toil.

cert-manager high level overview diagram

Documentation

Documentation for cert-manager can be found at cert-manager.io.

For the common use-case of automatically issuing TLS certificates for Ingress resources, see the cert-manager nginx-ingress quick start guide.

For a more comprehensive guide to issuing your first certificate, see our getting started guide.

Installation

Installation is documented on the website, with a variety of supported methods.

Troubleshooting

If you encounter any issues whilst using cert-manager, we have a number of ways to get help:

  • If you believe you've found a bug and cannot find an existing issue, feel free to open a new issue! Be sure to include as much information as you can about your environment.

Community

The cert-manager-dev Google Group is used for project wide announcements and development coordination. Anybody can join the group by visiting the group page and clicking "Join Group". A Google account is required to join.

Meetings

We have several public meetings which any member of our Google Group is more than welcome to join!

Check out the details on our website. Feel free to drop in and ask questions, chat with us or just to say hi!

Contributing

We welcome pull requests with open arms! There's a lot of work to do here, and we're especially concerned with ensuring the longevity and reliability of the project. The contributing guide will help you get started.

Coding Conventions

Code style guidelines are documented on the coding conventions page of the cert-manager website. Please try to follow those guidelines if you're submitting a pull request for cert-manager.

Importing cert-manager as a Module

โš ๏ธ Please note that cert-manager does not currently provide a Go module compatibility guarantee. That means that most code under pkg/ is subject to change in a breaking way, even between minor or patch releases and even if the code is currently publicly exported.

The lack of a Go module compatibility guarantee does not affect API version guarantees under the Kubernetes Deprecation Policy.

For more details see Importing cert-manager in Go on the cert-manager website.

The import path for cert-manager versions 1.8 and later is github.com/cert-manager/cert-manager.

For all versions of cert-manager before 1.8, including minor and patch releases, the import path is github.com/jetstack/cert-manager.
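As a minimal sketch of what importing the newer path looks like, the snippet below constructs a Certificate object using the v1 API types. The package paths and field names shown (pkg/apis/certmanager/v1, SecretName, DNSNames, IssuerRef) reflect recent releases but, as noted above, are not covered by a Go compatibility guarantee and may change.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	cmapi "github.com/cert-manager/cert-manager/pkg/apis/certmanager/v1"
	cmmeta "github.com/cert-manager/cert-manager/pkg/apis/meta/v1"
)

func main() {
	// Build a Certificate in memory; creating it in a cluster would additionally
	// require a typed client for the cert-manager API group.
	crt := &cmapi.Certificate{
		ObjectMeta: metav1.ObjectMeta{Name: "example-com", Namespace: "default"},
		Spec: cmapi.CertificateSpec{
			SecretName: "example-com-tls",
			DNSNames:   []string{"example.com"},
			IssuerRef: cmmeta.ObjectReference{
				Name: "letsencrypt-prod",
				Kind: "ClusterIssuer",
			},
		},
	}
	fmt.Println(crt.Name, crt.Spec.DNSNames)
}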

Security Reporting

Security is the number one priority for cert-manager. If you think you've found a security vulnerability, we'd love to hear from you.

Follow the instructions in SECURITY.md to make a report.

Changelog

Every release on GitHub has a changelog, and we also publish release notes on the website.

History

cert-manager is loosely based upon the work of kube-lego and has borrowed some wisdom from other similar projects such as kube-cert-manager.

Logo design by Zoe Paterson


cert-manager's Issues

Add support for modifying named ingress in Certificate

A user should be able to specify a particular ingress resource to use/update for solving http-01 challenges, eg.

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: certmanager-k8s-co
spec:
  secretName: certmanager-k8s-co
  issuer: letsencrypt-staging
  domains:
  - certmanager.k8s.co
  acme:
    config:
    - domains:
      - certmanager.k8s.co
      http-01:
        # note: using an ingress doesn't currently work
        ingress: certmanager-k8s-co
status:
  acme: {}

If the Ingress already exists, we should simply modify the existing one, in order to support GCE ingress controllers.

Support 'IsCA' flag on Certificate resource

In order to allow for auto-setup of a CA based issuer using self-signed certificates, we should allow the Certificate spec to indicate whether this Certificate is an issuing CA.

ref #83 #84

/area api
/kind feature

Rate limiting ACME provider requests

ACME servers each set their own rate limits. We should be aware of these rate limits so as to not exhaust them too quickly.

My biggest issue here is that I am not aware of any way to read the current quotas from an ACME server, meaning there's no way for cert-manager to actually know how much of its quota remains. If anyone has any insight here I'd very much appreciate it.

Otherwise, I think the next best thing is to increase the exponential backoff delay so that we simply make solving attempts more slowly, perhaps starting the backoff at 1 minute?
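One way to get that slower backoff without tracking server-side quotas is a per-item exponential failure rate limiter, sketched below with client-go's workqueue package; the 1 minute base and 32 minute cap are illustrative values, not settings taken from cert-manager.

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	// Back off retries per item, starting at 1 minute and capping at 32 minutes.
	rl := workqueue.NewItemExponentialFailureRateLimiter(1*time.Minute, 32*time.Minute)

	item := "default/example-certificate"
	for i := 0; i < 5; i++ {
		// Each call to When records another failure, roughly doubling the delay
		// before the next attempt is allowed.
		fmt.Println("retry in", rl.When(item))
	}
	rl.Forget(item) // reset the backoff once the order finally succeeds
}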

Add 'selfsigned' issuer

In order to support #83, we could create a self-signed certificate issuer.

This issuing backend would simply sign certificates with their own private keys. This will allow us to easily 'seed' a CA issuer with a signed CA keypair.

/kind feature

[ACME] Solving http-01 challenges with Ingress resources

Currently, both kube-lego and kube-cert-manager use ingress resources directed at themselves to solve http-01 acme challenges.

This works, but it runs into issues when running across multiple namespaces. It means managing Pod endpoints in separate namespaces, which often breaks health checks performed by the ingress controller (ref jetstack/kube-lego#68).

I've been thinking about the best way to achieve this, and am beginning to think we could use a Job resource created in the target namespace that listens for ACME challenges and responds with a key as defined in its spec. This way, a normal Service and Ingress resource can be used to direct traffic to the http-01 solver.

The downside to this is a requirement that additional pods are created in the target namespace to solve challenges, but this seems like it may be a fair tradeoff.

Consistent log messages

Right now we don't have a consistent log format.

Previously, kube-lego has used structured logs provided by logrus throughout the codebase. I'm happy to use this sort of approach, but we need to come up with a consistent strategy/policy for how we log, to avoid emitting many similar log messages or nesting logs too deeply.

Create use-case focused tutorials on using cert-manager

We should have some clear tutorials on how to use cert-manager in common configurations.

It'd be good if these tutorials could link out to the documentation on some topics, so that users can learn more about the features of cert-manager (eg. linking out to alternative config options)

ref #28

/cc @ahmetb

Standardising secretRefs

Currently there are a number of places that we reference either whole secrets, or individual keys in a secret resource.

We should standardise how this is done and agree on a naming convention for the keys. Right now we have:

clouddns:
  serviceAccount:
    name: secret-name
    key: secret-key

Should this become serviceAccountSecretKeyRef?
And if we don't reference a particular key in the secret, what should our naming convention for the field be? Right now we have secretName on a Certificate resource, which is a simple string that I based off how Ingress resources specify secrets. Should this be something more like secretNameSecretName (without the ridiculous naming in this example, obviously!)?

Create ExternalAdmissionHook & Initializers for CM types

Instead of creating our own API server, it may be possible for us to use a combination of dynamic admission control (alpha in 1.7) and CRD validation (alpha in 1.8).

In order to create an External Admission Hook, we need to serve HTTPS with a signed TLS certificate. In order to automate this process and not make this a pain-point for users, I propose we use the simple CA backend to create a cert-manager 'system' CA, which it can then use to request a Certificate to serve with.

/kind enhancement
/area api

How does cert-manager interplay with SPIFFE?

SPIFFE provides a framework to manage identity within an environment. Looking at the existing Kubernetes Certificates API, there are fields for the 'requester' of a certificate, so that some kind of audit log can be maintained and certificate requests can be attributed to individual users/applications.

It'd be great if we can somehow integrate this with cert-manager. There should be some way to identify after-the-fact that a user has issued a certificate from a given issuer.

An example flow:

  1. Application pod starts
  2. Pod creates a Certificate resource that is somehow attributed to the individual pod
  3. Pod then waits for a change to the status of that Certificate (issued or denied)
  4. The Certificate resource is either fulfilled, or denied depending on some kind of policy
  5. Either the application starts as usual, using this new certificate, or fails to start if the request was denied.

Another option is to implement some form of Flex volume that can detect a pods identity, communicate with cert-manager to retrieve a certificate and then inject that secret into the pod through some volume path (similar to how https://github.com/fcantournet/kubernetes-flexvolume-vault-plugin operates, but with actual certificates instead of just a vault token). Worth noting that with this option, cert-manager is involving itself in the delivery of certificates to applications, which previously it has not attempted to do.

Periodically invalidate Conditions in Status block of Issuer/Certificate

At the moment we perform account validations against the ACME server every time the Prepare method is called on the ACME issuer.

This is unnecessary and could lead to us hitting rate limits. Instead, we should record when we last validated the account, and only perform that validation again after a predefined 'recheck period' has passed.

This is once again similar to other Kubernetes APIs, and allows for finer control over when calls are made.
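A minimal sketch of the gating logic, assuming the time of the last successful account check is recorded somewhere in the Issuer status; the field and the 4 hour period here are hypothetical, not values used by cert-manager.

package acme

import "time"

// accountRecheckPeriod is an illustrative recheck interval.
const accountRecheckPeriod = 4 * time.Hour

// needsAccountRecheck reports whether the ACME account registration should be
// validated against the server again, based on when it was last confirmed valid.
func needsAccountRecheck(lastVerified, now time.Time) bool {
	return lastVerified.IsZero() || now.Sub(lastVerified) >= accountRecheckPeriod
}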

Improve testing

Currently we have very few tests throughout the codebase.

We should spend some time picking up this backlog of work now. For now, unit tests will suffice.

I'll open a separate issue for creating an actual e2e suite in future.

/area test

Enable leader election

We need to use leader election to make sure only one instance of cert-manager is running at a time. This has recently been merged into client-go.
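A rough sketch of wiring this up with the present-day client-go leader election helpers (which postdate this issue) is shown below; the Lease name, namespace, and timing values are placeholders, not cert-manager's actual configuration.

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "cert-manager-controller", Namespace: "cert-manager"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// startControllers(ctx) // only the elected replica runs the control loops
			},
			OnStoppedLeading: func() {
				os.Exit(0) // lost the lease: exit and let the replica restart
			},
		},
	})
}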

Determine how best to store KCM persistent data

Currently, as far as I am aware, KCM stores some persistent state in boltdb.

Given kube-lego's current state of not needing persistent storage attached, it'd be preferable to create this as part of our API group and store it as a ThirdPartyResource.

What data exactly is being stored in BoltDB, and how/how frequently is it accessed? 😄

/cc @luna-duclos @whereisaaron

Is it okay to store ACME authorisation URIs in non-secret places?

We currently store ACME authorisation URIs for domains in the Certificate resource's status block so they can be reused when issuing certificates later. It's my understanding that these URIs are useless without access to the private key of the user account that obtained them, hence these values are not secret.

I'm not 100% sure this hypothesis is true, however, and would appreciate input from someone who knows the ACME protocol a bit better!

/cc @whereisaaron as you questioned this!

Review of API schema

Right now our API schema is relatively flexible. We only define a single external version, v1alpha1, and we give no guarantees on the stability of this API.

Now that we're at a point where the project does function, we should review our schema to make sure it's consistent and makes sense. This specifically relates to the 'Issuer' and 'Certificate' resource type.

An example for ACME can be found here: https://github.com/jetstack-experimental/cert-manager/blob/master/docs/acme-cert.yaml and here: https://github.com/jetstack-experimental/cert-manager/blob/master/docs/acme-issuer.yaml. The full type definitions can be found in types.go.

Support switching Issuer on Certificate resource

Currently, the check for Certificate validity only takes into account the start/expiry date on the Certificate. Therefore, if a user were to update the Issuer field on a Certificate resource (e.g. to switch from letsencrypt-staging to letsencrypt-prod), cert-manager would still see the Certificate as valid.

By adding a Verify(Certificate) (bool, error) method to the Issuer interface, we should be able to support arbitrary additional certificate verification on a per-issuer basis. This would allow an Issuer to 'force' a Certificate to be re-issued.

Relevant tweet: https://twitter.com/lmarsden/status/901095129045499904
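To make the proposal concrete, here is an illustrative sketch of the extended interface and one possible ACME implementation; the types and field names are stand-ins for the real cert-manager API, not the actual code.

package issuer

// Certificate is a trimmed-down stand-in for the real v1alpha1.Certificate type,
// used only to illustrate the proposed Verify hook.
type Certificate struct {
	SpecIssuerName   string // issuer requested in the spec
	StatusIssuerName string // issuer that actually signed the stored certificate
}

// Issuer is the issuer interface extended with the proposed Verify method.
type Issuer interface {
	// Verify reports whether the stored certificate still satisfies the spec for
	// this issuer; returning false forces re-issuance.
	Verify(crt *Certificate) (bool, error)
}

// acmeIssuer shows one possible implementation: treat the certificate as invalid
// whenever the recorded issuer no longer matches the spec, forcing re-issuance.
type acmeIssuer struct{ name string }

func (a *acmeIssuer) Verify(crt *Certificate) (bool, error) {
	return crt.SpecIssuerName == a.name && crt.StatusIssuerName == a.name, nil
}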

Add support for watching multiple individual namespaces

In order to support better scaling and isolation of cert-manager, a user should be able to specify a single namespace, or a list of namespaces, to watch. This does not necessarily have to be the namespace cert-manager is running in; it just requires sufficient roles so that cert-manager can access the appropriate resources.

Ideally, we would multiplex multiple event streams so that we only watch events in the namespaces named as monitored, rather than receiving the entire event stream for all namespaces.

/cc @whereisaaron @simonswine @luna-duclos
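A sketch of that multiplexing idea using client-go shared informer factories scoped with WithNamespace is shown below; the namespace names are placeholders and the Secrets informer simply stands in for whichever informers cert-manager would register.

package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	stopCh := make(chan struct{})
	// One factory per watched namespace: each factory only receives events for its
	// own namespace, instead of a single cluster-wide watch.
	for _, ns := range []string{"team-a", "team-b"} {
		factory := informers.NewSharedInformerFactoryWithOptions(
			client, 10*time.Minute, informers.WithNamespace(ns))
		factory.Core().V1().Secrets().Informer() // register whichever informers are needed
		factory.Start(stopCh)
	}
	<-stopCh
}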

Allow filtering resources to watch based on a label

Both kube-lego and kube-cert-manager have in some form held a concept of a class label that can be added to resources to indicate which instance of kube-lego/kube-cert-manager should process what resource. This is a ticket to discuss the introduction of a similar concept to cert-manager.

This distinction is only relevant if the user wants to run multiple instances of cert-manager, and so for the 'normal case' (ie. user sets up one instance of cert-manager watching one namespace) I'd rather keep the configuration surface at a minimum.

However, there are a few cases where this becomes especially important:

  • If a user wants one 'global' instance of cert-manager that watches all namespaces, in addition to an instance of cert-manager configured to watch just one namespace. Without this feature, both instances would process the same Issuer/Certificate resources making this a currently unsupported configuration.

  • If a user plans to run multiple instances of cert-manager within a single namespace, or multiple instances watching all namespaces.

These are potential scenarios where such functionality may be useful; however, I'd still like to hear whether these are convincing user stories that we should address. I'm averse to complicating the configuration surface for users if it isn't required.

One proposed, non-intrusive way this could be implemented is by allowing arbitrary key-value pairs to be provided to the cert-manager CLI that filter which resources are watched. This way, we are not opinionated about what labels should be added to resources, and it saves us having 'class' as a first-class concept in our API.

Related discussion in #19 and #23

/cc @whereisaaron

Support alternative certificate usages

In order to use cert-manager for more than just TLS serving certificates, it'd be useful to support an arbitrary list of usages that should be put onto the Certificate.

For convenience, this could default to standard serving usages.

ref #13

/area api
/kind enhancement

Implement dns-01 challenge solver

We need to implement the dns-01 challenge solver. We should be able to reuse the providers as defined in github.com/xenolf/lego/providers. Some may need to be vendored and forked into our project in order to support the options we need.
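Independent of which providers we vendor, the record each provider must publish is fixed by the ACME spec: the TXT value at _acme-challenge.<domain> is the base64url-encoded SHA-256 digest of the key authorization. A self-contained sketch, with a placeholder token and account-key thumbprint:

package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// dns01TXTValue returns the value to publish in the _acme-challenge TXT record
// for a dns-01 challenge: base64url(SHA-256(keyAuthorization)), where the key
// authorization is token + "." + the account key's JWK thumbprint.
func dns01TXTValue(keyAuthorization string) string {
	sum := sha256.Sum256([]byte(keyAuthorization))
	return base64.RawURLEncoding.EncodeToString(sum[:])
}

func main() {
	// Placeholder values: the real token comes from the ACME server and the real
	// thumbprint is derived from the account private key.
	keyAuth := "example-token" + "." + "example-account-thumbprint"
	fmt.Println(dns01TXTValue(keyAuth))
}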

Implement control loop to implement `kubernetes.io/tls-acme: "true"` behaviour on Ingress resources

Both kube-lego and kube-cert-manager support some form of annotation on Ingress resources to trigger an ACME registration flow for the domains listed on the Ingress.

cert-manager should support something like this too for its various issuer backends. I'm thinking something like the following:

certmanager.kubernetes.io/enabled: "true"
certmanager.kubernetes.io/issuer: "letsencrypt-prod"

This may become more complex as we add more issuers, as some may require additional configuration. This could either be solved with defaults in Issuer specifications, or additional annotations (which I'd rather avoid as it becomes confusing to keep track of)

I'm also unsure whether to make these labels or annotations. If they are labels, we can create a SharedIndexInformer that filters based on a label selector of certmanager.kubernetes.io/enabled: "true", saving us from watching all Ingress resources. I don't think it's possible to do the same thing with annotations? /cc @whereisaaron
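For reference, here is a sketch of that label-selector filtering with a present-day client-go SharedInformerFactory (using the networking/v1 Ingress API, which postdates this issue); the label key is the one proposed above and is not an agreed-upon name.

package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Only Ingresses carrying the opt-in label are ever listed/watched by this
	// informer, so unrelated Ingress resources are never delivered or cached.
	factory := informers.NewSharedInformerFactoryWithOptions(client, 10*time.Minute,
		informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
			opts.LabelSelector = "certmanager.kubernetes.io/enabled=true"
		}))

	ingressInformer := factory.Networking().V1().Ingresses().Informer()
	ingressInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("ingress opted in:", obj.(metav1.Object).GetName())
		},
	})

	stopCh := make(chan struct{})
	factory.Start(stopCh)
	<-stopCh
}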

Build ThirdPartyResource API group

We need to begin creating an API group to hold the contents of the old stable.k8s.psg.io/v1 group.

We can automatically generate the API group for a single types.go file, like in other repositories that have implemented ThirdPartyResources. We've got an example of this over in our navigator repo here: https://github.com/jetstack-experimental/navigator/tree/master/pkg/apis/navigator

Based on the naming of the service-catalog API group, I propose certmanager.k8s.io be used.

Add Prometheus metrics endpoint

We should expose a metrics endpoint in cert-manager that can be scraped by monitoring software such as Prometheus.

This is an overall tracking issue that'll probably be broken down into a number of smaller issues through the implementation of this.

We'll need to take special care to avoid storing all the metric data only in memory, as that would cause an instance restart to lose information.

It may still be valuable to expose metrics for just the current instance, however, as that could be a useful graph to see 😄

/cc @ahmetb
/area monitoring
/kind feature
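As a rough illustration of what such an endpoint could look like, the sketch below registers a single counter and serves /metrics; the metric name, label, and port are made up for the example and are not cert-manager's actual metrics.

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// certificateRenewals is a hypothetical counter tracking renewal attempts per issuer.
var certificateRenewals = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "certmanager_certificate_renewals_total",
		Help: "Number of certificate renewal attempts, labelled by issuer.",
	},
	[]string{"issuer"},
)

func main() {
	prometheus.MustRegister(certificateRenewals)

	// Somewhere in the issuance path we would increment the counter:
	certificateRenewals.WithLabelValues("letsencrypt-prod").Inc()

	// Expose the standard Prometheus scrape endpoint.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9402", nil))
}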

Support Kubernetes 1.6 and below

On startup, the cert-manager-controller will attempt to create CRD types in the target API server. This consequently broke support for Kubernetes 1.6 and below.

We should aim to support Kubernetes 1.6, either through detection of a lack of CRDs (and potentially creating TPRs instead), or through implementing our own API server (and providing instructions on how to communicate directly to this API server for users of Kubernetes 1.6 and below)

Set up automatic push of docker images

Right now docker images built from the master branch are not automatically pushed. They should be set up to push to quay.io automatically on successful builds: master should push the canary tag, and any git tags should be pushed to quay.io too.

Use Condition in Status block to represent the 'state' of Issuer/Certificate

Throughout the Kubernetes API, and in service-catalog, 'Conditions' are used within the Status block of a resource to track changes to the state of a resource.

We can use this same pattern to record information about resources. This will allow us to see a history of changes, as well as tidy up the way we structure the resource Status block.
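An illustrative shape for such a condition entry on the Issuer status, mirroring the convention used by core Kubernetes types; the field names here are a sketch, not the final schema.

package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// ConditionStatus mirrors the True/False/Unknown convention used across Kubernetes APIs.
type ConditionStatus string

const (
	ConditionTrue    ConditionStatus = "True"
	ConditionFalse   ConditionStatus = "False"
	ConditionUnknown ConditionStatus = "Unknown"
)

// IssuerCondition records one observed aspect of an Issuer's state.
type IssuerCondition struct {
	Type               string          `json:"type"`
	Status             ConditionStatus `json:"status"`
	LastTransitionTime metav1.Time     `json:"lastTransitionTime,omitempty"`
	Reason             string          `json:"reason,omitempty"`
	Message            string          `json:"message,omitempty"`
}

// IssuerStatus holds a list of conditions rather than a single flat 'ready' field,
// so state changes leave a visible history.
type IssuerStatus struct {
	Conditions []IssuerCondition `json:"conditions,omitempty"`
}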

Consider replacing 'domains' field with 'commonName' and 'altNames' fields

At the moment we have a domains field that is used to set the requested domains on the Certificate.

In order to allow greater flexibility in the certificates that we issue, we could move to using commonName and altNames in the v1alpha1.Certificate.Spec block.

This has the downside of increasing the complexity for a first time user (they'll need to understand TLS enough to know they want to set the CommonName). If we had our own API server, we could consider a write-only domains field that automatically sets it (like how Secret resources have a StringData field: https://github.com/kubernetes/api/blob/f30e293246921de7f4ee46bb65b8762b2f890fc4/core/v1/types.go#L4466), but until then we cannot.

This would make Certificate resources look more like this:

## Example Certificate that uses multiple challenge mechanisms to obtain
## a SAN certificate for multiple domains from the letsencrypt-staging issuer.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: test-k8s-group-san
spec:
  secretName: test-k8s-group-san
  issuer: letsencrypt-staging
  commonName: test.k8s.group
  altNames:
  - test2.k8s.group
  acme:
    config:
    - http-01:
        ingressClass: nginx
      domains:
      - test.k8s.group
      - test2.k8s.group

Create documentation

At the moment we have very little documentation.

We should provide both developer facing and user facing documentation. The developer docs should clearly explain the different concepts within the code (eg. the interface for Issuer and how each method should behave), while the user facing docs should explain how to use cert-manager and present the Issuer/Certificate concepts in a user-friendly way.

Add support for creating Ingress resource for http-01 authorization

If a user specifies an ingress class to use for http-01 challenges, we should create a temporary ingress resource with the given class to be used just for solving the challenge.

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: certmanager-k8s-co
spec:
  secretName: certmanager-k8s-co
  issuer: letsencrypt-staging
  domains:
  - certmanager.k8s.co
  acme:
    config:
    - domains:
      - certmanager.k8s.co
      http-01:
        ingressClass: nginx
status:
  acme: {}

Supporting auto-generation of CA issuer keypair

#79 adds a basic CA issuer that reads a signing keypair from a Secret in the Kubernetes API server in order to issue certificates.

For convenience, it may be desirable to support an 'automatically generate a signing keypair' mode.
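Generating such a keypair is straightforward with the standard library. The sketch below produces a self-signed CA certificate and key in the PEM form that could be written into the issuer's Secret; the subject name and one-year lifetime are arbitrary choices for the example.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// Template for a self-signed CA certificate.
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "cert-manager-example-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageCRLSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}

	// Self-sign: the template is both the certificate and its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	keyDER, err := x509.MarshalECPrivateKey(key)
	if err != nil {
		panic(err)
	}

	// These two PEM blocks are what would end up in the Secret's certificate and key entries.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "EC PRIVATE KEY", Bytes: keyDER})
}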

Add 'simple' CA issuer

We should add a simple CA issuer that issues certificates given a named secret containing a CA public/private key.

This shows we can support different backends, and is also quite useful for those running internal CAs with no automation.

Create RBAC policy for cert-manager

cert-manager needs to perform a number of different cluster actions in order to validate ACME challenge requests, as well as to function in general.

We should write up an RBAC policy. A non-exhaustive list of things I know it'll need, off the top of my head:

certmanager.k8s.io/v1alpha1 Certificate: list, watch, update, create
certmanager.k8s.io/v1alpha1 Issuer: list, watch, update, create
core/v1 Secret: list, watch, update, create
core/v1 Service: list, watch, update, create, delete
extensions/v1beta1 Ingress: list, watch, update, create, delete
batch/v1 Job: list, watch, update, create, delete

For leader election, I think we'll also need core/v1 Endpoint: list, watch, update, create too.

Plumb in the main stopCh to Issuers so they can exit early & CleanUp in-flight requests

Right now, if ctrl+c is hit, workers do not gracefully drain their requests.

I have begun work to make this happen, however the workers will still finish processing any in-flight certificate requests, which in the ACME HTTP01 case has a timeout of 10 minutes (as GCLBs take a long time to update).

We should somehow make issuers stop their current attempt at obtaining a certificate (if any), call their CleanUp methods to ensure cert-manager cleans up the resource it's modified/created for the request, and then exit their worker loops.
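A sketch of the intended shutdown behaviour, assuming issuers take a context so cancellation reaches in-flight ACME attempts; the worker and issueFn shapes here are hypothetical, not the actual cert-manager interfaces.

package controller

import (
	"context"
	"time"
)

// issueFn stands in for an issuer's issue/CleanUp path; it must honour ctx
// cancellation so long-running attempts (e.g. waiting on GCLB) stop early.
type issueFn func(ctx context.Context, key string) error

// worker drains items until stopCh closes, then cancels any in-flight request
// so the issuer can clean up the resources it created and return promptly.
func worker(stopCh <-chan struct{}, items <-chan string, issue issueFn) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	go func() {
		<-stopCh
		cancel() // propagate shutdown to in-flight issuance attempts
	}()
	for {
		select {
		case <-stopCh:
			return
		case key, ok := <-items:
			if !ok {
				return
			}
			// Give each attempt its own deadline in addition to the shutdown signal.
			attemptCtx, cancelAttempt := context.WithTimeout(ctx, 10*time.Minute)
			_ = issue(attemptCtx, key)
			cancelAttempt()
		}
	}
}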

[ACME] Support a mode where authorizations are attached to Issuers

In the ACME protocol, the domain authorisations are tied to a user account. It's possible to obtain a certificate/renewal without performing an actual challenge step iff a valid authorisation is already held for the user account.

In order to save additional challenge flows, cert-manager stores the successful authorisation URIs on the Certificate resources status.ACME block.

In cert-manager, the 'user account' resource is effectively the Issuer resource. An ACME Issuer specifies a private key to use for communicating with the ACME server. However, the authorisations are stored on the Certificate resources themselves, so additional authorisations may be obtained for a new Certificate despite the fact another Certificate with the same Issuer has already obtained an Authorisation.

This was by design, so that a user would not be issued a certificate on day 0 upon creating the Certificate resource, only to have the renewal fail 60 days later because they had configured their DNS/HTTP settings incorrectly.

Some users may want to switch to an 'Issuer' based authorisation store, where authorisations are stored against the Issuer resource instead of the Certificate, and are thus 'shared' between Certificates issued by an Issuer.

Improve logging in golang acme library

The WaitAuthorization function in golang.org/x/crypto/acme/acme.go hides a lot of useful error messages when it decodes into a wireAuthz structure.

For example, the full response on a failed request for a DNS authorisation is:

{
  "type": "dns-01",
  "status": "invalid",
  "error": {
    "type": "urn:acme:error:unauthorized",
    "detail": "Correct value not found for DNS challenge",
    "status": 403
  },
  "uri": "https://acme-staging.api.letsencrypt.org/acme/challenge/challengeuid",
  "token": "challengetoken",
  "keyAuthorization": "challengekey"
}

whereas the actual function returns the error message acme: authorization error for :.

For now, I think we'll need to fork and vendor the library ourselves. I'd like to look into upstreaming a fix for this however.

Mark Issuer as not ready if DNS provider credentials are invalid

Right now we begin attempting authorisations for domains that need authorising before we've validated the configuration in the Issuer/Certificate resource.

We should make sure we perform the least interaction possible with the ACME server to prevent hitting any rate limits. In the case of DNS01, this could be done during the Issuer's Setup step (where the status.ready field can be set to false if DNS provider credentials are invalid or not specified).

cert-manager will not run if invalid Certificate/Issuer resource exists in API (setup CRD validation)

Currently, due to the way cert-manager uses Informers, if a Certificate or Issuer resource exists in the API server that is not of the correct 'shape' (ie. a field is of the wrong type), cert-manager will not be able to process any resources.

I think the real solution to this is having a schema defined on the API server side, which is currently being implemented by @nikhita (design proposal here: https://github.com/nikhita/community/blob/6188a9bc52c8e5792cc50041897b66edf0f1620f/contributors/design-proposals/customresources-validation.md)

In the meantime however, and in order to support Kubernetes version <1.7, we should consider the value of creating our own API server and registering it via the api aggregator. This will allow us to also start allowing transitions between API versions, thus giving us a clear upgrade path if we need to make breaking changes. (ref #25)
