
csi-cloudscale


A Container Storage Interface (CSI) driver for cloudscale.ch volumes. The CSI plugin allows you to use cloudscale.ch volumes with your preferred Container Orchestrator.

The cloudscale.ch CSI plugin is mostly tested on Kubernetes. In theory, it should also work on other Container Orchestrators like Mesos or Cloud Foundry. Feel free to test it on other COs and give us feedback.

TL;DR

# Add a cloudscale.ch API token as secret, replace the placeholder string starting with `a05...` with your own secret
$ kubectl -n kube-system create secret generic cloudscale --from-literal=access-token=a05dd2f26b9b9ac2asdas__REPLACE_ME____123cb5d1ec17513e06da
# Add repository
$ helm repo add csi-cloudscale https://cloudscale-ch.github.io/csi-cloudscale
# Install driver
$ helm install -n kube-system -g csi-cloudscale/csi-cloudscale

Volume parameters

This plugin supports the following volume parameters (in case of Kubernetes: parameters on the StorageClass object):

  • csi.cloudscale.ch/volume-type: ssd or bulk; defaults to ssd if not set

For LUKS encryption:

  • csi.cloudscale.ch/luks-encrypted: set to the string "true" if the volume should be encrypted with LUKS
  • csi.cloudscale.ch/luks-cipher: cipher to use; must be supported by the kernel and LUKS; we suggest aes-xts-plain64
  • csi.cloudscale.ch/luks-key-size: key-size to use; we suggest 512 for aes-xts-plain64

For LUKS encrypted volumes, a secret that contains the LUKS key needs to be referenced through the csi.storage.k8s.io/node-stage-secret-name and csi.storage.k8s.io/node-stage-secret-namespace parameters. See the included StorageClass definitions and the examples/kubernetes/luks-encrypted-volumes folder for examples.
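
Putting these parameters together, a custom StorageClass could look roughly like the sketch below. It is modelled on the bundled cloudscale-volume-ssd-luks class; the ${pvc.name}/${pvc.namespace} templating for the node-stage secret follows the naming convention described in the next section and is an assumption, so check the shipped definitions for the exact values:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-ssd-luks
provisioner: csi.cloudscale.ch
parameters:
  csi.cloudscale.ch/volume-type: ssd
  csi.cloudscale.ch/luks-encrypted: "true"
  csi.cloudscale.ch/luks-cipher: aes-xts-plain64
  csi.cloudscale.ch/luks-key-size: "512"
  csi.storage.k8s.io/node-stage-secret-name: ${pvc.name}-luks-key
  csi.storage.k8s.io/node-stage-secret-namespace: ${pvc.namespace}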

Pre-defined storage classes

The default deployment bundled in the deploy/kubernetes/releases folder includes the following storage classes:

  • cloudscale-volume-ssd - the default storage class; uses an ssd volume, no LUKS encryption
  • cloudscale-volume-bulk - uses a bulk volume, no LUKS encryption
  • cloudscale-volume-ssd-luks - uses an ssd volume that will be encrypted with LUKS; a luks-key must be supplied
  • cloudscale-volume-bulk-luks - uses a bulk volume that will be encrypted with LUKS; a luks-key must be supplied

To use one of the shipped LUKS storage classes, you need to create a secret named ${pvc.name}-luks-key in the same namespace as the persistent volume claim. The secret must contain an element called luksKey that will be used as the LUKS encryption key.

Example: If you create a persistent volume claim with the name my-pvc, you need to create a secret my-pvc-luks-key.
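
A sketch of creating such a secret with kubectl (run it in the same namespace as the PVC; generating the key material with openssl is just one possibility):

# the secret name must match <pvc-name>-luks-key, and the key must be called luksKey
$ kubectl create secret generic my-pvc-luks-key --from-literal=luksKey="$(openssl rand -base64 32)"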

Releases

The cloudscale.ch CSI plugin follows semantic versioning. The current version is: v3.5.6.

  • Bug fixes will be released as a PATCH update.
  • New features (such as CSI spec bumps) will be released as a MINOR update.
  • Significant breaking changes will be released as a MAJOR update.

Installing to Kubernetes

Kubernetes Compatibility

The following table describes the required cloudscale.ch driver version per Kubernetes release. We recommend using the latest cloudscale.ch CSI driver compatible with your Kubernetes release.

Kubernetes Release    Minimum cloudscale.ch CSI driver    Maximum cloudscale.ch CSI driver
<= 1.16               -                                   v1.3.1
1.17                  v1.3.1                              v3.0.0
1.18                  v1.3.1                              v3.3.0
1.19                  v1.3.1                              v3.3.0
1.20                  v2.0.0                              v3.5.2
1.21                  v2.0.0                              v3.5.2
1.22                  v3.1.0                              v3.5.2
1.23                  v3.1.0                              v3.5.2
1.24                  v3.1.0                              v3.5.6
1.25                  v3.3.0                              v3.5.6
1.26                  v3.3.0                              v3.5.6
1.27                  v3.3.0                              v3.5.6
1.28                  v3.3.0                              v3.5.6
1.29                  v3.3.0                              v3.5.6
1.30                  v3.3.0                              v3.5.6

Requirements:

  • Nodes must be able to access the metadata service at 169.254.169.254 using HTTP. The required route is pushed by DHCP.
  • --allow-privileged flag must be set to true for both the API server and the kubelet
  • (if you use Docker) the Docker daemon of the cluster nodes must allow shared mounts
  • If you want to use LUKS encrypted volumes, the kernel on your nodes must have support for the device mapper infrastructure with the crypt target and the appropriate cryptographic APIs (a quick check is sketched after this list)
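
A quick sanity check on a node could look like the following sketch (assuming a typical Linux distribution; dm-crypt may also be compiled directly into the kernel, and the ciphers listed in /proc/crypto vary by distribution and load state):

$ sudo modprobe dm_crypt && lsmod | grep dm_crypt
$ grep -E '^name *: *(aes|xts)' /proc/crypto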

1. Create a secret with your cloudscale.ch API Access Token:

Replace the placeholder string starting with a05... with your own secret and save it as secret.yml:

apiVersion: v1
kind: Secret
metadata:
  name: cloudscale
  namespace: kube-system
stringData:
  access-token: "a05dd2f26b9b9ac2asdas__REPLACE_ME____123cb5d1ec17513e06da"

and create the secret using kubectl:

$ kubectl create -f ./secret.yml
secret "cloudscale" created

You should now see the cloudscale secret in the kube-system namespace along with other secrets:

$ kubectl -n kube-system get secrets
NAME                  TYPE                                  DATA      AGE
default-token-jskxx   kubernetes.io/service-account-token   3         18h
cloudscale            Opaque                                1         18h

2. Deploy the CSI plugin and sidecars:

You can install the CSI plugin and sidecars using one of the following methods:

  • Helm (requires a Helm installation)
  • YAML Manifests (only kubectl required)

2a. Using Helm:

Before you can install the csi-cloudscale chart, you need to add the helm repository:

$ helm repo add csi-cloudscale  https://cloudscale-ch.github.io/csi-cloudscale

Then install the latest stable version:

$ helm install -n kube-system -g csi-cloudscale/csi-cloudscale

Advanced users can customize the installation by specifying custom values. The following table summarizes the most frequently used parameters; for a complete list, please refer to values.yaml. A sketch of a customized installation follows the table.

Parameter                             Default                      Description
attacher.resources                    {}                           Resource limits and requests for the attacher side-car.
cloudscale.apiUrl                     https://api.cloudscale.ch/   URL of the cloudscale.ch API. You can almost certainly use the default.
cloudscale.max_csi_volumes_per_node   125                          Override the maximum number of CSI volumes per node.
cloudscale.token.existingSecret       cloudscale                   Name of the Kubernetes Secret which contains the cloudscale.ch API token.
controller.resources                  {}                           Resource limits and requests for the controller container.
controller.serviceAccountName         null                         Override the controller service account name.
driverRegistrar.resources             {}                           Resource limits and requests for the driverRegistrar side-car.
extraDeploy                           []                           Extra objects to deploy together with the driver.
nameOverride                          null                         Override the default {{ .Release.Name }}-csi-cloudscale name pattern with a custom name.
node.resources                        {}                           Resource limits and requests for the node container.
node.serviceAccountName               null                         Override the node service account name.
node.tolerations                      []                           Set tolerations on the node DaemonSet.
provisioner.resources                 {}                           Resource limits and requests for the provisioner side-car.
resizer.resources                     {}                           Resource limits and requests for the resizer side-car.
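
For example, a customized installation might look like the following sketch. The file name and the chosen values are illustrative, and the toleration shown is only an example of the node.tolerations format:

# my-values.yaml
cloudscale:
  max_csi_volumes_per_node: 25
node:
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule

$ helm install -n kube-system -g -f my-values.yaml csi-cloudscale/csi-cloudscale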

Note: if you want to test a debug/dev release, you can use the following command:

$ helm install -g -n kube-system --set controller.image.tag=dev --set node.image.tag=dev --set controller.image.pullPolicy=Always --set node.image.pullPolicy=Always ./charts/csi-cloudscale

2b. Using YAML Manifests:

Before you continue, be sure to check out a tagged release and always use the latest stable version. For example, to use the latest stable version (v3.5.6), execute the following command:

$ kubectl apply -f https://raw.githubusercontent.com/cloudscale-ch/csi-cloudscale/master/deploy/kubernetes/releases/csi-cloudscale-v3.5.6.yaml

The storage classes cloudscale-volume-ssd and cloudscale-volume-bulk will be created. The storage class cloudscale-volume-ssd is set as the "default" for dynamic provisioning. If you're using multiple storage classes, you might want to remove the annotation and re-deploy; one way to do this is shown below. This setup is based on the recommended mechanism for deploying CSI drivers on Kubernetes.
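
If you prefer a different default, one way to drop the annotation from the shipped storage class is:

$ kubectl annotate storageclass cloudscale-volume-ssd storageclass.kubernetes.io/is-default-class-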

3. Test and verify:

Create a PersistentVolumeClaim. This makes sure a volume is created and provisioned on your behalf:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: cloudscale-volume-ssd
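
Assuming you saved the manifest above as pvc.yaml, create the claim with:

$ kubectl create -f pvc.yaml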

Check that a new PersistentVolume is created based on your claim:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM             STORAGECLASS            REASON    AGE
pvc-0879b207-9558-11e8-b6b4-5218f75c62b9   5Gi        RWO            Delete           Bound     default/csi-pvc   cloudscale-volume-ssd             3m

The above output means that the CSI plugin successfully created (provisioned) a new volume on your behalf. You should be able to see this newly created volume in the server detail view in the cloudscale.ch UI.

The volume is not attached to any node yet. It will only be attached to a node once a workload (i.e. a pod) is scheduled to that node. Now let us create a Pod that refers to the above volume. When the Pod is created, the volume will be attached, formatted and mounted into the specified container:

kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
      - mountPath: "/data"
        name: my-cloudscale-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: my-cloudscale-volume
      persistentVolumeClaim:
        claimName: csi-pvc 
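
Likewise, assuming the Pod manifest above is saved as pod.yaml:

$ kubectl create -f pod.yaml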

Check if the pod is running successfully:

$ kubectl describe pods/my-csi-app

Write inside the app container:

$ kubectl exec -ti my-csi-app -- /bin/sh
/ # touch /data/hello-world
/ # exit
$ kubectl exec -ti my-csi-app -- /bin/sh
/ # ls /data
hello-world

Upgrading

From csi-cloudscale v1.x to v2.x

When updating from csi-cloudscale v1.x to v2.x please note the following:

  • Ensure that all API objects of the existing v1.x installation are removed. The easiest way to achieve this is by running kubectl delete -f <old version> before installing the new driver version.
  • Prior to the installation of v2.x, existing persistent volumes (PVs) must be annotated with "pv.kubernetes.io/provisioned-by=csi.cloudscale.ch". You can use this script or any other means to set the annotation (a minimal example follows this list).
  • If you are using self defined storage classes: change the storage class provisioner names from "ch.cloudscale.csi" to "csi.cloudscale.ch".
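
As a minimal sketch (not the script referenced above), the annotation can be set on a single PV like this; only annotate PVs that were actually provisioned by this driver:

$ kubectl annotate pv <pv-name> pv.kubernetes.io/provisioned-by=csi.cloudscale.ch --overwrite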

From csi-cloudscale v2.x to v3.x

When updating from csi-cloudscale v2.x to v3.x please note the following:

  • The node label region was renamed to csi.cloudscale.ch/zone.
  • The new release adds the csi.cloudscale.ch/zone label to all nodes (existing ones as well as new ones added after the upgrade).
  • The region label will stay in place on existing nodes and will not be added to new nodes. From the csi-cloudscale driver's perspective, it can safely be removed from all nodes (see the sketch after this list).
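
If nothing else in your cluster relies on the old label, one way to remove it from all nodes is:

$ kubectl label nodes --all region-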

Advanced Configuration

Please use the following options with care.

Max. Number of CSI Volumes per Node

In the v1.3.0 release, the default CSI volumes per node limit has been increased to 125 (previously 23). To take advantage of the higher limit, you must ensure that all your cluster nodes use virtio-scsi devices (i.e. /dev/sdX devices are used). This is the default for servers created after October 1st, 2020.
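
To check which device type a node uses, one option is to run something like this on each node (sketch):

# virtio-scsi disks appear as /dev/sdX, virtio-blk disks as /dev/vdX
$ lsblk -d -o NAME,TYPE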

If you want to use a different value, for example because one of your nodes does not use virtio-scsi, you can set the following environment variable for the csi-cloudscale-plugin container in the csi-cloudscale-node DaemonSet:

env:
 - name: CLOUDSCALE_MAX_CSI_VOLUMES_PER_NODE
   value: '10'

Or use the cloudscale.max_csi_volumes_per_node value of the Helm chart.
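
For example, mirroring the Helm install command from above (the value 10 is illustrative):

$ helm install -n kube-system -g --set cloudscale.max_csi_volumes_per_node=10 csi-cloudscale/csi-cloudscale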

Note that there are currently the following hard limits per node:

  • 26 volumes (including root) for virtio-blk (/dev/vdX).
  • 128 volumes (including root) for virtio-scsi (/dev/sdX).

Development

Requirements:

  • Go: min v1.10.x
  • Helm

Build out the charts/ directory from the Chart.lock file:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update
$ helm dependency build charts/csi-cloudscale

Install the chart from local sources:

$ helm install -n kube-system -g ./charts/csi-cloudscale

Useful commands to compare the generated helm chart to the static YAML manifests:

$ helm template csi-cloudscale --dry-run -n kube-system --set nameOverride=csi-cloudscale charts/csi-cloudscale | kubectl-slice -f - -o deploy/kubernetes/releases/generated
$ kubectl-slice -f deploy/kubernetes/releases/csi-cloudscale-v6.0.0.yaml -o deploy/kubernetes/releases/v3

After making your changes, run the unit tests:

$ make test

Note: If you want to run just a single test case from csi-test, find the corresponding It in the source code and temporarily replace it with FIt, for example:

- It("should work if node-expand is called after node-publish", func() {
+ FIt("should work if node-expand is called after node-publish", func() {

If you want to test your changes, create a new image with the version set to dev:

apt install docker.io
# At this point you probably need to add your user to the docker group
docker login --username=cloudscalech [email protected]
$ VERSION=dev make publish

This will create a binary with version dev and push a Docker image to cloudscalech/cloudscale-csi-plugin:dev.

To run the integration tests run the following:

$ export KUBECONFIG=$(pwd)/kubeconfig 
$ TESTARGS='-run TestPod_Single_SSD_Volume' make test-integration

Release a new version

To release a new version, first bump the version:

$ make NEW_VERSION=vX.Y.Z bump-version
$ make NEW_CHART_VERSION=vX.Y.Z bump-chart-version

Make sure everything looks good. Verify that the Kubernetes compatibility matrix is up-to-date. Create a new branch with all changes:

$ git checkout -b new-release
$ git add .
$ git push origin

After it's merged to master, create a new GitHub release from master with the version v3.5.6 and then publish a new Docker build:

$ git checkout master
$ make publish

This will create a binary with version v3.5.6 and push a Docker image to cloudscalech/cloudscale-csi-plugin:v3.5.6.

Contributing

At cloudscale.ch we value and love our community! If you have any issues or would like to contribute, feel free to open an issue or PR.


csi-cloudscale's Issues

Support volume resize

What did you do? (required. The issue will be closed when not provided.)

Tried to change size of a PVC.

What did you expect to happen?

Volume gets resized.

  • CSI Version: dev
  • Kubernetes Version: 1.11
  • Cloud provider/framework version, if applicable (such as Rancher): Rancher

Error logged when PVC gets edited:

error: persistentvolumeclaims "csi-pvc1" could not be patched: persistentvolumeclaims "csi-pvc1" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize

Support Kubernetes 1.12+

What did you do? (required. The issue will be closed when not provided.)

Installed csi-cloudscale on Kubernetes 1.12

What did you expect to happen?

It just works

  • CSI Version: dev
  • Kubernetes Version: 1.12
  • Cloud provider/framework version, if applicable (such as Rancher): Rancher

See also digitalocean/csi-digitalocean/issues/86

Undefined fsType causes volumes to be mounted with root permissions only

What did you do?

After installing the CSI plugin on OpenShift 4.7, newly created volumes couldn't be accessed. The same setup on a regular Kubernetes cluster didn't show that behavior.

I've figured out: with version 2.0, the external provisioner removed the default fsType [1]. With no filesystem set, Kubernetes does not set the group ID, and this causes volumes to be mounted with root permissions only [2]. Some distributions like OpenShift disallow access in this case.

# ls -l /var/lib/kubelet/pods/c50571e0-cd19-4e1b-a724-1da3b994f740/volumes/kubernetes.io~csi/pvc-41e303f6-4aa9-43db-9d50-c9d4102f6208/
total 8
drwxr-xr-x. 5 root root 4096 Jun 18 13:21 mount
-rw-r--r--. 1 root root  290 Jun 18 13:05 vol_data.json

[1] https://github.com/kubernetes-csi/external-provisioner/blob/master/CHANGELOG/CHANGELOG-2.0.md#changelog-since-v160
[2] NetApp/trident#556 (comment)

How it should look

# ls -l /var/lib/kubelet/pods/d49c9a5e-b5b9-41c1-ae48-26ed19db1c4f/volumes/kubernetes.io~csi/pvc-69fb7bd4-0bf0-433e-9d43-d442534e9d97/mount/
total 20
drwxrwsr-x. 4 1000570000 1000570000  4096 Oct 22  2020 elasticsearch
drwxrws---. 2 root       1000580000 16384 Oct 22  2020 lost+found

Configuration

  • CSI Version: 2.1.0
  • Kubernetes Version: 1.20

Summary

I think it's advisable to set a file system type for all Kubernetes clusters. Adding this to the upgrade procedure, or directly in the manifests, would help prevent others from running into the same issue.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cloudscale-volume-ssd
  namespace: kube-system
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.cloudscale.ch
allowVolumeExpansion: true
parameters:
  csi.cloudscale.ch/volume-type: ssd
  fsType: ext4 <====

Topology keys don't follow CSI specification

What did you do?

kubectl get nodes worker-1802 -o json | jq .metadata.labels | grep region
  "region": "lpg1"

What did you expect to happen?

I would expect a label in the form of ch.cloudscale/region or similar.

Reasoning:

  • As far as I understand, this label is set based on the driver topology:

When a driver is initialized on a cluster, it provides a set of topology keys that it understands (e.g. "company.com/zone", "company.com/region"). When a driver is initialized on a node, it provides the same topology keys along with values. Kubelet will expose these topology keys as labels on its own node object.

AccessibleTopology: &csi.Topology{
    Segments: map[string]string{
        "region": d.region,

Configuration:

  • CSI Version: v2.1.0

  • Kubernetes Version:

  • Cloud provider/framework version, if applicable (such as Rancher):

oc version  
Client Version: 4.7.5
Server Version: 4.7.18
Kubernetes Version: v1.20.0+87cc9a4

CC: @megian

Feature: Support for Raw Block Volume Mounts

Creating a block volume mount with this driver is currently not possible.

In our case we should at least return InvalidArgument if this appears:

https://kubernetes-csi.github.io/docs/raw-block.html#implementing-raw-block-volume-support-in-your-csi-driver

Because we don't return the InvalidArgument there, it can fail pretty randomly with segmentation faults:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x80db26]

goroutine 70 [running]:
github.com/cloudscale-ch/csi-cloudscale/driver.(*Driver).NodeStageVolume(0xc000192150, 0x9b2a60, 0xc0002a8360, 0xc00028c120, 0xc000192150, 0x0, 0x0)
        /home/dave/go/src/github.com/cloudscale-ch/csi-cloudscale/driver/node.go:90 +0x3a6
github.com/cloudscale-ch/csi-cloudscale/vendor/github.com/container-storage-interface/spec/lib/go/csi._Node_NodeStageVolume_Handler.func1(0x9b2a60, 0xc0002a8360, 0x904a20, 0xc00028c120, 0x7f77ad31eb00, 0x0, 0xc0002ba000, 0x0)
        /home/dave/go/src/github.com/cloudscale-ch/csi-cloudscale/vendor/github.com/container-storage-interface/spec/lib/go/csi/csi.pb.go:4929 +0x86
github.com/cloudscale-ch/csi-cloudscale/driver.(*Driver).Run.func1(0x9b2a60, 0xc0002a8360, 0x904a20, 0xc00028c120, 0xc00027a360, 0xc00027a380, 0x893ca0, 0xd70840, 0x9160e0, 0xc0002ba000)
        /home/dave/go/src/github.com/cloudscale-ch/csi-cloudscale/driver/driver.go:144 +0x78
github.com/cloudscale-ch/csi-cloudscale/vendor/github.com/container-storage-interface/spec/lib/go/csi._Node_NodeStageVolume_Handler(0x9113e0, 0xc000192150, 0x9b2a60, 0xc0002a8360, 0xc000288280, 0xc000184100, 0x0, 0x0, 0xc0002bc480, 0x223)
        /home/dave/go/src/github.com/cloudscale-ch/csi-cloudscale/vendor/github.com/container-storage-interface/spec/lib/go/csi/csi.pb.go:4931 +0x158
github.com/cloudscale-ch/csi-cloudscale/vendor/google.golang.org/grpc.(*Server).processUnaryRPC(0xc000188180, 0x9b49a0, 0xc00027e300, 0xc0002ba000, 0xc000170570, 0xd47e80, 0x0, 0x0, 0x0)
        /home/dave/go/src/github.com/cloudscale-ch/csi-cloudscale/vendor/google.golang.org/grpc/server.go:966 +0x4a2
github.com/cloudscale-ch/csi-cloudscale/vendor/google.golang.org/grpc.(*Server).handleStream(0xc000188180, 0x9b49a0, 0xc00027e300, 0xc0002ba000, 0x0)
        /home/dave/go/src/github.com/cloudscale-ch/csi-cloudscale/vendor/google.golang.org/grpc/server.go:1245 +0xd61
github.com/cloudscale-ch/csi-cloudscale/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc000278030, 0xc000188180, 0x9b49a0, 0xc00027e300, 0xc0002ba000)
        /home/dave/go/src/github.com/cloudscale-ch/csi-cloudscale/vendor/google.golang.org/grpc/server.go:685 +0x9f
created by github.com/cloudscale-ch/csi-cloudscale/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
        /home/dave/go/src/github.com/cloudscale-ch/csi-cloudscale/vendor/google.golang.org/grpc/server.go:683 +0xa1

When implementing this, please also look out for

AccessType: &csi.VolumeCapability_Mount{
    Mount: &csi.VolumeCapability_MountVolume{},
},

and in the specs: https://github.com/container-storage-interface/spec/blob/release-1.1/spec.md

  oneof access_type {
    BlockVolume block = 1;
    MountVolume mount = 2;
  }

Also see: digitalocean/csi-digitalocean#192

Feature Request: Automatically generate $pvc-luks-key secret for encrypted PVCs

What did you do?

What I would like to do:
Create a PVC with storage class "cloudscale-volume-ssd-luks" WITHOUT creating the corresponding secret $pvc-luks-key.

What did you expect to happen?

What I would like to happen:
The Cloudscale CSI driver generates a luks encryption secret of suitable length, puts it into a secret $pvc-luks-key, and then sets up the encrypted volume as it does now. The automatically generated secret $pvc-luks-key gets the same labels as the $pvc (as far as possible).

Configuration:

Since this is a feature request there are no logs or configuration information.

  • CSI Version: 3.5.0 at the time of writing this feature request

  • Kubernetes Version: 1.24

  • Cloud provider/framework version: OpenShift 4.11.28

Reasoning

TL;DR: We want to use encrypted volumes without having to know the implementation details of csi-cloudscale.

General reasons:

  • Creating the secret manually does not provide any value to the user of an encrypted volume. Getting rid of this manual step improves the user experience and simplifies tools which want to automatically set up encrypted volumes.
  • The user can still access the generated secret if they desire to. As far as I understand, the secret is pretty much useless to the user anyway because the user cannot access the underlying encrypted block device. (not 100% sure about this)
  • Security is improved by ensuring that the secret has the maximum possible entropy (instead of potentially reduced entropy due to insecure secret generation methods)
  • This change is backwards compatible with how things work now: the secret is only generated if it doesn't already exist; if it does, the existing secret is used.

Specific reasons in our case:

  • We're working on tooling to abstract away the complexity of allocating PVs from the user as far as possible (https://github.com/vshn/k8ify). It would be very helpful if we could use encrypted volumes just by specifying the storage class. With the current implementation this isn't possible; instead, the abstraction layer (k8ify) has to have information about the target cluster; in particular, it needs to know that the storage driver is csi-cloudscale and it has to know about the LUKS secrets. This is essentially preventing us from implementing support for encrypted volumes for now, because the step that generates K8s manifests (a build pipeline running k8ify) has little or no information about the target K8s cluster.
  • The secret needs to copy the labels from the PVC because that allows us to automatically delete the secret when the PVC gets deleted. Without the labels we would again need to know specifics of csi-cloudscale (in particular how the secret name is constructed).

Thanks for your consideration!
