livekit-helm's Introduction

LiveKit's helm charts are published on S3.

Installing the chart

Add the LiveKit repository to Helm with:

helm repo add livekit https://helm.livekit.io

Customize values using values-sample.yaml as a starting point and save them as values.yaml.

Then install the chart:

helm install <instance_name> livekit/livekit-server --namespace <namespace> --values values.yaml

For LiveKit Helm developers

Publishing requires the helm-s3 plugin:

helm plugin install https://github.com/hypnoglow/helm-s3.git
AWS_REGION=us-east-1 helm repo add livekit s3://livekit-helm

./deploy.sh

livekit-helm's People

Contributors

ahmed-adly-khalil, arrase, benjamin658, biglittlebigben, bmbferreira, cscherban, dave-b-code, davidzhao, dsa, fabius, frostbyte73, imredobos, kannonski, lukasio, matkam, msamoylov, nightvisi0n, real-danm, reguchibr, rezaxd, rnakano, stevenaldinger, stogas, zifeo


livekit-helm's Issues

Unable to download the chart for 1.3.2

Hi!

I noticed there's an issue with fetching the latest 1.3.2 version of the chart, example:

$ helm repo update
$ helm pull livekit/livekit-server --version 1.3.2
Error: fetch from s3 url=s3://livekit-helm/livekit-server-1.3.2.tgz: fetch object from s3: AccessDenied: Access Denied
	status code: 403, request id: P0YY34MK56C4DQWF, host id: 0rYDRlEotXf6lwdE7AmrEOSJHbH+FirwrwLkPfRhV5CaJHLYirkVYHSIVbwrhCUS5czqCRoBjWU=
Error: plugin "bin/helm-s3 download" exited with error

Digital Ocean Ingress Addition

I'm deploying LiveKit to DigitalOcean Kubernetes and will be extending the Helm chart to support the DigitalOcean Nginx Ingress. Once I have it working, I'd be happy to open a pull request adding DigitalOcean Kubernetes Ingress support to this repo, if that's desired.

Ingress 1.2.0 chart not present in Helm repo

Ran 5 minutes ago:

hal@arch ~ took 2s 
❯ helm repo update
Hang tight while we grab the latest from your chart repositories...
[...]
...Successfully got an update from the "livekit" chart repository
[...]
Update Complete. ⎈Happy Helming!⎈

hal@arch ~ 
❯ helm search repo livekit
NAME                            CHART VERSION   APP VERSION     DESCRIPTION                                       
livekit/livekit-recorder        0.3.13          v0.3.13         LiveKit recorder is used by LiveKit server to r...
livekit/livekit-server          1.5.0           v1.5.0          Open source WebRTC infrastructure. Host your ow...
livekit/egress                  1.8.0           v1.8.0          Egress is used by LiveKit to stream and record ...
livekit/ingress                 1.1.0           v1.1.0          Ingress is used by LiveKit to ingest streams pr...

The chart is updated in this repo but not in the chart repository referenced in the README.md.
This probably isn't a bug, just a gap in how the release process is communicated; perhaps the delay is entirely intended.
Either way, it may cause confusion for some people.

TURN with udp only throws error

I wanted to try the UDP-only approach for the TURN server, as mentioned in the docs here,

but I get the following error when deploying with TURN enabled and only a UDP port set:
execution error at (livekit-server/templates/deployment.yaml:97:27): tls secret required if turn enabled
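For reference, a values sketch of the kind of configuration that triggers this (key names follow the chart's values-sample.yaml; domain and ports are placeholders):

livekit:
  turn:
    enabled: true
    domain: turn.example.com
    udp_port: 443
    # no tls_port or secretName set; the deployment template still
    # requires a TLS secret whenever TURN is enabled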

Tolerations aren't set for Daemonset

The Helm chart allows tolerations to be added to the Deployment, and this works correctly, but the same tolerations aren't applied to the DaemonSet.
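A minimal sketch of the kind of block the DaemonSet template would also need, assuming the chart's existing tolerations value (placement is hypothetical):

spec:
  template:
    spec:
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}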

K8s v1beta1 ingress is deprecated

networking.k8s.io/v1beta1 was deprecated in the Kubernetes v1.19 release and is unavailable from v1.22 onward; this documentation specifies this. I can put up a pull request that I know works on DigitalOcean, but I would not be able to verify it with other cluster configurations.
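For reference, a minimal sketch of the equivalent resource on the GA API (host, class name, and service name are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: livekit-server
spec:
  ingressClassName: nginx
  rules:
    - host: livekit.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: livekit-server
                port:
                  number: 80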

Add ingress annotations value

Hi, I'm trying to deploy the chart to our EKS cluster. ExternalDNS is a practical open-source project that can synchronize DNS records; however, it needs some annotations added to the ingress.

This is also useful if the user wants to add more AWS Load Balancer Controller annotations, for example alb.ingress.kubernetes.io/group.name to save on load balancer costs.

Possible solution

Add a loadBalancer.annotations value to allow adding custom annotations to the ingress.

loadBalancer:
  type: disable
  servicePort: 80

annotations:
  kubernetes.io/ingress.class: alb
  alb.ingress.kubernetes.io/scheme: internet-facing
  {{- if .Values.loadBalancer.tls }}
  alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
  {{- end }}
{{- end }}
{{- if eq .Values.loadBalancer.type "gke-managed-cert" }}
annotations:
  kubernetes.io/ingress.global-static-ip-name: {{ .Values.loadBalancer.staticIpName }}
  networking.gke.io/managed-certificates: managed-cert
  kubernetes.io/ingress.class: "gce"
{{- end }}
{{- if eq .Values.loadBalancer.type "do" }}
annotations:
  kubernetes.io/ingress.class: nginx
  cert-manager.io/cluster-issuer: {{ .Values.loadBalancer.clusterIssuer }}
{{- end }}
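A minimal sketch of how the requested loadBalancer.annotations value could be merged into the ingress template (the value name follows the proposal above and is not an existing chart key):

# templates/ingress.yaml (hypothetical)
metadata:
  annotations:
    {{- with .Values.loadBalancer.annotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}

# values.yaml (hypothetical usage)
loadBalancer:
  type: alb
  annotations:
    external-dns.alpha.kubernetes.io/hostname: livekit.example.com
    alb.ingress.kubernetes.io/group.name: shared-alb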

I am willing to make a PR if this is ok.

Feature request: generate API key/secret and store it in a Secret

The idea for this is inspired by charts I've used for database deployments - usually the credentials for the database can be generated for you during the deploy and stored in a Secret.

The nice part about this behavior is that you don't have to manually set up the secret for an API server also running in the cluster; that deployment can reference the Secret generated by the LiveKit release by name, with the expectation that it will be present and kept in sync even if LiveKit is redeployed.

The only wrinkle might be if the format or generation of the keys is particular in some way such that it can't be generated on-the-fly during the templating (like how Neo4J does it). A cursory look at the server codebase seems to indicate it's just random strings though, so this might work?
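A minimal sketch of the common Helm pattern for this, assuming a hypothetical template file and key names (lookup is used so an already-generated secret isn't rotated on upgrade):

{{- $name := printf "%s-keys" (include "livekit-server.fullname" .) }}
{{- $existing := lookup "v1" "Secret" .Release.Namespace $name }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ $name }}
type: Opaque
data:
  {{- if $existing }}
  # reuse the previously generated credentials so redeploys keep them stable
  api-key: {{ index $existing.data "api-key" }}
  api-secret: {{ index $existing.data "api-secret" }}
  {{- else }}
  api-key: {{ randAlphaNum 16 | b64enc }}
  api-secret: {{ randAlphaNum 32 | b64enc }}
  {{- end }}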

storeKeysInSecret: key file secret has wrong permissions, livekit error "key file others permissions must be set to 0"

Hey, I've set storeKeysInSecret to use a secret created with VaultSecretOperator. The secret is created correctly, but livekit is in CrashLoopBackOff with the error: key file others permissions must be set to 0

Now, after checking the deployment template, I can see that defaultMode is set to 0600:

        {{- if .Values.storeKeysInSecret.enabled }}
        - name: keys-volume
          secret:
            secretName: {{ (tpl .Values.storeKeysInSecret.existingSecret .) | default (include "livekit-server.fullname" .) }}
            defaultMode: 0600

But then after checking livekit code:

func createKeyProvider(conf *config.Config) (auth.KeyProvider, error) {
	// prefer keyfile if set
	if conf.KeyFile != "" {
		var otherFilter os.FileMode = 0007
		if st, err := os.Stat(conf.KeyFile); err != nil {
			return nil, err
		} else if st.Mode().Perm()&otherFilter != 0000 {
			return nil, fmt.Errorf("key file others permissions must be set to 0")
		}
		f, err := os.Open(conf.KeyFile)
		if err != nil {
			return nil, err
		}
		defer func() {
			_ = f.Close()
		}()
		decoder := yaml.NewDecoder(f)
		if err = decoder.Decode(conf.Keys); err != nil {
			return nil, err
		}
	}

	if len(conf.Keys) == 0 {
		return nil, errors.New("one of key-file or keys must be provided in order to support a secure installation")
	}

	return auth.NewFileBasedKeyProviderFromMap(conf.Keys), nil
}

The function checks that the key file's "others" permission bits are 0000.

Adding a label to let Prometheus discover the ServiceMonitor

Hi. I deployed LiveKit on EKS and configured monitoring.
But there was an issue between Prometheus and the ServiceMonitor: Prometheus cannot find the ServiceMonitor.
Prometheus looks for a release=prometheus label on the ServiceMonitor, but the livekit-server helm chart doesn't set this label.
Please check the helpers code in the livekit-helm repository.

I edited the ServiceMonitor, added the release=prometheus line, and then Prometheus could find it.

Can I make a PR that adds release=prometheus to the ServiceMonitor labels when prometheus_port is specified in values.yaml?

I used these options:

replicaCount: 2

# Suggested value for gracefully terminate the pod: 5 hours
livekit:
  port: 7880
  log_level: info
  rtc:
    ...
  redis:
    ...
  keys:
    ...
  prometheus_port: 6789
  turn:
    enabled: true
    ...
loadBalancer:
  type: disable

autoscaling:
  ...

resources:
  ...
serviceMonitor:
  create: true
  annotations: {
    "prometheus.io/scrape": "true",
    "prometheus.io/path": "/metrics",
    "prometheus.io/port": "6789"
  }
  name: "prometheus-operator"
  interval: 30s
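A minimal sketch of what the change could look like, assuming a hypothetical serviceMonitor.additionalLabels value merged into the ServiceMonitor template:

# values.yaml (hypothetical)
serviceMonitor:
  create: true
  additionalLabels:
    release: prometheus

# templates/servicemonitor.yaml (hypothetical)
metadata:
  labels:
    {{- include "livekit-server.labels" . | nindent 4 }}
    {{- with .Values.serviceMonitor.additionalLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}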

This is my infrastructure: (diagram omitted)

Deprecated warning for Kubernetes 1.19 and later

Hello, I tried to deploy LiveKit on EKS.

I found a warning about a deprecated ingress annotation when running helm install:

W0207 22:19:34.660821    5648 warnings.go:70] annotation "kubernetes.io/ingress.class" is deprecated, please use 'spec.ingressClassName' instead

I think I can fix the livekit ingress.yaml to use spec.ingressClassName.
Can I make a PR for this warning?
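For reference, a minimal sketch of the change (the class name is a placeholder):

# before: deprecated annotation
metadata:
  annotations:
    kubernetes.io/ingress.class: alb

# after: preferred field
spec:
  ingressClassName: alb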

Helm chart does not support TURN external_tls option

Several spots in the helm chart expect to handle TLS if TURN is enabled and a tls_port is passed.

For example:

{{- if and .Values.livekit.turn.enabled .Values.livekit.turn.tls_port }}
      volumes:
        - name: lkturncert
          secret:
            secretName: {{ required "tls secret required if turn enabled" .Values.livekit.turn.secretName }}
      {{- end }}

causes the chart to fail when trying to use external_tls.

Suggested fix: add a check for .Values.livekit.turn.external_tls and skip managing certs/TLS in the chart when the load balancer is managed outside of the chart and TLS is terminated there.
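A minimal sketch of the suggested condition applied to the quoted block (external_tls mirrors the server's TURN config option; exact placement is hypothetical):

{{- if and .Values.livekit.turn.enabled .Values.livekit.turn.tls_port (not .Values.livekit.turn.external_tls) }}
      volumes:
        - name: lkturncert
          secret:
            secretName: {{ required "tls secret required if turn enabled" .Values.livekit.turn.secretName }}
      {{- end }}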

TLS spec in Ingress resource placed under spec.rules instead of spec, causing HTTPS to be misconfigured after deployment

When deploying the helm chart with the do loadBalancer type, the deployed Ingress ends up without a TLS specification. The issue seems to be that spec.tls is missing after deployment: the tls block is present in the helm template output, but scoped to the wrong location in the YAML.

How to reproduce:

  1. Add the livekit helm repo
  2. Copy the contents of this file down locally so you can use it with helm https://github.com/livekit/livekit-helm/blob/master/examples/server-do.yaml
  3. Run the command helm template livekit-deployment livekit/livekit-server --namespace lkns --values server-do.yaml > helm-template-server-do.yaml
  4. This will output a template file with the YAML components
  5. Find the Ingress resource and you will see that the tls configuration is scoped to spec.rules[0].tls; to be valid it should be scoped to spec.tls. When you configure values.yaml from this example with real data (replacing the placeholder values), the TLS settings are therefore never applied to the ingress. The current workaround is to reapply the ingress manually with the tls spec in the right location.

Here is the example output from the above:

# Source: livekit-server/templates/ingress.yaml
kind: Ingress
metadata:
  name: livekit-deployment-livekit-server
  labels:
    helm.sh/chart: livekit-server-1.4.2
    app.kubernetes.io/name: livekit-server
    app.kubernetes.io/instance: livekit-deployment
    app.kubernetes.io/version: "v1.4.2"
    app.kubernetes.io/managed-by: Helm
  annotations:
  # custom annotations
  # AWS ALB
  # GKE with managed certs
  # DO with cert manager
    cert-manager.io/cluster-issuer: letsencrypt-prod
apiVersion: networking.k8s.io/v1
spec:
  ingressClassName: nginx
  rules:
  # In order to work with cert manager on DO, we cannot set us as a default backend
  - host: "<your-primary-domain>"
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: livekit-deployment-livekit-server
            port:
              number: 80
    tls:
      - hosts:
          - "<your-primary-domain>"
        secretName: "<secret-name>"

Here is the change that needs to be manually made to the specification for it to work correctly:

# Source: livekit-server/templates/ingress.yaml
kind: Ingress
metadata:
  name: livekit-deployment-livekit-server
  labels:
    helm.sh/chart: livekit-server-1.4.2
    app.kubernetes.io/name: livekit-server
    app.kubernetes.io/instance: livekit-deployment
    app.kubernetes.io/version: "v1.4.2"
    app.kubernetes.io/managed-by: Helm
  annotations:
  # custom annotations
  # AWS ALB
  # GKE with managed certs
  # DO with cert manager
    cert-manager.io/cluster-issuer: letsencrypt-prod
apiVersion: networking.k8s.io/v1
spec:
  ingressClassName: nginx
  rules:
  # In order to work with cert manager on DO, we cannot set us as a default backend
  - host: "<your-primary-domain>"
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: livekit-deployment-livekit-server
            port:
              number: 80
  tls:
    - hosts:
        - "<your-primary-domain>"
      secretName: "<secret-name>"

Cloudflare R2 support for Egress

As far as I can see, only the following upload targets are supported:

case *livekit.S3Upload:
	return newS3Uploader(c)
case *livekit.GCPUpload:
	return newGCPUploader(c)
case *livekit.AzureBlobUpload:
	return newAzureUploader(c)
case *livekit.AliOSSUpload:
	return newAliOSSUploader(c)

Could Cloudflare R2 support be added too?
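Since Cloudflare R2 exposes an S3-compatible API, one possible interim approach is pointing the existing S3 uploader at an R2 endpoint; a hypothetical egress values sketch, assuming the S3 config accepts a custom endpoint:

egress:
  s3:
    access_key: <r2-access-key-id>
    secret: <r2-secret-access-key>
    endpoint: https://<account-id>.r2.cloudflarestorage.com
    region: auto
    bucket: my-bucket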

Deploy to DOKS

I followed the self-hosting guide at https://docs.livekit.io/home/self-hosting/kubernetes, but the TURN server is not working.

I'm getting this error from the server connection test:
Connecting to signal connection via WebSocket
Connected to server, version 1.6.0.
Establishing WebRTC connection
udp 165.22.54.232:54288 host
tcp 165.22.54.232:7881 host (passive)
udp 10.15.0.47:56975 host (private)
tcp 10.15.0.47:7881 host (private)
udp 10.104.0.48:50234 host (private)
tcp 10.104.0.48:7881 host (private)
udp 172.17.0.1:55338 host (private)
tcp 172.17.0.1:7881 host (private)
udp 100.65.27.61:55053 host
tcp 100.65.27.61:7881 host (passive)
udp 10.244.0.211:52807 host (private)
tcp 10.244.0.211:7881 host (private)
WARNING: error with ICE candidate: 701 STUN host lookup received error. stun:global.stun.twilio.com:3478
WARNING: error with ICE candidate: 701 turns:st-turn.programming-hero.com:443?transport=tcp
WARNING: error with ICE candidate: 701 turns:st-turn.programming-hero.com:443?transport=tcp
Can connect via TURN
ERROR: could not establish pc connection
Resuming connection after interruption
PASS

sysctl support for arm64

Thanks for an awesome implementation. I'm deploying this on AWS Graviton instances using EKS and Helm. Everything is working well except the sysctl service isn't starting; it's getting an exec format error, which I presume means the image isn't compatible with arm64. Has anyone found a workaround?

Feature request: loadBalancerIP support for the TURN LoadBalancer service

As the Kubernetes docs note:

Some cloud providers allow you to specify the loadBalancerIP. In those cases, the load-balancer is created with the user-specified loadBalancerIP. If the loadBalancerIP field is not specified, the loadBalancer is set up with an ephemeral IP address. If you specify a loadBalancerIP but your cloud provider does not support the feature, the loadbalancerIP field that you set is ignored.

It would be useful in my org's case to provision a static external IP address before making cluster changes. Happy to contribute a PR if that's alright with you.
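A minimal sketch of what this might look like, assuming a hypothetical livekit.turn.loadBalancerIP value wired into the TURN service template:

# values.yaml (hypothetical)
livekit:
  turn:
    loadBalancerIP: 203.0.113.10

# TURN service template (hypothetical placement)
spec:
  type: LoadBalancer
  {{- with .Values.livekit.turn.loadBalancerIP }}
  loadBalancerIP: {{ . }}
  {{- end }}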

deploying to eks

Hi, can anyone please help me deploy LiveKit on EKS using a deployment.yaml file? I tried using a manifest file, but it isn't working for me.

Current helm chart doesn't work with the metrics exporter

The metrics exporter doesn't work with the current helm chart.

Reason
The Prometheus port isn't opened.

Tasks

  • Expose port 6789 in the deployment (see the sketch after this list).
  • Expose port 6789 in the service.
  • Output a config.yaml for metrics and document how to set it up in Prometheus.
  • Update values.yaml
  • Update README
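A minimal sketch of the first two tasks, assuming the chart's existing Deployment and Service templates (the port name is a placeholder):

# deployment.yaml: add to the container's ports
        ports:
          - name: metrics
            containerPort: 6789
            protocol: TCP

# service.yaml: add to the service's ports
  ports:
    - name: metrics
      port: 6789
      targetPort: metrics
      protocol: TCP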

Turn load balancer with external TLS

Hi, is there a specific reason for the if statement in the code below?

spec:
  {{- if not .Values.livekit.turn.external_tls }}
  type: LoadBalancer
  {{- end }}

I am deploying it behind an L4 load balancer where TLS terminates, so I set external_tls to true.
Because of the if statement, the service is created as a ClusterIP and ExternalDNS does not add a record to Route 53.

In the previous version there was no if statement; it was added in this commit.

Parametrize namespace

I am facing an issue while adding the LiveKit server and Egress charts to my project: they do not respect the specified namespace and end up running in the default namespace, which is not what I want.

Here is an example of how I usually declare external charts from Bitnami, such as PostgreSQL:

// helmfile.yaml

repositories:
- name: bitnami
  url: registry-1.docker.io/bitnamicharts

releases:
  - name: postgres
    namespace: {{ .Namespace }}
    chart: bitnami/postgresql
    version: 13.1.5
    values:
       ....

I run the following command to deploy:

helmfile -n my-namespace template .

With the Bitnami charts, everything runs properly in the specified my-namespace namespace. Bitnami charts allow passing a namespace value in the template metadata, for example:

// charts/bitnami/postgresql/templates/secrets.yaml
...
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "common.names.fullname" . }}
  namespace: {{ .Release.Namespace | quote }}
...

When reproducing the same approach with LiveKit charts, the LiveKit server and Egress charts do not seem to respect the specified namespace. They end up running in the default namespace.

Is the namespace attribute missing from the charts' metadata? For example:

// livekit-helm/livekit-server/templates/secret.yaml
...
metadata:
  name: {{ include "livekit-server.fullname" . }}
  labels:
    {{- include "livekit-server.labels" . | nindent 4 }}
...

Am I missing something in my current setup? Please find my current Helmfile in this repository.

Should we add a namespace attribute to the LiveKit server and Egress charts? I would be happy to propose a PR to address this issue.
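A minimal sketch of the proposed change, shown for the server's secret template (the same namespace line would apply to the other namespaced templates):

// livekit-helm/livekit-server/templates/secret.yaml (proposed)
...
metadata:
  name: {{ include "livekit-server.fullname" . }}
  namespace: {{ .Release.Namespace | quote }}
  labels:
    {{- include "livekit-server.labels" . | nindent 4 }}
...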

what is the egress gcp.credentials_json value supposed to look like?

I was expecting it to be a file path with a volume mount, but then I saw that the deployment template has no volume mounts.

From: https://github.com/livekit/livekit-helm/blob/master/egress-sample.yaml#L30-L31

Is this supposed to be raw JSON content, like the following snippet? I'll happily open a PR adding more clarity to the example if you can help me out.

egress:
  gcp:
    credentials_json: |
      {
        ...my JSON key contents...
      }
    bucket: my-bucket

Unable to install with loadBalancer type gke-managed-cert

I would like to install livekit-server on GKE with a Google-managed certificate. These are the helm values I used; since I am using a Google-managed cert, I assume secretName is not required:

loadBalancer:
  type: gke-managed-cert
  staticIpName: static-ip
  certificateName: cert
  tls:
    - hosts:
      - livekit.host.com

However, I got the following error on the Ingress after helm install:

Error syncing to GCP: error running load balancer syncing routine: error getting secrets for Ingress: secret "" does not exist

I tried commenting out the tls setting, but then I got another error:

Error: INSTALLATION FAILED: Ingress.extensions "livekit-server" is invalid: spec: Invalid value: []networking.IngressRule(nil): either `defaultBackend` or `rules` must be specified

Any suggestion?

Support Kubernetes 1.26

autoscaling/v2beta1 is deprecated, but the helm chart still generates it. I've manually modified the output YAML to the new spec:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
(...)
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60

Node won't be ready on EKS (status is always Pending)

I basically followed the guide for deploying to Kubernetes, but the node doesn't appear to be ready and the livekit-server pod stays Pending.
Here is the result:

ubuntu@ip-172-31-7-14:~/livekit-k8ts$ kubectl get pod -n kube-system
NAME                                            READY   STATUS    RESTARTS   AGE
aws-load-balancer-controller-66cffc9868-66vgl   1/1     Running   0          162m
aws-load-balancer-controller-66cffc9868-zx2qb   1/1     Running   0          162m
coredns-9f6f89c76-6zbl8                         1/1     Running   0          3h11m
coredns-9f6f89c76-q98f7                         1/1     Running   0          3h11m
livekit-server-58588cc88c-9hbx7                 0/1     Pending   0          39m

I use an ALB as the load balancer and created the public certificate in ACM, and I skipped the Importing SSL Certificates step in the guide.

Here is the values.yaml file I used:

replicaCount: 1

livekit:
  # port: 7880
  log_level: info
  rtc:
    use_external_ip: true
    # default ports used
    port_range_start: 50000
    port_range_end: 60000
    tcp_port: 7801
  redis:
    # address: <redis_host:port>
    # db: 0
    # username:
    # password:
  # one or more API key/secret pairs
  # see https://docs.livekit.io/guides/getting-started/#generate-api-key-and-secret
  keys:
    myapikey: API6JLCdtsxYeCp
  turn:
    enabled: true
    # must match domain of your tls cert
    domain: livekit-turn.room.link
    # tls_port must be 443 if turn load balancer is disabled
    tls_port: 3478
    # udp_port should be 443 for best connectivity through firewalls
    udp_port: 443
    secretName: eCkvaOf5BQVfig62fnjK02foYNtRBflYCn68fKvwKSjP
    # valid values: disable, aws, gke, do
    # tls cert and domain are required, even when load balancer is disabled
    loadBalancerType: disable

loadBalancer:
  # valid values: disable, alb, aws, gke, gke-managed-cert, do
  # on AWS, we recommend using alb load balancer, which supports TLS termination
  # in order to use alb, aws-ingress-controller must be installed
  # https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
  # for gke-managed-cert type follow https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs
  # and set staticIpName to your reserved static IP
  # for DO follow https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-on-digitalocean-kubernetes-using-helm
  # steps 2 and 4 to setup your ingress controller
  type: alb
  # staticIpName: <nameofIpAddressCreated>
  # Uncomment and enter host names if TLS is desired.
  # TLS is not supported with `aws` load balancer
  tls:
    # - hosts:
    #   - livekit.myhost.com
    # with ALB, certificates needs to reside in ACM for self-discovery
    # with GKE, specify one or more secrets to use for the certificate
    # with DO, use cert-manager and create a certificate for TURN. The load balancer is automatic
    # see: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl#specifying_certificates_for_your_ingress
    #   secretName: <mysecret>

# when true (default), optimizes network stack for service
# increases UDP send and receive buffers
optimizeNetwork: true

# autoscaling requires resources to be defined
autoscaling:
  # set to true to enable autoscaling. when set, ignores replicaCount
  enabled: true
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 60

# if LiveKit should run only on specific nodes
# this can be used to isolate designated nodes
nodeSelector: {}
  # node.kubernetes.io/instance-type: c5.2xlarge

resources: {}
  # Due to port restrictions, you can run only one instance of LiveKit per physical
  # node. Because of that, we recommend giving it plenty of resources to work with
  # limits:
  #   cpu: 6000m
  #   memory: 2048Mi
  # requests:
  #   cpu: 4000m
  #   memory: 1024Mi

I don't know what I'm missing.
Could you please help me solve this problem?

Could this helm chart support a custom subpath for connections?

  • As the title says: in our architecture, LiveKit sits behind an Nginx reverse proxy, so I would like to use a subpath to route to the LiveKit server. However, once proxied, LiveKit rejects the connection because it doesn't appear to support a custom subpath.
    How can this be solved? In my view, LiveKit should support a custom connection subpath so it can run behind a proxy.
