
net-istio's People

Contributors

andrew-su, angelodanducci, dprotaso, evankanderson, howardjohn, izabelacg, jrbancel, julz, k4leung4, knative-automation, knative-prow-robot, markusthoemmes, mattmoor, mattmoor-sockpuppet, n3wscott, nairb774, nak3, pastequo, psschwei, retocode, saschaschwarze0, savitaashture, shashwathi, skonto, sniperking1234, tcnghia, trshafer, vagababov, wtam2018, yanweiguo


net-istio's Issues

istio virtualservice-ingress with retries as 0

Hi,

We are seeing unwanted retries when the service returns a 500 status code. This is a problem in a serverless environment, where the error needs to be propagated back immediately.

We need to add the code below at https://github.com/knative-sandbox/net-istio/blob/524afbe8aa70b5589360d0e96b92ef2cd8c6a317/pkg/reconciler/ingress/resources/virtual_service.go#L221:

Retries: &istiov1alpha3.HTTPRetry{
	Attempts:      0,
	PerTryTimeout: nil,
},

This is a very bad situation to be in: the service is returning an error, but Istio retries the request three times before returning the response.
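
For reference, disabling retries this way should surface in the generated VirtualService roughly as follows; a sketch based on Istio's documented HTTPRetry semantics, where attempts: 0 disables retries (the route itself is elided):

http:
- retries:
    attempts: 0   # zero attempts: errors propagate back immediately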

knative/net-istio should support mesh mode for e2e?

There are Istio mesh-mode-specific tests in knative/serving, such as:

  • TestProbeWhitelist
  • TestClusterLocalAuthorization
  • TestServiceToServiceCall (with injection)
  • TestSvcToKubeSvc (trying to add knative/serving#7670)

These tests should move to knative/net-istio instead of being hosted in knative/serving, but knative/net-istio currently does not seem to have an option to run its tests in mesh mode.
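
For context, running the e2e tests in mesh mode would mean enabling sidecar injection on the test namespace before the suite runs; a sketch (the serving-tests namespace name is an assumption, and the test flow would also need a flag telling it to expect mesh semantics):

$ kubectl label namespace serving-tests istio-injection=enabled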

Investigate the Automatic Protocol Selection feature of Istio 1.3

In what area(s)?

/area API
/area networking

Describe the feature

Istio 1.3 supports "Automatic Protocol Selection", meaning we may no longer have to name container ports in order to support HTTP and HTTP/2.

This issue is to track the investigation of that feature.
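
For context, Istio has historically inferred the protocol from the Kubernetes port name, which is why ports must carry a protocol prefix today; with automatic protocol selection that prefix could be dropped. A sketch of the current convention (the Service name and port numbers are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: example-revision   # illustrative
spec:
  ports:
  - name: http2      # pre-1.3 Istio infers HTTP/2 from this name prefix
    port: 80
    targetPort: 8080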

Notes:

  • We will need to support Istio versions prior to 1.3 until they reach end of life (approximately 3 months).
  • We may not need to update the Ports and Protocols section of the runtime contract after all (cc @dgerd).

cc @tcnghia @mattmoor
/assign

Support different Ingress Service types

AFAICS, only the LoadBalancer-type ingressgateway Service is currently supported. When Istio is installed with an ingressgateway of type NodePort or ClusterIP, the gateway's status.loadBalancer field remains empty ("{}", on k8s 1.16.4), so the Knative Routes never reconcile and are stuck with the Ready status set to "Unknown" and the reason "Uninitialized".

A static IP can only be set if the cloud controller manager supports it (e.g. Azure). Other setups may use virtual IP routing, in which case the ingressgateway Service would have externalIPs configured. Or they may use NodePort, in which case it would be nice if the ingress endpoint could be configured statically in config-istio, so that the istio-webhook does not depend on a LoadBalancer-type ingressgateway Service.

IIRC older versions may have supported the NodePort type.
Wdyt? Best, Manuel
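
For reference, config-istio already maps each Knative Gateway to the Service backing it, so a static-endpoint option would extend this existing convention (the keys below reflect the current format; a static-IP variant would be a new, hypothetical addition):

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-istio
  namespace: knative-serving
data:
  gateway.knative-serving.knative-ingress-gateway: "istio-ingressgateway.istio-system.svc.cluster.local"
  local-gateway.knative-serving.cluster-local-gateway: "cluster-local-gateway.istio-system.svc.cluster.local"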

grpc streams timeout

knative version

0.15

Issue

I am able to create a unidirectional gRPC streaming server as a ksvc successfully, but the stream gets terminated by the timeout set on the Istio VirtualService. This timeout defaults to the maximum revision timeout value.

Steps To Recreate

Create a unidirectional gRPC streaming server (where the client receives the stream) and deploy it as a ksvc.
Set the ksvc timeout to less than the max revision timeout and have the gRPC server send the stream continuously; you will notice that after the max revision timeout the client receives a RST_STREAM error.

Since the queue-proxy times out after the duration specified in the ksvc, does the Istio VirtualService need to set a timeout at all? Is it OK to remove it? @nak3 @tcnghia @ZhiminXiang
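
If removing it is acceptable, the generated route could simply disable the timeout; per Istio's documented HTTPRoute semantics, a zero value turns it off. A sketch (the destination is illustrative):

http:
- timeout: 0s   # 0 disables Istio's route timeout; queue-proxy still enforces the ksvc timeout
  route:
  - destination:
      host: my-ksvc.default.svc.cluster.local   # illustrative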

Improve gatewayPodTargetLister

  • Coalesce servers across Gateways
  • Maybe rewrite the implementation; it is messy and hard to read
  • Intersect Ingress.Spec.Rules.Hosts with Gateway.Spec.Servers[].Hosts (see the sketch below)
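
A minimal sketch of the host-intersection idea (plain string matching only; real code would also have to handle Istio's "*" wildcard hosts):

// intersectHosts returns the ingress hosts that a gateway server also serves.
// Hypothetical helper; the name does not correspond to actual net-istio code.
func intersectHosts(ingressHosts, serverHosts []string) []string {
	served := make(map[string]struct{}, len(serverHosts))
	for _, h := range serverHosts {
		served[h] = struct{}{}
	}
	var out []string
	for _, h := range ingressHosts {
		if _, ok := served[h]; ok {
			out = append(out, h)
		}
	}
	return out
}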

istio latest doesn't run post-build

I see the presubmit leg: pull-knative-sandbox-net-istio-latest

However, I do not see corresponding post-submit legs like we have in serving. We should probably have post-submit legs for anything we care about enough to run at presubmit.

cc @tcnghia @JRBANCEL

Remove cluster-local-gateway Deployment

The details are in this doc.

This is to track the rollout. The current plan for achieving zero downtime is:

Phase 1

  1. Add a server entry to cluster-local-gateway on port 8081 (a no-op that just makes Envoy listen on this port; see the sketch after this plan)

🕐 Wait for Istio propagation

Phase 2

  1. Modify the cluster-local Service targetPort to 8081 (k8s will route internal traffic to the new port)
  2. Remove the original server entry in cluster-local-gateway (port 80 is not used anymore)
  3. Update the selector of the Gateway to select both Deployments (Ingress and Cluster Local)

πŸ• : Wait for Istio propagation

Phase 3

  1. Modify the cluster-local Service to select the Ingress Deployment
  2. Update the selector of the Gateway to select only the Ingress Deployment

πŸ• Wait for Istio propagation

Phase 4

  1. Delete Cluster Local Deployment

This could be done in a single release by waiting for propagation before proceeding to the next step where required, or we could spread it over multiple releases. @tcnghia any thoughts?
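
For illustration, the Phase 1 server entry added to the cluster-local-gateway Gateway might look like this (a sketch; the port name and hosts are assumptions):

- port:
    number: 8081
    name: http-migration   # hypothetical name
    protocol: HTTP
  hosts:
  - "*"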

/assign

Activator request proxying is incompatible with Istio ClusterRbac

/area networking

While working through
https://cloud.google.com/solutions/authorizing-access-to-cloud-run-on-gke-services-using-istio#deploying_a_sample_service

A few issues came up:

JWT authorization needs to be disabled on some paths for the system to work (for example, so the controller can ping the service).

With the updated policy:

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: tutorial
spec:
  origins:
  - jwt:
      issuer: https://accounts.google.com
      audiences:
      - http://example.com
      jwksUri: https://www.googleapis.com/oauth2/v3/certs
      triggerRules:
      - includedPaths:
        - prefix: /
      - excludedPaths:
        - exact: /_internal/knative/activator/probe
        - exact: /healthz
        - exact: /probe
        - exact: /metrics
  principalBinding: USE_ORIGIN

Things mostly work, with one large gap.

It was discovered that this example only works when the Activator is not proxying requests. This is clearly non-ideal, and appears to be related to the Activator being unable to query the health of the Revision pod and/or ClusterIP.

We should determine how to make Istio JWT authz work properly with Activator proxying in order for users to be able to have scale to zero and burst protection at the same time as locking down Service access via JWT.

Document perf regression of 1.5

There is a noticeable performance regression in Istio 1.5 that was fixed in 1.7 (and possibly 1.6).

This should be documented (with links to bugs) on knative.dev so users are aware.

Istio 1.6 VirtualService and CORS allowOrigins not recognized

Hello,

I am trying to set a CORS policy on an Istio 1.6.3 VirtualService, specifically the allowOrigins field:

...
http:
  - corsPolicy:
      allowCredentials: true
      allowHeaders:
      - content-type
      - request-id
      - authorization
      allowOrigins:
      - exact: http://localhost
      maxAge: 24h
    match:
    - uri:
        prefix: /mobile/
...

This leads the networking-istio service to log the following error:

E0630 16:04:21.662064       1 reflector.go:123] runtime/asm_amd64.s:1357: Failed to list *v1alpha3.VirtualService: v1alpha3.VirtualServiceList.Items: []v1alpha3.VirtualService: v1alpha3.VirtualService.v1alpha3.VirtualService.Spec: unmarshalerDecoder: unknown field "allowOrigins" in v1alpha3.CorsPolicy, error found in #10 byte of ...|":100}]}]}},{"apiVer|..., bigger context ...|ter.local","port":{"number":80}},"weight":100}]}]}},{"apiVersion":"networking.istio.io/v1alpha3","ki|...

Then, my Knative service is not ready:

conditions:
  - lastTransitionTime: "2020-06-30T17:48:18Z"
    status: "True"
    type: ConfigurationsReady
  - lastTransitionTime: "2020-06-30T17:48:18Z"
    message: Ingress reconciliation failed
    reason: ReconcileIngressFailed
    status: "False"
    type: Ready
  - lastTransitionTime: "2020-06-30T17:48:18Z"
    message: Ingress reconciliation failed
    reason: ReconcileIngressFailed
    status: "False"
    type: RoutesReady

Can you please give your feedback on this? Is there a workaround for now? Is there a plan to support the 1.6.x CorsPolicy?
Thanks!

Knative version used:

kubectl -n knative-serving get pods --show-labels
NAME                                READY   STATUS    RESTARTS   AGE    LABELS
activator-688c498dcd-dhvxr          2/2     Running   0          20h    app=activator,istio.io/rev=default,pod-template-hash=688c498dcd,role=activator,security.istio.io/tlsMode=istio,service.istio.io/canonical-name=activator,service.istio.io/canonical-revision=latest,serving.knative.dev/release=v0.14.2
autoscaler-577b8f6b6-k7m8c          2/2     Running   0          20h    app=autoscaler,istio.io/rev=default,pod-template-hash=577b8f6b6,security.istio.io/tlsMode=istio,service.istio.io/canonical-name=autoscaler,service.istio.io/canonical-revision=latest,serving.knative.dev/release=v0.14.2
autoscaler-hpa-cf757b76b-ckvgh      2/2     Running   0          20h    app=autoscaler-hpa,istio.io/rev=default,pod-template-hash=cf757b76b,security.istio.io/tlsMode=istio,service.istio.io/canonical-name=autoscaler-hpa,service.istio.io/canonical-revision=latest,serving.knative.dev/release=v0.14.2
controller-75cccc4cd6-tmtdg         2/2     Running   1          20h    app=controller,istio.io/rev=default,pod-template-hash=75cccc4cd6,security.istio.io/tlsMode=istio,service.istio.io/canonical-name=controller,service.istio.io/canonical-revision=latest,serving.knative.dev/release=v0.14.2
istio-webhook-b65488fbc-fjkrx       2/2     Running   0          107m   app=istio-webhook,istio.io/rev=default,pod-template-hash=b65488fbc,role=istio-webhook,security.istio.io/tlsMode=istio,service.istio.io/canonical-name=istio-webhook,service.istio.io/canonical-revision=latest,serving.knative.dev/release=v0.14.1
networking-istio-7d9d688b86-d75dr   1/1     Running   0          107m   app=networking-istio,pod-template-hash=7d9d688b86,serving.knative.dev/release=v0.14.1
webhook-7b476996c8-7mlr4            2/2     Running   1          20h    app=webhook,istio.io/rev=default,pod-template-hash=7b476996c8,role=webhook,security.istio.io/tlsMode=istio,service.istio.io/canonical-name=webhook,service.istio.io/canonical-revision=latest,serving.knative.dev/release=v0.14.2

Networking Istio pod memory usage increases in step size of around 100Mib

This is a ticket copied from knative/serving#8243

What version of Knative?

v0.14.0

Actual Behavior

We deployed knative-serving v0.14.0 on a Kubernetes cluster with Istio as the service mesh.
The observed memory usage of the networking-istio pod shows sudden increases and never returns to its original level.

Steps to Reproduce the Problem

  1. Deploy knative-serving v0.14.0 with Istio as the service mesh.
  2. Monitor memory usage for networking-istio pod in knative-serving namespace.
$ kubectl get pod networking-istio-cb8649f6d-tqqkd -nknative-serving -o jsonpath='{.spec.containers[0].resources}{"\n"}'
map[limits:map[cpu:300m memory:400Mi] requests:map[cpu:30m memory:40Mi]]

[graph: networking-istio pod memory usage]

Captured heap & allocs information before & after memory usage increase on networking-istio pod.

Memory profile information before and after increase in usage.
before-memory-usage-increase.zip

after-memory-usage-increase.zip


Side note

No major events happened during this change in the networking-istio pod's memory usage. However, we do have approximately 300+ namespaces, and we were not able to retrieve the events from all of them.

The webhook must be excluded from the mesh

Currently, if Istio sidecar injection is enabled on the knative-serving namespace, the webhook also gets a sidecar.
Given that the MutatingWebhookConfiguration specifies an expected CA for the certificate exposed by the webhook endpoint, a component outside of the mesh (e.g. kube-api) fails to contact the webhook when the sidecar tries to terminate the TLS connection, which happens in mTLS strict mode (in permissive mode, Envoy acts as a TCP proxy, see https://github.com/istio/istio/blob/master/pilot/pkg/networking/core/v1alpha3/listener.go#L211):

  • sidecar + Webhook + mTLS permissive = OK
  • sidecar + Webhook + mTLS strict = FAILS
  • just Webhook + mTLS permissive = OK
  • just Webhook + mTLS strict = OK

The webhook is designed to only be contacted by kube-api (outside of the mesh) and to not contact anything itself; therefore, it should always be excluded from the mesh:

kubectl patch deployments.apps -n knative-serving webhook -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.istio.io/inject":"false"}}}}}'

/cc @Cynocracy

Probing optimization take 2

This is the net-istio side of knative/serving#8765.

Currently, the network probing logic implements an optimization where a pod is assumed to have been fully programmed once any part of the network programming we are probing has been successfully verified. In practice, this means that every time we change a kingress, we enqueue N pieces of work for each gateway pod and then cancel N-1 of them.

Given that this builds on a faulty assumption (in the linked issue), I'd propose instead optimizing this on the net-istio side here:

https://github.com/knative-sandbox/net-istio/blob/56f23ab0d69625e409602c9bd56c9de07f008142/pkg/reconciler/ingress/lister.go#L105

Instead of enqueuing N pieces of work per pod and then cancelling N-1 of them, simply choose one piece of work to verify per IP and prune the rest. This should achieve what the prober-side optimization yields, and more, because it should also improve the throughput of the prober workqueue.
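
A minimal sketch of the per-IP pruning idea (the ProbeTarget shape is hypothetical, not the actual net-istio type):

// ProbeTarget is a hypothetical (IP, URL) pair of probe work.
type ProbeTarget struct {
	IP  string
	URL string
}

// pruneTargetsPerIP keeps a single piece of probe work per IP and drops the
// rest up front, instead of enqueuing all of them and cancelling N-1 later.
func pruneTargetsPerIP(targets []ProbeTarget) []ProbeTarget {
	seen := make(map[string]struct{}, len(targets))
	out := make([]ProbeTarget, 0, len(targets))
	for _, t := range targets {
		if _, ok := seen[t.IP]; ok {
			continue // this IP is already covered by an earlier target
		}
		seen[t.IP] = struct{}{}
		out = append(out, t)
	}
	return out
}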

I'd like to avoid having Istio take a hit to correct the issue above, and this should be safe to start in isolation, so if anyone wants to try and knock this off I'd appreciate it. 🙏

cc @JRBANCEL @tcnghia @vagababov

VirtualService gets validation error - unknown field "websocketUpgrade" in io.istio.networking.v1beta1.VirtualService.spec.http

steps to reproduce

step-0. Deploy Istio (I used istio-1.5 with mesh & galley)
step-1. Deploy any ksvc
step-2. Edit the VirtualService:
$ kubectl edit vs hello-example-ingress
step-3. Close without editing (with :q).

Actual result

Then you get the following error:

# virtualservices.networking.istio.io "hello-example-ingress" was not valid:
# * : Invalid value: "The edited file failed validation": [ValidationError(VirtualService.spec.http[0]): unknown field "websocketUpgrade" in io.istio.networking.v1beta1.VirtualService.spec.http, ValidationError(VirtualService.spec.http[1]): unknown field "websocketUpgrade" in io.istio.networking.v1beta1.VirtualService.spec.http]

Expected

The websocketUpgrade field is already deprecated, so we should remove it.

https://github.com/knative/net-istio/blob/2833eeb85bda6c5ead740a4b871ce95c5d520a8e/vendor/istio.io/api/networking/v1alpha3/virtual_service.pb.go#L520-L522

ingress pods cannot be started with third_party/istio-1.5.7-helm/istio-minimal.yaml

I tried to apply third_party/istio-1.5.7-helm/istio-minimal.yaml, but both the istio-ingressgateway pods and the cluster-local-gateway pods fail to start because the istio-proxy container never becomes ready.

cluster-local-gateway-96488b8bf-qknql   0/1     Running   0          98s
istio-ingressgateway-6c866f94c6-vhpxv   1/2     Running   0          98s
istio-pilot-6bdfc6f49c-k2hfd            1/1     Running   0          98s

I can reproduce this issue consistently.

I checked the logs and found many entries like the following:

2020-09-04T03:45:21.061808Z	info	waiting for file
2020-09-04T03:45:21.162125Z	info	waiting for file
2020-09-04T03:45:21.262440Z	info	waiting for file
2020-09-04T03:45:21.362749Z	info	waiting for file
2020-09-04T03:45:21.462978Z	info	waiting for file
2020-09-04T03:45:21.563242Z	info	waiting for file
2020-09-04T03:45:21.663541Z	info	waiting for file

And later the following appears:

2020-09-04T04:10:25.459577Z	info	Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 1 rejected; lds updates: 1 successful, 0 rejected

This issue caused knative/serving#9275

/cc @nak3

Generated VirtualService contains wrong gateways field

The gateways field of the VirtualService CRD indicates which Istio Gateways this VirtualService applies to.

At the moment, all configured gateways are added to the gateways field.
However, if the user only wants a local gateway, they also get the public gateway in that field.

This confuses the reconciler, which can't find the gateway and errors out.
The VirtualService generation logic should instead only add the gateways that are actually used by the VirtualService's routes.
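
For illustration, a cluster-local service currently ends up with something like this (a sketch; the gateway names follow the default install):

spec:
  gateways:
  - knative-serving/knative-ingress-gateway   # not referenced by any route
  - knative-serving/cluster-local-gateway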

Related code: https://github.com/knative/net-istio/blob/f08de598438b0e85c27b401416575405cc451eba/pkg/reconciler/ingress/resources/virtual_service.go#L135-L140
Related issue: #43
/cc @tcnghia

net-istio should guard the prober against resync flooding

Right now the prober is unconditionally called during reconciliation to determine readiness: https://github.com/knative-sandbox/net-istio/blob/81a9b95df6ba5516bb3240b53caaa1686fc2dd75/pkg/reconciler/ingress/ingress.go#L203

This is OK under normal circumstances because the prober caches probe results internally, so on global resyncs we hit the cache and things finish quickly. However, when we resync due to failing over (perhaps due to a rollout), this cache is empty.

If I have 1000 ksvcs, they are exposed on both gateways, and those gateways have 5 pods each, then the prober must perform 5 * 2 * 1,000 = 10,000 probes before normal work may resume.

In net-contour, I changed things to have the main Reconcile loop rely on the recorded readiness of the kingress to elide IsReady checks, effectively persisting the prober's internal cache in the CRD's durable status.
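
A minimal sketch of that guard in the reconciler, assuming the knative.dev/networking Ingress types (the exact accessors are an approximation of the knative status API, not necessarily what net-istio would use):

// Skip probing when the Ingress status already records readiness; this
// effectively persists the prober's cache in the CRD's durable status.
if ing.Status.GetCondition(v1alpha1.IngressConditionReady).IsTrue() {
	return nil // already verified; no need to re-probe on a resync
}
ready, err := r.statusManager.IsReady(ctx, ing)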

cc @JRBANCEL @tcnghia @nak3 @ZhiminXiang

http redirect should not be added to cluster local gateway

Steps to reproduce:

  1. Enable Auto TLS
  2. Create two services, one cluster-local and one externally exposed (ingress)

The cluster-local svc never becomes ready.

A TLS redirect gets added to the cluster-local gateway:

tls:
  httpsRedirect: true

This doesn't happen the second time (i.e. if you remove the TLS redirect from the gateway, then delete and recreate the services); it only happens during the first certificate reconciliation.
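
For reference, the cluster-local gateway's server should keep the plain HTTP entry shipped in net-istio.yaml, with no tls block:

- port:
    number: 80
    name: http
    protocol: HTTP
  hosts:
  - "*"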

Istio uninstallation cleans up Gateway created by net-istio.yaml

This might be an operational issue rather than a net-istio issue, but I am reporting it here because a Knative user was confused by it.

Issue description

  • If we uninstall Istio, the Gateways (cluster-local-gateway, knative-ingress-gateway) in the knative-serving namespace are also gone, because istioctl deletes the CRDs.

Steps to reproduce the issue

1. Install Istio by following the doc

$ istioctl manifest apply -f istio-minimal-operator.yaml

2. Install Knative Serving and net-istio

$ kubectl apply --filename https://storage.googleapis.com/knative-nightly/serving/latest/serving-crds.yaml
$ kubectl apply --filename https://storage.googleapis.com/knative-nightly/serving/latest/serving-core.yaml
$ kubectl apply --filename https://storage.googleapis.com/knative-nightly/net-istio/latest/release.yaml

3. Knative app works fine

$ kn service create hello-example --image=gcr.io/knative-samples/helloworld-go

$ kubectl port-forward -n istio-system service/istio-ingressgateway  8000:80
Forwarding from 127.0.0.1:8000 -> 8080
Forwarding from [::1]:8000 -> 8080
Handling connection for 8000

$ curl -H "Host: hello-example.default.example.com" localhost:8000
Hello World!

4. Re-install Istio

Do not touch Knative; just re-install Istio.

$ istioctl manifest generate -f istio-minimal-operator.yaml | kubectl delete -f -
$ istioctl manifest apply -f istio-minimal-operator.yaml

5. Knative becomes unreachable

Now the Knative app has become unreachable.

  • port-forward starts producing the following error:
$ kubectl port-forward -n istio-system service/istio-ingressgateway  8000:80
Forwarding from 127.0.0.1:8000 -> 8080
Forwarding from [::1]:8000 -> 8080
Handling connection for 8000
E0813 15:22:23.690783    4348 portforward.go:400] an error occurred forwarding 8000 -> 8080: error forwarding port 8080 to pod fb1991564571888471fc564b2a2fe08f528fea1ceeafe99db8fef57f4564a310, uid : exit status 1: 2020/08/13 06:22:23 socat[199389] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
Handling connection for 8000
  • Requests get an empty reply from the server:
$ curl -H "Host: hello-example.default.example.com" localhost:8000
curl: (52) Empty reply from server

Root cause of this issue

This is because the Gateways in the knative-serving namespace were deleted in step 4. We need to re-apply the Gateways from net-istio.yaml:

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: knative-ingress-gateway
  namespace: knative-serving
  labels:
    serving.knative.dev/release: "v20200811-4a3c487"
    networking.knative.dev/ingress-provider: istio
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster-local-gateway
  namespace: knative-serving
  labels:
    serving.knative.dev/release: "v20200811-4a3c487"
    networking.knative.dev/ingress-provider: istio
spec:
  selector:
    istio: cluster-local-gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
EOF

I wonder whether the Gateways (cluster-local-gateway, knative-ingress-gateway) should be left out of net-istio.yaml entirely, with the net-istio controller instead creating them if they do not exist.

Bug[StatusManager]: Waiting for load balancer to be ready

When I restart my local Kubernetes cluster and run kubectl get king, it always reports the error: Waiting for load balancer to be ready.

How to reproduce?

  • Install a local Kubernetes cluster using k3d (k3s version: v1.17.9+k3s1)
  • Install Knative with net-istio
  • After the install, everything is OK
  • Now restart the local Kubernetes cluster: k3d stop cluster xxx and k3d start cluster xxx
  • After a few seconds, run kubectl get king -oyaml and you get the Waiting for load balancer to be ready error

Proposal

I think there might be a race condition in the statusManager's IsReady function:

  1. IsReady first caches ingressStates with the gateway Service endpoint IPs (when the k3d cluster restarts, it just returns the last cached endpoint IPs)
  2. Kubernetes then reconciles all resources, including the gateway Service endpoints, which change their IPs to those of the newly created gateway pods
  3. The statusManager probes the old IPs and gets connection errors

This is not a problem when the Kubernetes cluster is deployed in production mode. But I think that when the gateway pods are recreated, the statusManager never probes the new gateway Service endpoint IPs. This can be verified by manually redeploying the gateway Deployment: no new probe logs appear in the net-istio log.
What do you think?
