
vmware-tanzu / sonobuoy

2.9K stars · 56 watchers · 336 forks · 29.83 MB

Sonobuoy is a diagnostic tool that makes it easier to understand the state of a Kubernetes cluster by running a set of Kubernetes conformance tests and other plugins in an accessible and non-destructive manner.

Home Page: https://sonobuoy.io

License: Apache License 2.0

Shell 2.36% Go 91.29% Dockerfile 0.19% HTML 3.00% JavaScript 0.05% SCSS 3.03% Makefile 0.07%
kubernetes kubernetes-cluster kubernetes-setup kubernetes-deployment discovery bugreport heptio tanzu sonobuoy conformance

sonobuoy's Introduction

Sonobuoy Logo


Sonobuoy is a diagnostic tool that makes it easier to understand the state of a Kubernetes cluster by running a set of plugins (including Kubernetes conformance tests) in an accessible and non-destructive manner. It is a customizable, extendable, and cluster-agnostic way to generate clear, informative reports about your cluster.

Its selective data dumps of Kubernetes resource objects and cluster nodes allow for the following use cases:

  • Integrated end-to-end (e2e) conformance-testing
  • Workload debugging
  • Custom data collection via extensible plugins

Starting with v0.20, Sonobuoy supports Kubernetes v1.17 or later. Sonobuoy releases are independent of Kubernetes releases, while ensuring that new releases continue to work across different versions of Kubernetes. Read more about the new release cycles in our blog.

Note: You can skip this version enforcement by running Sonobuoy with the --skip-preflight flag.
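For example, to bypass the version check on a newer or otherwise unsupported cluster (an illustrative invocation):

sonobuoy run --skip-preflight --wait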

Prerequisites

  • Access to an up-and-running Kubernetes cluster
  • An admin kubeconfig file, and the KUBECONFIG environment variable set
  • kubectl installed (needed for some advanced workflows)

Installation

The following methods exist for installing Sonobuoy:

Install binary

  1. Download the latest release for your client platform.

  2. Extract the tarball:

    tar -xvf <RELEASE_TARBALL_NAME>.tar.gz
    

    Move the extracted sonobuoy executable to somewhere on your PATH.
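    For example, on Linux (the tarball name and install path are illustrative):

    tar -xvf sonobuoy_<VERSION>_linux_amd64.tar.gz
    chmod +x sonobuoy
    sudo mv sonobuoy /usr/local/bin/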

Install with Homebrew (macOS)

  1. Run the command:

    brew install sonobuoy
    

Getting Started

To launch conformance tests (ensuring CNCF conformance) and wait until they are finished, run:

sonobuoy run --wait

Note: Using --mode quick will significantly shorten the runtime of Sonobuoy. It runs just a single test, helping to quickly validate your Sonobuoy and Kubernetes configuration.
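For example, a quick smoke test of your configuration:

sonobuoy run --mode quick --wait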

Get the results from the plugins (e.g. e2e test results):

results=$(sonobuoy retrieve)

Inspect the results for test failures. This will list the number of failed tests and their names:

sonobuoy results $results

Note: The results command has lots of useful options for various situations. See the results page for more details.
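For instance, assuming the --mode and --plugin options documented on the results page, you can get per-test detail for a single plugin:

sonobuoy results $results --mode detailed --plugin e2e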

You can also extract the entire contents of the file to get much more detailed data about your cluster.
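A minimal sketch of extracting it for manual inspection:

mkdir ./results
tar -xzf $results -C ./results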

Sonobuoy creates a few resources in order to run and expects to run within its own namespace.

Deleting Sonobuoy entails removing its namespace as well as a few cluster-scoped resources.

sonobuoy delete --wait

Note: The --wait option ensures the Kubernetes namespace is deleted, avoiding conflicts if another Sonobuoy run is started quickly.

If you have an issue with permissions in your cluster but you still want to run Sonobuoy, you can use the --aggregator-permissions flag. Read more details about it here.

Other Tests

By default, sonobuoy run runs the Kubernetes conformance tests, but this can easily be configured. The plugin that provides the conformance tests also contains the full set of Kubernetes end-to-end tests, which include other tests such as:

  • tests for specific storage features
  • performance tests
  • scaling tests
  • provider specific tests
  • and many more

To modify which tests you want to run, check out our page on the e2e plugin.
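For example, focus and skip expressions can be passed on the command line (a sketch; see the e2e plugin page for the authoritative flag list):

sonobuoy run --e2e-focus "Secrets" --e2e-skip "Serial" --wait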

If you want to run other tests or tools which are not a part of the Kubernetes end-to-end suite, refer to our documentation on custom plugins.

Monitoring Sonobuoy during a run

You can check the status of each running plugin with:

sonobuoy status

You can also inspect the logs of all Sonobuoy containers:

sonobuoy logs
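For example, to poll status periodically and stream logs as they arrive (the -f follow flag is assumed here; check sonobuoy logs --help):

watch -n 30 sonobuoy status
sonobuoy logs -f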

Troubleshooting

If you encounter any problems that the documentation does not address, file an issue.

Docker Hub rate limit

Docker Hub now rate limits image pulls. We're planning a future release with a better user interface to work around this. Until then, the following is the recommended approach.

Sonobuoy Pod

By default, Sonobuoy pulls the sonobuoy/sonobuoy image from Docker Hub. If you're hitting the rate limit there, you can use the VMware-provided mirror instead:

sonobuoy run --sonobuoy-image projects.registry.vmware.com/sonobuoy/sonobuoy:<VERSION>

Conformance

The Kubernetes end-to-end conformance tests pull several images from Docker Hub as part of testing. To override this, you will need to create a registry manifest file locally (e.g. conformance-image-config.yaml) containing the following:

dockerLibraryRegistry: mirror.gcr.io/library
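A minimal sketch of creating that file from the shell (dockerLibraryRegistry is the only key shown here; other registry keys vary by Kubernetes version):

cat > conformance-image-config.yaml <<EOF
dockerLibraryRegistry: mirror.gcr.io/library
EOF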

Then run conformance with:

sonobuoy run --sonobuoy-image projects.registry.vmware.com/sonobuoy/sonobuoy:<VERSION> --e2e-repo-config conformance-image-config.yaml

Technically, dockerGluster is also a registry key that pulls from Docker Hub, but it's not part of the conformance test suite at the moment, so overriding dockerLibraryRegistry should be enough.

Known Issues

Leaked End-to-end namespaces

There are some Kubernetes e2e tests that may leak resources. Sonobuoy can help clean those up as well by deleting all namespaces prefixed with e2e:

sonobuoy delete --all

Run on Google Cloud Platform (GCP)

Sonobuoy requires admin permissions, which you won't have by default if you are running on a Google Kubernetes Engine (GKE) cluster. You must first create an admin role for the user under which you run Sonobuoy:

kubectl create clusterrolebinding <your-user-cluster-admin-binding> --clusterrole=cluster-admin --user=<your-email-address>
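For example, deriving the user from your active gcloud configuration (the binding name is arbitrary):

kubectl create clusterrolebinding sonobuoy-admin \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value account)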

Run on Kubernetes for Docker Desktop

We don't recommend running via a cluster set up via Docker Desktop. Known issues include:

  • kubectl logs will not function
  • sonobuoy logs will not function
  • sonobuoy retrieve will not function
  • systemd-logs plugin will hang

Most of these problems stem from issues with kube-proxy on Docker Desktop, so if you know how to resolve them, let us know.

Certified-Conformance bug (versions v0.53.0 and v0.53.1)

These versions of Sonobuoy have a bug that causes the wrong set of tests to run unless additional action is taken. See more details here. The simplest way to avoid this is to update your version of Sonobuoy to >= v0.53.2.

Strategy Document

See our current strategy document and roadmap for context on what our highest priority use cases and work items will be. Feel free to make comments on Github or start conversations in Slack.

Contributing

Thanks for taking the time to join our community and start contributing! We welcome pull requests. Feel free to dig through the issues and jump in.

The most common build/test functions are called via the Makefile:

# Build the binary
$ make build

# Run local unit tests
$ make test

If you make changes that alter output, you may break tests that use the golden file testing pattern (i.e. expected data is stored in external files). Update the golden files by running:

$ make golden

In most cases, it is simpler to run integration tests in CI by opening a pull request. You can dig into scripts/build_funcs.sh and our .github/workflows/ci-test.yaml for the exact details of existing test flows.

Before you start

  • Please familiarize yourself with the Code of Conduct before contributing.
  • See CONTRIBUTING.md for instructions on the developer certificate of origin that we require.
  • There is a Slack channel if you want to interact with other members of the community.

Changelog

See the list of releases to find out about feature changes.

sonobuoy's People

Contributors

alexbrand, alrs, andrewyunt, bostrt, chuckha, dependabot[bot], diptochakrabarty, divya063, eturra, franknstyle, jamesburchell, jayunit100, jimmidyson, johnschnake, johscheuer, jonasrosland, joyvuu-dave, laverya, liztio, nikhilsharmawe, rbankston, rdodev, sdbrett, stevesloka, tanmay-g, timothysc, vicmarbev, vladimirvivien, wilsonehusin, zubron


sonobuoy's Issues

Auto-generate examples with ksonnet.

Currently we have plugins.d + examples, and the set of potential examples could grow quite large. I would like to leverage ksonnet here to template our examples and generate a submission.

Provide a CLI tool for running sonobuoy

We all know that our lovely Kubernetes YAML definitions can be a little overwhelming sometimes...
If the user wants to add/remove functionality right now, they would have to go and mess with those.

In order to attract more users, a CLI tool could be made to simplify that experience.

(Feel free to disregard; these are just ideas coming from me on my first sonobuoy run 😉 )

Sonobuoy fails to start when you have Completed pods in kube-system

Sonobuoy fails to start when you have Completed pods (i.e. those created by Jobs) in the kube-system namespace. Here's the error I'm getting:

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:221
Sep 28 15:23:55.754: Error waiting for all pods to be running and ready: 1 / 100 pods in namespace "kube-system" are NOT in RUNNING and READY state in 10m0s
POD                                   NODE                          PHASE     GRACE CONDITIONS
elastic-delete-index-1506556860-nskhn ip-10-203-21-240.ec2.internal Succeeded       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-09-28 00:01:00 +0000 UTC PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-09-28 00:01:05 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-09-28 00:01:00 +0000 UTC  }]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:172

If I remove the Completed pods, Sonobuoy starts without an issue.

Add support for lite-medium-full tests

There will need to be default configs that support a lite-medium-full set of conformance tests to speed up testing on larger clusters.

We should also investigate lite-parallelism on the suite to speed it up

/cc @chuckha

Handle evicted pods

I have an eventrouter Pod running in my kube-system namespace that was evicted at some point. Sonobuoy appears to still be trying to get information about it:

E0720 15:23:15.298114       1 master.go:64] the server rejected our request for an unknown reason (get pods eventrouter-4148953255-rmwd1)

Docs: re-run of the test on the existing namespace does not create pods

  1. Install sonobuoy from git and run the test following the steps provided in the README.md.
  2. If the test run fails due to a network-proxy issue, it raises a Go panic trace, as seen in the logs of the pod created in the namespace "heptio-sonobuoy".
  3. Because of the above panic, in my understanding, the cleanup never gets executed, so the namespace "heptio-sonobuoy" persists.
  4. Now re-apply the yaml file (kubectl apply -f examples/quickstart/); this time the pods do not get created. The user has to delete the namespace and then re-apply the yaml.

Error from server (Forbidden)

Getting this running the quickstart

$ kubectl --context=prod apply -f examples/quickstart/
namespace "heptio-sonobuoy" configured
serviceaccount "sonobuoy-serviceaccount" configured
clusterrolebinding "sonobuoy-serviceaccount" configured
configmap "sonobuoy-config-cm" configured
configmap "sonobuoy-plugins-cm" configured
pod "sonobuoy" configured
service "sonobuoy-master" configured
Error from server (Forbidden): error when creating "examples/quickstart/00-rbac.yaml": clusterroles.rbac.authorization.k8s.io "sonobuoy-serviceaccount" is forbidden: attempt to grant extra privileges: [{[*] [*] [*] [] []}] user=&{Kubernetes admin  [system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[]

Rationalize some test issues upstream.

The Kubectl tests were purposefully disabled for a number of reasons and we should move to rationalize this with upstream.

  1. I don't actually consider them conformance tests, but more aggregated behavioral tests
  2. They require passing in kubeconfig vs. leveraging in-cluster config
    ...

Also, there are other tests, including the scheduler tests, that I think need to be cherry-picked back to a 1.7 release.

Configured plugin systemd_logs does not exist

Hey Heptio Team.

When I check the logs of my just-deployed pod, it exits with this error:

root:heptio# kubectl logs -f sonobuoy --namespace=heptio-sonobuoy
I0807 13:53:51.548619       1 loader.go:46] Scanning plugins in ./plugins.d (pwd: /)
I0807 13:53:51.548852       1 loader.go:46] Scanning plugins in /etc/sonobuoy/plugins.d (pwd: /)
I0807 13:53:51.548880       1 loader.go:46] Scanning plugins in ~/sonobuoy/plugins.d (pwd: /)
E0807 13:53:51.548907       1 master.go:49] Configured plugin systemd_logs does not exist

Any ideas? Is something missing on the cluster itself?

[idea] brainstorm how to communicate PASS / FAIL data...

This might be out of scope, just an idea; feel free to close if interpreting/displaying collection results is out of scope.

GOAL

Many buoys will have a discrete result, i.e. the cluster is healthy (pass) or not (fail). Some might not have such results, but since many will, it would be nice if they could share this info with the collector somehow.

EXAMPLE

As an E2E test user, I'd like to:

  1. Run, or automate the run of, the sonobuoy E2E tests via kubectl apply with a --summary= flag, where the summary value is a command that exits 1 or 0 (i.e. --summary='grep -q 0 FAILURES').
  2. See something like this in a sidecar, without having to collect data.
Sonobuoy Results
=============
[
  {"date": "Aug 15, 2013, 1:35 PM", "exec-pod": "E2E", "result": "PASSED", "link": "results15-2013-135.gz"},
  {"date": "Aug 15, 2013, 2:40 PM", "exec-pod": "E2E", "result": "FAILED", "link": "results14-2013-240.tar.gz"}
]

Note that (1) and (2) are independent. We can have (2) without having (1). Possibly the sonobuoy test pod could run (1) and output it as its final log before exiting, without (2), and then you could kubectl logs -f xyz | tail -1 to get the end result of your pass/fail. But ideally (1) and (2) would work together for an 'at-a-glance' view of your buoys.

USE CASE

  • Vendors could ship their products with sonobuoy instructions for setting up alerts, i.e. I could have a sonobuoy plugin that ran smoke tests, and then I wouldn't have to write my own reports, because sonobuoy would broadcast those reports for me in a sidecar.
  • This makes accessing tarballs or volumes for test results unnecessary for superficial inspection; they would only be needed for forensics, deep dives into problems, or archival.

The best analogy is the simple Prometheus UI that ships on the Prometheus master - it gives you a global way to play with the data before putting it in a more sophisticated dashboard. I'm thinking Sonobuoy could ship with something similar for users.

Allow plugin configuration to be set from sonobuoy config

Currently the E2E test settings (like which tests to run, etc.) are driven by environment variables, and in order to change them you have to change the plugin definition (./plugins.d/e2e.yaml).

It'd be nice to let the e2e.yaml plugin definition remain constant, and for plugin behavior to be configurable from the main sonobuoy.conf file, since it's specific from run-to-run.

This was mostly the idea from the beginning, but it doesn't currently work.

Example sonobuoy config:

{
  ...
  "plugins": [
    {
      "name": "e2e",
      "config": {
        "focus": "foo",
        "more": "config",
        ...
      }
    }
  ]
}

And the plugins[...].config section would be available in "/etc/sonobuoy/plugin-config.json" or similar on the worker side.

Drop the concept of "Expected Results", have each plugin block to completion

Plugins' "ExpectedResults" model needs a bit of an overhaul, we should just have the Monitor() function of each plugin signal when the results have all come in.

For that matter, if we want to go that far we can drop Monitor() altogether and just have the Run() function return when the plugin is done.

One thing to consider here is for DaemonSets: since they get continually relaunched (restartPolicy must be "Always"), it's difficult to know a DaemonSet has finished without asking the aggregator if the result has been seen. So we should rely on any given DaemonSet plugin to use readiness probes to indicate that it is finished. (And the plugin driver would just wait for all pods to be "ready", then finish.)

Document structure of results tarball

We should have strict rules about how data in a results tarball is arranged, so that ingesting sonobuoy results can be done in an automated way without lots of if statements and special casing.

We can start with simple documentation, outlining the paths within the tarball and what should be contained in them, etc.

Endpoint for reporting status of running scan.

Sort of in line with what's discussed in #49.

It would be useful if there were a REST endpoint that:

  1. provided the current global state, eg. RUNNING, COMPLETED
  2. provided the current state of an individual plugin, e.g. WAITING, RUNNING, COMPLETED (PASS/FAIL)
  3. provided the ability to pull the data/logs associated with individual plugins

Version specific conformance test images

Currently the e2e plugin suggests that it is not advisable to run the E2E suite on clusters older than 1.7, due to the upstream issue of not having gates around the tests that are available. Instead of relying on that upstream change being made and people "doing the right thing" all the time, would it be possible to simply have a multi-version build of the kube-conformance image, where each image is tagged for the appropriate K8s version?

Selfishly, I would like to use this on a few 1.6 clusters; however, it also seems like it may be a bit more stable.

The main deficiency I see is that if newer tests are created to validate additional aspects of the cluster and are compatible with older environments, those changes wouldn't be included.

Getting error when trying to find cluster/log-dump/log-dump.sh

Noticed this error in the e2e log. This happens with failed or successful tests:

[snip]
03: INFO: Running AfterSuite actions on all node
Sep 25 19:38:00.103: INFO: Running AfterSuite actions on node 1
>>>Sep 25 19:38:00.104: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory

Ran 1 of 652 Specs in 39.740 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 651 Skipped PASS

It doesn't seem to affect the tests, but it does seem like a problem.

[quickstart] Support non-rbac clusters

Right now we only have an RBAC solution for the quickstart. We would like to support clusters that do not have RBAC.

Work involved: If #52 gets merged, then we can add an if statement in the ksonnet that doesn't generate the rbac objects if a particular flag is passed in. Then in the makefile we can add an environment variable to the generate-examples target. Since the only dependency is docker for building the quickstart examples, we can then encourage non-rbac users to regenerate the examples with a particular flag or we could maintain a generated copy of that JSON.

Why is ./plugins/e2e/results.tar.gz tarred twice?

I'd expect sonobuoy to produce a tar file that I untar once, not one containing multiple layers of tarred files.
I guess this is just the output from the plugin, but could sonobuoy handle untarring any .tar.gz files produced by plugins (like e2e)?

Higher-level aggregation of the kubelet logs in scope?

I saw that journalctl -o json ... was used for creating the logs file, but as a user I may wonder how to actually read that.
I quickly searched Google for how to parse journalctl JSON output into something slightly more readable (the default journalctl output, for example), but couldn't find anything quickly.
It would be nice to have at least some words on how to parse/aggregate that JSON.
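For example, a rough jq one-liner that pulls timestamps and messages out of the journal's JSON stream (field names follow systemd's journal export format):

journalctl -u kubelet -o json | jq -r '"\(.__REALTIME_TIMESTAMP) \(.MESSAGE)"'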

Removal of "PodLogs" from 10-configmaps.yaml appears to have no effect

Taken from a Slack conv:

tennis [11:24 AM]
@timothysc Hi. I’ve removed the “PodLogs” line from 10-configmaps.yaml and I still get this in the log:

I0928 16:03:14.961899       1 pods.go:46] Collecting Pod Logs...
I0928 16:03:14.961952       1 queries.go:215] Running ns query (account-authn)
I0928 16:03:16.644295       1 pods.go:46] Collecting Pod Logs...
I0928 16:06:50.866586       1 queries.go:215] Running ns query (account-cdvr)
I0928 16:06:53.399190       1 pods.go:46] Collecting Pod Logs...
I0928 16:06:53.854836       1 queries.go:215] Running ns query (account-sd)
I0928 16:06:56.212353       1 pods.go:46] Collecting Pod Logs...
I0928 16:06:57.027177       1 queries.go:215] Running ns query (authorize-channel)
I0928 16:06:59.603169       1 pods.go:46] Collecting Pod Logs...
I0928 16:07:01.215041       1 queries.go:215] Running ns query (catalog-content)
I0928 16:07:03.811513       1 pods.go:46] Collecting Pod Logs...
E0928 16:07:05.847814       1 discovery.go:41] error querying resources under namespace catalog-content: the server rejected our request for an unknown reason (get pods consumptionurl-1-1-7-3784334805-0s7q5)
I0928 16:07:05.847839       1 queries.go:215] Running ns query (catalog-feed)
I0928 16:07:15.455583       1 pods.go:46] Collecting Pod Logs...
I0928 16:07:17.814393       1 queries.go:215] Running ns query (catalog-gracenote)
I0928 16:07:20.403349       1 pods.go:46] Collecting Pod Logs...

Is that expected?

timothysc [11:26 AM]
Hmm let me check, that seems odd.

tennis [11:27 AM]
Thanks

timothysc [11:34 AM]
What version are you running?
0.8.0?

tennis [11:34 AM]
latest. Pulled right from github

timothysc [11:34 AM]
hmm, It's possible there is a bug on master. I don't have time to dbl check atm. If you file an issue we'll be sure to take a look, b/c we plan to rev next week.

tennis [11:35 AM]
will do. Np.

Networking Test

General feedback from the community is that many providers block ICMP packets, so the network conformance test can fail.

There are a couple of issues around this upstream, and this item serves as a tracking task for once a fix is delivered into sonobuoy.

TODO: Link upstream issues.

/cc @rbankston

Add documentation describing how to analyze test results

Right now there is very little in the way of describing what the results look like and how to analyze them.

I was able to dig around and found that the meat of the results is in the tarball under: resources/ns/heptio-sonobuoy/pods/sonobuoy-e2e-job-*/logs/e2e.txt

I was able to get a fairly good summary with:

grep -A3 '•' resources/ns/heptio-sonobuoy/pods/sonobuoy-e2e-job-*/logs/e2e.txt

But perhaps there should be a docs page describing how to read the results.

quickstart needs explanation

Issue

Looks like examples/quickstart is meant (I think) to set up the knobs necessary to deploy sonobuoy on a secure initial kube cluster, like the one Heptio deploys - which needs special RBACs.

Solution

Usage of the rbac.yaml + pod.yaml files should probably be explained in a file inside of examples/quickstart, i.e. examples/quickstart/README or whatever. Not all clusters will need the special RBACs (although many will / should), so some folks can probably just create the pod.yaml.

Refactor Error Handling

The original error handling was built around gathering the details and continuing on. The problem is that they are spewed at the end of the run, which makes it more difficult to debug.

I no longer think we need to propagate errors back to the root; we can simply indicate pass/fail and log at the location of the error.

I do not think we need to fix this for the OSS release, but can do it afterwards.

Add makefile caching or separate out plugins.

Currently the plugins build as separate containers within the main repo, but they don't necessarily need to churn with all the sonobuoy changes, not to mention that the build for the e2es is heavy.

We could either shift them out of the main repo, or create a cached building mechanism.

/cc @kensimon
