
otterize / network-mapper

Map Kubernetes traffic (in-cluster, to the Internet, and to AWS IAM) and export it as text, intents, or an image

License: Apache License 2.0

Go 98.90% Dockerfile 0.89% Shell 0.20%
Topics: kubernetes, mapping, network, go, golang, intents, monitoring, observability, visibility, service-discovery

network-mapper's Introduction

Otterize network mapper


Maps pod-to-pod traffic, pod-to-Internet traffic, and even AWS IAM traffic, with zero configuration.

(Demo video: mapper.mp4)

About

The Otterize network mapper is a zero-config, lightweight tool: it requires no changes to your cluster. Its goal is to give you insight into the traffic in your cluster without a complete overhaul, unlike other solutions that may require deploying a new CNI, a service mesh, and so on.

You can use the Otterize CLI to list the traffic by client, visualize the traffic, export the results as JSON or YAML, or reset the traffic the mapper remembers.
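A few typical invocations, as a sketch; exact flags may vary between CLI versions, so check otterize network-mapper --help:

```shell
# List discovered traffic, grouped by client service
otterize network-mapper list

# Render the current map as an image file
otterize network-mapper visualize -o map.png

# Forget all traffic the mapper has remembered so far
otterize network-mapper reset
```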

Example output after running otterize network-mapper visualize on the Google Cloud microservices demo:

The same microservices demo in the Otterize Cloud access graph, as it appears when you choose to connect the network mapper to Otterize Cloud:

Example output after running otterize network-mapper list on the Google Cloud microservices demo:

cartservice in namespace otterize-ecom-demo calls:
  - redis-cart
checkoutservice in namespace otterize-ecom-demo calls:
  - cartservice
  - currencyservice
  - emailservice
  - paymentservice
  - productcatalogservice
  - shippingservice
frontend in namespace otterize-ecom-demo calls:
  - adservice
  - cartservice
  - checkoutservice
  - currencyservice
  - productcatalogservice
  - recommendationservice
  - shippingservice
loadgenerator in namespace otterize-ecom-demo calls:
  - frontend
recommendationservice in namespace otterize-ecom-demo calls:
  - productcatalogservice

Try the network mapper

Try the quickstart to get a hands-on experience in 5 minutes.

Looking to map AWS traffic? Check out the AWS visibility tutorial.

Installation instructions

Install and run the network mapper using Helm

helm repo add otterize https://helm.otterize.com
helm repo update
helm install network-mapper otterize/network-mapper -n otterize-system --create-namespace --wait

Install Otterize CLI to query data from the network mapper

Mac

brew install otterize/otterize/otterize-cli

Linux 64-bit

wget https://get.otterize.com/otterize-cli/v1.0.5/otterize_linux_x86_64.tar.gz
tar xf otterize_linux_x86_64.tar.gz
sudo cp otterize /usr/local/bin

Windows

scoop bucket add otterize-cli https://github.com/otterize/scoop-otterize-cli
scoop update
scoop install otterize-cli

For more platforms, see the installation guide.

Components

  • Mapper - deployed once per cluster; receives traffic information from the sniffer and watchers, and resolves it into communications between service identities.
  • Sniffer - deployed to each node using a DaemonSet; responsible for capturing node-local DNS traffic and inspecting open connections.
  • Kafka watcher - deployed once per cluster; detects access to Kafka topics: which services perform the access, and which operations they use.
  • Istio watcher - part of the mapper; queries Istio Envoy sidecars for HTTP traffic statistics, which are used to detect HTTP traffic with paths. Currently, the Istio watcher has a limitation: it reports all HTTP traffic seen by the sidecar since the sidecar started, regardless of when it was seen.
  • AWS IAM visibility - the AWS IAM visibility components are optionally deployed with --set aws.visibility.enabled=true. Label pods with network-mapper.otterize.com/aws-visibility: true; if connected to Otterize Cloud, the Cloud combines the information into a map of accesses to AWS resources, which you can export as ClientIntents YAMLs for use with the [Intents Operator](https://github.com/otterize/intents-operator).
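For example, AWS IAM visibility could be enabled like this (a sketch; the chart value and label key are as documented above, but the workload name checkoutservice is a hypothetical example from the demo, and you should verify flags against your chart version):

```shell
# Redeploy the chart with the optional AWS visibility components
helm upgrade --install network-mapper otterize/network-mapper \
  -n otterize-system --set aws.visibility.enabled=true

# Opt a workload's pods in to AWS visibility via the pod template label
kubectl patch deployment checkoutservice -n otterize-ecom-demo --type merge \
  -p '{"spec":{"template":{"metadata":{"labels":{"network-mapper.otterize.com/aws-visibility":"true"}}}}}'
```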

DNS responses

DNS is a common network protocol used for service discovery. When a pod (checkoutservice) tries to connect to a Kubernetes service (orderservice) or another pod, a DNS query is sent out. The network mapper watches DNS responses and extracts the IP addresses, which are used for service identity resolution.
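To see the kind of DNS traffic the mapper observes, you can issue a lookup from inside a pod yourself (a hypothetical example using the demo's services; it assumes nslookup is available in the container image):

```shell
# The response maps the service name to its ClusterIP; the mapper
# later resolves such IPs back to service identities
kubectl exec -n otterize-ecom-demo deploy/checkoutservice -- nslookup cartservice
```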

Active TCP connections

DNS responses only appear when new connections are opened. To handle long-lived connections, the network mapper also queries open TCP connections, in a manner similar to netstat or ss. The IP addresses are used for service identity resolution, as above.
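This is conceptually similar to inspecting a node's connection table yourself, for example (assuming ss from iproute2 is installed on the node):

```shell
# Show established TCP connections with numeric local and peer addresses,
# the raw material for resolving IPs back to service identities
ss -tn state established
```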

Kafka logs

The Kafka watcher periodically examines the logs of Kafka servers provided by the user through configuration, parses them, and deduces topic-level access to Kafka by pods in the cluster. The watcher can only parse Kafka logs when the Kafka servers' Authorizer logger is configured to output DEBUG-level logs to stdout.
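As a sketch, on a vanilla Apache Kafka broker this typically means adjusting log4j.properties so the authorizer logger emits DEBUG logs to a stdout appender. Appender names vary between distributions, so treat the following as an assumption to adapt:

```properties
# Route authorizer decisions to the stdout appender at DEBUG level
log4j.logger.kafka.authorizer.logger=DEBUG, stdout
log4j.additivity.kafka.authorizer.logger=false
```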

Istio sidecar metrics

The Istio watcher, part of the network mapper, periodically queries for all pods with the security.istio.io/tlsMode label, queries each pod's Istio sidecar for connection metrics, and deduces connections with HTTP paths between pods covered by the Istio service mesh.

AWS IAM visibility

AWS IAM visibility consists of several components: an HTTP proxy that proxies AWS traffic for pods you opt in using the label network-mapper.otterize.com/aws-visibility: true; a webhook admission controller that patches pods with that label as they are admitted, adding a certificate for the HTTP proxy and directing DNS traffic for amazonaws.com to a DNS server belonging to the network mapper; and that DNS server, which responds only to amazonaws.com requests and forwards the rest to the cluster's DNS server.

Service name resolution

Service names are resolved in one of two ways:

  1. If an otterize/service-name label is present, that name is used.
  2. If not, the pod's Kubernetes owner references are followed recursively until the root owner is reached. For example, if you have a Deployment named client that creates and owns a ReplicaSet, which in turn creates and owns a Pod, then the service name for that pod is client, the same as the name of the Deployment. The goal is to generate a mapping that speaks in the same language dev teams use.
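For instance, to override the resolved name for a workload, you could add the label to the pod template (a hypothetical example; the label key is otterize/service-name as noted above, and checkout-frontend is an arbitrary name):

```shell
# Pods created from this template will be reported as "checkout-frontend"
kubectl patch deployment client --type merge \
  -p '{"spec":{"template":{"metadata":{"labels":{"otterize/service-name":"checkout-frontend"}}}}}'
```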

Exporting a network map

The network mapper continuously builds a map of pod-to-pod communication in the cluster. The map can be exported at any time, in JSON or YAML format, with the Otterize CLI.

The YAML export is formatted as ClientIntents Kubernetes resource files. Client intents files can be consumed by the Otterize intents operator to configure pod-to-pod access with network policies, or Kafka client access with Kafka ACLs and mTLS.
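A sketch of that round trip (flag names may differ by CLI version; check otterize network-mapper export --help):

```shell
# Export the discovered map as ClientIntents YAML...
otterize network-mapper export --format yaml > intents.yaml

# ...and, if the intents operator is installed, apply it to configure access
kubectl apply -f intents.yaml
```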

Learn more

Explore our documentation site to learn more.

Contributing

  1. Feel free to fork and open a pull request! Include tests, and document your code in Godoc style.
  2. In your pull request, please refer to an existing issue or open a new one.
  3. See our Contributor License Agreement.

Slack

To join the conversation, ask questions, and engage with other users, join the Otterize Slack!


Usage telemetry

The mapper reports anonymous usage information back to the Otterize team, to help the team understand how the software is used in the community and what aspects users find useful. No personal or organizational identifying information is transmitted in these metrics: they only reflect patterns of usage. You may opt out at any time through a single configuration flag.

To disable sending usage information:

  • Via the Otterize OSS Helm chart: --set global.telemetry.enabled=false.
  • Via an environment variable: OTTERIZE_TELEMETRY_ENABLED=false.
  • If running a mapper directly: --telemetry-enabled=false.

If the telemetry flag is omitted or set to true, telemetry will be enabled: usage information will be reported.
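For example, disabling telemetry at install time with the Helm value given above:

```shell
helm upgrade --install network-mapper otterize/network-mapper \
  -n otterize-system --create-namespace \
  --set global.telemetry.enabled=false
```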

Read more in the usage telemetry documentation.

network-mapper's People

Contributors

amit7itz, amitlicht, davidgs, dependabot[bot], evyatarmeged, netanelbollag, omris94, orishavit, orishoshan, otterizebot, otterobert, roekatz, smithclay, tomergreenwald, usarid


network-mapper's Issues

"Failed to save state" in mapper

Hi
I have a local Kind cluster where I have installed the network-mapper according to the README.
When I run ./otterize mapper list in my CLI, I get no output. I have noticed these log messages in the mapper pod:

{"time":"2023-01-25T13:44:36.68256383Z","id":"","remote_ip":"10.244.0.1","host":"otterize-network-mapper:9090","method":"POST","uri":"/query","user_agent":"Go-http-client/1.1","status":200,"error":"","latency":61569408,"latency_human":"61.569408ms","bytes_in":2004,"bytes_out":41}
time="2023-01-25T13:44:46Z" level=info msg="pod resolved to owner name" namespace=kube-system owner=coredns ownerKind="apps/v1, Kind=Deployment" pod=coredns-565d847f94-wvmtf
time="2023-01-25T13:44:46Z" level=info msg="pod resolved to owner name" namespace=default owner=grafana-agent ownerKind="apps/v1, Kind=DaemonSet" pod=grafana-agent-wkcjz
time="2023-01-25T13:44:46Z" level=info msg="pod resolved to owner name" namespace=default owner=crossplane ownerKind="apps/v1, Kind=Deployment" pod=crossplane-6fd66789b9-v7tdw
time="2023-01-25T13:44:46Z" level=info msg="pod resolved to owner name" namespace=kube-system owner=coredns ownerKind="apps/v1, Kind=Deployment" pod=coredns-565d847f94-4zbw5
time="2023-01-25T13:44:46Z" level=info msg="pod resolved to owner name" namespace=default owner=crossplane-rbac-manager ownerKind="apps/v1, Kind=Deployment" pod=crossplane-rbac-manager-6f4889484b-bxwcq
time="2023-01-25T13:44:46Z" level=info msg="pod resolved to owner name" namespace=komodor owner=k8s-watcher ownerKind="apps/v1, Kind=Deployment" pod=k8s-watcher-6fc58bbffc-zs9fb
time="2023-01-25T13:44:46Z" level=info msg="pod resolved to owner name" namespace=default owner=crossplane-contrib-provider-aws ownerKind="pkg.crossplane.io/v1, Kind=Provider" pod=crossplane-contrib-provider-aws-92d22091732f-66dc7448bc-mtj8g
time="2023-01-25T13:44:46Z" level=info msg="pod resolved to owner name" namespace=local-path-storage owner=local-path-provisioner ownerKind="apps/v1, Kind=Deployment" pod=local-path-provisioner-684f458cdd-27sh7
time="2023-01-25T13:44:46Z" level=info msg="pod resolved to owner name" namespace=otterize-system owner=otterize-network-mapper ownerKind="apps/v1, Kind=Deployment" pod=otterize-network-mapper-5856bd77b7-bq5h7
time="2023-01-25T13:44:46Z" level=warning msg="Failed to save state into the store file" error="an empty namespace may not be set during creation"

What am I doing wrong? How can I see whether the network-mapper discovers any connections for me?

Custom annotation for service name

According to: https://docs.otterize.com/reference/configuration/network-mapper#pod-annotations setting the intents.otterize.com/service-name annotation on a pod will set the name of the service.

Please consider allowing this annotation to be set by the users. We already have such an annotation on our pods and it would be best not to duplicate it.

(As an aside, maybe a label would be better? Perhaps default to the app.kubernetes.io/name label, as virtually everything should have that set already.)

Thank you

Service incorrectly appearing in every list of calls

I'm running network-mapper 1.0.4 and am seeing a service incorrectly in the list of calls for every service listed. E.g.

zookeeper in namespace kafka calls:
  - unused-service in namespace example
  - zookeeper in namespace kafka
kafka in namespace kafka calls:
  - unused-service in namespace example
  - zookeeper in namespace kafka
prometheus in namespace monitoring calls:
  - unused-service in namespace example
  - kafka in namespace kafka
  - zookeeper in namespace kafka

In the example above, there should be no calls from zookeeper or kafka to unused-service. Is there a way to drill down and determine why a certain service appears in the call list?

Network-mapper crash - invalid memory address or nil pointer dereference

Hi Otterize,

Crash report

What did we do
Install network-mapper on a Kubernetes cluster version 1.27.3
Environment: EKS
Network-mapper image version: v0.1.27

Expected
Network-mapper pod is up and running

Actual
Network-mapper pod in a crash loop.

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1300c10]

goroutine 1 [running]:
k8s.io/client-go/discovery.convertAPIResource(...)
	/go/pkg/mod/k8s.io/[email protected]/discovery/aggregated_discovery.go:88
k8s.io/client-go/discovery.convertAPIGroup({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000597878, 0x15}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...})
	/go/pkg/mod/k8s.io/[email protected]/discovery/aggregated_discovery.go:69 +0x5f0
k8s.io/client-go/discovery.SplitGroupsAndResources({{{0xc000596228, 0x15}, {0xc0000482a0, 0x1b}}, {{0x0, 0x0}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...})
	/go/pkg/mod/k8s.io/[email protected]/discovery/aggregated_discovery.go:35 +0x2f8
k8s.io/client-go/discovery.(*DiscoveryClient).downloadAPIs(0x1b2fdf0?)
	/go/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:310 +0x47c
k8s.io/client-go/discovery.(*DiscoveryClient).GroupsAndMaybeResources(0xc00040a1e0?)
	/go/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:198 +0x5c
k8s.io/client-go/discovery.ServerGroupsAndResources({0x1b53768, 0xc00034e8a0})
	/go/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:392 +0x59
k8s.io/client-go/discovery.(*DiscoveryClient).ServerGroupsAndResources.func1()
	/go/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:356 +0x25
k8s.io/client-go/discovery.withRetries(0x2, 0xc0000bf180)
	/go/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:621 +0x72
k8s.io/client-go/discovery.(*DiscoveryClient).ServerGroupsAndResources(0x0?)
	/go/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:355 +0x3a
k8s.io/client-go/restmapper.GetAPIGroupResources({0x1b53768?, 0xc00034e8a0?})
	/go/pkg/mod/k8s.io/[email protected]/restmapper/discovery.go:148 +0x42
sigs.k8s.io/controller-runtime/pkg/client/apiutil.NewDynamicRESTMapper.func1()
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/client/apiutil/dynamicrestmapper.go:94 +0x25
sigs.k8s.io/controller-runtime/pkg/client/apiutil.(*dynamicRESTMapper).setStaticMapper(...)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/client/apiutil/dynamicrestmapper.go:130
sigs.k8s.io/controller-runtime/pkg/client/apiutil.NewDynamicRESTMapper(0xc0004983f0?, {0x0, 0x0, 0x24ff0a13d815e301?})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/client/apiutil/dynamicrestmapper.go:110 +0x182
sigs.k8s.io/controller-runtime/pkg/cluster.setOptionsDefaults.func1(0x0?)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/cluster.go:217 +0x25
sigs.k8s.io/controller-runtime/pkg/cluster.New(0xc000207440, {0xc0000c3b28, 0x1, 0x0?})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/cluster.go:159 +0x18d
sigs.k8s.io/controller-runtime/pkg/manager.New(_, {0x0, 0x0, 0x0, {{0x1b50210, 0xc000408c00}, 0x0}, 0x0, {0x0, 0x0}, ...})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/manager.go:351 +0xf9
main.main()
	/src/mapper/cmd/main.go:58 +0x118

Additional information

According to the stack trace and online research, it appears that the problem has been resolved in version 0.26.5 of k8s.io/client-go.
Github issue

Vulnerabilities in Go version 1.19.x

Hi team,

During our monthly scan of 3rd-party utilities,
we have found the following critical vulnerabilities in network-mapper-sniffer:v1.0.2

GO /main

VulnerabilityID Severity InstalledVersion FixedVersion
CVE-2023-24538 CRITICAL 1.19.13 1.20.3-r0
CVE-2023-24540 CRITICAL 1.19.13 1.20.4-r0
CVE-2023-29402 CRITICAL 1.19.13 1.20.5-r0
CVE-2023-29404 CRITICAL 1.19.13 1.20.5-r0
CVE-2023-29405 CRITICAL 1.19.13 1.20.5-r0

Can you release a new version of this excellent tool built with a more recent Go version?

Support for Knative

Really great work on the visualisation and ease guys!
I was wondering if you are going to add support for knative-eventing and knative-serving, and how they are correlated with each other and with other services both inside and outside the cluster.
If the connections are going outside, it can just give us the name of the sink.
What are your thoughts?

Query result podOwnerKind is null

I have a network-mapper instance v0.1.12 and I'm trying to build a correct GraphQL query for its API.
I'm trying to make the query return intents with the corresponding kind field. I have composed the following request:

query ServiceIntentsUpToMapperV017 ($namespaces: [String!]) {
	serviceIntents(namespaces: $namespaces) {
		client {
			... NamespacedNameFragment
		}
		intents {
			... NamespacedNameFragment
		}
	}
}
fragment NamespacedNameFragment on OtterizeServiceIdentity {
	name
	namespace
	podOwnerKind {
		kind
	}
}

But whenever I execute it against the mapper, the podOwnerKind field is always null. I expect to get a valid podOwnerKind in all cases, be it an orphaned pod, a deployment, a job, etc.

Evyatar Meged has suggested on Slack that it's a real bug.

Corresponding Slack thread: https://otterizecommunity.slack.com/archives/C046SG6PRJM/p1677597181342809

Support export / list intents by client

Today, otterize network-mapper export and otterize network-mapper list only support exporting by server and by namespace.

There is currently no way to export intents for a single client, which you may be trying to do if you own that service.

Proposal: add a --client flag to the two commands that behaves like --server, except it filters the output intents by client.

Support emitting map as an OpenTelemetry metric

Several monitoring and observability solutions (like Grafana Tempo) can natively visualize maps from time series data that follows a specific format.

If the network mapper emits an OpenTelemetry time series with from and to edges, the connections it infers can easily be visualized and used in external tools for performance, inventory, and observability use cases.

Invalid pod selector / affinity

I'm following tutorials (just finished the network policies one which worked perfectly). Now onto mapper and:

helm install network-mapper otterize/network-mapper -n otterize-system --create-namespace --wait


$ kubectl -n otterize-system get pods
NAME                                                            READY   STATUS    RESTARTS   AGE
intents-operator-controller-manager-5986b5fd5d-8gb9q            2/2     Running   0          6m7s
otterize-network-mapper-74c4c7f567-wht25                        0/1     Pending   0          81s
otterize-network-sniffer-cmspw                                  0/1     Pending   0          81s
otterize-network-sniffer-nkgwf                                  0/1     Pending   0          81s
otterize-network-sniffer-pb46z                                  0/1     Pending   0          81s
otterize-spire-agent-f6gmm                                      1/1     Running   0          6m8s
otterize-spire-agent-k6l6z                                      1/1     Running   0          6m8s
otterize-spire-agent-l5pgk                                      1/1     Running   0          6m8s
otterize-spire-server-0                                         1/1     Running   0          6m7s
otterize-watcher-64b668ffc9-kphl6                               1/1     Running   0          6m7s
spire-integration-operator-controller-manager-f48894bd9-7n65z   2/2     Running   0          6m7s


Warning  FailedScheduling   10s (x3 over 96s)  default-scheduler           0/3 nodes are available: 1 Insufficient cpu, 2 node(s) didn't match Pod's node affinity/selector.

CVE-2023-24535 - High vulnerability in google protobuf package

Hi team!

We've identified a category high vulnerability (CVE-2023-24535) in the docker image caused by google.golang.org/[email protected] which can be resolved by upgrading to google.golang.org/[email protected].
It seems to be an indirect dependency from another module of yours (intents-operator).

Is it something that can be updated and taken care of or are you reliant on this specific protobuf version?

Import graph:

❯ go mod graph | grep [email protected]
github.com/otterize/network-mapper/src google.golang.org/[email protected]
github.com/otterize/intents-operator/[email protected] google.golang.org/[email protected]
google.golang.org/[email protected] github.com/golang/[email protected]
google.golang.org/[email protected] github.com/google/[email protected]

Aquasecurity scan: (screenshot)

Thank you!

Otterize network-mapper invalid URL escape

We're trying to use the otterize network-mapper with a Rancher-based cluster, whose kubeconfig has a cluster.server field that looks like this:

clusters:
- cluster:
    server: https://RANCHER_URL/k8s/clusters/CLUSTER_NAME
  name: CLUSTER_NAME

However, when running otterize network-mapper list, we get the error below:

Error: error upgrading connection: error creating request: parse "https://<RANCHER_URL>%2Fk8s%2Fclusters%2FCLUSTER_NAME/api/v1/namespaces/otterize-system/pods/otterize-network-mapper-65b5b9c654-tkgvq/portforward": invalid URL escape "%2F"
