
gardener / gardener-extension-networking-calico


Gardener extension controller for the Calico CNI network plugin.

Home Page: https://gardener.cloud

License: Apache License 2.0

Shell 13.98% Dockerfile 0.65% Makefile 3.49% Go 76.00% Smarty 3.54% Python 2.33%
gardener extension networking calico cni kubernetes

gardener-extension-networking-calico's Introduction

REUSE status CI Build status Go Report Card

This controller operates on the Network resource in the extensions.gardener.cloud/v1alpha1 API group. It manages those objects that are requesting Calico Networking configuration (.spec.type=calico):

---
apiVersion: extensions.gardener.cloud/v1alpha1
kind: Network
metadata:
  name: calico-network
  namespace: shoot--core--test-01
spec:
  type: calico
  clusterCIDR: 192.168.0.0/24
  serviceCIDR:  10.96.0.0/24
  providerConfig:
    apiVersion: calico.networking.extensions.gardener.cloud/v1alpha1
    kind: NetworkConfig
    overlay:
      enabled: false

Please find a concrete example in the example folder. All Calico-specific configuration should be provided in the providerConfig section. If additional configuration is required, it should be added to the networking-calico chart in controllers/networking-calico/charts/internal/calico/values.yaml and the corresponding code parts should be adapted (for example in controllers/networking-calico/pkg/charts/utils.go).

Once the Network resource is applied, the networking-calico controller creates the necessary ManagedResources. These are picked up by the gardener-resource-manager, which then applies all the network extension resources to the shoot cluster.

Finally, after a successful reconciliation, a status similar to the one below should be expected.

  status:
    lastOperation:
      description: Successfully reconciled network
      lastUpdateTime: "..."
      progress: 100
      state: Succeeded
      type: Reconcile
    observedGeneration: 1
    providerStatus:
      apiVersion: calico.networking.extensions.gardener.cloud/v1alpha1
      kind: NetworkStatus

Compatibility

The following table lists known compatibility issues of this extension controller with other Gardener components.

Calico Extension | Gardener | Action | Notes
>= v1.30.0 | < v1.63.0 | Please first update Gardener components to >= v1.63.0. | Without the mentioned minimum Gardener version, Calico pods are not restricted to dedicated system component nodes in the shoot cluster.

How to start using or developing this extension controller locally

You can run the controller locally on your machine by executing make start. Please make sure to have the kubeconfig pointed to the cluster you want to connect to. Static code checks and tests can be executed by running make verify. We are using Go modules for Golang package dependency management and Ginkgo/Gomega for testing.
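For example, a minimal local workflow might look like this (the kubeconfig path below is a placeholder, not a path mandated by the project):

# point the controller at the target cluster (placeholder path)
export KUBECONFIG=$HOME/.kube/my-target-cluster.yaml
make start     # run the controller locally against that cluster
make verify    # static code checks and tests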

Feedback and Support

Feedback and contributions are always welcome. Please report bugs or suggestions as GitHub issues or join our Slack channel #gardener (please invite yourself to the Kubernetes workspace here).

Learn more!

Please find further resources about our project here:

gardener-extension-networking-calico's People

Contributors

achimweigel, acumino, aleksandarsavchev, axel7born, dependabot[bot], dimitar-kostadinov, dimityrmirchev, docktofuture, etiennnr, gardener-ci-robot, gardener-robot-ci-1, gardener-robot-ci-2, gardener-robot-ci-3, hown3d, ialidzhikov, kostov6, maboehm, majst01, neo-liang-sap, nschad, plkokanov, rfranzke, robinschneider, scheererj, shafeeqes, stoyanr, timebertt, timuthy, vpnachev, zanetworker


gardener-extension-networking-calico's Issues

Priority Class is missing when deploying shoot

How to categorize this issue?

/area networking
/kind bug

What happened:
FailedCreate 3m50s (x22 over 75m) replicaset-controller Error creating: pods "calico-typha-horizontal-autoscaler-78c69b844-" is forbidden: no PriorityClass with name gardener-shoot-system-800 was found

What you expected to happen:
Replica sets to be deployed and running inside of the calico extension.
How to reproduce it (as minimally and precisely as possible):
Deploy a shoot with this extension; the priority class is missing.
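As a possible workaround (illustration only; the value below is an assumption, Gardener normally deploys this class itself with its own value), the missing priority class could be created manually:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: gardener-shoot-system-800
value: 999999800        # assumed value, not taken from Gardener
globalDefault: false
description: Stand-in for the missing shoot system priority class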

Anything else we need to know?:

Environment:

  • Gardener version (if relevant):
  • Extension version:
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Switch calico images to be pulled from quay.io

How to categorize this issue?

/area networking
/kind enhancement

What would you like to be added:
#254 adapted the dockerhub calico images to our internal gcr copies. However calico images are also maintained in quay.io:

% docker pull quay.io/calico/node:v3.25.1
v3.25.1: Pulling from calico/node
6c8ba610e030: Pull complete
de3d34951e10: Pull complete
Digest: sha256:0cd00e83d06b3af8cd712ad2c310be07b240235ad7ca1397e04eb14d20dcc20f
Status: Downloaded newer image for quay.io/calico/node:v3.25.1
quay.io/calico/node:v3.25.1

See https://quay.io/repository/calico/node?tab=tags&tag=latest

Why is this needed:
Instead of maintaining GCR copies of calico images from dockerhub, we can use the quay.io images.

Inconsistent resource naming

What would you like to be added:
In a shoot, the rolebindings have inconsistent names:

$ ks get rolebinding
NAME                                                                AGE
...
garden.sapcloud.io:psp:calico                                       24h
garden.sapcloud.io:psp:calico-kube-controllers                      24h
garden.sapcloud.io:psp:calico-typha                                 24h
...
gardener.cloud:psp:typha-cpa                                        24h
gardener.cloud:psp:typha-cpva                                       24h
...
typha-cpha                                                          24h

It probably makes sense to also check the other resources.

Why is this needed:
Consistency

Enabling SecurityContextDeny Admission Plugin results in Calico failure

Description
Creating a Shoot with Calico and the SecurityContextDeny Admission Plugin results in Calico being unable to set up, probably blocking Tiller traffic.

SecurityContextDeny disallows allowPrivilegeEscalation: true, while in gardener-extensions it is set to true:
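For illustration only (not the exact manifest in gardener-extensions), this is the kind of security context that SecurityContextDeny rejects:

securityContext:
  allowPrivilegeEscalation: true   # denied by the SecurityContextDeny admission plugin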

Expected results
Calico and Tiller setup successful.

Actual result

  • calico-kube-controllers-asdadsgd pod CrashLoopBackOff.
  • Tiller certs setup failure due to timeout connecting kubernetes service.

Steps to reproduce
Provision cluster with Calico and SecurityContextDeny Admission Plugin.

  • AllowPrivilegedContainers: true
  • KubernetesVersion: 1.15.10
  • KubeAPIServer:
    • EnableBasicAuthentication: false
  • Networking:
    • Type: "calico"
    • Nodes: "10.250.0.0/19"

k8s e2e tests are failing because of missing typha

What happened:
While running tests on shoot clusters without typha, we noticed multiple k8s e2e tests are repeatedly failing with error message:

Feb 17 08:40:04.346: Failed to find expected responses:
Tries 34
Command curl -g -q -s 'http://100.64.1.202:8080/dial?request=hostname&protocol=http&host=100.64.0.228&port=8080&tries=1'
retrieved map[]
expected map[netserver-0:{}]

Failing k8s e2e tests:

  • [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  • [sig-cli] Kubectl client Simple pod should handle in-cluster config
  • [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  • [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  • [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
  • [sig-network] DNS should provide DNS for services [Conformance]
  • [sig-network] DNS should provide DNS for the cluster [Conformance]
  • [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  • [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  • [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  • [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  • [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly]

What you expected to happen:
Normally, it shouldn't matter for the k8s e2e tests whether the shoot cluster has typha enabled or disabled.

How to reproduce it (as minimally and precisely as possible):

  1. Create gcp shoot cluster with typha disabled
  2. Set KUBECONFIG to your cluster
  3. Run docker run -ti -e --rm -v $KUBECONFIG:/mye2e/shoot.config golang:1.13 bash
  4. Run within container:
export E2E_EXPORT_PATH=/tmp/export; export KUBECONFIG=/mye2e/shoot.config
go get github.com/gardener/test-infra; cd /go/src/github.com/gardener/test-infra
export GO111MODULE=on
go run -mod=vendor ./integration-tests/e2e -debug=true -k8sVersion=1.17.3 -cloudprovider=gcp -testcase="[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]" -testcase="[sig-cli] Kubectl client Simple pod should handle in-cluster config" -testcase="[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]" -testcase="[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]" -testcase="[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]" -testcase="[sig-network] DNS should provide DNS for services  [Conformance]" -testcase="[sig-network] DNS should provide DNS for the cluster  [Conformance]" -testcase="[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]" -testcase="[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]" -testcase="[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]" -testcase="[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]" -testcase="[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly]"

Environment:

  • Gardener version (if relevant): v1.0.4
  • Extension version: v1.3.0
  • Kubernetes version (use kubectl version): v1.17.3
  • Cloud provider or hardware configuration: gcp

Support for CrossSubnet in Calico configuration

How to categorize this issue?

/kind enhancement
/priority normal
/area networking

What would you like to be added:
Currently, we are using IPIP all the time for GCP and AWS providers. Maybe we should consider using IPIP only for cross subnet network traffic.
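For reference, switching to cross-subnet mode would roughly mean an IPPool like the following (a sketch; the CIDR is a placeholder and CrossSubnet is the relevant setting):

apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 100.96.0.0/11      # placeholder pod CIDR
  ipipMode: CrossSubnet    # IPIP only for traffic that crosses subnet boundaries
  natOutgoing: true
  nodeSelector: all()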

Test Cases (Validate that this possible in the first place) enabling / disabling cross subnet:

  • Old shoot -> new shoot with cross-subnet (are the old routes still there? are we able to reach all new nodes and new pods from old nodes?)
  • Completely new shoots.
  • When switching from cross-subnet back to normal, will calico add the routes back and will everything work as expected, or do we need to do something (e.g., roll the nodes)?
  • Check whether the IPPool IPIP configuration is automatically updated or whether we have to patch it manually (if so, we might aim for a short-term solution with init-containers and raise an issue on calico/node).

Expose the src/destination check configuration:

  • Expose it in the MCM (raise a PR or create an issue).
  • Set it when cross-subnet is enabled in the provider-aws extension (probably in the worker configuration, we need to access the settings from the cluster resource).
  • When it is disabled... are the checks still working?

Some concerns:

  • What happens when CS is disabled?
  • Routes will probably only exist on new nodes.
  • How do we enforce node rolling? This usually happens during the maintenance time window.
  • Maybe we don't need to roll the nodes, but adding a Daemonset that would delete the old routes would be enough.

Why is this needed:
Improve network performance.

calico-typha-vertical-autoscaler cannot patch calico-typha-deploy

  1. Create Shoot.
  2. Check the logs of calico-typha-vertical-autoscaler in kube-system.
$ k logs calico-typha-vertical-autoscaler-75bf9c488b-v7xq2 -n kube-system
I1122 17:03:53.441565       1 autoscaler.go:46] Scaling namespace: kube-system, target: deployment/calico-typha-deploy
I1122 17:03:53.466563       1 autoscaler_server.go:120] setting config = { [calico-typha-deploy]: { requests: { [cpu]: { base=120m max=1 incr=80m nodes_incr=10 }, }, limits: { } }, }
I1122 17:03:53.466616       1 autoscaler_server.go:148] Updating resource for nodes: 2, cores: 8
I1122 17:03:53.466622       1 autoscaler_server.go:162] Setting calico-typha-deploy requests["cpu"] = 200m
E1122 17:03:53.472468       1 autoscaler_server.go:153] Update failure: patch failed: Deployment.apps "calico-typha-deploy" is invalid: spec.template.spec.containers[0].image: Required value
I1122 17:04:23.467090       1 autoscaler_server.go:148] Updating resource for nodes: 2, cores: 8
I1122 17:04:23.467109       1 autoscaler_server.go:162] Setting calico-typha-deploy requests["cpu"] = 200m
E1122 17:04:23.470817       1 autoscaler_server.go:153] Update failure: patch failed: Deployment.apps "calico-typha-deploy" is invalid: spec.template.spec.containers[0].image: Required value
I1122 17:04:53.468969       1 autoscaler_server.go:148] Updating resource for nodes: 2, cores: 8
I1122 17:04:53.468989       1 autoscaler_server.go:162] Setting calico-typha-deploy requests["cpu"] = 200m
E1122 17:04:53.472785       1 autoscaler_server.go:153] Update failure: patch failed: Deployment.apps "calico-typha-deploy" is invalid: spec.template.spec.containers[0].image: Required value
[... the same update / patch-failure cycle repeats every 30 seconds ...]

Ref kubernetes-sigs/cluster-proportional-vertical-autoscaler#24

Unable to create a 1.22 Shoot

How to categorize this issue?

/area networking
/kind bug

What happened:
After #193, the Network reconciliation for a K8s 1.22 Shoot fails with:

E0708 09:44:01.579736       1 runtime.go:78] Observed a panic: &errors.errorString{s:"could not find image \"calico-podtodaemon-flex\" opts runtime version 1.22.11 target version 1.22.11"} (could not find image "calico-podtodaemon-flex" opts runtime version 1.22.11 target version 1.22.11)
goroutine 301 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1a7db20?, 0xc000cbc400})
	/go/src/github.com/gardener/gardener-extension-networking-calico/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x86
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
	/go/src/github.com/gardener/gardener-extension-networking-calico/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:106 +0x6f
panic({0x1a7db20, 0xc000cbc400})
	/usr/local/go/src/runtime/panic.go:838 +0x207
k8s.io/apimachinery/pkg/util/runtime.Must(...)
	/go/src/github.com/gardener/gardener-extension-networking-calico/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:171
github.com/gardener/gardener-extension-networking-calico/pkg/imagevector.findImage({0x1d22a23, 0x17}, {0xc000ac6cf0, 0x7})
	/go/src/github.com/gardener/gardener-extension-networking-calico/pkg/imagevector/image_finders.go:25 +0x1df
github.com/gardener/gardener-extension-networking-calico/pkg/imagevector.CalicoFlexVolumeDriverImage(...)
	/go/src/github.com/gardener/gardener-extension-networking-calico/pkg/imagevector/image_finders.go:51
github.com/gardener/gardener-extension-networking-calico/pkg/charts.ComputeCalicoChartValues(0xc00056a1c0, 0xc000cbc060?, 0x1, {0xc000ac6cf0, 0x7}, 0x1, 0xd5?)
	/go/src/github.com/gardener/gardener-extension-networking-calico/pkg/charts/utils.go:164 +0x3b4
github.com/gardener/gardener-extension-networking-calico/pkg/charts.RenderCalicoChart({0x1fa8c80, 0xc000cbc070}, 0xc000d8d6e8?, 0x1?, 0x1?, {0xc000ac6cf0?, 0x0?}, 0x0?, 0x0?)
	/go/src/github.com/gardener/gardener-extension-networking-calico/pkg/charts/values.go:30 +0x45
github.com/gardener/gardener-extension-networking-calico/pkg/controller.(*actuator).Reconcile(0xc00009d860, {0x1fc24d0, 0xc0003335c0}, 0xc00056a1c0, 0xc000dc8ea0)
	/go/src/github.com/gardener/gardener-extension-networking-calico/pkg/controller/actuator_reconcile.go:131 +0x4a8
github.com/gardener/gardener/extensions/pkg/controller/network.(*reconciler).reconcile(0xc00027fec0, {0x1fc24d0, 0xc0003335c0}, 0xc000e57bf0?, 0x1fc74b0?, {0x1d0f4e4, 0x9})
	/go/src/github.com/gardener/gardener-extension-networking-calico/vendor/github.com/gardener/gardener/extensions/pkg/controller/network/reconciler.go:139 +0x235
github.com/gardener/gardener/extensions/pkg/controller/network.(*reconciler).Reconcile(0xc00027fec0, {0x1fc2578, 0xc000e57bf0}, {{{0xc000b0c700?, 0x1fc2578?}, {0xc000904d16?, 0x1fce080?}}})
	/go/src/github.com/gardener/gardener-extension-networking-calico/vendor/github.com/gardener/gardener/extensions/pkg/controller/network/reconciler.go:125 +0xea8
github.com/gardener/gardener/pkg/controllerutils/reconciler.(*operationAnnotationWrapper).Reconcile(0xc00077ede0, {0x1fc2578, 0xc000e57bf0}, {{{0xc000b0c700?, 0x1bfba00?}, {0xc000904d16?, 0x30?}}})
	/go/src/github.com/gardener/gardener-extension-networking-calico/vendor/github.com/gardener/gardener/pkg/controllerutils/reconciler/operation_annotation_wrapper.go:73 +0x9b5
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0xc0005fc0b0, {0x1fc2578, 0xc000e57b30}, {{{0xc000b0c700?, 0x1bfba00?}, {0xc000904d16?, 0x4041f4?}}})
	/go/src/github.com/gardener/gardener-extension-networking-calico/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:114 +0x27e
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0005fc0b0, {0x1fc24d0, 0xc0007d8c80}, {0x1b179e0?, 0xc00005a720?})
	/go/src/github.com/gardener/gardener-extension-networking-calico/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:311 +0x349
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0005fc0b0, {0x1fc24d0, 0xc0007d8c80})
	/go/src/github.com/gardener/gardener-extension-networking-calico/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266 +0x1d9
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
	/go/src/github.com/gardener/gardener-extension-networking-calico/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227 +0x85
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
	/go/src/github.com/gardener/gardener-extension-networking-calico/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:223 +0x31c

What you expected to happen:
No such error.

How to reproduce it (as minimally and precisely as possible):
See above.

Anything else we need to know?:

Environment:

  • Gardener version (if relevant):
  • Extension version: fdd6f55
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Calico cni plugin renders new node unusable

How to categorize this issue?

If multiple identifiers make sense you can also state the commands multiple times, e.g.

/area networking
/kind bug
/priority normal

What happened:
This ticket originates from a slack discussion

When we provision a new cluster, sporadically the pods on a node are stuck in the ContainerCreating state. There are recurring events saying "Pod sandbox changed, it will be killed and re-created."

In the logs I can see the calico CNI plugin getting installed:

time="2021-02-17T10:14:40Z" level=info msg="Running as a Kubernetes pod" source="install.go:140"
time="2021-02-17T10:14:40Z" level=info msg="Installed /host/opt/cni/bin/bandwidth"
time="2021-02-17T10:14:41Z" level=info msg="Installed /host/opt/cni/bin/calico"
time="2021-02-17T10:14:41Z" level=info msg="Installed /host/opt/cni/bin/calico-ipam"
time="2021-02-17T10:14:41Z" level=info msg="Installed /host/opt/cni/bin/flannel"
time="2021-02-17T10:14:41Z" level=info msg="Installed /host/opt/cni/bin/host-local"
time="2021-02-17T10:14:41Z" level=info msg="Installed /host/opt/cni/bin/install"
time="2021-02-17T10:14:41Z" level=info msg="Installed /host/opt/cni/bin/loopback"
time="2021-02-17T10:14:41Z" level=info msg="Installed /host/opt/cni/bin/portmap"
time="2021-02-17T10:14:41Z" level=info msg="Installed /host/opt/cni/bin/tuning"
time="2021-02-17T10:14:41Z" level=info msg="Wrote Calico CNI binaries to /host/opt/cni/bin\n"
time="2021-02-17T10:14:41Z" level=info msg="CNI plugin version: v3.17.1\n"
time="2021-02-17T10:14:41Z" level=info msg="/host/secondary-bin-dir is not writeable, skipping"
time="2021-02-17T10:14:41Z" level=info msg="Using CNI config template from CNI_NETWORK_CONFIG environment variable." source="install.go:319"
time="2021-02-17T10:14:41Z" level=info msg="Created /host/etc/cni/net.d/10-calico.conflist"
time="2021-02-17T10:14:41Z" level=info msg="Done configuring CNI.  Sleep= false"

But according to the slack discussion something must have removed it later so that the error appears.

What you expected to happen:
A regular node being provisioned where pods can run.

How to reproduce it (as minimally and precisely as possible):
Unfortunately, I do not know how to reproduce this. It is sporadic

Anything else we need to know?:

Environment:

  • Gardener version (if relevant):
  • Extension version:
  • Kubernetes version (use kubectl version): 1.17.14
  • Cloud provider or hardware configuration: Azure
  • Others:

Controller reports unhealthy system components

How to categorize this issue?

/area networking
/ops-productivity
/kind bug

What happened:
We regularly experience the SystemComponentsHealthy condition of a shoot cluster being set to False with the following message:

Network extension (shoot--foo--bar/bar) reports
failing health check: managed resource
"extension-networking-calico-config" in namespace
"shoot--foo--bar" is unhealthy: DaemonSet
"kube-system/calico-node" is unhealthy: unready pods found (70/71), 55
pods updated (DaemonSetUnhealthy).

Most of the time this unhealthy condition is reported because the DaemonSet was just updated by the calico-node-vertical-autoscaler and thus a rolling update with maxUnavailable: 1 is performed. This rollout takes longer than the grace period considered by the controller's health check.

What you expected to happen:
The condition should not be reported as False, i.e. it should be more tolerant when a rolling update happens. Operators are distracted by these false positives and thus "real" issues can be missed easily.

Environment:

  • Gardener version (if relevant): v1.41.2
  • Extension version: v1.19.7

Limit removal for calico-node had no effect because `resources` are not reconciled

How to categorize this issue?

/area networking
/area auto-scaling
/kind bug

What happened:
The PR #166 had no effect in our landscapes, because resources.gardener.cloud/preserve-resources: "true" is set on the ManagedResources. Therefore, resources (and limits) are not touched when the resource is reconciled and the limit is never removed.
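For reference, the relevant annotation on the ManagedResource looks roughly like this:

metadata:
  annotations:
    resources.gardener.cloud/preserve-resources: "true"   # requests/limits are not reconciled while this is set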

What you expected to happen:
#166 having the desired effect and removing the weird CPU limit of 1000 cores

calico-typha-vertical-autoscaler in CrashLoopBackOff when deployments is available under multiple API groups

What happened:
calico-typha-vertical-autoscaler fails when deployments is available under multiple API groups.

How to reproduce it (as minimally and precisely as possible):

  1. Install linkerd. See https://linkerd.io/2/getting-started/

  2. Ensure that there are multiple API groups serving resource deployments:

$ k api-resources
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
# ...
daemonsets                        ds           apps                           true         DaemonSet
deployments                       deploy       apps                           true         Deployment
# ...
daemonsets                        ds           tap.linkerd.io                 true         Tap
deployments                       deploy       tap.linkerd.io                 true         Tap
  3. Ensure that calico-typha-vertical-autoscaler fails with:
$ k logs calico-typha-vertical-autoscaler-7b9954b9df-ptrpq -n kube-system
I0217 20:41:29.699612       1 autoscaler.go:46] Scaling namespace: kube-system, target: deployment/calico-typha-deploy
E0217 20:41:30.799782       1 autoscaler.go:49] unknown target kind: Tap

Anything else we need to know?:

Environment:

  • Gardener version (if relevant):
  • Extension version: v1.3.0
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

calico-node readiness probe can fail for large clusters

What happened:
We see the calico-node readinessProbe failing and the calico-node Pods being unhealthy:

$ k get po -n kube-system -l k8s-app=calico-node
NAME                READY   STATUS    RESTARTS   AGE
calico-node-42rmq   0/1     Running   0          17d
calico-node-5jxnw   0/1     Running   0          28d
calico-node-7rdh5   0/1     Running   0          28d
calico-node-865vs   1/1     Running   0          10d
calico-node-942mc   1/1     Running   0          72m
calico-node-bgkgk   0/1     Running   0          28d
calico-node-cv69k   0/1     Running   0          28d
calico-node-fjdlg   1/1     Running   0          4d6h
calico-node-fsdsp   1/1     Running   0          3d10h
calico-node-kkvsh   0/1     Running   0          28d
calico-node-kwv6l   0/1     Running   0          23d
calico-node-mmhwz   0/1     Running   0          28d
calico-node-mnc4m   0/1     Running   0          28d
calico-node-nplkr   0/1     Running   0          24d
calico-node-p7vnq   1/1     Running   0          10d
calico-node-q6gs5   0/1     Running   0          28d
calico-node-wrlrb   1/1     Running   0          14d
calico-node-x52q4   0/1     Running   0          28d

calico-node Pod events

Events:
  Type     Reason     Age                    From                                                     Message
  ----     ------     ----                   ----                                                     -------
  Warning  Unhealthy  11m (x989 over 3d21h)  kubelet, ip-10-242-14-150.eu-central-1.compute.internal  Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/bird/bird.ctl: connect: no such file or directory
  Warning  Unhealthy  61s (x13358 over 11d)  kubelet, ip-10-242-14-150.eu-central-1.compute.internal  Readiness probe failed: calico/node is not ready: BIRD is not ready: Error executing command: read unix @->/var/run/calico/bird.ctl: i/o timeout

Currently the calico-node resources are set as follows:

  resources:
      limits:
        cpu: 500m
        memory: 700Mi
      requests:
        cpu: 100m
        memory: 100Mi

And we see CPU throttling:

$ k top po -n kube-system -l k8s-app=calico-node
NAME                CPU(cores)   MEMORY(bytes)
calico-node-42rmq   501m         113Mi
calico-node-5jxnw   499m         122Mi
calico-node-7rdh5   499m         123Mi
calico-node-865vs   379m         99Mi
calico-node-942mc   128m         72Mi
calico-node-bgkgk   499m         123Mi
calico-node-cv69k   500m         121Mi
calico-node-fjdlg   64m          81Mi
calico-node-fsdsp   107m         83Mi
calico-node-kkvsh   500m         119Mi
calico-node-kwv6l   501m         132Mi
calico-node-mmhwz   500m         121Mi
calico-node-mnc4m   502m         118Mi
calico-node-nplkr   501m         135Mi
calico-node-p7vnq   215m         96Mi
calico-node-q6gs5   499m         120Mi
calico-node-wrlrb   415m         107Mi
calico-node-x52q4   500m         115Mi

You can see that the not-ready ones are the ones that hit the CPU limit of 500m.
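One possible mitigation, sketched under the assumption that dropping the CPU limit is acceptable (not a decision documented here), would be:

  resources:
      requests:
        cpu: 250m        # assumed higher request
        memory: 100Mi
      limits:
        memory: 700Mi    # no CPU limit, so calico-node is no longer throttled at 500m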

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Gardener version (if relevant): v1.1.5
  • Extension version: v1.3.0
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

"No overlay network" -> "overlay network" migration does not work

How to categorize this issue?

/area networking
/kind bug

What happened:
We discovered the following issue as part of the investigations about the impact of gardener/gardener-extension-provider-aws#621. In the context of this bug:

  1. The Shoot gets created with "no overlay network"
      apiVersion: calico.networking.extensions.gardener.cloud/v1alpha1
      backend: none
      ipv4:
        mode: Never
      kind: NetworkConfig
  2. The Shoot is reconciled and the Network resource is changed to run "with overlay network". The providerConfig field is removed.

We discovered that after such a migration, for multi-zone clusters (let's say zone-1 and zone-2), the Pods running in zone-2, for example, cannot reach the Pods running in zone-1.

In detail:

  1. Make sure that coredns Pods run in zone-1.
coredns-755f85d7d9-6hkxn                              1/1     Running       0            17s    100.64.2.16     ip-10-180-15-62.eu-central-1.compute.internal    <none>           <none>
coredns-755f85d7d9-j2w64                              1/1     Running       0            17s    100.64.2.15     ip-10-180-15-62.eu-central-1.compute.internal    <none>           <none>
  2. Create a Pod in zone-2.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: debug-pod
  name: debug-pod
spec:
  containers:
  - args:
    - sleep
    - "1000000"
    image: gcr.io/google-containers/busybox
    name: debug-pod
    resources: {}
  restartPolicy: Never
  # nodeName: pick a zone-2 Node
  3. Make sure that the debug-pod from zone-2 cannot reach the coredns Pods in zone-1:
$ kubectl exec -it debug-pod -- /bin/sh
/ # ping google.com
ping: bad address 'google.com'

The DNS resolution fails because the debug-pod is not able to reach the coredns Pods for DNS resolution.

After investigation we found that there is an ippools.crd.projectcalico.org resource, and the ipipMode field in this resource is set to Never:

apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  annotations:
    projectcalico.org/metadata: '{"uid":"746e13ec-29a7-43a5-88f4-d558ab1b3135","creationTimestamp":"2022-09-22T11:15:36Z"}'
  creationTimestamp: "2022-09-22T11:15:36Z"
  generation: 1
  name: default-ipv4-ippool
  resourceVersion: "1600"
  uid: 8002bf4f-b2a1-4b41-b700-086323ea373d
spec:
  allowedUses:
  - Workload
  - Tunnel
  blockSize: 26
  cidr: 100.96.0.0/11
  ipipMode: Never
  natOutgoing: true
  nodeSelector: all()
  vxlanMode: Never

ipipMode: Never would mean "no overlay network", but the cluster should already be using the overlay network. We can also verify from the control plane monitoring that the tunl0 device does not have any network traffic.

We had to manually patch the ipipMode field to be Always. After that the Pod to Pod communication between zones worked again. The control plane monitoring started reporting network traffic for the tunl0 device.
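The manual fix amounted to something like the following (a sketch; the resource name is taken from the IPPool shown above):

kubectl patch ippools.crd.projectcalico.org default-ipv4-ippool --type merge -p '{"spec":{"ipipMode":"Always"}}'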

What you expected to happen:
The "no overlay network" -> "overlay network" migration to work without issues or a validation to be present to forbid such update or all known issues to be documented.

How to reproduce it (as minimally and precisely as possible):
See above.

Anything else we need to know?:

Environment:

  • Gardener version (if relevant):
  • Extension version: v1.26.0
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

max-concurrent-reconciles flag is not honored

How to categorize this issue?

/area usability
/kind bug
/priority normal
/area networking

What happened:
Currently, the calico controller is always started with only one concurrent worker, independently of the provided config or the default worker size of 5.

What you expected to happen:
The calico controller to honor the --max-concurrent-reconciles flag.

How to reproduce it (as minimally and precisely as possible):

$ LEADER_ELECTION_NAMESPACE=garden GO111MODULE=on go run -mod=vendor ./cmd/gardener-extension-networking-calico --ignore-operation-annotation=true --leader-election=true --max-concurrent-reconciles=25 --config-file=./example/00-componentconfig.yaml
...
{"level":"info","ts":"2020-07-24T10:15:56.159+0300","logger":"controller-runtime.controller","msg":"Starting workers","controller":"network_controller","worker count":1}

Anything else we need to know?:

Environment:

  • Gardener version (if relevant):
  • Extension version: v1.8.0
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.


Detected dependencies

dockerfile
Dockerfile
  • golang 1.22.5
github-actions
.github/workflows/vendor_gardener.yaml
  • actions/checkout v4
  • actions/setup-go v5
gomod
go.mod
  • go 1.22.2
  • github.com/ahmetb/gen-crd-api-reference-docs v0.3.0
  • github.com/gardener/gardener v1.99.1
  • github.com/go-logr/logr v1.4.2
  • github.com/onsi/ginkgo/v2 v2.19.0
  • github.com/onsi/gomega v1.33.1
  • github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.74.0
  • github.com/spf13/cobra v1.8.1
  • github.com/spf13/pflag v1.0.5
  • go.uber.org/mock v0.4.0
  • golang.org/x/tools v0.23.0
  • helm.sh/helm/v3 v3.14.4
  • k8s.io/api v0.29.6
  • k8s.io/apimachinery v0.29.6
  • k8s.io/client-go v0.29.6
  • k8s.io/code-generator v0.29.6
  • k8s.io/component-base v0.29.6
  • k8s.io/utils v0.0.0-20240711033017-18e509b52bc8@18e509b52bc8
  • sigs.k8s.io/controller-runtime v0.17.5
helm-values
charts/gardener-extension-admission-calico/values.yaml
charts/gardener-extension-networking-calico/values.yaml
regex
imagevector/images.yaml
  • quay.io/calico/node v3.27.4
  • quay.io/calico/cni v3.27.4
  • quay.io/calico/typha v3.27.4
  • quay.io/calico/kube-controllers v3.27.4
  • registry.k8s.io/cpa/cluster-proportional-autoscaler v1.8.9
  • registry.k8s.io/cpa/cpvpa v0.8.4

Network policies may block traffic

What happened:
End-users might deploy network policies that may block traffic for calico components:

$ ks get pod
NAME                                                  READY   STATUS             RESTARTS   AGE
...
calico-typha-vertical-autoscaler-7b9954b9df-8v8kd     0/1     CrashLoopBackOff   8960       35d
...

What you expected to happen:
Calico components work as expected even if a user deploys custom network policies.

How to reproduce it (as minimally and precisely as possible):
Deploy a network policy that e.g. forbids outgoing traffic for the calico-vpa.
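For example, a policy like the following (an illustrative sketch, not taken from an actual report) would also cut off egress for the calico components in kube-system:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: kube-system
spec:
  podSelector: {}     # matches every pod in kube-system, including the calico components
  policyTypes:
  - Egress            # no egress rules are listed, so all outgoing traffic is denied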

Anything else we need to know?:
Related to gardener/gardener#565.

  • Gardener version (if relevant): v1.1.5
  • Kubernetes version (use kubectl version): all
  • Cloud provider or hardware configuration: all

Relationship between gardener-extension-networking-calico & Shoot Definition

Hi,
When we create a shoot in Gardener, it seems that we can define

networking:
  type: calico
  nodes: 10.250.0.0/16

or even

networking:
  type: calico
  nodes: 10.44.0.0/20
  pods: 172.16.0.0/11
  services: 172.17.0.0/13

I also noticed there is this Gardener calico network extension, but it seems that the configuration in this extension is different if we compare it with its documentation, e.g. https://github.com/gardener/gardener-extension-networking-calico/blob/master/example/20-network.yaml#L37
So, could you please share with us whether the definition in the Shoot is actually used by this extension, or whether these are totally different things?
Or is there any document that explains the networking definition in a Gardener Shoot? It seems that I can't find any details on the Gardener website.
Meanwhile, for this gardener-extension-networking-calico, the definition in the README.md looks like

spec:
  type: calico
  clusterCIDR: 192.168.0.0/24
  serviceCIDR: 10.96.0.0/24

But the example mentioned above (https://github.com/gardener/gardener-extension-networking-calico/blob/master/example/20-network.yaml#L37) looks like

spec:
  type: calico
  podCIDR: 10.244.0.0/16
  serviceCIDR: 10.96.0.0/24

Is there some document that explains the difference between clusterCIDR, serviceCIDR, and podCIDR, and their usage?
Thanks!
BR,
Yongyuan

VPA `gardener-extension-networking-calico-vpa` has limit scaling active, triggers known VPA OOMkill-loop bug

How to categorize this issue?
/area networking
/area auto-scaling
/kind bug

What happened:
The VPA gardener-extension-networking-calico-vpa does not have controlledValues specified. It is thus acting per default, scaling both requests and limits. This occasionally results in excessive memory limit downscaling. In turn that triggers a known VPA bug, where VPA fails to respond to quick OOMkills, and the container gets stuck in an OOMkill-restart-OOMkill loop indefinitely.

On the gardener side, the cause should be considered the combination of a memory limit plus scaling it; that combination is known to cause the aforementioned situation and should be avoided. The default policy, subject to component owner discretion, is: components which are critical to gardener's operation have no memory limit. For a non-critical component, a memory limit may be beneficial (or not), but it should not be scaled.
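A sketch of the fix direction, assuming the standard VPA resource policy fields (not a change that exists in this repository yet):

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: gardener-extension-networking-calico-vpa
spec:
  # targetRef omitted for brevity
  resourcePolicy:
    containerPolicies:
    - containerName: '*'
      controlledValues: RequestsOnly   # scale requests only, leave limits untouched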

Leverage VPA for calico components

How to categorize this issue?

/kind enhancement
/priority normal
/area networking

What would you like to be added:
With gardener/gardener#2478 Gardener will add support for the VPA for shoot clusters.
The calico extension is relying on other components for horizontal and vertical autoscaling, ref https://github.com/gardener/gardener-extension-networking-calico/blob/master/charts/images.yaml#L22-L28.
Probably it would make sense to only use them if the shoot does not enable VPA, and otherwise use the VPA directly instead of deploying these additional components.

Why is this needed:
Scalability.

test-e2e-provider-local.sh: Use skaffold instead of `make docker-images` and `kind load docker-image`

How to categorize this issue?

/area networking
/kind enhancement

What would you like to be added:

make docker-images
docker tag europe-docker.pkg.dev/gardener-project/public/gardener/extensions/networking-calico:latest networking-calico-local:$version
kind load docker-image networking-calico-local:$version --name gardener-local
mkdir -p $repo_root/tmp
cp -f $repo_root/example/controller-registration.yaml $repo_root/tmp/controller-registration.yaml
yq -i e "(select (.helm.values.image) | .helm.values.image.tag) |= \"$version\"" $repo_root/tmp/controller-registration.yaml
yq -i e '(select (.helm.values.image) | .helm.values.image.repository) |= "docker.io/library/networking-calico-local"' $repo_root/tmp/controller-registration.yaml
kubectl apply --server-side --force-conflicts -f "$repo_root/tmp/controller-registration.yaml"
to be changed to something like https://github.com/gardener/gardener-extension-registry-cache/blob/c924400d255feba1266d5f943a2e64ec625ff210/hack/ci-e2e-kind.sh#L36-L40.

make docker-images pulls the golang image from docker.io as defined in the Dockerfile. With skaffold, this is likely avoided.

Why is this needed:
To prevent dockerhub rate limit issues for e2e executions in prow.

Stop using github.com/pkg/errors

How to categorize this issue?

/area networking
/kind enhancement
/priority 3

What would you like to be added:
Similar to gardener/gardener#4280 we should be using Go native error wrapping (available since Go 1.13).

$ grep -r '"github.com/pkg/errors"' | grep -v vendor/ | cut -f 1 -d ':' | cut -d '/' -f 1-3 | sort | uniq -c | sort
      1 pkg/controller/actuator.go
      1 pkg/controller/actuator_reconcile.go
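A minimal sketch of what the migration looks like (hypothetical code, not the actual contents of those files):

package controller

import "fmt"

// Before: errors.Wrapf(err, "could not reconcile network %q", name) from github.com/pkg/errors.
// After: wrap with the standard library instead.
func wrapReconcileError(err error, name string) error {
	return fmt.Errorf("could not reconcile network %q: %w", name, err)
}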

Why is this needed:
Getting rid of vendors in favor of using stdlib is always nice. Others seem to do this as well - kubernetes/kubernetes#103043 and containerd/console#54.

Please allow to enable auto heps

With recent calico versions you can make calico automatically create HostEndpoint resources for all K8s nodes.
See e.g. https://www.tigera.io/blog/securing-kubernetes-nodes-with-calico-automatic-host-endpoints/ ...
Conveniently, the automatically created host endpoint objects directly come along with a decent default calico Profile, and have the same labels as the node.

This is more or less a prerequisite to harden K8s nodes, because you need HostEndpoints to match hosts with selectors in GlobalNetworkPolicy and NetworkPolicy objects.

Of course, one could maintain the HostEndpoint objects manually or with an own control loop, but that's tedious and unnecessary, because calico itself can now do it for you through this auto-host-endpoints feature.
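For reference, upstream Calico enables this via the kube-controllers configuration, roughly like this (command taken from the Calico documentation; treat it as an assumption for how the extension would expose it):

calicoctl patch kubecontrollersconfiguration default --patch='{"spec": {"controllers": {"node": {"hostEndpoint": {"autoCreate": "Enabled"}}}}}'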

Remove bidirectional mount on /sys/fs

How to categorize this issue?

/area networking
/kind enhancement
/priority 1

What would you like to be added:

Please remove the bidirectional mount of /sys/fs in the calico-node container as this is currently not required.
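For context, the mount in question looks roughly like this in the calico-node container spec (illustrative; the exact manifest may differ):

volumeMounts:
- name: sys-fs
  mountPath: /sys/fs/
  mountPropagation: Bidirectional   # the propagation mode this issue asks to remove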

Why is this needed:

With gardenlinux/gardenlinux#275 we are investigating an issue where the /sys/fs/cgroup mount disappears from the node, rendering the node unusable. As the kubelet still reports a healthy node, pods are still scheduled on that node but never start. This situation can go on for days.

The calico-node container is currently the only container we know of that has been started and that could cause the issue via the bidirectional mount. That said, the issue might not be caused by a program in the calico-node container, but we will only know this after this restriction has been applied.

Please include calico-apiserver

Please include calico-apiserver (https://projectcalico.docs.tigera.io/maintenance/install-apiserver) into the shoot deployment (calico-apiserver went GA with calico v3.20.0). If you don't want to make this the default, it would be nice to have it as an optional component, configurable via the shoot spec.

Not sure, maybe this can even go into the seed ...

Background: without the calico-apiserver, the projectcalico.org/v3 API group is not available in the shoots, and people have to use the crd.projectcalico.org/v1 variants of the calico resources. Which is unsupported and dangerous (see e.g. projectcalico/calico#6412). And I can confirm that really really bad things can happen when using the low-level ones (i.e. API group crd.projectcalico.org/v1) ...
