
tigera / operator


Kubernetes operator for installing Calico and Calico Enterprise

License: Apache License 2.0

Shell 0.35% Go 98.42% Makefile 0.71% Smarty 0.32% Python 0.12% Dockerfile 0.08%

operator's Introduction

Calico Operator


This repository contains a Kubernetes operator which manages the lifecycle of a Calico or Calico Enterprise installation on Kubernetes or OpenShift. Its goal is to make installation, upgrades, and ongoing lifecycle management of Calico and Calico Enterprise as simple and reliable as possible.

This operator is built using the operator-sdk, so you should be familiar with how that works before getting started.

Getting Started Running Calico

There are many avenues to get started running Calico depending on your situation.

Get Started Developing

See the developer guidelines for more information on designing, coding, and testing changes.

operator's People

Contributors

asincu, bcreane, behnam-shobiri, brian-mcm, caseydavenport, coutinhop, doublek, fasaxc, freecaykes, gantony, hjiawei, jaderhs, josh-tigera, lmm, marvin-tigera, matthewdupre, mazdakn, mgleung, nelljerram, ozdanborne, pasanw, rene-dekker, song-jiang, sridhartigera, stevegaossou, suraiya-hameed, tmjd, vara2504, vberezny, xiumozhan


operator's Issues

Multi-architecture Container Image

The container image only supports the amd64 architecture. I am unable to run it on my Raspberry Pi or AWS Graviton instances.

Expected Behavior

Ideally it would be great to have a multi-architecture "manifest list" image which allows seamless running on multiple CPU architectures (including arm64): https://docs.docker.com/registry/spec/manifest-v2-2/

Possible Solution

It looks like binaries for multiple architectures are getting built but only a single image. It might be as easy as using Docker Buildx.
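A minimal sketch of what that could look like, assuming a buildx builder is already configured (the tag and platform list are illustrative):

docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag tigera/operator:latest \
  --push .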

Provide a way to disable typha

As discussed in the Calico documentation, typha is only recommended for large installations: https://github.com/projectcalico/typha#when-should-i-use-typha. My use case has much smaller installations with far fewer than 50 nodes.

Expected Behavior

Have a way to disable the typha component in the InstallationSpec.

Possible Solution

Add an API field that allows enable/disable for each component.
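Purely as a hypothetical sketch of what such a field could look like on the Installation resource (componentToggles does not exist in the current API; the name and values are made up for illustration):

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  componentToggles:   # hypothetical field
    typha: Disabled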

Context

I am attempting to install calico with typha disabled using the operator.

Prometheus metrics for Typha - Currently hard coded

After setting up the operator and configuring Felix to supply Prometheus metrics via the FelixConfiguration CR (which is all working fine), I went hunting for a way to enable the same for Typha. This can be enabled if you go the non-operator route and deploy Calico with plain manifests, but I'm trying to stick with operators where possible. Currently this looks non-configurable via the operator, as the environment variables supplied to Typha are all hard-coded (unless I've completely missed something), as per the link below.

func (c *typhaComponent) typhaEnvVars() []v1.EnvVar {

As Typha uses host networking, this obviously raises possible concerns on the security / port-already-in-use front. While I could do a custom build of the operator with these hard-coded to enabled and set to the appropriate port for my environment, I wanted to ask whether there are any plans to add a CRD for configuring Typha in the same manner as Felix?
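For reference, this is roughly what the non-operator manifests set on the calico-typha container to enable metrics (the port is just the conventional value; verify against your manifests):

env:
  - name: TYPHA_PROMETHEUSMETRICSENABLED
    value: "true"
  - name: TYPHA_PROMETHEUSMETRICSPORT
    value: "9093"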

Installation of Operator with BPF Dataplane Crashes Due to API Server Resolution Failure

Expected Behavior

The Tigera operator, installed on a fresh EKS cluster with AWS VPC CNI networking, the BPF data plane enabled, and kube-proxy disabled, is able to connect to the Kubernetes API server using a domain name and run successfully.

Current Behavior

The operator gets stuck in a crash loop due to DNS lookup timeouts. The DNS lookup is attempting to use CoreDNS, which is not yet running as the AWS VPC CNI pod is not running. The AWS VPC CNI pod is not running because it cannot connect to the API server without kube-proxy or calico running. The operator should be able to connect to the API server using a domain name without CoreDNS as per the docs - https://projectcalico.docs.tigera.io/maintenance/ebpf/enabling-bpf#configure-calico-to-talk-directly-to-the-api-server

Steps to Reproduce (for bugs)

  1. Create an EKS cluster on Kubernetes 1.21 with the default AWS VPC CNI networking enabled. Do not scale out any nodes.

  2. Apply the Tigera Operator to the cluster, including CRDs so the Installation resource will get created successfully:

helm template calico projectcalico/tigera-operator --include-crds --values=helm-values.yml > all.yml
kubectl apply -f all.yml

My helm-values.yml file looks like:

installation:
  kubernetesProvider: "EKS"
  calicoNetwork:
    linuxDataplane: "BPF"
    hostPorts: null
  3. Disable kube-proxy as per the instructions in https://projectcalico.docs.tigera.io/maintenance/ebpf/enabling-bpf. Set up the kubernetes-services-endpoint configmap with the EKS API server endpoint (a sketch of the ConfigMap follows this list). The endpoint can be pulled using kubectl get configmap -n kube-system kube-proxy -o yaml | grep server.

  4. Scale up one node for the operator to run on.
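For reference, a sketch of the kubernetes-services-endpoint ConfigMap, assuming the layout described in the Calico eBPF docs (the host placeholder comes from the kube-proxy ConfigMap lookup above):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubernetes-services-endpoint
  namespace: tigera-operator
data:
  KUBERNETES_SERVICE_HOST: "<EKS API server hostname>"
  KUBERNETES_SERVICE_PORT: "443"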

Context

When the operator comes up, it attempts to connect to the EKS API server endpoint, which needs to use DNS. The operator is running with the ClusterFirstWithHostNet DNS policy, so it tries to resolve this using cluster DNS. Because cluster DNS isn't running, this fails, and the operator degrades to a crash loop.

Your Environment

EKS using Kubernetes 1.21 with AWS VPC CNI networking

tigera installation from digest fails

When installing Calico from the private registry by digest as described here, the Tigera Installation object claims that calico/windows-upgrade must be present in the imageSet.

Expected Behavior

The windows-related image should not be required when installing Calico on Linux.

Current Behavior

When calico/windows-upgrade is not provided, the Tigera installation degrades.

Workaround

Add a calico/windows-upgrade entry with a fake digest to my imageSet:

  • image: "calico/windows-upgrade"
    digest: "sha256:27878b4f6665bf84549254d688d3610674e613813cb864f0bf503654cf4fb926"

Steps to Reproduce (for bugs)

follow the instructions from https://docs.projectcalico.org/maintenance/image-options/imageset

Context

I'd like to install calico from a private registry.

Your Environment

kubernetes 1.21
calico 3.21

How to deploy specific calico version on openshift 4.x

It would be cool to be able to deploy a specific Calico version, or just update the minor Calico version.

Expected Behavior

I'd expect something like: set a variable, and the operator deploys Calico at that version.

Current Behavior

Nothing like that.

Possible Solution

Add environment variable for that?

Your Environment

Openshift 4.x

Fluentd resources not set

Hi,
I am deploying the tigera operator, but I see that resources can be set only for a predefined list of components in the Installation CRD (Node, Typha, KubeControllers).

Most of the other components don't have any resources set. My issue is especially with the fluentd-node daemonset deployed in tigera-fluentd, which can be quite hungry. From

return ElasticsearchContainerDecorateENVVars(corev1.Container{
it seems that resources are not declared, so I would like to have a way to override them.

The current workaround is to change the daemonset resources and annotate the daemonset so the change isn't reverted by the operator, but it is not very clean.
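A rough sketch of that workaround, assuming the ignore annotation below is the one the operator honors (treat the annotation name as an assumption and verify it against your operator version; the resource values are illustrative):

kubectl -n tigera-fluentd annotate daemonset fluentd-node \
  unsupported.operator.tigera.io/ignore="true"
kubectl -n tigera-fluentd set resources daemonset fluentd-node \
  --requests=cpu=100m,memory=256Mi --limits=cpu=500m,memory=1Gi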

Thanks

Update the documentation to indicate the Use Cases supported by Calico Operator

Expected Behavior

What use cases are supported by the Calico Operator, and how is that achieved?
Some details/steps for the same would be very helpful - like version upgrades, changing to Calico Enterprise, etc.

Current Behavior

Possible Solution

Steps to Reproduce (for bugs)

Context

Your Environment

  • Operating System and version:
  • Link to your project (optional):

The operator namespace is hardcoded

Expected Behavior

The operator should be able to be installed into any namespace.

Current Behavior

When installing the operator into the kube-system namespace the operator errors because it can't find the tigera-operator namespace.

{"level":"error","ts":1628066134.605089,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Reconciler error","name":"default","namespace":"","error":"namespaces "tigera-operator" not found","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:267\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:198\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99"}

Possible Solution

Remove the hardcoded namespace.
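One common pattern (a sketch of a possible approach, not what the operator does today) is to inject the pod's own namespace via the downward API in the Deployment and read that instead of a constant; the variable name here is made up:

env:
  - name: OPERATOR_NAMESPACE   # hypothetical variable name
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace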

Steps to Reproduce (for bugs)

Change the namespace in the deployment manifests.

Context

I'd like more control over how the operator runs.

Your Environment

AWS EKS v1.21, v1.20, v1.19 & v1.18

v3.21 operator failing to set up the pod network on a k3d cluster

When using the v3.21 operator to install calico on a K3D cluster, the pod network is failing to start. This bug is a result of investigations done with the k3d team at rancher. k3d-io/k3d#898

Expected Behavior

The pod network should be up and running successfully in all namespaces. All pods are in the running state.

Current Behavior

The calico-node pods are able to run without issue, but other containers are stuck in the ContainerCreating state (coredns, metrics-server, calico-kube-controllers).

$ kubectl get pods -A
NAMESPACE         NAME                                       READY   STATUS              RESTARTS   AGE     IP           NODE                             NOMINATED NODE   READINESS GATES
tigera-operator   tigera-operator-7dc6bc5777-jqgj6           1/1     Running             0          6m36s   172.29.0.3   k3d-test-cluster-3-21-server-0   <none>           <none>
calico-system     calico-typha-786fc79b-hm5sr                1/1     Running             0          6m17s   172.29.0.3   k3d-test-cluster-3-21-server-0   <none>           <none>
calico-system     calico-kube-controllers-78cc777977-trgbz   0/1     ContainerCreating   0          6m17s   <none>       k3d-test-cluster-3-21-server-0   <none>           <none>
kube-system       metrics-server-86cbb8457f-59s6k            0/1     ContainerCreating   0          6m36s   <none>       k3d-test-cluster-3-21-server-0   <none>           <none>
kube-system       local-path-provisioner-5ff76fc89d-w7bf6    0/1     ContainerCreating   0          6m36s   <none>       k3d-test-cluster-3-21-server-0   <none>           <none>
kube-system       coredns-7448499f4d-7rwx9                   0/1     ContainerCreating   0          6m36s   <none>       k3d-test-cluster-3-21-server-0   <none>           <none>
calico-system     calico-node-99jc6                          1/1     Running             0          6m17s   172.29.0.3   k3d-test-cluster-3-21-server-0   <none>           <none>

When describing the stuck pods, I see this in its events:

$ kubectl describe pod/coredns-7448499f4d-7rwx9 -n calico-system

  Warning  FailedCreatePodSandBox  6s                     kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "51947047f29820ea93c486fe4c18f5a31e9c9c9418e859e320b8d3b2c43bd383": netplugin failed with no error message: fork/exec /opt/cni/bin/calico: no such file or directory

Based on the error above, I went to check /opt/cni/bin/calico to see if the calico binary existed in the container, which it does:

glen@glen-tigera: $ docker exec -ti k3d-test-cluster-3-21-server-0 /bin/sh
/ # ls
bin  dev  etc  k3d  lib  opt  output  proc  run  sbin  sys  tmp  usr  var
/ # cd /opt/cni/bin/
/opt/cni/bin # ls -a
.  ..  bandwidth  calico  calico-ipam  flannel  host-local  install  loopback  portmap  tags.txt  tuning

CNI Config Yaml:

kubectl get cm cni-config -n calico-system -o yaml
apiVersion: v1
data:
  config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "datastore_type": "kubernetes",
          "mtu": 0,
          "nodename_file_optional": false,
          "log_level": "Info",
          "log_file_path": "/var/log/calico/cni/cni.log",
          "ipam": { "type": "calico-ipam", "assign_ipv4" : "true", "assign_ipv6" : "false"},
          "container_settings": {
              "allow_ip_forwarding": true
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "k8s_api_root":"https://10.43.0.1:443",
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "bandwidth",
          "capabilities": {"bandwidth": true}
        },
        {"type": "portmap", "snat": true, "capabilities": {"portMappings": true}}
      ]
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2021-12-22T16:00:02Z"
  name: cni-config
  namespace: calico-system
  ownerReferences:
  - apiVersion: operator.tigera.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: Installation
    name: default
    uid: 90769081-24a2-440d-9666-a9c3b94ebd34
  resourceVersion: "635"
  uid: 609157c5-c43b-42d9-bb5b-7053a8673a49

Possible Solution

This is only occurring in v3.21 of the operator. I tested prior versions of the operator and they set up the pod network successfully. See k3d-calico-operator-install-findings.txt. The issue should lie in the changes between v3.20 and v3.21 of the operator.

Steps to Reproduce (for bugs)

  1. Install K3D (v5.2.1 was used at the time of testing) - https://k3d.io/v5.2.1/
  2. Run the list of commands in order below
k3d cluster create "test-cluster-3-21" --k3s-arg "--flannel-backend=none@server:*" --k3s-arg "--no-deploy=traefik@server:*"
kubectl apply -f https://docs.projectcalico.org/archive/v3.21/manifests/tigera-operator.yaml
curl -L https://docs.projectcalico.org/archive/v3.21/manifests/custom-resources.yaml > k3d-custom-res.yaml
yq e '.spec.calicoNetwork.containerIPForwarding="Enabled"' -i k3d-custom-res.yaml
kubectl apply -f k3d-custom-res.yaml

This should try to install calico through the operator on your k3d cluster with IP forwarding enabled.

  3. Get all pods: kubectl get pods -A
  4. Notice that pods are stuck in the ContainerCreating state, so the pod network has failed to start.

Context

Delivery engineering wants to be able to support k3d provisioning and install in Banzai to expand our E2E coverage of supported provisioners. This would help our engineering team test their features on a local k3d cluster, as it is much faster to set up.

Your Environment

OS: GNU/Linux
Kernel Version: 20.04.2-Ubuntu SMP
Kernel Release: 5.11.0-40-generic
Processor/HW Platform/Machine Architecture: x86_64

Priority class calico-priority isn't high enough for the calico-node daemonset

Expected Behavior

The calico-node daemonset should always be scheduled onto a node.

Current Behavior

The calico-node pods aren't always scheduled due to the calico-priority priority class not being higher than system-cluster-critical.

Possible Solution

Use system-node-critical priority class for the calico-node daemonset. Or even better allow the priority classes to be configured so we can use system-cluster-critical for the other Calico components.

Steps to Reproduce (for bugs)

Deploy resources onto the cluster with system-cluster-critical priority to saturate a node. This happens when the resources are in place before the calico-node pod, but it should also work to deschedule the pod.
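A minimal sketch of a saturating workload, assuming a pause image and requests sized to fill your nodes (the name, replica count, and sizes are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: saturate
  namespace: kube-system   # system-* priority classes are typically restricted to kube-system
spec:
  replicas: 10
  selector:
    matchLabels:
      app: saturate
  template:
    metadata:
      labels:
        app: saturate
    spec:
      priorityClassName: system-cluster-critical
      containers:
        - name: pause
          image: k8s.gcr.io/pause:3.5
          resources:
            requests:
              cpu: "1"
              memory: 1Gi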

Context

This is currently blocking our adoption of the Tigera operator.

Your Environment

AWS EKS v1.21, v1.20, v1.19 & v1.18

Chart version 3.20.2 and 3.20.1 missing

We have a chart that pulls the operator chart as dependency.

apiVersion: v2
appVersion: "1.0"
description: A Helm chart for tigera-operator
name: tigera-operator
version: "0.0.1"
dependencies:
  - name: tigera-operator
    version: "v3.20.2"
    repository: https://docs.projectcalico.org/charts

It used to work fine, but now when attempting to deploy it we run into:

โฏ helm3 repo add project-calico https://docs.projectcalico.org/charts
โฏ helm3 dependency build
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "localstack-repo" chart repository
...Successfully got an update from the "hashicorp" chart repository
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "strimzi" chart repository
...Successfully got an update from the "project-calico" chart repository
...Successfully got an update from the "gympass" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Save error occurred:  could not find : no matching version
Error: could not find : no matching version

Even the docs mention version 3.20.2 as the latest patch release for 3.20:

helm show values projectcalico/tigera-operator --version v3.20.2

Expected Behavior

That we have access to all versions on Helm.

Current Behavior

We only have access up to 3.20.0:

โฏ helm3 search repo project-calico -l
NAME                          	CHART VERSION	APP VERSION	DESCRIPTION                            
project-calico/tigera-operator	v3.20.0      	v3.20.0    	Installs the Tigera operator for Calico
project-calico/tigera-operator	v3.19.2      	v3.19.2    	Installs the Tigera operator for Calico
project-calico/tigera-operator	v3.19.1      	v3.19.1    	Installs the Tigera operator for Calico
project-calico/tigera-operator	v3.19.0      	v3.19.0    	Installs the Tigera operator for Calico
project-calico/tigera-operator	v3.18.4      	v3.18.4    	Installs the Tigera operator for Calico
project-calico/tigera-operator	v3.18.3      	v3.18.3    	Installs the Tigera operator for Calico
project-calico/tigera-operator	v3.18.2      	v3.18.2    	Installs the Tigera operator for Calico
project-calico/tigera-operator	v3.18.1      	v3.18.1    	Installs the Tigera operator for Calico
project-calico/tigera-operator	v3.18.0      	v3.18.0    	Installs the Tigera operator for Calico

Context

We can't move forward with a new deployment changing configuration in our production environment. For a quick fix we plan on downgrading to 3.20, which is not ideal.

For completeness' sake, here's the output prior to running helm repo update, showing the 3.20.2 version:

โฏ helm search project-calico -l
NAME                          	CHART VERSION	APP VERSION	DESCRIPTION                            
project-calico/tigera-operator	v3.20.2      	v3.20.2    	Installs the Tigera operator for Calico

Your Environment

  • Operating System and version: AWS EKS 1.21
  • Link to your project (optional):

Ability to tune the number of typha replicas

Expected Behavior

I want to provision a reasonable number of small test EKS clusters that don't necessarily require a large number of nodes but do need to be fully functional, e.g. calico, cluster-autoscaler, etc. The typha deployment seems to be set to 3 replicas, which means each cluster has a minimum of 3 nodes regardless of how utilised they are.

Current Behavior

The typha deployment is set to 3 replicas, which means as soon as it's deployed the autoscaler kicks in and increases the nodes to 3 to satisfy the deployment, and it will never scale down again. I can edit the deployment once deployed, but the change is overwritten again within a few minutes. Do I need 3 for consensus reasons, or can I get away with fewer?

Possible Solution

I've made use of the registry and imagePullSecrets fields on the Installation resource, as my EKS clusters are entirely private (thanks for those!). Would adding another field here be a potential solution? A hypothetical sketch follows.
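Purely as a hypothetical sketch (typhaReplicas does not exist in the current API; the field name is made up for illustration, and the registry and secret names are placeholders):

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  registry: my-private-registry.example.com
  imagePullSecrets:
    - name: my-pull-secret
  typhaReplicas: 1   # hypothetical field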

Your Environment

  • Operating System and version: Amazon EKS

Fixed CNI bin dir prevents nodes from working on Kubernetes deployment

It's not possible to configure the path to the CNI bin directory for the Calico deployment. Calico puts the files in /opt/cni/bin for Kubernetes deployments.
If the cluster does not use this path, the files end up in the wrong location and as a result the nodes never become ready.

Expected Behavior

The CNI bin directory can be specified on the operator deployment or Installation spec.

Current Behavior

The CNI bin directory can only ever be /opt/cni/bin; if a cluster uses anything else, it will fail.

Possible Solution

Proper fix: update the operator to expose configuration options to set the CNI bin directory path (and the config directory wouldn't be a bad idea either, tbh).

Workaround

a. Manually copy the files to the correct location on each node in the cluster.
b. Patch the Calico Node DaemonSet after the operator has deployed it to change the hostPath (a sketch follows).
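A rough sketch of workaround (b), assuming the hostPath volume in the calico-node DaemonSet is named cni-bin-dir and that /usr/libexec/cni is the desired path (verify both on your cluster; a strategic merge patch merges volumes by name):

kubectl -n calico-system patch daemonset calico-node --type=strategic \
  -p '{"spec":{"template":{"spec":{"volumes":[{"name":"cni-bin-dir","hostPath":{"path":"/usr/libexec/cni"}}]}}}}'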

Steps to Reproduce (for bugs)

  1. Have a cluster where Kubelet and CRI-O use a CNI bin dir other than /opt/cni/bin
  2. Deploy Calico on said Kubernetes cluster
  3. See the Calico Node instances place 3 files in /opt/cni/bin on the nodes
  4. See the Kubelet CNI network configuration failure error, and the nodes never becoming ready as the required binaries aren't placed where they are needed.

Your Environment

  • Operating System and version: Linux LTS 5.10.36 (Arch Linux)
  • Kubernetes Version: 1.21.0
  • CRI-O Version: 1.21.0
  • Calico/Operator Version: 3.19.0

Typha causing APIServer crashes on larger Openshift clusters

On larger Openshift clusters, restarts of the Typha pods cause really large memory spikes that result in the API server getting out-of-memory events and affect downstream components. In my cluster I currently have:
12374 pods
225 nodes
3314 services

kubectl get pods --all-namespaces -o wide --no-headers | wc -l
13274

kubectl get nodes --no-headers | wc -l
225

kubectl get services --all-namespaces --no-headers | wc -l
3314

Note we have a total of 3 typha pods in the cluster

kubectl get pods -n calico-system -l k8s-app=calico-typha
NAME                           READY   STATUS    RESTARTS   AGE
calico-typha-b66bc54df-44zk7   1/1     Running   0          28m
calico-typha-b66bc54df-shh4f   1/1     Running   0          28m
calico-typha-b66bc54df-vl7jp   1/1     Running   0          28m

Posting the associated graphs that show the behavior when restarting typha (I can replicate by simply doing kubectl delete pods -n calico-system -l k8s-app=calico-typha and waiting some time):

[Screenshot: kube-apiserver memory graphs, 2021-08-03 3:43 PM]

At the time of the restart, kube-apiserver memory doubles (goes up by 20+ GB).

The maximum size of the backing database is 4GB. It seems odd that starting typha would result in over 20GB of utilization, when typha is meant to be a single "pool" of connections to the datastore for all the calico-node pods.

Expected Behavior

Memory usage of the openshift apiserver on a restart of typha should not double. Typha should be able to download the data it needs in a controlled fashion that doesn't cause the APIServer to drastically change from a memory perspective.

Current Behavior

At startup of typha pods memory usage drastically increases for the kube-apiserver hosting the cluster (doubles).

Possible Solution

Potentially try and rate limit data download for larger clusters? Maybe look at watch optimizations?

Steps to Reproduce (for bugs)

  1. Get a large cluster. The scaling characteristics listed above are within Kubernetes maximums and can be used for reference.
  2. Restart typha pods
  3. Watch memory of kube-apiserver while the calico-typha pods are starting up (takes about 10 minutes in my monitoring for results to be shown.)

Context

This issue affects the availability of the kube-apiserver. It can lead to slower processing of other workloads, drastically slower kubectl request processing (to the point where the requests time out), and failure of other control plane components like scheduling of pods, failover of pods, etc. This is because calico-typha is consuming all the resources of the kube-apiserver.

Your Environment

  • Operating System and version:
  • Link to your project (optional):

Cluster: Openshift 4.7.19 (4.7.19_1526_openshift IBM Cloud Openshift cluster)
Details about the scale of the cluster are listed at the top of the issue

Chart with fixed namespace breaks Helm convention

Expected Behavior

When installing the Helm chart, the user expects the namespace to be configurable via the --namespace flag. However, the current version fixes the namespace and doesn't respect the convention.

https://helm.sh/docs/faq/changes_since_helm2/#release-names-are-now-scoped-to-the-namespace

Current Behavior

If you render the current chart version, you'll see that many resources are created with the hard-coded namespace tigera-operator.

helm template calico projectcalico/tigera-operator --namespace dummy

Possible Solution

Update the Helm chart to respect the namespace provided by the user via --namespace, and recommend the use of tigera-operator in the docs. Also, drop the creation of the namespace and recommend the --create-namespace flag instead.
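For comparison, once the chart respects the release namespace, the standard Helm 3 flow would be:

helm install calico projectcalico/tigera-operator \
  --namespace tigera-operator --create-namespace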

Context

The way the chart is written today breaks the GitOps workflow using FluxCD.

Migration to Tigera Operator leaves old resources in `kube-system` namespace

Expected Behavior

I'm using EKS v1.17 and calico v1.13, and tried to upgrade calico to the latest one in the aws-eks-vpc-cni repo (Tigera Operator v1.13.2, and calico v1.17).
Since calico is now installed into the calico-system namespace via the Tigera Operator, I expected the old Calico resources in the kube-system namespace to be cleaned up by the Tigera Operator.
https://docs.projectcalico.org/maintenance/operator-migration

Current Behavior

But after the upgrade, some of the resources still remain in kube-system.
While calico-node seems to be working well, the namespace migration causes some errors in the operator, so I wonder about the effect of that.

Possible Solution

The easiest way is to remove the remaining resources manually, but I don't know if I should do that, since the documentation says we should not touch kube-system resources.
Is the namespace migration not complete? If it is done, can I remove the remaining resources manually? A sketch of what that might look like follows.
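If manual removal does turn out to be the answer, the leftovers listed below suggest something like this (an assumption on my part, please confirm it's safe before running):

kubectl -n kube-system delete deployment calico-typha-horizontal-autoscaler
kubectl -n kube-system delete service calico-typha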

Steps to Reproduce (for bugs)

  1. Apply calico-operator and calico-crs
# Apply calico-operator
$ kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/v1.9/calico-operator.yaml
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
namespace/tigera-operator created
podsecuritypolicy.policy/tigera-operator created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created

# Apply calico-crs
$ kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/v1.9/calico-crs.yaml     
installation.operator.tigera.io/default created
  2. Pods including calico-node in calico-system seem ready
$ kubectl get daemonset calico-node --namespace calico-system     
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
calico-node   3         3         3       3            3           kubernetes.io/os=linux   2m36s

# pod in calico-system
$ kubectl get pod -n calico-system    
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7f959f6886-bnhv2   1/1     Running   0          3m24s
calico-node-xvjxs                          1/1     Running   0          108s
calico-node-xzjr5                          1/1     Running   0          2m9s
calico-node-zg5d9                          1/1     Running   0          119s
calico-typha-67bbd4b6c8-44zrj              1/1     Running   0          106s
calico-typha-67bbd4b6c8-qzpbv              1/1     Running   0          2m9s
calico-typha-67bbd4b6c8-tgxkl              1/1     Running   0          2m9s
  3. But some old calico resources still remain in kube-system
# Before upgrade
$ kubectl get all -n kube-system | grep calico
pod/calico-node-k7pz5                                     1/1     Running   0          40s
pod/calico-node-s2zmq                                     1/1     Running   0          40s
pod/calico-node-xhgw7                                     1/1     Running   0          40s
pod/calico-typha-7c5b5df5d7-p9xw4                         1/1     Running   0          40s
pod/calico-typha-horizontal-autoscaler-869dbcdddb-kjf29   1/1     Running   0          39s
service/calico-typha   ClusterIP   172.20.145.26   <none>        5473/TCP        39s
daemonset.apps/calico-node   3         3         3       3            3           beta.kubernetes.io/os=linux   42s
deployment.apps/calico-typha                         1/1     1            1           40s
deployment.apps/calico-typha-horizontal-autoscaler   1/1     1            1           40s
replicaset.apps/calico-typha-7c5b5df5d7                         1         1         1       40s
replicaset.apps/calico-typha-horizontal-autoscaler-869dbcdddb   1         1         1       40s

# After upgrade
$ kubectl get all -n kube-system | grep calico
pod/calico-typha-horizontal-autoscaler-869dbcdddb-kjf29   1/1     Running   0          28m
service/calico-typha   ClusterIP   172.20.145.26   <none>        5473/TCP        28m
deployment.apps/calico-typha-horizontal-autoscaler   1/1     1            1           28m
replicaset.apps/calico-typha-horizontal-autoscaler-869dbcdddb   1         1         1       28m
  4. Logs in the Tigera Operator
$ kubectl logs tigera-operator-6db99fb878-pgpps -n tigera-operator    
2021/10/15 08:35:55 [INFO] Version: v1.13.2
2021/10/15 08:35:55 [INFO] Go Version: go1.14.4
2021/10/15 08:35:55 [INFO] Go OS/Arch: linux/amd64
{"level":"info","ts":1634286955.861958,"logger":"setup","msg":"Checking type of cluster","provider":"EKS"}
{"level":"info","ts":1634286955.8648527,"logger":"setup","msg":"Checking if TSEE controllers are required","required":false}
{"level":"info","ts":1634286955.9689445,"logger":"setup","msg":"starting manager"}
I1015 08:35:55.969173       1 leaderelection.go:243] attempting to acquire leader lease  tigera-operator/operator-lock...
{"level":"error","ts":1634286955.9694507,"logger":"typha_autoscaler","msg":"Failed to autoscale typha","error":"could not get number of nodes: the cache is not started, can not read objects","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*typhaAutoscaler).start.func1\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/typha_autoscaler.go:122"}
I1015 08:35:55.986564       1 leaderelection.go:253] successfully acquired lease tigera-operator/operator-lock
{"level":"info","ts":1634286956.0697503,"logger":"controller","msg":"Starting EventSource","controller":"tigera-installation-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1634286956.1701388,"logger":"controller","msg":"Starting EventSource","controller":"tigera-installation-controller","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1634286956.270546,"logger":"controller","msg":"Starting EventSource","controller":"tigera-installation-controller","source":"kind source: /V1, Kind=ConfigMap"}
{"level":"info","ts":1634286956.3710735,"logger":"controller","msg":"Starting EventSource","controller":"tigera-installation-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1634286956.4715445,"logger":"controller","msg":"Starting EventSource","controller":"tigera-installation-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1634286956.5719342,"logger":"controller","msg":"Starting EventSource","controller":"tigera-installation-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1634286956.6724467,"logger":"controller","msg":"Starting EventSource","controller":"tigera-installation-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1634286956.7729394,"logger":"controller","msg":"Starting EventSource","controller":"tigera-installation-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1634286956.873332,"logger":"controller","msg":"Starting EventSource","controller":"tigera-installation-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1634286956.973727,"logger":"controller","msg":"Starting Controller","controller":"tigera-installation-controller"}
{"level":"info","ts":1634286956.9738533,"logger":"controller","msg":"Starting workers","controller":"tigera-installation-controller","worker count":1}
{"level":"info","ts":1634286956.9739733,"logger":"controller_installation","msg":"Installation config not found","Request.Namespace":"tigera-operator","Request.Name":"default-token-jt62m"}
{"level":"info","ts":1634286956.974015,"logger":"controller_installation","msg":"Installation config not found","Request.Namespace":"tigera-operator","Request.Name":"tigera-operator-token-2khgm"}
{"level":"info","ts":1634287131.4939146,"logger":"migration_convert","msg":"did not detect kube-controllers"}
{"level":"info","ts":1634287131.4939778,"logger":"migration_convert","msg":"did not detect kube-controllers"}
{"level":"info","ts":1634287132.1048486,"logger":"render","msg":"Creating certificate secret","secret":"node-certs"}
{"level":"info","ts":1634287132.2683523,"logger":"render","msg":"Creating certificate secret","secret":"typha-certs"}
{"level":"error","ts":1634287133.0472322,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287133.2847915,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287133.5205033,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287133.7560074,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287133.9838965,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287134.2245877,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287134.4569678,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287134.6893094,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287134.9266615,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287135.1565535,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287135.3856297,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287135.6329298,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287135.8653526,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287136.2036061,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287136.440355,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287136.6671417,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287136.8996701,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287137.133558,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287137.3731642,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287137.6077807,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287137.8459415,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287138.0817683,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287138.3175223,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287138.5825086,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287138.9529684,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287139.186407,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287139.4511397,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287140.4645057,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287140.7104788,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287140.9744563,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287143.2503622,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287143.4994097,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287143.766432,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
2021/10/15 08:39:08 [INFO] Patch NodeSelector with: [{"op":"add","path":"/spec/template/spec/nodeSelector/projectcalico.org~1operator-node-migration","value":"pre-operator"}]
{"level":"error","ts":1634287148.6741517,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"the kube-system node DaemonSet is not ready with the updated nodeSelector: not all pods are ready yet: 3/3","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287148.9497187,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"the kube-system node DaemonSet is not ready with the updated nodeSelector: not all pods are ready yet: 2/3","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287149.223919,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"the kube-system node DaemonSet is not ready with the updated nodeSelector: not all pods are ready yet: 2/3","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287159.190339,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"the kube-system node DaemonSet is not ready with the updated nodeSelector: not all pods are ready yet: 2/3","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287159.4546425,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"the kube-system node DaemonSet is not ready with the updated nodeSelector: not all pods are ready yet: 2/3","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287159.7325585,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"the kube-system node DaemonSet is not ready with the updated nodeSelector: not all pods are ready yet: 2/3","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"info","ts":1634287195.9691308,"logger":"typha_autoscaler","msg":"Updating typha replicas from 1 to 3"}

Context

Your Environment

  • Operating System and version:

    • EKS: v1.17
    • aws-eks-vpc-cni plugin: amazon-k8s-cni:v1.7.5-eksbuild.1
    • Calico: v3.13 -> v3.17
    • Tigera Operator: None -> v1.13.2
  • Link to your project (optional):

Missing tag 1.20.4 on quay for operator image

Image v1.20.4 (the default in the quickstart manifests) seems to be missing from quay.io:

podman pull quay.io/tigera/operator:v1.20.4
Trying to pull quay.io/tigera/operator:v1.20.4...
Error: Error initializing source docker://quay.io/tigera/operator:v1.20.4: Error reading manifest v1.20.4 in quay.io/tigera/operator: manifest unknown: manifest unknown

Pod calico-node crashing as the liveness probe fails at least 50% of the time

Expected Behavior

We expect the calico-node pod not to restart.

Current Behavior

The pod restarts after the liveness probe fails, which happens at least 50% of the time at startup.

Possible Solution

We would like to open a PR adding an initialDelaySeconds attribute to the liveness probe (a sketch is shown below). What do you think?
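
A minimal sketch of the proposed change, assuming the exec-based probe calico-node already uses; the initialDelaySeconds value is purely illustrative, not a tested recommendation:

livenessProbe:
  exec:
    command:
      - /bin/calico-node
      - -felix-live
  # hypothetical value; gives felix time to start before the first probe
  initialDelaySeconds: 60
  periodSeconds: 10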

Steps to Reproduce (for bugs)

  1. Watch the calico pod behavior at its start.

Context

This issue affects us as it is polluting our alert feed.

Your Environment

EKS

Used ports not named/exposed

The ports exposed by the containers are neither named nor even explicitly defined in the Kubernetes resources created by the operator.

Expected Behavior

When the operator creates a resource (e.g. felix/calico-node DaemonSet) and it is set to have metrics active (e.g. done with a FelixConfiguration CR), the resource should also correctly name and expose the port.

ports:
  - containerPort: 9091
    name: metrics

This is necessary so the prometheus-operator can correctly create the Prometheus scrape jobs to monitor the resources (it relies on the ports being available in the pod metadata); an example follows below.
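
For instance, a prometheus-operator PodMonitor can only reference the port by name. A minimal sketch, assuming the k8s-app=calico-node pod label and the metrics port name from above:

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: calico-node
  namespace: calico-system
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  podMetricsEndpoints:
    # resolves only if the container port is named "metrics"
    - port: metrics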

Current Behavior

Metrics are available from the correct port, however the port is not named/defined in the k8s resource.

Possible Solution

I could possibly directly patch the DaemonSet, but that'd require me to disable the reconciliation which isn't a solution for production at all.

Another option would be to drop the prometheus-operator and write the Prometheus monitoring jobs manually, but that really shouldn't be necessary.

Steps to Reproduce (for bugs)

  1. In my case, use Azure AKS, have calico enabled
  2. Add a FelixConfiguration to enable prometheus metrics (see the sketch after this list)
  3. Check the calico-node DaemonSet and Pods and see that no ports are defined/named.
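
For reference, step 2 can be done with a FelixConfiguration along these lines (a minimal sketch; 9091 is Felix's default Prometheus metrics port):

apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  # turns on the Felix metrics endpoint on every calico-node
  prometheusMetricsEnabled: true
  prometheusMetricsPort: 9091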

Your Environment

  • Azure AKS with Kubernetes 1.21.7
  • Flux GitOps for deployment of k8s Resources (e.g. the FelixConfiguration CR and PodMonitor)

Tigera operator (post-migration) fails to work if some pods have ips not in cluster pod CIDR

First, let me clarify that I'm aware that my cluster is misconfigured =] Unfortunately it seems there is no way to fix that without completely rebuilding the cluster, which I don't have time to do right now.

I did not realize that calico's IPPool resources still needed to be within the cluster pod CIDR, so I'm using pools that are outside of it -- e.g. 172.25.64.0/20 when my cluster CIDR is 10.172.0.0/16. I didn't realize it mattered, and since I'm using ToR bgp peering 99% of everything still works fine. However, after migrating to the tigera operator (during which I attempted to change my cluster CIDR and discovered that it's either not possible or at least more difficult than expected) the operator won't update anything because it's constantly in an "error" state:

Could not resolve CalicoNetwork IPPool and kubeadm configuration: IPPool 172.25.64.0/20 is not within the platform's configured pod network CIDR(s) [10.172.0.0/16 2607:fa18:1000:21::10:0/108]

I'm using VPNs and linking multiple clusters together, so unfortunately moving my IPPools isn't an option.

Expected Behavior

A warning should be thrown, but there should be a way to tell it "yeah, I know this is wrong, but that's how everything is set up so please ignore it"

I absolutely 100% agree that there should be warnings to tell uneducated folks like me that they are Doing Something Stupid, but since it can actually work it should let you do it if you really want (a purely hypothetical sketch of such an override follows).
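
Purely as an illustration of the ask (the field shown does not exist in the current API), such an override could look something like:

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - cidr: 172.25.64.0/20
        # hypothetical field, not part of the current API
        allowOutsidePodCIDR: true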

Current Behavior

The Tigera operator is completely nonfunctional due to the error state.

Steps to Reproduce (for bugs)

  1. Have a cluster with IPPools that are not in the cluster pod CIDR
  2. Upgrade to the calico operator, doing whatever you have to to get it to install =]

Context

I think I've clarified this above, but it seems like it should be an easy fix.

Your Environment

Kubeadm bare metal cluster, 7 servers; opnsense ToR routers with bgp peering. Dual IPv4/IPv6 stack.

Typha autoscaler should exclude tainted nodes in node count

Expected Behavior

We are managing a pipeline to provide on-demand deployments of preconfigured EKS clusters. The created EKS clusters are initially deployed with a small footprint which in a two-AZ deployment would only be two nodes (one managed node group per AZ with one node each). From there users can scale up or use cluster-autoscaling according to their needs. To follow the recommended deployment approach for Calico (also on AWS: https://docs.aws.amazon.com/eks/latest/userguide/calico.html), we switched from the helm-based deployment to the operator-based installation method.
When a new EKS version is released, we want to upgrade these two-node clusters to the new version, which includes an upgrade of the control plane and the managed worker nodes. The expectation is that this also works with tigera-operator installed in the cluster.

Current Behavior

When we upgrade the managed worker nodes of a two-node EKS cluster (e.g. from version 1.20 to 1.21), the worker node upgrade fails in Terraform with a PodEvictionFailure after almost 30 minutes.

Error: error waiting for EKS Node Group (...) version update (...): EKS Node Group (...) update (...) status (Failed) not successful: Errors:
Error 1: Code: PodEvictionFailure / Message: Reached max retries while trying to evict pods from nodes in node group xyz

The assumed cause here is that for a two-node cluster the typha-autoscaler is "incompatible" with the EKS managed worker node upgrade behavior which is described here: https://docs.aws.amazon.com/eks/latest/userguide/managed-node-update-behavior.html

It says:

...

  • Checks the nodes in the node group for the eks.amazonaws.com/nodegroup-image label, and applies a eks.amazonaws.com/nodegroup=unschedulable:NoSchedule taint on all of the nodes in the node group that aren't labeled with the latest AMI ID. This prevents nodes that have already been updated from a previous failed update from being tainted.
  • Randomly selects up to max nodes to upgrade in parallel.
  • Cordons the node after all of the pods are evicted. This is done so that the service controller doesn't send any new requests to this node and removes this node from its list of healthy, active nodes.
  • ...

The upgrade only succeeds when we manually kubectl cordon the nodes carrying the eks.amazonaws.com/nodegroup=unschedulable:NoSchedule taint while the worker node upgrade is being performed.

Possible Solution

If the typha-autoscaler (https://github.com/tigera/operator/blob/master/pkg/controller/installation/typha_autoscaler.go#L243) excluded tainted nodes from the node count used to size the required typha instances, just as it already excludes unschedulable nodes, the upgrade could work without manual intervention.
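
For context, the taint EKS applies during an upgrade would appear on the Node object roughly as follows; the proposal is for the autoscaler to skip nodes carrying such taints, the same way it already skips unschedulable ones (a sketch, not operator output):

apiVersion: v1
kind: Node
metadata:
  name: example-node
spec:
  taints:
    # applied by EKS while the node group upgrade is in progress
    - key: eks.amazonaws.com/nodegroup
      value: unschedulable
      effect: NoSchedule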

Steps to Reproduce (for bugs)

  1. Create an EKS cluster with two managed nodes.
  2. Install tigera-operator on EKS cluster.
  3. Perform a version upgrade for managed nodes.

Context

We also noticed a similar issue with cluster-autoscaler, just as described in #1295. Excluding tainted nodes from the typha-autoscaler calculation should also solve the issue with cluster-autoscaler, because the CA documentation says:

What happens when a non-empty node is terminated? As mentioned above, all pods should be migrated elsewhere. Cluster Autoscaler does this by evicting them and tainting the node, so they aren't scheduled there again.

Your Environment

  • CSP: AWS
  • Service: EKS
  • K8s version: 1.20 / 1.21

The `controlPlaneNodeSelector` installation field doesn't affect Typha

Expected Behavior

Based on the documentation for controlPlaneNodeSelector, it applies to all components which aren't DaemonSets. That means it should apply to the Typha deployment.

Current Behavior

The controlPlaneNodeSelector doesn't apply to the Typha deployment. I suspect this might be because the typhaAffinity field exists, but affinity and node selectors can be used in parallel.

Possible Solution

Apply controlPlaneNodeSelector to the Typha deployment as well (a sketch of the relevant configuration follows).
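
For reference, a minimal sketch of the configuration in question; the selector key shown is just an example, and the expectation is that it would also land on the Typha deployment's pod spec:

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # per the docs, should apply to all non-DaemonSet components, including Typha
  controlPlaneNodeSelector:
    node-role.kubernetes.io/control-plane: ""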

Steps to Reproduce (for bugs)

n/a

Context

n/a

Your Environment

n/a

IPv6 pool natOutgoing doesn't work

Expected Behavior

If I set natOutgoing on the IPv6 pool in the Installation, it should work: the CALICO_IPV6POOL_NAT_OUTGOING environment variable of the calico-node pods should be set to true.

Example installation:

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 100.64.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
    - blockSize: 122
      cidr: fc00::/48
      natOutgoing: Enabled
      nodeSelector: all()

Current Behavior

The calico-node pods don't have CALICO_IPV6POOL_NAT_OUTGOING set, even when natOutgoing of the IPv6 pool in the Installation is Enabled.

As a result, IPv6 NAT doesn't work. I have to manually edit the IPPool configuration to enable it.

Possible Solution

By default, the NAT Outgoing setting for the IPv6 Pool created at startup is false (see the manual).

However, the operator only sets CALICO_IPV6POOL_NAT_OUTGOING (to false) when NATOutgoing in an IPv6 pool is disabled. Thus CALICO_IPV6POOL_NAT_OUTGOING is either absent or false, so NAT for IPv6 pools is never enabled.

I've created a PR to fix this (#1038), but the build failed on Semaphore, while the tests all passed on my machine.
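
For reference, a minimal sketch of the intended logic, assuming the render code has the v6 pool spec available (variable names are illustrative, not the PR's actual code):

// Emit CALICO_IPV6POOL_NAT_OUTGOING explicitly for both states, instead of
// only emitting "false" for the Disabled case.
natOutgoing := "false"
if v6pool.NATOutgoing == operatorv1.NATOutgoingEnabled {
	natOutgoing = "true"
}
nodeEnv = append(nodeEnv, corev1.EnvVar{
	Name:  "CALICO_IPV6POOL_NAT_OUTGOING",
	Value: natOutgoing,
})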

Steps to Reproduce (for bugs)

Apply an IPv6 installation with natOutgoing enabled.

tigera-operator pod on AKS 1.20.7 in CrashLoopBackOff with timeout to API server

Expected Behavior

We deployed an AKS cluster using Calico and kubenet, with the same pipeline we have used to deploy many AKS clusters. We need the tigera-operator to be in a running state and not crashing.

Current Behavior

The tigera-operator pod is in CrashLoopBackOff with the logs below:

2021/07/05 10:53:40 [INFO] Version: v1.17.1

2021/07/05 10:53:40 [INFO] Go Version: go1.15.2

2021/07/05 10:53:40 [INFO] Go OS/Arch: linux/amd64

{"level":"error","ts":1625482450.1487107,"logger":"controller-runtime.manager","msg":"Failed to get API Group-Resources","error":"Get "https://aks-mirai-acc-dns-d9c58f46.hcp.westeurope.azmk8s.io:443/api?timeout=32s\": dial tcp: i/o timeout","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/manager.New\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/manager.go:317\nmain.main\n\t/go/src/github.com/tigera/operator/main.go:157\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:204"}

{"level":"error","ts":1625482450.1493962,"logger":"setup","msg":"unable to start manager","error":"Get "https://aks-mirai-acc-dns-d9c58f46.hcp.westeurope.azmk8s.io:443/api?timeout=32s\": dial tcp: i/o timeout","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nmain.main\n\t/go/src/github.com/tigera/operator/main.go:175\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:204"}

  • We tested traffic from the node hosting the tigera-operator toward the API server FQDN above and it works fine.
  • If it failed to connect to the API server, we would expect all nodes to be NotReady, but all are in Ready state.
  • We have a firewall, but it allows traffic to the API server FQDN, and all Calico pods are running except tigera-operator.

Below is the kubectl describe output for the tigera-operator pod:

C:\WINDOWS\system32>kubectl describe pod tigera-operator-64bd78b58-99lmc -n tigera-operator

Name:           tigera-operator-64bd78b58-99lmc
Namespace:      tigera-operator
Priority:       0
Node:           aks-systempool-14727861-vmss000000/10.248.56.4
Start Time:     Fri, 02 Jul 2021 18:03:40 +0200
Labels:         k8s-app=tigera-operator
                name=tigera-operator
                pod-template-hash=64bd78b58
Annotations:
Status:         Running
IP:             10.248.56.4
IPs:
  IP:           10.248.56.4
Controlled By:  ReplicaSet/tigera-operator-64bd78b58
Containers:
  tigera-operator:
    Container ID:  containerd://15091f8c3c039accf9d559695ff97bbdf28a7ce95ce9823e30e79b487b9dfa7a
    Image:         mcr.microsoft.com/oss/tigera/operator:v1.17.1
    Image ID:      sha256:bcba4d5a252ae36cbf5909e31e3fed19ec6efb1ef62afa58b74f2687fea87b5b
    Port:          <none>
    Host Port:     <none>
    Command:
      operator
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 05 Jul 2021 12:53:40 +0200
      Finished:     Mon, 05 Jul 2021 12:54:10 +0200
    Ready:          False
    Restart Count:  718
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      WATCH_NAMESPACE:
      POD_NAME:                            tigera-operator-64bd78b58-99lmc (v1:metadata.name)
      OPERATOR_NAME:                       tigera-operator
      TIGERA_OPERATOR_INIT_IMAGE_VERSION:  v1.17.1
      KUBERNETES_PORT_443_TCP_ADDR:        aks-mirai-acc-dns-d9c58f46.hcp.westeurope.azmk8s.io
      KUBERNETES_PORT:                     tcp://aks-mirai-acc-dns-d9c58f46.hcp.westeurope.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:             tcp://aks-mirai-acc-dns-d9c58f46.hcp.westeurope.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:             aks-mirai-acc-dns-d9c58f46.hcp.westeurope.azmk8s.io
    Mounts:
      /var/lib/calico from var-lib-calico (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from tigera-operator-token-4sv2d (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  var-lib-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/calico
    HostPathType:
  tigera-operator-token-4sv2d:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  tigera-operator-token-4sv2d
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     :NoExecute op=Exists
                 :NoSchedule op=Exists
                 CriticalAddonsOnly op=Exists
Events:
  Type     Reason   Age                       From     Message
  ----     ------   ----                      ----     -------
  Normal   Pulled   46m (x711 over 2d18h)     kubelet  Container image "mcr.microsoft.com/oss/tigera/operator:v1.17.1" already present on machine
  Warning  BackOff  66s (x16784 over 2d18h)   kubelet  Back-off restarting failed container

Context

tigera-operator pod failing

Your Environment

AKS cluster 1.20.7

Support for ARM64

Container image - quay.io/tigera/operator does not support arm64

Expected Behavior

quay.io/tigera/operator to work on arm64 architecture

Current Behavior

Getting the error

standard_init_linux.go:219: exec user process caused: exec format error

Possible Solution

Create a container image using arm64 architecture

Steps to Reproduce (for bugs)

Create a cluster on AWS using graviton nodes

Context

I have a hybrid cluster and got it to work by adding a nodeSelector so the operator runs on x86_64 nodes, but I was planning to have only ARM nodes in the future.

Your Environment

AWS X86_64 and Graviton arm64 nodes.

Add support for additional labels and annotations to be propagated to all pods

We make use of pod labels and annotations to tag metrics and configure checks for pods with DataDog. However, because the operator manages all of the underlying Kubernetes resources, there isn't a good way to automatically add the labels and configuration during installation. We would like to see support for passing additional labels and annotations to each of the underlying components deployed by this operator, in particular Node, Typha, and kube-controllers.

Expected Behavior

The operator should allow passing user specified custom labels and annotations to the pods

Current Behavior

We are currently not able to propagate additional labels or annotations to the operator-managed pods.

Possible Solution

The Installation CRD should have a field like:

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
  namespace: management
spec:
  componentLabels:
    - componentName: Node
      podLabels:
        foo: bar
  componentAnnotations:
    - componentName: Typha
      podAnnotations:
        cool: awesome

Steps to Reproduce (for bugs)

N/A

Context

We would like to add additional labels and annotations to the pods for monitoring and running checks through DataDog.

Your Environment

  • AWS EKS with Amazon Linux 2 nodes

Urgent: Documentation Needs Fixing

Hi, I work on the Red Hat Operator Enablement team and upon review of the Tigera operator we found the documentation to be severely lacking required information for successful use of the operator; please see attached screenshots

(screenshots 01, 02, 03 attached)

I provided an example of an Operator providing some good documentation as a reference.

Also there is a bug open for this and if required we can provide reasonable assistance where necessary to get this resolved: https://bugzilla.redhat.com/show_bug.cgi?id=1853962

Issue running locally due to documentation issues

Expected Behavior

Users should be able to successfully deploy Calico via the tigera operator by following the steps in the README for "Running it locally".

Current Behavior

Executing the steps mentioned in the README leads to errors due to incorrect paths.
Issues:
KUBECONFIG=./kubeconfig.yaml go run ./cmd/manager => incorrect path
kubectl create -f ./deploy/crds/operator_v1_installation_cr.yaml => incorrect path

Possible Solution

Fix the documentation with the correct paths and validate the deployment locally.

Steps to Reproduce (for bugs)

Follow the steps mentioned under "Running it Locally"

Your Environment

  • Operating System and version: Ubuntu 18.04.4 LTS

Calico-readiness probe times out

Expected Behavior

Calico-node containers should show as ready, especially when they are healthy; see this in the logs:

2021-05-25 23:19:36.079 [INFO][66] felix/health.go 133: Health of component changed lastReport=health.HealthReport{Live:true, Ready:false} name="int_dataplane" newReport=&health.HealthReport{Live:true, Ready:true}

Current Behavior

On some busy machines we get unready with no data from the readiness probe. This appears to be a timeout, because if we squelch the operator and manually bump the probe timeout to 5 seconds the checks pass. The probe is currently:

readinessProbe:
  exec:
    command:
    - /bin/calico-node
    - -felix-ready
  failureThreshold: 3
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1

The kubelet events show the probe failing with no output:

Events:
  Type     Reason     Age                     From                                   Message
  ----     ------     ----                    ----                                   -------
  Warning  Unhealthy  2m20s (x6668 over 19h)  kubelet, aks-kub8-10616369-vmss000003  Readiness probe failed:

Possible Solution

Just bump the timeout to 3-5 seconds; the period is 10 seconds, so that should be fine.
Alternatively, use an HTTP probe instead of /bin/calico-node -felix-ready.

Steps to Reproduce (for bugs)

We don't have anything great. Maybe get some machines and stress them.

Context

The nodes actually seem fine, but customers complain when they see a not-ready DaemonSet, and it could affect rollouts in the future.

Your Environment

Random Azure Kubernetes Service customer.

Operator migration: unsupported variables

Expected Behavior

As long as nothing too outlandish is being done, the migration should detect the options in use and migrate them to the calico operator.

Current Behavior

While migrating I hit a number of areas where it wouldn't let me proceed even though I was doing what seem to be very reasonable things:

  • FELIX_IPV6SUPPORT=true is not supported
  • IP6=autodetect is not supported
  • IP_AUTODETECTION_METHOD=cidr=10.21.32.0/23 is not supported. To fix it, remove the IP_AUTODETECTION_METHOD env var or set it to 'first-found', 'can-reach=', 'interface=', or 'skip-interface=*'

I should be able to get around it, but the last one is a particular issue because not all of my nodes have identical hardware, so autodetecting by interface name isn't a good option.

Context

It doesn't look like it's possible for me to migrate without losing at least IPv6 support until after the migration. The IP_AUTODETECTION_METHOD issue makes migration iffy at best, since if I make a mistake the networking between my pods could get messed up; I use zerotier on my nodes to allow me to communicate with them from anywhere, and sometimes calico tries to use that as the interface.

Your Environment

Ubuntu 20.04, kubeadm cluster 1.22.0

Support for arm64 architecture

Expected Behavior

Tigera-operator behaving normally and running on arm64 clusters as intended.

Current Behavior

Cannot use tigera-operator on an arm64 cluster because of incompatibility with arm64.

Possible Solution

The solution would be to deploy a tigera-operator arm64 capable Docker image to the https://quay.io/repository/tigera/operator registry.

Steps to Reproduce (for bugs)

  1. Download Tigera Operator's YAML configuration: https://docs.projectcalico.org/manifests/tigera-operator.yaml
  2. Apply the manifests on an arm64 server/computer
  3. Notice the operator not working because of incompatibility.

Context

I recently started a Raspberry Pi cluster at home to play with k3s and bare-metal Kubernetes. For my CNI I chose Calico because I heard great things about it, and then tigera-operator failed to run on my cluster. Calico still works, and I'm not sure if I need the tigera operator at all, but the guide I followed (https://docs.projectcalico.org/getting-started/kubernetes/k3s/quickstart) featured it.

Your Environment

Tigera Operator is causing a Massive amount of 404s on the apiserver trying to delete an inexistent resource

The Tigera operator running in AKS (when selecting Calico) causes a lot of 404s by trying to delete the following resource: /apis/operator.tigera.io/v1/tigerastatuses/apiserver

We are getting over 17,500 404s over 24h from the operator trying to delete that exact same resource; those shouldn't be present.

Expected Behavior

0 404s in 24h

Current Behavior

17k 404s in 24h

Steps to Reproduce (for bugs)

  1. Deploy AKS with calico using version 1.20 or above
  2. look at the apiserver audit logs

Manifest contains image tag that doesn't exist

Expected Behavior

I expect to be able to start the operator in my cluster.

Current Behavior

I'm trying to run kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml and getting an image pull backoff. Inspecting the manifest, it looks like the image it's trying to use is quay.io/tigera/operator:v1.10.8. Unfortunately the "v1.10.8" tag doesn't exist in Quay, although "1.10.8" does. Did someone miss a "v" somewhere in the build process?

Possible Solution

Tag the image correctly :)

Tigera Operator attempts to schedule on Fargate nodes

Expected Behavior

This is an improvement request: add a nodeSelector (or node affinity) to the calico-node DaemonSet so it does not schedule on Fargate nodes, similar to https://github.com/aws/amazon-vpc-cni-k8s/blob/master/config/master/aws-k8s-cni-cn.yaml#L100

Current Behavior

Please see this - aws/amazon-vpc-cni-k8s#1429

Possible Solution

aws/amazon-vpc-cni-k8s#1429 (comment); see the sketch below.
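
For illustration, a sketch of the node affinity the operator could attach to the calico-node DaemonSet, mirroring the AWS VPC CNI manifest linked above (the label key and value come from that manifest; where this slots into the render code is an assumption):

import corev1 "k8s.io/api/core/v1"

// Keep calico-node off Fargate, which provides no host network to manage.
fargateExclusion := &corev1.Affinity{
	NodeAffinity: &corev1.NodeAffinity{
		RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
			NodeSelectorTerms: []corev1.NodeSelectorTerm{{
				MatchExpressions: []corev1.NodeSelectorRequirement{{
					Key:      "eks.amazonaws.com/compute-type",
					Operator: corev1.NodeSelectorOpNotIn,
					Values:   []string{"fargate"},
				}},
			}},
		},
	},
}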

Steps to Reproduce (for bugs)

  1. Steps to repro are here - aws/amazon-vpc-cni-k8s#1429

Context

Your Environment

  • Operating System and version:
  • Link to your project (optional):

Failed to create watcher ListRoot="/calico/ipam/v2/assignment/" error=connection is unauthorized: unknown (get IPAMBlocks.crd.projectcalico.org)

With the default setup, calico-kube-controllers is missing the RBAC that allows the controller to access IPAMBlocks, and it keeps returning these logs over and over:

2021-05-07 13:37:38.999 [INFO][1] watchercache.go 243: Failed to create watcher ListRoot="/calico/ipam/v2/assignment/" error=connection is unauthorized: unknown (get IPAMBlocks.crd.projectcalico.org) performFullResync=true
2021-05-07 13:37:38.999 [INFO][1] watchercache.go 174: Full resync is required ListRoot="/calico/ipam/v2/assignment/"

Possible Solution

Manually add a new ClusterRole and ClusterRoleBinding for the ServiceAccount that runs the service. Example patch:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: calico-kube-controllers-patch
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - endpoints
  - services
  verbs:
  - watch
  - list
  - get
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - crd.projectcalico.org
  resources:
  - ippools
  verbs:
  - list
  - get
  - watch
- apiGroups:
  - crd.projectcalico.org
  resources:
  - blockaffinities
  - ipamblocks
  - ipamhandles
  - networksets
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - delete
- apiGroups:
  - crd.projectcalico.org
  resources:
  - clusterinformations
  verbs:
  - get
  - create
  - update
- apiGroups:
  - crd.projectcalico.org
  resources:
  - hostendpoints
  verbs:
  - get
  - list
  - create
  - update
  - delete
- apiGroups:
  - crd.projectcalico.org
  resources:
  - kubecontrollersconfigurations
  verbs:
  - get
  - create
  - update
  - watch
- apiGroups:
  - policy
  resourceNames:
  - calico-kube-controllers
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-kube-controllers-patch
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers-patch
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: calico-system

Steps to Reproduce (for bugs)

Install using the AWS CNI YAML https://github.com/aws/amazon-vpc-cni-k8s/blob/master/config/v1.7/calico.yaml on a 1.18 k8s cluster.

Your Environment

aws eks 1.18

APIService not available (k8s 1.22)

APIService
The apiregistration.k8s.io/v1beta1 API version of APIService will no longer be served in v1.22.

Migrate manifests and API clients to use the apiregistration.k8s.io/v1 API version, available since v1.10.
All existing persisted objects are accessible via the new API
No notable changes

It seems like the tigera operator isn't happy on 1.22. Is that because of a feature gate I need to enable?

2021/07/08 14:36:49 [INFO] Version: v1.17.4
2021/07/08 14:36:49 [INFO] Go Version: go1.15.2
2021/07/08 14:36:49 [INFO] Go OS/Arch: linux/amd64
{"level":"info","ts":1625755009.7462676,"logger":"setup","msg":"Checking type of cluster","provider":""}
{"level":"info","ts":1625755009.7475083,"logger":"setup","msg":"Checking if TSEE controllers are required","required":false}
{"level":"info","ts":1625755009.856925,"logger":"setup","msg":"starting manager"}
{"level":"info","ts":1625755009.8569715,"logger":"typha_autoscaler","msg":"Starting typha autoscaler","syncPeriod":10}
I0708 14:36:49.858270       1 leaderelection.go:243] attempting to acquire leader lease  tigera-operator/operator-lock...
I0708 14:37:06.950468       1 leaderelection.go:253] successfully acquired lease tigera-operator/operator-lock
{"level":"info","ts":1625755026.9514308,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1625755027.0525558,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1625755027.154822,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=ConfigMap"}
{"level":"info","ts":1625755027.256324,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=ConfigMap"}
{"level":"info","ts":1625755027.2566109,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1625755027.3575156,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1625755027.4588928,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1625755027.559794,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1625755027.6607502,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1625755027.7658336,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"error","ts":1625755027.868342,"logger":"controller-runtime.source","msg":"if kind is a CRD, it should be installed before calling Start","kind":"APIService.apiregistration.k8s.io","error":"no matches for kind \"APIService\" in version \"apiregistration.k8s.io/v1beta1\"","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/source/source.go:117\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:159\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:205\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:691"}
{"level":"error","ts":1625755027.8686714,"logger":"setup","msg":"problem running manager","error":"no matches for kind \"APIService\" in version \"apiregistration.k8s.io/v1beta1\"","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nmain.main\n\t/go/src/github.com/tigera/operator/main.go:228\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:204"}```

Operator uses template default `LeaderElectionID`

Expected Behavior

Calling kubectl -n tigera-operator get lease should return a lease named for the operator and not a generically named one.

NAME            HOLDER                                                                  AGE
tigera-operator-lock   tigera-operator-5dcfb9df8c-bfkvv_31aa92d3-9894-4720-88b0-9414067296b3   89d

Current Behavior

The LeaderElectionID value hasn't been customized so we get a generic lock name that could collide with another operator and is ambiguous.

NAME            HOLDER                                                                  AGE
operator-lock   tigera-operator-5dcfb9df8c-bfkvv_31aa92d3-9894-4720-88b0-9414067296b3   89d

Possible Solution

I'd be happy to open a PR to rename the LeaderElectionID value to tigera-operator-lock; a sketch of the change is below.
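
A hedged sketch of the change against controller-runtime's manager options (the surrounding main.go setup is assumed):

import "sigs.k8s.io/controller-runtime/pkg/manager"

mgr, err := manager.New(cfg, manager.Options{
	LeaderElection:          true,
	LeaderElectionID:        "tigera-operator-lock", // instead of the template default "operator-lock"
	LeaderElectionNamespace: "tigera-operator",
})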

Steps to Reproduce (for bugs)

n/a

Context

n/a

Your Environment

n/a

Use apiregistration.k8s.io/v1 instead of apiregistration.k8s.io/v1beta1

We have to use apiregistration.k8s.io/v1 instead of apiregistration.k8s.io/v1beta1.
The fix is already included in the release-v1.18 branch.

I am not familiar with tigera-operator's versioning policy.
However, according to https://kubernetes.io/docs/reference/using-api/deprecation-guide/#apiservice-v122,
tigera/operator v1.18 has to be released with or before Kubernetes v1.22.

There should be no side effects, because apiregistration.k8s.io/v1 has been available since Kubernetes v1.10 (a long time ago).

Expected Behavior

Tigera operator should be available in kubernetes v1.22.

Current Behavior

Tigera operator fails at kubernetes v1.22.0-alpha.3

Possible Solution

Release tigera/operator v1.18

Steps to Reproduce (for bugs)

Context

Your Environment

  • Operating System and version:
  • Link to your project (optional):

How to deploy specific tigera operator version on OpenShift 4.x

Expected Behavior

Each operator release should also contain the complete set of manifests required for deployment on supported platforms (like OpenShift 4.x).

Current Behavior

Currently, the documentation says that for deployment on OpenShift 4.x I should download a set of files from "https://docs.projectcalico.org/manifests/...", but it is not clearly stated which version will be used or how to use a different version than the one hardcoded in those manifests (currently: 1.5.0).

I could try to modify the image version after downloading those manifests, but then I'm not sure whether some CRDs have been changed (or added) in the release I'm interested in, or whether I also have to change the image version for the init container (and which one should be used).

Possible Solutions

  1. provide versioned documentation page (https://docs.projectcalico.org/getting-started/openshift/installation) where each version refers to the manifests for specific release, or
  2. add proper files with manifests (as zip and/or tgz archive) to each release on github and update documentation once to refer to these archives instead of specific files with hardcoded operator version.

Provide flexibility to install calico in an existing namespace as opposed to the calico-system namespace

The default installation of calico creates a new namespace called calico-system. Some organizations require calico to be installed in an existing namespace. The current Installation CRD does not support this functionality.

Expected Behavior

The Installation CRD should probably have a field for installing calico in an existing namespace.

Current Behavior

The calico-system namespace is always created by the operator.

Context

We do not want to manage another namespace only for calico operations, so we cannot deploy the tigera-operator until the Installation spec provides the flexibility to install in an existing namespace.

Your Environment

EKS 1.19

  • Operating System and version: bottlerocket-v1.0.8

The CR "installations" deployed by tigera-operator is missing "conditions" as part of the status

The installations CR deployed by tigera-operator is missing "conditions" as part of the status. This creates issues when verifying applied resources; as an example, the cli-utils status poller (through the applier) fails to fetch the status of this resource because the condition is not satisfied.
I understand that the status of a calico installation can be tracked through the tigerastatus CR, but is it possible to have "conditions" added to installations for consistency?

kubectl get installations -o json | jq -r ".items[0].status"
{
  "mtu": 1440,
  "variant": "Calico"
}

Expected Behavior

"conditions" should be available as part of the status

Current Behavior

Status has just mtu and variant and is missing "conditions".

Possible Solution

add "conditions" to status in addition to mtu and variant

Steps to Reproduce (for bugs)

  1. Deploy tigera-operator
  2. kubectl get installations -o json | jq -r ".items[0].status"

Context

This creates issues when verifying applied resources.

Your Environment

  • Operating System and version:
  • Link to your project (optional):

metrics port hardcoded to 8383 in tigera-operator conflicting with CNV

Expected Behavior

The metrics port in tigera-operator should be configurable, and the default value should be different from 8383, which is used by nmstate-handler pods from CNV (Container Native Virtualization) in RedHat OpenShift 4.x. Instead it is hardcoded to 8383 here: https://github.com/tigera/operator/blob/master/pkg/daemon/daemon.go#L35

Current Behavior

Currently it is not possible to properly deploy CNV (Container Native Virtualization) on RedHat OpenShift 4.x with Calico and tigera-operator. Both tigera-operator and one of the nmstate-handler pods (part of CNV) run with host networking and try to bind to port 8383. Since tigera-operator is deployed first, nmstate-handler keeps crashing and the CNV deployment cannot finish properly.

Possible Solution

  1. make the metrics port configurable (a sketch is below), and
  2. change the default port 8383 to something else
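
A minimal sketch of option 1 as a command-line flag (the flag name and wiring are assumptions; the actual constant lives in pkg/daemon/daemon.go):

import (
	"flag"
	"fmt"
)

var metricsPort int

func init() {
	// Default stays 8383 for backward compatibility, but deployments that
	// collide with nmstate-handler can move it.
	flag.IntVar(&metricsPort, "metrics-port", 8383, "port the operator serves metrics on")
}

func metricsAddr() string {
	return fmt.Sprintf("0.0.0.0:%d", metricsPort)
}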

Steps to Reproduce (for bugs)

  1. deploy RedHat Openshift 4.3.12 with Calico as specified in RedHat documentation and the following page: https://docs.projectcalico.org/getting-started/openshift/installation
  2. deploy CNV according to RedHat documentation: https://docs.openshift.com/container-platform/4.3/cnv/cnv_install/installing-container-native-virtualization.html
  3. list pods in openshift-cnv namespace and look for crashing nmstate-handler running on the same node as tigera-operator
  4. scale down tigera-operator to 0 replicas to stop tigera-operator pod
  5. restart failed nmstate-handler pod - it should be able to run successfully
  6. inspect status of hco/kubevirt-hyperconverged in openshift-cnv namespace - the last status condition should state "Reconcile completed successfully"

Context

Your Environment

  • Operating System and version: Redhat OpenShift 4.3.12 with Calico deployed using tigera-operator v1.3.3
  • Link to your project (optional):

Add infrastructure for operator to check if it has set a FelixConfig field or the user has the field

In the Calico Community meeting on Dec 8 we discussed setting FelixConfiguration instead of using environment variables; one of the issues is knowing whether the operator set a field or the user explicitly set it.

Resources carry managedFields metadata that we should be able to use to know whether a field is being managed by the operator or by a user (or by something other than the operator). Adding the machinery to manage FelixConfiguration fields and detect whether they have been written or overwritten by the user would allow us to use FelixConfiguration instead of setting environment variables; a sketch of such a check is below.
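
A minimal sketch of the check, assuming the operator applies FelixConfiguration under a fixed field-manager name (the helper is hypothetical and only yields a coarse yes/no):

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// fieldsManagedByOthers reports whether any field manager other than the
// operator has recorded ownership of fields on the object, which is the
// signal that a user (or another controller) set or overwrote something.
func fieldsManagedByOthers(obj metav1.Object, operatorManager string) bool {
	for _, entry := range obj.GetManagedFields() {
		if entry.Manager != operatorManager && entry.FieldsV1 != nil {
			return true
		}
	}
	return false
}

A real implementation would need to decode the FieldsV1 payload to learn which specific FelixConfiguration fields are user-owned, not just whether any are.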

Expected Behavior

Current Behavior

Possible Solution

Steps to Reproduce (for bugs)

Context

Your Environment

  • Operating System and version:
  • Link to your project (optional):

Allow objects to have custom labels

Looks like it's impossible right now to create the calico-system namespace with a custom label.

Expected Behavior

Tigera operator should merge the labels of an existing namespace (or any other resource) with the wanted labels.

Current Behavior

Tigera operator ignores labels in the existing namespace, overriding them with the wanted ones.

Possible Solution

Annotations are already merged in the mergeState util function; we could easily do the same for labels:

currentAnnotations := mapExistsOrInitialize(currentMeta.GetAnnotations())
desiredAnnotations := mapExistsOrInitialize(desiredMeta.GetAnnotations())
mergedAnnotations := mergeAnnotations(currentAnnotations, desiredAnnotations)
desiredMeta.SetAnnotations(mergedAnnotations)
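
The label analog could look like the following (mergeMaps is hypothetical, standing in for a generic version of mergeAnnotations):

currentLabels := mapExistsOrInitialize(currentMeta.GetLabels())
desiredLabels := mapExistsOrInitialize(desiredMeta.GetLabels())
mergedLabels := mergeMaps(currentLabels, desiredLabels)
desiredMeta.SetLabels(mergedLabels)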

Steps to Reproduce (for bugs)

  1. Create the calico-system namespace
  2. Label the namespace with openpolicyagent.org/webhook: ignore
  3. Install tigera-operator and wait for it to start creating resources
  4. The label openpolicyagent.org/webhook is gone.

Context

We're trying to install Calico using the tigera operator. However, tigera is having issues creating deployments in the calico-system namespace because we have OPA running and configured to block deployment creation without the app.kubernetes.io/name label.
We were already expecting this to happen and created the calico-system namespace beforehand with the label openpolicyagent.org/webhook: ignore (namespaces with this label bypass the OPA admission webhook).

However, that label is removed by the tigera-operator.

Your Environment

  • AWS Configuration (policy only mode)
  • OPA running
