cert-manager / istio-csr

Home Page: https://cert-manager.io/docs/usage/istio-csr/

License: Apache License 2.0

Languages: Makefile 31.95%, Go 57.01%, Shell 10.53%, Mustache 0.51%
Topics: istio, tls, certificate, kubernetes

istio-csr's Introduction


istio-csr

istio-csr is an agent that allows for Istio workload and control plane components to be secured using cert-manager.

Certificates facilitating mTLS — both inter and intra-cluster — will be signed, delivered and renewed using cert-manager issuers.

istio-csr supports Istio v1.10+ and cert-manager v1.3+.


Documentation

Please follow the documentation at cert-manager.io for installing and using istio-csr.

Inner workings

istio-csr has 3 main components: the TLS certificate obtainer, the gRPC server and the CA bundle distributor.

  1. The TLS certificate obtainer is responsible for obtaining the TLS certificate for the gRPC server. It uses the cert-manager API to create a CertificateRequest resource, which will be picked up by cert-manager and signed by the configured issuer.
  2. The gRPC server is responsible for receiving certificate signing requests from istiod and sending back the signed certificate. For this, it uses the cert-manager CertificateRequest API to obtain the signed certificate.
  3. The CA bundle distributor is responsible for creating and updating istio-ca-root-cert ConfigMaps in all namespaces (filtered using namespaceSelector).
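
For illustration, each distributed ConfigMap has roughly this shape (a sketch; the PEM body comes from the configured issuer's root CA, and the target namespace is an example):

apiVersion: v1
kind: ConfigMap
metadata:
  name: istio-ca-root-cert
  namespace: my-app   # one copy is created per selected namespace
data:
  root-cert.pem: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----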


istio-csr's Issues

Istio CSR certificate DNS names

I have a need to send istio-csr requests from "outside" the cluster. istio-csr is fronted by a load balancer and a route53 name. The certificate presented by istio-csr is only good for the internal (to the k8s cluster) name (cert-manager-istio-csr.cert-manager.svc).

I need a command-line parameter that allows me to set the DNS names that will be added to istio-csr's CSR ...

I've created a PR that does this, LMK if this makes sense.
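
For reference, with later chart versions this looks something like the following (a sketch; app.tls.certificateDNSNames is the Helm value used elsewhere on this page, and istio-csr.example.com is a placeholder for the external name):

helm upgrade --install -n cert-manager cert-manager-istio-csr jetstack/cert-manager-istio-csr \
  --set "app.tls.certificateDNSNames={cert-manager-istio-csr.cert-manager.svc,istio-csr.example.com}"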

Environment details:

  • Kubernetes version: 1.19
  • Cloud-provider/provisioner: AWS EKS
  • cert-manager version: 1.2.0
  • Install method: Helm

/kind feature

Using Vault as an issuer doesn't work

Using Vault as an issuer doesn't seem to generate correct certificates for the pods. When I try to make a request from one service to another I get this error:

~ root@netutils-6bc77c956d-jd8dz:/# curl productpage:9080
upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
~ istio-proxy@netutils-6bc77c956d-jd8dz:/$ openssl s_client -showcerts -connect productpage:9080
CONNECTED(00000005)
write:errno=104
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 313 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---

The Vault issuer is correctly installed and istio has been installed pointing to the istio-csr service as described in the README of this repository.

~ kubectl get clusterissuers -n cert-manager -o wide
NAME           READY   STATUS           AGE
vault-issuer   True    Vault verified   3d21h

In the istiod logs I can see that the Vault CA is correctly loaded:

info	validationController	Reconcile(enter): CABundle changed
info	x509 cert - Issuer: "CN=dev eu Services Intermediate CA,OU=XX,O=XXXX,L=XXX,ST=XXX,C=XX", Subject: "CN=istiod.istio-system.svc", SN: 4df3c56e250e8464b7466308fa431cb95819f321, NotBefore: "2021-09-27T12:10:48Z", NotAfter: "2021-09-27T13:11:18Z"
info	x509 cert - Issuer: "CN=dev eu Services Root CA,OU=XX,O=XX,L=XXX,ST=XXX,C=XX", Subject: "CN=dev eu Services Intermediate CA,OU=XX,O=XXX,L=XXX,ST=XXX,C=XX", SN: 5688b4cf53441cd2ae9b18f7d617211d4588912a, NotBefore: "2019-04-01T14:46:00Z", NotAfter: "2029-03-29T14:46:00Z"
info	Istiod certificates are reloaded

In the istio-csr logs I can see the following message. I'm not sure if the namespace defined here is right or not; istio-csr is in the cert-manager namespace, while istiod is in istio-system and productpage is in default.

info	klog	cert-manager "msg"="signed CertificateRequest" "identity"="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" "name"="istio-csr-6bqpb" "namespace"="istio-system" 

Another interesting fact: when I sniff the traffic in the istio-proxy sidecars of the origin and destination pods, I can see the request body and response encrypted, but the client still reports the CERTIFICATE_VERIFY_FAILED issue.

I'm using cert-manager 1.5.3, istio-csr 0.3.0 and Kubernetes 1.20.5
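
One way to narrow down a CERTIFICATE_VERIFY_FAILED like this is to compare the root CA the workloads trust with the chain actually being issued (a debugging sketch, assuming the default ConfigMap name and that CertificateRequests are preserved):

# Inspect the root CA distributed to workloads
kubectl get configmap istio-ca-root-cert -n default \
  -o jsonpath='{.data.root-cert\.pem}' | openssl x509 -noout -subject -issuer

# Compare against the CA reported in a preserved CertificateRequest
kubectl get certificaterequest -n istio-system -o jsonpath='{.items[0].status.ca}' \
  | base64 -d | openssl x509 -noout -subject -issuer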

Minimum Required Kubernetes Version

What's the minimum required Kubernetes version to run this CSR service with Istio 1.8? I'm using Kubernetes 1.16 and running into issues with the Istio sidecar not being able to send its CSR to istiod; I see the following errors in the istio-proxy container:

resource:default failed to generate secret for proxy: rpc error: code = Unimplemented desc = unknown service istio.v1.auth.IstioCertificateService

As far as I know, Istio provides integration with external CAs for Kubernetes 1.18, but I want to make sure this works with 1.16.

readiness probe failed

After running istio-csr for some time, some pods fail readiness probe:

❯ kubectl get po -n cert-manager
NAME                                                    READY   STATUS    RESTARTS   AGE
cert-manager-istio-csr-79ffc5bfd-dm7mv                  1/1     Running   0          7d17h
cert-manager-istio-csr-79ffc5bfd-mv7wj                  0/1     Running   0          20d
cert-manager-istio-csr-79ffc5bfd-t855r                  1/1     Running   0          20d

Describing the pod:

❯ kubectl describe po cert-manager-istio-csr-79ffc5bfd-mv7wj -n cert-manager
Name:         cert-manager-istio-csr-79ffc5bfd-mv7wj
Namespace:    cert-manager
Priority:     0
Node:         ip-10-135-223-54.ec2.internal/10.135.223.54
Start Time:   Wed, 17 Feb 2021 12:28:14 -0500
Labels:       app=cert-manager-istio-csr
              pod-template-hash=79ffc5bfd
Annotations:  kubernetes.io/psp: eks.privileged
Status:       Running
IP:           10.135.215.165
IPs:
  IP:           10.135.215.165
Controlled By:  ReplicaSet/cert-manager-istio-csr-79ffc5bfd
Containers:
  cert-manager-istio-csr:
    Container ID:  docker://2616a2dd356d0b6502b32da6589cae521744e3b22ecc730a8bcde738b50bbaab
    Image:         quay.io/jetstack/cert-manager-istio-csr:v0.1.0
    Image ID:      docker-pullable://quay.io/jetstack/cert-manager-istio-csr@sha256:f9d473fa10520d0a255a4b60350a9f9057834da762129f9e5ecb9681955b1fd0
    Port:          6443/TCP
    Host Port:     0/TCP
    Command:
      cert-manager-istio-csr
    Args:
      --log-level=1
      --readiness-probe-port=6060
      --readiness-probe-path=/readyz
      --serving-address=0.0.0.0:6443
      --serving-certificate-duration=24h
      --root-ca-configmap-name=istio-ca-root-cert
      --certificate-namespace=istio-system
      --issuer-group=cert-manager.io
      --issuer-kind=ClusterIssuer
      --issuer-name=vault-issuer
      --max-client-certificate-duration=24h
      --preserve-certificate-requests=false
    State:          Running
      Started:      Wed, 17 Feb 2021 12:28:16 -0500
    Ready:          False
    Restart Count:  0
    Readiness:      http-get http://:6060/readyz delay=3s timeout=1s period=7s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from cert-manager-istio-csr-token-ff98x (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  cert-manager-istio-csr-token-ff98x:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cert-manager-istio-csr-token-ff98x
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                       From     Message
  ----     ------     ----                      ----     -------
  Warning  Unhealthy  4m37s (x222001 over 17d)  kubelet  Readiness probe failed: Get http://10.135.215.165:6060/readyz: dial tcp 10.135.215.165:6060: connect: connection refused

I would assume that after failed health checks, subsequent probes would succeed, but it looks like the pod never becomes ready again.

Deleting the affected pod manually does bring it back, and it passes health checks as expected.

Nothing of note in the logs.

cert-manager-istio-csr: PSP issues: Error: container has runAsNonRoot and image will run as root

We have a slightly restricted cluster with PSP, and since we are using GitOps I have exported the istio-csr manifests [https://github.com/cert-manager/istio-csr].
The generated manifest: https://github.com/VivekNidhi/gitops-istio/blob/main/istio-csr-manifest.yaml. When we bring up istio-csr we get the error below. Is it possible to know the exact privileges required, or any resolution for this?

Error:

Pulling image "XXXX/jetstack/cert-manager-istio-csr:v0.1.2"
Error: container has runAsNonRoot and image will run as root
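
A common fix for this class of PSP error is to pin an explicit non-root user in the pod's securityContext (a sketch; the UID is arbitrary, and newer charts may expose their own securityContext values):

securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  runAsGroup: 1000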

[aks][istio-csr][venafi] errors: error klog grpc_server "msg"="failed to authenticate request" "error"="could not get cluster csrdemo-aks-2's kube client" "serving_addr"="0.0.0.0:6443"

Hi Team,

We have done similar tests on GKE with no issues; however, exactly the same config fails on AKS. We are able to acquire certs in the regular way, but the istio-csr component is not working. Has anyone encountered similar issues on Azure Kubernetes Service?

K8s version: 1.18.x
Istio version: 1.8.5
cert-manager: 1.4.2
istio-csr: 0.2.0

2021-08-11T18:19:40.744793Z    error    klog    grpc_server "msg"="failed to authenticate request" "error"="could not get cluster csrdemo-aks-2's kube client" "serving_addr"="0.0.0.0:6443"
2021-08-11T18:19:42.288067Z    info    klog    tls_provider "msg"="renewing serving certificate"
2021-08-11T18:19:42.660052Z    info    klog    tls_provider "msg"="serving certificate ready"
2021-08-11T18:19:42.660360Z    info    spiffe    Added 1 certs to trust domain cluster.local in peer cert verifier
2021-08-11T18:19:42.660957Z    info    klog    tls_provider "msg"="fetched new serving certificate"  "expiry-time"="2021-08-11T19:19:42Z"
2021-08-11T18:19:42.664437Z    info    klog    tls_provider "msg"="waiting to renew certificate"  "renewal-time"="2021-08-11T18:59:42.220329262Z"
2021-08-11T18:19:51.778837Z    error    klog    grpc_server "msg"="failed to authenticate request" "error"="could not get cluster csrdemo-aks-2's kube client" "serving_addr"="0.0.0.0:6443"
2021-08-11T18:19:53.474732Z    error    klog    grpc_server "msg"="failed to authenticate request" "error"="could not get cluster csrdemo-aks-2's kube client" "serving_addr"="0.0.0.0:6443"

Containers:
  cert-manager-istio-csr:
    Container ID:  containerd://4795229c6991b4b23c49b976201868a380028a395f2683a1d90586aab6eefc03
    Image:         quay.io/jetstack/cert-manager-istio-csr:v0.2.0
    Image ID:      quay.io/jetstack/cert-manager-istio-csr@sha256:1f3f5be50800120161a356b4e810c83d982d13ee1320652792421bdb0339425d
    Ports:         6443/TCP, 9402/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      cert-manager-istio-csr
    Args:
      --log-level=1
      --metrics-port=9402
      --readiness-probe-port=6060
      --readiness-probe-path=/readyz
      --certificate-namespace=istio-system
      --issuer-name=istio-ca
      --issuer-kind=Issuer
      --issuer-group=cert-manager.io
      --preserve-certificate-requests=false
      --root-ca-file=
      --serving-certificate-dns-names=cert-manager-istio-csr.cert-manager.svc
      --serving-certificate-duration=1h
      --trust-domain=cluster.local
      --cluster-id=Kubernetes
      --max-client-certificate-duration=1h
      --serving-address=0.0.0.0:6443
      --leader-election-namespace=istio-system
      --root-ca-configmap-name=istio-ca-root-cert

Generate workload certificates with DNS in the SAN

We have an application where the DNS name in the subject or SAN is validated when checking communication with the mounted istio-proxy certs. I noticed that Istio used to have the DNS field in the SAN, but now it's only the URI.

I have tried istio-csr and I like that you can control the issuer and renew the CA certificates but can we use a specific workload certificate configuration as well to include DNS in the SAN?

Failing to integrate with GCP CAS

Description

We're using GCP CAS as our ClusterIssuer, which has been set up and works completely fine end-to-end using the traditional Certificate resources to request Ingress TLS certs. However, we'd like to use the same issuer with istio-csr to issue certs to our services running in the mesh, so we can provide mTLS within the cluster from the same CA, and so east-west traffic in our multi-cluster, multi-mesh topology is also mTLS.

I'm deploying the default httpbin service through the istio examples and I can see the following logs:

2021-08-23T05:58:52.349778Z	warn	ca	ca request failed, starting attempt 1 in 103.08759ms
2021-08-23T05:58:52.453167Z	warn	ca	ca request failed, starting attempt 2 in 205.253929ms
2021-08-23T05:58:52.658658Z	warn	ca	ca request failed, starting attempt 3 in 391.517219ms
2021-08-23T05:58:53.050399Z	warn	ca	ca request failed, starting attempt 4 in 720.225847ms
2021-08-23T05:58:53.770959Z	error	googleca	Failed to create certificate: rpc error: code = Unavailable desc = connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority"
2021-08-23T05:58:53.771020Z	warn	sds	failed to warm certificate: failed to generate workload certificate: rpc error: code = Unavailable desc = connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority"
2021-08-23T05:58:55.449855Z	warn	Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected

I can see my CA being loaded in istiod correctly as I'd expect. (DUMMY CA)

istiod-asm-196-1-75b87cc7f-bgzx4 discovery 2021-08-23T06:13:08.757739Z	info	Istiod certificates are reloaded
istiod-asm-196-1-75b87cc7f-bgzx4 discovery 2021-08-23T06:13:08.757902Z	info	x509 cert [0] - Issuer: "CN=my-root,OU=my-ou,O=my-ca", Subject: "", SN: <>, NotBefore: "2021-08-23T06:12:52Z", NotAfter: "2021-08-23T07:12:51Z"

This led me to believe that maybe something on the istio-csr side may be causing the issue, but I've set verbosity to 5 and still don't have anything further to add context. Any further insights would be greatly appreciated.

I've also drafted up a diagram with the high level integration of the services and how it works together at present - maybe this will highlight something is either missing or incorrect.


CA isn't properly propagated when using Vault

Hello again, this is a related issue I have after setting up Vault as a CA #105. In the previous issue I was able to create a valid chain by specifying the CA in istio-csr using app.tls.rootCAFile.

Now I'm trying to set up the cluster from scratch and I've found an issue in the certificate chain used by istiod or istio-csr, I'm not sure which yet.
The problem I'm facing is that the Istio ingress and egress gateway pods don't become healthy, and in the logs we can see an error saying that the certificate is signed by an untrusted authority. At this point I can't tell which service they are trying to reach; I would assume it's istiod or istio-csr.

I've validated that the istio-ca-root-cert is correctly propagated to all the namespaces with the same CA I loaded in istio-csr using app.tls.rootCAFile.
What I've also noticed is that at this point we have two CertificateRequests generated, istio-csr-g84kt and istiod-ww4vx, and both of them have the intermediate in the CA field instead of the root CA I loaded in istio-csr.

My assumption here is that istio-ingressgateway correctly trusts the CA loaded in istio-ca-root-cert, but the TLS handshake with istiod or istio-csr is failing because they are returning the intermediate as the CA.
Additionally, to validate the previous assumption, any deployment fails to create pods with a handshake error as well.

I've also tried to load the full CA chain into istio-csr, by loading a file containing the intermediate and the root CA using app.tls.rootCAFile. With this, the ingress and egress gateway pods were able to run successfully and normal deployments were able to schedule their pods, but the certificate chain in the application's istio-proxy is now invalid, as it has the intermediate duplicated.

I think the problem here is that the CA loaded from istio-csr is not correctly propagated to these first CertificateRequests.

istio-csr config used:

kubectl create secret generic istio-root-ca --from-file=ca.pem=ca.pem -n cert-manager

helm install -n cert-manager cert-manager-istio-csr jetstack/cert-manager-istio-csr \
--set app.certmanager.issuer.name=vault-issuer \
--set app.certmanager.issuer.kind=ClusterIssuer \
--set app.certmanager.namespace=istio-system \
--set app.certmanager.preserveCertificateRequests=true \
--set "app.tls.rootCAFile=/var/run/secrets/istio-csr/ca.pem" \
--set "volumeMounts[0].name=root-ca" \
--set "volumeMounts[0].mountPath=/var/run/secrets/istio-csr" \
--set "volumes[0].name=root-ca" \
--set "volumes[0].secret.secretName=istio-root-ca" \
--version 0.3.0
Istio Operator config used
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
spec:
  profile: "demo"
  hub: gcr.io/istio-release
  values:
    global:
      # Change certificate provider to cert-manager istio agent for istio agent
      caAddress: cert-manager-istio-csr.cert-manager.svc:443
      proxy:
        privileged: true # allows to tcpdump envoy proxy connections
  components:
    pilot:
      k8s:
        env:
          # Disable istiod CA Sever functionality
        - name: ENABLE_CA_SERVER
          value: "false"
        overlays:
        - apiVersion: apps/v1
          kind: Deployment
          name: istiod
          patches:

            # Mount istiod serving and webhook certificate from Secret mount
          - path: spec.template.spec.containers.[name:discovery].args[-1]
            value: "--tlsCertFile=/etc/cert-manager/tls/tls.crt"
          - path: spec.template.spec.containers.[name:discovery].args[-1]
            value: "--tlsKeyFile=/etc/cert-manager/tls/tls.key"
          - path: spec.template.spec.containers.[name:discovery].args[-1]
            value: "--caCertFile=/etc/cert-manager/ca/root-cert.pem"

          - path: spec.template.spec.containers.[name:discovery].volumeMounts[-1]
            value:
              name: cert-manager
              mountPath: "/etc/cert-manager/tls"
              readOnly: true
          - path: spec.template.spec.containers.[name:discovery].volumeMounts[-1]
            value:
              name: ca-root-cert
              mountPath: "/etc/cert-manager/ca"
              readOnly: true

          - path: spec.template.spec.volumes[-1]
            value:
              name: cert-manager
              secret:
                secretName: istiod-tls
          - path: spec.template.spec.volumes[-1]
            value:
              name: ca-root-cert
              configMap:
                defaultMode: 420
                name: istio-ca-root-cert

using Vault Issuer

The demo works using a self-signed issuer on Istio 1.8.2. When I attempt to use a Vault issuer (using Helm values to target the Vault issuer), I can see that the cert is signed successfully:

❯ kubectl get cert -n istio-system
NAME     READY   SECRET       AGE
istiod   True    istiod-tls   9m32s

The istio-ca-root-cert ConfigMap is also populated correctly and propagated to all the namespaces, so far so good; no errors reported by istiod nor istio-csr.

I then deploy two fresh workloads httpbin and sleep with PeerAuthentication set to STRICT to enforce mTLS. The workloads come up fine; however when trying to curl between the workloads, I'm getting the following error:

❯ kubectl exec -it sleep-f8cbf5b76-7drgg -c sleep -- curl httpbin:8000/headers
upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED

Not sure if this setup is supported: I currently have a private root CA hosted in AWS (acm-pca) and the intermediate CA in the chain deployed via Vault's pki engine.

The intent is to ensure that the intermediate CA and associated key never leaves vault.

any suggestions?

Thanks

E2E tests running against the wrong k8s version

E.g. for pull-cert-manager-istio-csr-k8s-v1-22-istio-v1-11

With this build YAML:

env:
- name: K8S_VERSION
  value: 1.22.3
- name: ISTIO_VERSION
  value: 1.11.4

Producing these logs

We see that kind is started with node image v1.21.1 and not v1.22.3:

>> creating kind cluster...
Creating cluster "istio-demo" ...
 • Ensuring node image (kindest/node:v1.21.1) 🖼  ...
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼

The cluster is created here:

I guess kind doesn't respect K8S_VERSION - we could do something like we do for cert-manager.
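
For comparison, kind only honours a requested version when the node image is pinned explicitly, along these lines (a sketch; the tag must match a published kindest/node image):

kind create cluster --name istio-demo --image kindest/node:v1.22.3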

Document best-practices for minimal vault role configuration for istio-csr

When configuring a Vault issuer for istio-csr, the least privileged Vault role configurations are not very obvious.

We have been through this particular problem recently and can supply a quick guide around minimal policy for any PKI engine role that is dedicated to istio-csr cert issuance.

We could even show a fully worked example in kind in an examples/ directory under docs/?
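
As a starting point, the Vault policy can be scoped to just the sign endpoint of a dedicated role (a sketch, assuming a PKI mount at pki_int and a role named istio-ca):

vault policy write istio-csr - <<EOF
path "pki_int/sign/istio-ca" {
  capabilities = ["create", "update"]
}
EOF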

public ca.crt aka caBundle is not being updated/propagated until the cert-manager and istiod components are restarted

When the ca.crt value is updated, it is not propagated through cert-manager and istiod, and sidecars hit the well-known issue below: https://istio.io/latest/docs/ops/common-problems/injection/#x509-certificate-related-errors

with the following error message:

Error creating: Internal error occurred: failed calling webhook "sidecar-injector.istio.io": Post "https://istiod.istio-system.svc:443/inject?timeout=30s": x509: certificate signed by unknown authority

Restarting istiod does not address the issue; the cert-manager stack has to be restarted as well for the ca.crt propagation to take effect.
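
The restart workaround amounts to something like this (a sketch; deployment names depend on the install):

kubectl rollout restart deployment -n cert-manager cert-manager cert-manager-istio-csr
kubectl rollout restart deployment -n istio-system istiod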

istiod version: 1.8.6
cert-manager: 1.3.1
cert-manager-istio-csr: 0.3.0

Specifying a trust domain

I would like to deploy Istio CSR to an Istio cluster configured to use a specific trust domain (something other than cluster.local).

I poked around the Helm chart and the code to see if this was possible but I don't think it is. I've created a PR that I've built and deployed in my cluster which seems to do what I need.
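
For later readers: the --trust-domain flag visible in the pod specs elsewhere on this page is the resulting knob. A sketch of setting it via Helm, assuming the chart exposes it as app.tls.trustDomain:

helm upgrade --install -n cert-manager cert-manager-istio-csr jetstack/cert-manager-istio-csr \
  --set app.tls.trustDomain=example.internal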

Environment details (remove if not applicable):

  • Kubernetes version: 1.18
  • Cloud-provider/provisioner: AWS EKS
  • cert-manager version: 1.2.0
  • Install method: Helm

/kind feature

Getting remote error: tls: bad certificate with istio 1.12.2

Can anyone help me debug this issue?

{"level":"info","time":"2022-02-02T09:21:52.941064Z","msg":"http: TLS handshake error from 10.15.0.59:48924: remote error: tls: bad certificate"}
{"level":"info","time":"2022-02-02T09:24:36.574966Z","msg":"http: TLS handshake error from 10.15.0.59:49800: remote error: tls: bad certificate"}
{"level":"info","time":"2022-02-02T09:30:45.256373Z","msg":"http: TLS handshake error from 10.15.0.59:51786: remote error: tls: bad certificate"}
{"level":"info","time":"2022-02-02T09:30:45.296865Z","msg":"http: TLS handshake error from 10.15.0.59:51788: remote error: tls: bad certificate"}

The link for download manifest is broken

The command
curl -sSL https://raw.githubusercontent.com/cert-manager/istio-csr/main/docs/istio-config-getting_started.yaml > istio-install-config.yaml

generates a file with "404" content.

The link is broken.

Not supplying static root CA "effectively making istio-csr TOFU"

First, thanks for this project! It glues together istio and cert-manager really well.

I am curious about a recent change in the README.md: it now highly recommends explicitly supplying a static root CA (PR #101):

It is highly recommended that the root CA certificates are statically defined in istio-csr. If they are not, istio-csr will "discover" the root CA certificates when requesting its serving certificate, effectively making istio-csr TOFU.

I am not sure though, how the loading of the root CA cert from the serving certificate request's CertificateRequest.status would make istio-csr TOFU.

  • When supplying it statically, istio-csr would trust K8s to supply the correct root CA file.
  • When not supplying it statically, it would rely on the configured K8s RBAC to prevent anything other than cert-manager or other trusted components from making changes to CertificateRequest resources. At least in my understanding, this does not mean TOFU, as trust was already established via K8s RBAC mechanisms and service account authentication.

So, as far as I understand, both methods should be reasonably secure, granted RBAC is configured well.

Am I missing something here? Is the goal to protect against a compromised cert-manager?

Allow configuring filter for namespaces to create the RootCA ConfigMap

istio-csr starts a control loop that creates a ConfigMap istio-ca-root-cert containing the root certificate used by istio-proxies.
Not all namespaces require this ConfigMap, since no istio-proxy is deployed there, or the root CA bundle is distributed in a different manner.

Given this context, a parameter to filter the namespaces that receive the ConfigMap would be useful, similar to the istio-injection=enabled label used by Istio to trigger the MutatingWebhook.
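
A sketch of what such a filter could look like (the flag here is hypothetical; it only illustrates the label-selector idea from this request, which newer releases address via the namespaceSelector mentioned in the introduction above):

cert-manager-istio-csr \
  --root-ca-configmap-name=istio-ca-root-cert \
  --namespace-selector="istio-injection=enabled"   # hypothetical flag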

Support ARM64 images

Is there any scope for adding ARM64 compatibility? People using Graviton 2 instances, or running on other ARM SBC devices, will run into this architecture issue. You could use docker buildx to build multi-architecture images, as sketched after the error below.

standard_init_linux.go:228: exec user process caused: exec format error
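
For reference, a multi-arch build along the lines suggested above (a sketch; the image name and platform list are illustrative):

docker buildx build --platform linux/amd64,linux/arm64 \
  -t example.org/cert-manager-istio-csr:dev --push .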

Support for multiple istio ingress gateways

I am noticing that there is a hard dependency on the istio-system namespace with respect to how the workload certificates can be created. I have an environment, where istiod pods and the regular istio-ingressgateway pods are in istio-system, but I need to use a custom istio-ingressgateway also for my own namespace.

Istio allows you to do this, and when not using istio-csr the "cacerts" setup does work with multiple ingress gateways. But here, the Issuer for the istiod-tls secret (i.e. istio-ca) is only available in istio-system and not in the custom namespace.

Is this supported?

commonName required for AWS PCA

AWS PCA expects the commonName to be passed in as part of the CSR. Adding commonName: istiod.istio-system.svc in the Certificate.yaml file was all that was needed.

spec:
  dnsNames:
  - istiod.istio-system.svc
  uris:
  - spiffe://cluster.local/ns/istio-system/sa/istiod-service-account
  secretName: istiod-tls
  commonName: istiod.istio-system.svc
...

I don't mind creating a PR for this so let me know if that is preferred.

cert-manager-istio-csr pods lose readiness when new pods are being deployed.

I have been noticing this behavior consistently: when new pods are deployed with sidecars, I start seeing the istio-csr pod with 0/1 status. If you need more information, I can try to provide it.

I am using Istio 1.9.2 with K8s 1.20

I see this error on the istio-csr pod logs.

E0521 19:48:39.995122 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cert-manager-istio-csr-658d97f5df-sjglt.16812cb7fb46fdf6", GenerateName:"", Namespace:"istio-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Lease", Namespace:"istio-system", Name:"cert-manager-istio-csr-658d97f5df-sjglt", UID:"582a50bf-07bb-4fbc-a533-39e1725d06e1", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"24314978", FieldPath:""}, Reason:"LeaderElection", Message:"cert-manager-istio-csr-658d97f5df-sjglt_bd60581f-2123-4ea7-938a-1a0a6a007acb stopped leading", Source:v1.EventSource{Component:"cert-manager-istio-csr-658d97f5df-sjglt_bd60581f-2123-4ea7-938a-1a0a6a007acb", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02221835ff9bbf6, ext:5169901101694, loc:(*time.Location)(0x277d240)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02221835ff9bbf6, ext:5169901101694, loc:(*time.Location)(0x277d240)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unknown desc = exec (try: 500): database is locked' (will not retry!)

csr readiness probe failed, istio ingress pod also failed

The pod was deployed with the latest image but still failed. Nothing of note in the logs.

docker images | grep csr
quay.io/jetstack/cert-manager-istio-csr v0.2.0 5bcc16c3bb27 5 weeks ago 103MB

❯ kubectl describe po cert-manager-istio-csr-58b7d4c64c-f4v82 -n cert-manager
Name:         cert-manager-istio-csr-58b7d4c64c-f4v82
Namespace:    cert-manager
Priority:     0
Node:         k8s-master-0/10.2.3.45
Start Time:   Tue, 10 Aug 2021 22:22:06 +0000
Labels:       app=cert-manager-istio-csr
              pod-template-hash=58b7d4c64c
Annotations:  cni.projectcalico.org/podIP: 192.168.0.126/32
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "k8s-pod-network",
                    "ips": [
                        "192.168.0.126"
                    ],
                    "default": true,
                    "dns": {}
                }]
Status:       Running
IP:           192.168.0.126
IPs:
  IP:           192.168.0.126
Controlled By:  ReplicaSet/cert-manager-istio-csr-58b7d4c64c
Containers:
  cert-manager-istio-csr:
    Container ID:  docker://c6ac0463f34f6ae8611f40cfc8d800be6a5e02d5a9d2722cf5887a1b66e003da
    Image:         quay.io/jetstack/cert-manager-istio-csr:v0.2.0
    Image ID:      docker-pullable://quay.io/jetstack/cert-manager-istio-csr@sha256:1f3f5be50800120161a356b4e810c83d982d13ee1320652792421bdb0339425d
    Ports:         6443/TCP, 9402/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      cert-manager-istio-csr
    Args:
      --log-level=1
      --metrics-port=9402
      --readiness-probe-port=6060
      --readiness-probe-path=/readyz
      --certificate-namespace=istio-system
      --issuer-name=istio-ca
      --issuer-kind=Issuer
      --issuer-group=cert-manager.io
      --preserve-certificate-requests=false
      --root-ca-file=
      --serving-certificate-dns-names=cert-manager-istio-csr.cert-manager.svc
      --serving-certificate-duration=1h
      --trust-domain=cluster.local
      --cluster-id=Kubernetes
      --max-client-certificate-duration=1h
      --serving-address=0.0.0.0:6443
      --leader-election-namespace=istio-system
      --root-ca-configmap-name=istio-ca-root-cert
    State:          Running
      Started:      Tue, 10 Aug 2021 22:22:54 +0000
    Ready:          False
    Restart Count:  0
    Readiness:      http-get http://:6060/readyz delay=3s timeout=1s period=7s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from cert-manager-istio-csr-token-hzcbd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  cert-manager-istio-csr-token-hzcbd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cert-manager-istio-csr-token-hzcbd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age               From                   Message
  ----     ------     ----              ----                   -------
  Normal   Scheduled  69s               default-scheduler      Successfully assigned cert-manager/cert-manager-istio-csr-58b7d4c64c-f4v82 to k8s-master-0
  Normal   Pulling    61s               kubelet, k8s-master-0  Pulling image "quay.io/jetstack/cert-manager-istio-csr:v0.2.0"
  Normal   Pulled     22s               kubelet, k8s-master-0  Successfully pulled image "quay.io/jetstack/cert-manager-istio-csr:v0.2.0"
  Normal   Created    22s               kubelet, k8s-master-0  Created container cert-manager-istio-csr
  Normal   Started    21s               kubelet, k8s-master-0  Started container cert-manager-istio-csr
  Warning  Unhealthy  5s (x2 over 12s)  kubelet, k8s-master-0  Readiness probe failed: HTTP probe failed with statuscode: 500

Then I copied the source code to the server and modified the istio-csr Helm chart configuration to pass the readiness probe step; it does run with no error logs. I then tried to deploy Istio with the YAML file istio-config-1.10.0.yaml, but it seems the service is still abnormal.

Istio ingress pod logs:

2021-08-10T22:46:41.207977Z warn Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2021-08-10T22:46:41.560656Z warn sds failed to warm certificate: failed to generate workload certificate: create certificate: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 10.111.71.164:443: connect: connection timed out"

Feature Request: Support Multiple Revisions

Due to the explicit certificate naming, istio-csr's Helm chart doesn't support non-default revisions.

Are there any plans to remediate this? One approach that comes to mind would be to accept an array of revisions in the helm chart (with the option to exclude the default revision) and loop over that value in the certificate.yaml template.
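
For what it's worth, the app.istio.revisions Helm value used elsewhere on this page already takes a brace-delimited list, so something like the following may be the shape of a fix (a sketch; the revision names are placeholders):

helm upgrade --install -n cert-manager cert-manager-istio-csr jetstack/cert-manager-istio-csr \
  --set "app.istio.revisions={default,canary}"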

Feature Request - Certificate Request Timeout/Retries

By design, cert-manager does NOT retry failed CertificateRequests:

Whether issuance of the certificate signing request was successful or not, a retry of the issuance will not happen. It is the responsibility of some other controller to manage the logic and life cycle of CertificateRequests.

Furthermore, there is no timeout mechanism in place for CRs that stay in the "Pending" state for an unacceptable amount of time (often due to a transient or unrecoverable error communicating with the Issuer). I'd like to propose that both of these mechanisms be added to istio-csr. For systems with a low certificate lifetime (like a hardened service mesh), auto-retries are a big help in preventing network communication issues due to expired certs. I've got a couple of ideas around design, but I'd like to solicit feedback first.

Allow override of istiod-tls certificate common name in Helm chart (for non-standard istiod deployments)

Hello

The current istio-csr Helm chart deploys the istiod-tls serving certificate as a cert-manager Certificate object at installation. As you can see in the linked template, the common name is statically set to istiod.istio-system.svc.

In certain installations, like OpenShift, this will not be the correct address for istiod, as the Red Hat Service Mesh operator appends the revision name to the service name for Pilot. A parameter to override this default would be important for this use case, and perhaps other scenarios too.

Does this support Vault issuer?

I followed these steps up to the point of having the vault issuer:
https://learn.hashicorp.com/tutorials/vault/kubernetes-cert-manager?in=vault/kubernetes

then I provided that issuer to istio-csr by setting the value certificate.name to vault-issuer, so instead of the self-signed issuer I have Vault.

So I did not do this:
https://github.com/cert-manager/istio-csr/blob/master/hack/demo/cert-manager-bootstrap-resources.yaml
instead I have the issuer from the Vault documentation, and the cert
https://github.com/cert-manager/istio-csr/blob/master/deploy/charts/istio-csr/templates/certificate.yaml
is made from that (see below, it does show up) with a Ready True status.

istio-csr comes up and everything looks like it's in place; the certificate seems to have gone through and the Vault issuer looks OK:

kubectl get certificate -A
NAMESPACE      NAME                       READY   SECRET                     AGE
istio-system   istiod                     True    istiod-tls                 9m37s

kubectl get certificaterequest -A
NAMESPACE      NAME                                  APPROVED   DENIED   READY   ISSUER              REQUESTOR                                         AGE
istio-system   istiod-1609388919                                         True    vault-issuer        system:serviceaccount:cert-manager:cert-manager   9m48s

kubectl get issuer -A
NAMESPACE      NAME           READY   AGE
istio-system   vault-issuer   True    12m

Then I submit the IstioOperator based on the docs here, and it does accept the manifest with the istio-csr modifications; istiod seems to come up but the ingressgateway does not:

kubectl get pods -n istio-system
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-76cd4879b4-4pnp9   0/1     Running   0          29m
istiod-6756549fcd-xpdnm                 1/1     Running   0          29m

from istio ingressgateway:


2021-04-30T21:17:48.050660Z     info    PilotSAN []string{"istiod.istio-system.svc"}
2021-04-30T21:17:48.050674Z     info    MixerSAN []string{"spiffe://cluster.local/ns/istio-system/sa/istio-mixer-service-account"}
2021-04-30T21:17:48.050708Z     info    sa.serverOptions.CAEndpoint == cert-manager-istio-csr.cert-manager.svc:443
2021-04-30T21:17:48.050715Z     info    Using user-configured CA cert-manager-istio-csr.cert-manager.svc:443
2021-04-30T21:17:48.050718Z     info    istiod uses self-issued certificate
2021-04-30T21:17:48.050759Z     info    the CA cert of istiod is: -----BEGIN CERTIFICATE-----
<redacted>
-----END CERTIFICATE-----
...
...


warning envoy config    gRPC config for type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret rejected: Failed to load certificate chain from <inline>
2021-04-30T21:17:49.002741Z     warning envoy main      there is no configured limit to the number of allowed active connections. Set a limit via the runtime key overload.global_downstream_max_connections
2021-04-30T21:17:49.003369Z     error   sds     resource:default received error: code:13  message:"Failed to load certificate chain from <inline>". Will not respond until next secret update

...
...
then this just repeats forever:

2021-04-30T21:37:40.008063Z     warning envoy config    StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure
2021-04-30T21:37:40.816847Z     warn    Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2021-04-30T21:37:42.815700Z     warn    Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2021-04-30T21:37:44.815994Z     warn    Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2021-04-30T21:37:46.817274Z     warn    Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2021-04-30T21:37:48.815762Z     warn    Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2021-04-30T21:37:50.818775Z     warn    Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2021-04-30T21:37:52.815827Z     warn    Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2021-04-30T21:37:54.635857Z     warning envoy config    StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure
2021-04-30T21:37:54.815743Z     warn    Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2021-04-30T21:37:56.816362Z     warn    Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2021-04-30T21:37:58.820236Z     warn    Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected

from istiod:

http: TLS handshake error from 10.66.15.64:33293: remote error: tls: bad certificate

secret shows up as well:

kubectl get secrets -n istio-system
NAME                                               TYPE                                  DATA   AGE
...
...
istiod-tls                                         kubernetes.io/tls                     3      33m
...

When I've researched others who are getting that "Envoy proxy is NOT ready" spam, it had something to do with the way the cert was presented/formatted. But all I did was follow the instructions and have it generated by Vault, which means either Envoy didn't like how Vault formatted it, or istiod didn't like how istio-csr formatted it?

I'm not sure what else it could be. Are there certain minimum versions I should be aware of? The only thing I see is that Istio needs to be 1.7+, and mine is 1.7.2; is there a minimum version for cert-manager?

Or, if there are any steps I may have missed, please let me know, but I've looked at these docs back and forth and I can't figure out what it is.

istio 1.9 support

The README indicates that this only supports istio 1.7 and 1.8. What about 1.9 support?

Multi-cluster private CA root cert: how do I identify the cluster identity on workloads?

Steps followed.

  1. helm_v3 install cert-manager jetstack/cert-manager \
       --namespace istio-system \
       --set installCRDs=true

  2. kubectl create secret tls root-ca-key-pair --cert=root-cert.pem --key=root-key.pem -n istio-system

  3. kubectl apply -n istio-system -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: root-ca-key-pair
  namespace: istio-system
spec:
  ca:
    secretName: root-ca-key-pair
EOF

  4. helm_v3 upgrade --install -n istio-system cert-manager-istio-csr jetstack/cert-manager-istio-csr \
       --set app.server.clusterID=anand-eks \
       --set app.certmanager.issuer.name=root-ca-key-pair \
       --set app.certmanager.preserveCertificateRequests=true \
       --set "app.istio.revisions={1-12-1}" \
       --set "app.tls.certificateDNSNames={cert-manager-istio-csr.istio-system.svc}" \
       --set app.logLevel=3

Istio-csr should expose critical prometheus metrics

Istio-CSR is likely to be a critical-path component for the availability of an Istio deployment, so there should be white-box metrics into its behaviour.

If it is down, or if the gRPC server is having trouble servicing Citadel client requests, it's important to alert.

The kind of metrics that spring to mind:

  • csr_request_initiated_count

  • csr_request_completed_count (maybe including a gRPC code label for unexpected or unauthenticated errors, and ok for success).
    With these two we can track any incomplete or errored requests; alternatively we could just use a separate counter for errors.

  • grpc_server_csr_duration_seconds (a histogram or summary for this)

  • general saturation metrics exposed by the Go instrumentation will cover any memory / load issues.

Other potentially useful metrics:

  • bootstrap_cert_error (renewal of the TLS dialer provider also?)

  • ssl_handshake_failures, which might not be caught otherwise since it doesn't reach the gRPC handler, I think.
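
The deployments shown on this page already expose --metrics-port=9402, so a scrape target exists today; the counters above would hang off it. A sketch of a Prometheus scrape job (the metric names in this issue are proposals, not existing series):

scrape_configs:
  - job_name: istio-csr
    static_configs:
      - targets: ["cert-manager-istio-csr.cert-manager.svc:9402"]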

[doc] confusion with `ca.pem` and Readiness probe failed on ingress and egress gateways

Hello all,
In the README.md, I am confused by this line --from-file=ca.pem=ca.pem in the section Load root CAs from file ca.pem (Preferred).

I do not know what ca.pem file I should use, and I do not know if this CA has anything to do with the cert-manager Issuer or ClusterIssuer that we have to create. It is not clear to me what ca.pem is and where it comes from. I wish I could choose ca.pem (Let's Encrypt or a custom CA). Maybe this part could be explained more.

Right now, after creating the cert-manager CA issuer, I generate a self-signed ca.pem file with openssl (which has nothing to do with the cert-manager CA issuer), then I follow the steps from Load root CAs from file ca.pem (Preferred).

But I get an error: my ingress and egress gateways both have a Readiness probe failed error and an x509: certificate signed by unknown authority error.

I suspect my problem comes from the line kubectl create secret generic istio-root-ca --from-file=ca.pem=ca.pem -n cert-manager and from the cert-manager CA issuer.
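
For what it's worth, the ca.pem that istio-csr loads must be the public certificate of the same CA the Issuer signs with, not an unrelated self-signed one. A sketch, assuming a CA Issuer backed by a Secret named my-ca-key-pair in cert-manager:

# Extract the certificate the CA Issuer actually signs with...
kubectl get secret my-ca-key-pair -n cert-manager \
  -o jsonpath='{.data.tls\.crt}' | base64 -d > ca.pem

# ...and hand that same certificate to istio-csr
kubectl create secret generic istio-root-ca --from-file=ca.pem=ca.pem -n cert-manager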

Version

Istio

I use the IstioOperator install from Readme.

$ istioctl version

client version: 1.11.3
control plane version: 1.11.3
data plane version: none

Kubernetes

I use Kubernetes KIND.

$ kind version
kind v0.11.1 go1.17.1 linux/amd64
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-21T23:01:33Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}

Some logs

Pods not ready

kubectl get pods -n istio-system
NAME                                    READY   STATUS    RESTARTS   AGE
istio-egressgateway-6f9c77d6d9-s7t9j    0/1     Running   0          7h23m
istio-ingressgateway-5f7dc9c95f-n9n6r   0/1     Running   0          7h23m
istiod-76754d847-vqz29                  1/1     Running   0          7h23m

Readiness probe failed on istio-egressgateway and istio-ingressgateway

kubectl describe pods -n istio-system
[...TRUNCATED...]
Events:
  Type     Reason     Age                     From     Message
  ----     ------     ----                    ----     -------
  Warning  Unhealthy  4m1s (x36450 over 20h)  kubelet  Readiness probe failed: Get "http://10.2.1.10:15021/healthz/ready": dial tcp 10.2.1.10:15021: connect: connection refused

More precise logs from gateway pod and container: x509 unknown

kubectl logs -n istio-system istio-egressgateway-6f9c77d6d9-s7t9j -c istio-proxy
2021-10-22T08:58:58.439779Z	info	FLAG: --concurrency="0"
2021-10-22T08:58:58.440209Z	info	FLAG: --domain="istio-system.svc.cluster.local"
2021-10-22T08:58:58.440357Z	info	FLAG: --help="false"
2021-10-22T08:58:58.440520Z	info	FLAG: --log_as_json="false"
2021-10-22T08:58:58.440662Z	info	FLAG: --log_caller=""
2021-10-22T08:58:58.440823Z	info	FLAG: --log_output_level="default:info"
2021-10-22T08:58:58.440881Z	info	FLAG: --log_rotate=""
2021-10-22T08:58:58.441040Z	info	FLAG: --log_rotate_max_age="30"
2021-10-22T08:58:58.441181Z	info	FLAG: --log_rotate_max_backups="1000"
2021-10-22T08:58:58.441317Z	info	FLAG: --log_rotate_max_size="104857600"
2021-10-22T08:58:58.441461Z	info	FLAG: --log_stacktrace_level="default:none"
2021-10-22T08:58:58.441634Z	info	FLAG: --log_target="[stdout]"
2021-10-22T08:58:58.441687Z	info	FLAG: --meshConfig="./etc/istio/config/mesh"
2021-10-22T08:58:58.441865Z	info	FLAG: --outlierLogPath=""
2021-10-22T08:58:58.442010Z	info	FLAG: --proxyComponentLogLevel="misc:error"
2021-10-22T08:58:58.442169Z	info	FLAG: --proxyLogLevel="warning"
2021-10-22T08:58:58.442338Z	info	FLAG: --serviceCluster="istio-proxy"
2021-10-22T08:58:58.442537Z	info	FLAG: --stsPort="0"
2021-10-22T08:58:58.442715Z	info	FLAG: --templateFile=""
2021-10-22T08:58:58.442814Z	info	FLAG: --tokenManagerPlugin="GoogleTokenExchange"
2021-10-22T08:58:58.442922Z	info	Version 1.11.3-6bda7c161d3925c48fbea3f297ffa52461893f3b-Clean
2021-10-22T08:58:58.443456Z	info	Proxy role	ips=[10.2.1.16 fe80::aaaa] type=router id=istio-egressgateway-6f9c77d6d9-s7t9j.istio-system domain=istio-system.svc.cluster.local
2021-10-22T08:58:58.443736Z	info	Apply mesh config from file accessLogFile: /dev/stdout
defaultConfig:
  discoveryAddress: istiod.istio-system.svc:15012
  proxyMetadata: {}
  tracing:
    zipkin:
      address: zipkin.istio-system:9411
enablePrometheusMerge: true
rootNamespace: istio-system
trustDomain: cluster.local
2021-10-22T08:58:58.448687Z	info	Effective config: binaryPath: /usr/local/bin/envoy
configPath: ./etc/istio/proxy
controlPlaneAuthPolicy: MUTUAL_TLS
discoveryAddress: istiod.istio-system.svc:15012
drainDuration: 45s
parentShutdownDuration: 60s
proxyAdminPort: 15000
proxyMetadata: {}
serviceCluster: istio-proxy
statNameLength: 189
statusPort: 15020
terminationDrainDuration: 5s
tracing:
  zipkin:
    address: zipkin.istio-system:9411

2021-10-22T08:58:58.448867Z	info	JWT policy is third-party-jwt
2021-10-22T08:58:58.466682Z	info	Opening status port 15020
2021-10-22T08:58:58.466590Z	info	CA Endpoint cert-manager-istio-csr.cert-manager.svc:443, provider Citadel
2021-10-22T08:58:58.471631Z	info	Using CA cert-manager-istio-csr.cert-manager.svc:443 cert with certs: var/run/secrets/istio/root-cert.pem
2021-10-22T08:58:58.477719Z	info	citadelclient	Citadel client using custom root cert: cert-manager-istio-csr.cert-manager.svc:443
2021-10-22T08:58:58.536343Z	info	ads	All caches have been synced up in 101.910366ms, marking server ready
2021-10-22T08:58:58.537002Z	info	sds	SDS server for workload certificates started, listening on "etc/istio/proxy/SDS"
2021-10-22T08:58:58.537042Z	info	xdsproxy	Initializing with upstream address "istiod.istio-system.svc:15012" and cluster "Kubernetes"
2021-10-22T08:58:58.538024Z	info	sds	Starting SDS grpc server
2021-10-22T08:58:58.539498Z	info	Pilot SAN: [istiod.istio-system.svc]
2021-10-22T08:58:58.539595Z	info	starting Http service at 127.0.0.1:15004
2021-10-22T08:58:58.544922Z	info	Starting proxy agent
2021-10-22T08:58:58.544995Z	info	Epoch 0 starting
2021-10-22T08:58:58.545028Z	info	Envoy command: [-c etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --drain-strategy immediate --parent-shutdown-time-s 60 --local-address-ip-version v4 --bootstrap-version 3 --file-flush-interval-msec 1000 --disable-hot-restart --log-format %Y-%m-%dT%T.%fZ	%l	envoy %n	%v -l warning --component-log-level misc:error]
2021-10-22T08:58:58.740974Z	warning	envoy config	StreamAggregatedResources gRPC config stream closed: 14, connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority"
2021-10-22T08:58:58.851695Z	warning	envoy config	StreamAggregatedResources gRPC config stream closed: 14, connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority"
2021-10-22T08:58:59.028814Z	warn	ca	ca request failed, starting attempt 1 in 102.093205ms
2021-10-22T08:58:59.131582Z	warn	ca	ca request failed, starting attempt 2 in 217.620363ms
2021-10-22T08:58:59.350302Z	warn	ca	ca request failed, starting attempt 3 in 413.164804ms
2021-10-22T08:58:59.763817Z	warn	ca	ca request failed, starting attempt 4 in 790.034269ms
2021-10-22T08:58:59.764281Z	warning	envoy config	StreamAggregatedResources gRPC config stream closed: 14, connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority"
2021-10-22T08:59:00.555360Z	warn	sds	failed to warm certificate: failed to generate workload certificate: create certificate: rpc error: code = Unavailable desc = connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority"
2021-10-22T08:59:00.946959Z	warning	envoy config	StreamAggregatedResources gRPC config stream closed: 14, connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority"
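
The repeated "certificate signed by unknown authority" errors suggest the root CA the sidecars trust (distributed via the istio-ca-root-cert ConfigMap) does not match the chain that signed the serving certificates istiod and istio-csr present. A minimal diagnostic sketch, assuming the default ConfigMap name and that the istiod serving certificate lands in the istiod-tls Secret (both names are assumptions):

# Root the sidecars trust:
kubectl get configmap istio-ca-root-cert -n istio-system \
  -o jsonpath='{.data.root-cert\.pem}' | openssl x509 -noout -subject -issuer
# Issuer of istiod's serving certificate (Secret data is base64-encoded):
kubectl get secret istiod-tls -n istio-system \
  -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -issuer

If the two issuers differ, the Vault issuer is signing from a different root than the one being distributed to workloads.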

Using Vault issuer expects common name from cert-manager-istio-csr when attempting to create server cert

When I attempt to run this command:
helm --kube-context minikube install -n cert-manager -f istio-csr-values.yaml cert-manager-istio-csr jetstack/cert-manager-istio-csr

I get the following error:
error: failed to fetch initial serving certificate: failed to sign serving certificate: failed to wait for CertificateRequest cert-manager/istio-csr-hp5m5 to be signed: created CertificateRequest has failed: [{Approved True 2021-07-25 01:56:47 +0000 UTC cert-manager.io Certificate request has been approved by cert-manager.io} {Ready False 2021-07-25 01:56:47 +0000 UTC Failed Vault failed to sign certificate: failed to sign certificate by vault: Error making API request.

URL: POST https://vault.local:8200/v1/pki_int/sign/test-dot-com
Code: 400. Errors:

  • the common_name field is required, or must be provided in a CSR with "use_csr_common_name" set to true, unless "require_cn" is set to false}]

istio-csr-values.yaml has this content:
replicaCount: 3
service:
  port: 443
  type: NodePort
  nodePort: 30443
app:
  logLevel: 5
  certmanager:
    issuer:
      name: vault-istio-ca-issuer
      kind: Issuer
      group: cert-manager.io
    namespace: cert-manager
    preserveCertificateRequests: true
  tls:
    trustDomain: yenzara.com
    certificateDuration: 20s
    certificateDNSNames:
    - cert-manager-istio-csr.cert-manager.svc
  server:
    maxCertificateDuration: 5m
    serving:
      address: 0.0.0.0
      port: 6443
  controller:
    rootCAConfigMapName: istio-ca-root-cert
    leaderElectionNamespace: cert-manager
resources: {}
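
As the Vault error states, the role refuses CSRs without a common name, and the CSRs istio-csr submits carry an empty subject with URI SANs only. A hedged sketch of the fix, reusing the pki_int mount and test-dot-com role names from the error above (the remaining parameters are assumptions to tune against your own policy):

vault write pki_int/roles/test-dot-com \
    require_cn=false \
    allowed_uri_sans="spiffe://*" \
    allow_any_name=true \
    enforce_hostnames=false \
    max_ttl=72h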

Failing to integrate AWSPCA - CSR must mark the SAN extension critical when it has an empty subject

cert-manager and the AWS PCA issuer work without any issue; I can create certs via kind: Certificate.
I've installed istio-csr and configured it like this:

  certmanager:
    # -- Namespace to create CertificateRequests for both istio-csr's serving
    # certificate and incoming gRPC CSRs.
    namespace: istio-system
    # -- Don't delete created CertificateRequests once they have been signed.
    preserveCertificateRequests: false
    issuer:
      # -- Issuer name set on created CertificateRequests for both istio-csr's
      # serving certificate and incoming gRPC CSRs.
      name: istio-ca
      # -- Issuer kind set on created CertificateRequests for both istio-csr's
      # serving certificate and incoming gRPC CSRs.
      kind: AWSPCAClusterIssuer
      # -- Issuer group name set on created CertificateRequests for both
      # istio-csr's serving certificate and incoming gRPC CSRs.
      group: awspca.cert-manager.io
  tls:
    # -- The Istio cluster's trust domain.
    trustDomain: "cluster.local"
    # -- An optional file location to a PEM encoded root CA that the root CA
    # ConfigMap in all namespaces will be populated with. If empty, the CA
    # returned from cert-manager for the serving certificate will be used.
    rootCAFile: # /var/certs/ca.pem
    # -- The DNS names to request for the server's serving certificate which is
    # presented to istio-agents. istio-agents must route to istio-csr using one
    # of these DNS names.
    certificateDNSNames:
    - cert-manager-istio-csr.cert-manager.svc
    - istiod.istio-system.svc
    # -- Requested duration of gRPC serving certificate. Will be automatically
    # renewed.
    # Based on NIST 800-204A recommendations (SM-DR13).
    # https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-204A.pdf
    certificateDuration: 2160h
    # -- Requested duration of istio's Certificate. Will be automatically
    # renewed.
    # Based on NIST 800-204A recommendations (SM-DR13).
    # https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-204A.pdf
    # Warning: cert-manager does not allow a duration on Certificates less than
    # 1 hour.
    istiodCertificateDuration: 2160h

When istio-csr starts, it creates a CSR for its own serving certificate, which is approved and becomes ready, but then it creates one more CSR for the istiod certificate, which fails with this:

{"level":"error","ts":1629914189.9197729,"logger":"controllers.CertificateRequest","msg":"failed to request certificate from PCA","certificaterequest":"istio-system/istiod-xdqrc","error":"operation error ACM PCA: IssueCertificate, https response error StatusCode: 400, RequestID: 79b97ccc-8bb2-410f-8b98-072bb2174d5f, MalformedCSRException: CSR must mark the SAN extension critical when it has an empty subject.","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\ngithub.com/cert-manager/aws-privateca-issuer/pkg/controllers.(*CertificateRequestR

istio-system        istiod-xdqrc                                True                False   istio-ca           system:serviceaccount:cert-manager:cert-manager             0s
istio-system        istio-csr-8fq8r                             True                True    istio-ca           system:serviceaccount:cert-manager:cert-manager-istio-csr   4s
istio-system        istio-csr-8fq8r                             True                True    istio-ca           system:serviceaccount:cert-manager:cert-manager-istio-csr   4s
❯ k get cert
NAME       READY   SECRET       AGE
istio-ca   True    istiod-ca    36m
istiod     False   istiod-tls   26m

istio-ca was created via a certificate.yaml

Any idea what went wrong? Maybe I'm missing something with the basic integration.

Help Please 🤞🏽 🙏🏽
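
AWS PCA is enforcing RFC 5280 here: when a certificate request has an empty subject DN, the subjectAltName extension must be marked critical, and the CSR generated for the istiod certificate apparently does not set that flag. One way to confirm, reusing the istiod-xdqrc request from the log above (assuming the CertificateRequest is still present), is to decode it and look for the critical marker on the SAN extension:

kubectl get certificaterequest istiod-xdqrc -n istio-system \
  -o jsonpath='{.spec.request}' | base64 -d \
  | openssl req -noout -text | grep -A2 'Subject Alternative Name'

If "critical" does not appear next to the extension, the fix has to happen on the side generating the CSR, not in the PCA issuer.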

IstioOperator multiCluster:clusterName auth.go:35 certificate-provider "msg"="failed to authenticate request" "error"="could not get cluster cluster-1's kube client"

Hi Team,

I have tried running istio-csr in a single Istio (1.8.2) control plane cluster and it works perfectly.

After I change the IstioOperator config to the following:

  values:
    global:
      # Change certificate provider to cert-manager istio agent for istio agent
      caAddress: cert-manager-istio-csr.cert-manager.svc:443

      multiCluster:
        clusterName: cluster-1
      meshID: mesh1
      network: network1

and run the install, the ingress and egress gateways fail to be installed:

This will install the Istio demo profile with ["Istio core" "Istiod" "Ingress gateways" "Egress gateways"] components into the cluster. Proceed? (y/N) y
✔ Istio core installed
✔ Istiod installed
✘ Ingress gateways encountered an error: failed to wait for resource: resources not ready after 5m0s: timed out waiting for the condition
Deployment/istio-system/istio-ingressgateway
✘ Egress gateways encountered an error: failed to wait for resource: resources not ready after 5m0s: timed out waiting for the condition
Deployment/istio-system/istio-egressgateway

warning envoy config StreamSecrets gRPC config stream closed: 2, failed to get root cert

By checking the cert-manager-istio-csr pod logs, I can see the following error messages:

I0131 23:19:23.407936       1 controller.go:174] ca-root-controller/controller/configmap "msg"="Starting Controller" "configmap-name"="istio-ca-root-cert" "reconciler group"="" "reconciler kind"="ConfigMap" 
I0131 23:19:23.408270       1 controller.go:206] ca-root-controller/controller/configmap "msg"="Starting workers" "configmap-name"="istio-ca-root-cert" "reconciler group"="" "reconciler kind"="ConfigMap" "worker count"=1
I0131 23:19:23.408455       1 controller.go:174] ca-root-controller/controller/namespace "msg"="Starting Controller" "configmap-name"="istio-ca-root-cert" "reconciler group"="" "reconciler kind"="Namespace" 
I0131 23:19:23.408544       1 controller.go:206] ca-root-controller/controller/namespace "msg"="Starting workers" "configmap-name"="istio-ca-root-cert" "reconciler group"="" "reconciler kind"="Namespace" "worker count"=1
E0131 23:22:07.420725       1 auth.go:35] certificate-provider "msg"="failed to authenticate request" "error"="could not get cluster cluster-1's kube client"  
E0131 23:22:07.716655       1 auth.go:35] certificate-provider "msg"="failed to authenticate request" "error"="could not get cluster cluster-1's kube client"  
E0131 23:22:08.198507       1 auth.go:35] certificate-provider "msg"="failed to authenticate request" "error"="could not get cluster cluster-1's kube client"  
E0131 23:22:08.421118       1 auth.go:35] certificate-provider "msg"="failed to authenticate request" "error"="could not get cluster cluster-1's kube client"  
E0131 23:22:08.870275       1 auth.go:35] certificate-provider "msg"="failed to authenticate request" "error"="could not get cluster cluster-1's kube client"  

Is the multiCluster install going to be supported by istio-csr?

Thanks!
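
The "could not get cluster cluster-1's kube client" error indicates istio-csr only knows the cluster ID it was configured with, while the agents now authenticate as multiCluster.clusterName: cluster-1. A hedged sketch of the fix, using the agent.clusterID chart value that appears elsewhere in this tracker (newer chart versions may name it differently):

helm upgrade --install -n cert-manager cert-manager-istio-csr \
  jetstack/cert-manager-istio-csr \
  --set agent.clusterID=cluster-1

The value must match the clusterName in the IstioOperator config; a full multi-primary mesh beyond that may still need one istio-csr deployment per cluster.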

Installation into existing istio cluster

Hi, the installation instructions state: "Finally, install istio. Istio must be installed using the IstioOperator configuration changes within https://github.com/cert-manager/istio-csr/blob/master/hack/istio-config-1.8.2.yaml". This suggests that all the previous CRDs etc. would have been installed into a cluster without Istio running. Is it possible to install this into a cluster that already has IstioOperator running? (The use case is creating a GKE cluster with Istio enabled, and then installing this once that cluster is running.)
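
For what it's worth, a hedged sketch of applying the changes to an existing installation (assuming istioctl manages the current install and the referenced config matches your Istio version) is an in-place istioctl install followed by a workload restart so sidecars pick up the new caAddress:

curl -sSLo istio-config.yaml \
  https://raw.githubusercontent.com/cert-manager/istio-csr/master/hack/istio-config-1.8.2.yaml
istioctl install -f istio-config.yaml
# Restart meshed workloads so their sidecars request certificates from istio-csr.
# <your-app-namespace> is a placeholder for each namespace with injected sidecars.
kubectl rollout restart deployment -n <your-app-namespace>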

Serving CertificateRequest failures should be deleted

I experienced an issue with istio-csr trying to renew its own serving certificate: upon fetch failure, the CertificateRequest resource is kept and not deleted.

But renewal happens periodically:

for {

As a result my cluster accumulated a lot of stale CertificateRequest resources.

This then caused the cert-manager controller that watches CertificateRequests to be OOMKilled, resulting in further CertificateRequest failures.
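
Until istio-csr garbage-collects its own failed requests, a hedged interim cleanup (assuming jq is available and that the serving CertificateRequests live in the cert-manager namespace) is to delete every request whose Ready condition reports Failed:

kubectl get certificaterequests -n cert-manager -o json \
  | jq -r '.items[]
      | select(.status.conditions[]? | select(.type == "Ready" and .reason == "Failed"))
      | .metadata.name' \
  | xargs -r kubectl delete certificaterequest -n cert-manager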

Helm chart is failing with "certificate.spec.revisionHistoryLimit" issue

Below is the command I ran to deploy the chart, but it now fails. The same command worked for me earlier.

istio-csr version: v0.2.0 and v0.3.0 produce the same error

$ helm install -n cert-manager cert-manager-istio-csr jetstack/cert-manager-istio-csr \
  --set agent.clusterID=cluster1 \
  --set certificate.name=vault-istio-ca1-issuer \
  --set certificate.preserveCertificateRequests=true \
  --set agent.logLevel=3 \
  --dry-run

WARNING: You should switch to "https://charts.helm.sh/stable"
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Certificate.spec): unknown field "revisionHistoryLimit" in io.cert-manager.v1.Certificate.spec
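
For context: spec.revisionHistoryLimit only exists in the Certificate CRD shipped with newer cert-manager releases, so the chart renders fine but fails server-side validation against older CRDs. A hedged fix is to upgrade the cert-manager CRDs first; the version below is an assumption, so pick the release matching your cert-manager deployment (v1.7.0 or later should carry the field):

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.7.0/cert-manager.crds.yaml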
