
net-kourier's Introduction

Kourier

This component is GA

Kourier is an Ingress for Knative Serving. Kourier is a lightweight alternative to the Istio ingress, as its deployment consists only of an Envoy proxy and a control plane for it.

Kourier passes the Knative Serving e2e and conformance tests: Kourier Testgrid.

Getting started

  • Install Knative Serving, ideally without Istio:
kubectl apply -f https://github.com/knative/serving/releases/latest/download/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/latest/download/serving-core.yaml
  • Then install Kourier:
kubectl apply -f https://github.com/knative/net-kourier/releases/latest/download/kourier.yaml
  • Configure Knative Serving to use the proper "ingress.class":
kubectl patch configmap/config-network \
  -n knative-serving \
  --type merge \
  -p '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'
  • (OPTIONAL) Set your desired domain (replace 127.0.0.1.nip.io with your preferred domain):
kubectl patch configmap/config-domain \
  -n knative-serving \
  --type merge \
  -p '{"data":{"127.0.0.1.nip.io":""}}'
  • (OPTIONAL) Deploy a sample hello world app:
cat <<-EOF | kubectl apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: Go Sample v1
EOF
  • (OPTIONAL) For testing purposes, you can use port-forwarding to make requests to Kourier from your machine:
kubectl port-forward --namespace kourier-system $(kubectl get pod -n kourier-system -l "app=3scale-kourier-gateway" --output=jsonpath="{.items[0].metadata.name}") 8080:8080 19000:9000 8443:8443

curl -v -H "Host: helloworld-go.default.127.0.0.1.nip.io" http://localhost:8080

Deployment

By default, the deployment of the Kourier components is split between two different namespaces:

  • Kourier control is deployed in the knative-serving namespace
  • The kourier gateways are deployed in the kourier-system namespace

To change the Kourier gateway namespace, you will need to:

  • Modify the files in config/ and replace all the namespace fields that have kourier-system with the desired namespace.
  • Set the KOURIER_GATEWAY_NAMESPACE env var in the kourier-control deployment to the new namespace (see the sketch below).
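
For example, a minimal sketch of both steps, assuming the new namespace is called my-kourier (a hypothetical name), that kourier.yaml has been downloaded locally, and that the controller deployment is named net-kourier-controller as referenced elsewhere in this README:

# Replace every occurrence of kourier-system (namespace fields and the Namespace object itself)
sed -i 's/kourier-system/my-kourier/g' kourier.yaml
kubectl apply -f kourier.yaml

# Point the control plane at the new gateway namespace
kubectl set env deployment/net-kourier-controller \
  -n knative-serving \
  KOURIER_GATEWAY_NAMESPACE=my-kourier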

Features

  • Traffic splitting between Knative revisions.
  • Automatic update of endpoints as they are scaled.
  • Support for gRPC services.
  • Timeouts and retries.
  • TLS
  • Cipher Suite
  • External Authorization support.
  • Proxy Protocol (AN EXPERIMENTAL / ALPHA FEATURE)

Setup TLS certificate

Create a secret containing your TLS certificate and private key:

kubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}

Add the following env vars to net-kourier-controller in the "kourier" container:

CERTS_SECRET_NAMESPACE: ${NAMESPACES_WHERE_THE_SECRET_HAS_BEEN_CREATED}
CERTS_SECRET_NAME: ${CERT_NAME}
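
As a sketch, one way to set these, assuming the secret was created in the default namespace and CERT_NAME=my-tls-cert (both placeholder values):

kubectl set env deployment/net-kourier-controller \
  -n knative-serving \
  CERTS_SECRET_NAMESPACE=default \
  CERTS_SECRET_NAME=my-tls-cert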

Cipher Suites

You can specify the cipher suites for the TLS external listener. To specify the cipher suites you want to allow, run the following command to patch the config-kourier ConfigMap:

kubectl -n "knative-serving" patch configmap/config-kourier \
  --type merge \
  -p '{"data":{"cipher-suites":"ECDHE-ECDSA-AES128-GCM-SHA256,ECDHE-ECDSA-CHACHA20-POLY1305"}}'

If not set, the default cipher suites of the bundled Envoy version are used.

External Authorization Configuration

If you want to enable external authorization support, you can set these env vars in the net-kourier-controller deployment (a sketch follows the list):

  • KOURIER_EXTAUTHZ_HOST*: The external authorization service host and port, e.g. my-auth:2222
  • KOURIER_EXTAUTHZ_FAILUREMODEALLOW*: Allow traffic through if the ext auth service is down. Accepts true/false
  • KOURIER_EXTAUTHZ_PROTOCOL: The protocol used to query the ext auth service. Can be one of: grpc, http, https. Defaults to grpc
  • KOURIER_EXTAUTHZ_MAXREQUESTBYTES: Max request bytes; if not set, defaults to 8192 bytes. More info: Envoy Docs
  • KOURIER_EXTAUTHZ_TIMEOUT: Max time in ms to wait for the ext authz service. Defaults to 2s
  • KOURIER_EXTAUTHZ_PATHPREFIX: If KOURIER_EXTAUTHZ_PROTOCOL is http or https, the path used to query the ext auth service. Example: if set to /verify, it will query /verify/ (note the trailing /). If not set, it will query /
  • KOURIER_EXTAUTHZ_PACKASBYTES: If KOURIER_EXTAUTHZ_PROTOCOL is grpc, sends the body as raw bytes instead of a UTF-8 string. Accepts only true/false, t/f or 1/0; any other value causes an error. Defaults to false. More info: Envoy Docs.

* Required
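
For example, a minimal sketch that sets only the required variables, assuming an authorization service reachable at my-auth:2222 (a hypothetical address):

kubectl set env deployment/net-kourier-controller \
  -n knative-serving \
  KOURIER_EXTAUTHZ_HOST=my-auth:2222 \
  KOURIER_EXTAUTHZ_FAILUREMODEALLOW=false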

Proxy Protocol Configuration

Note: this is an experimental/alpha feature.

To enable the proxy protocol feature, run the following command to patch the config-kourier ConfigMap:

kubectl patch configmap/config-kourier \
  -n knative-serving \
  --type merge \
  -p '{"data":{"enable-proxy-protocol":"true"}}'

Ensure that the ConfigMap was updated successfully:

kubectl get configmap config-kourier --namespace knative-serving --output yaml

LoadBalancer configuration

We need to:

  • Use your LB provider's annotation to enable proxy protocol.
  • If you plan to enable external-domain-tls, use your LB provider's annotation to specify a custom hostname for the load balancer. This works around kube-proxy adding the external LB address to the node-local iptables rules, which breaks in-cluster requests to the LB when the LB is expected to terminate SSL or proxy protocol.
  • Set externalTrafficPolicy to Local so the LB preserves the client source IP and avoids a second hop for LoadBalancer traffic.

Example (Scaleway provider):

apiVersion: v1
kind: Service
metadata:
  name: kourier
  namespace: kourier-system
  annotations:
    service.beta.kubernetes.io/scw-loadbalancer-proxy-protocol-v2: '*'
    service.beta.kubernetes.io/scw-loadbalancer-use-hostname: "true"
  labels:
    networking.knative.dev/ingress-provider: kourier
spec:
  ports:
    - name: http2
      port: 80
      protocol: TCP
      targetPort: 8080
    - name: https
      port: 443
      protocol: TCP
      targetPort: 8443
  selector:
    app: 3scale-kourier-gateway
  externalTrafficPolicy: Local
  type: LoadBalancer

Tips

Domain Mapping is configured to use the http2 protocol only. This behaviour can be disabled by adding the following annotation to the DomainMapping resource:

kubectl annotate domainmapping <domain_mapping_name> kourier.knative.dev/disable-http2=true --namespace <namespace>

A good use case for this configuration is DomainMapping with WebSockets.

Note: This annotation is an experimental/alpha feature. We may change the annotation name in the future.

License

Apache 2.0 License

net-kourier's People

Contributors

arsenetar, bbrowning, davidor, dprotaso, evankanderson, gekko0114, izabelacg, jmprusi, julz, kauzclay, knative-automation, lionelvillard, markusthoemmes, mattmoor, mgencur, nak3, nickcao, norbjd, philipgough, pmorie, psschwei, ramychaabane, retocode, rhuss, ryutoyasugi, skonto, tardieu, tcnghia, xxinran, yanweiguo


net-kourier's Issues

Global TLS is disabled when a KIngress is configured with a TLS secret

Serving Issue: knative/serving#12377

Global TLS is configured here:
https://github.com/knative-sandbox/net-kourier#setup-tls-certificate

When you create a KIngress with its own TLS secret (i.e. provided by autoTLS), it elides the configured global TLS cert from Envoy's filter chain.

It looks like it's elided here, when the SNI listener is created:
https://github.com/knative-sandbox/net-kourier/blob/87c1085d407b1a5026cd231eba64c90ca91ec7f4/pkg/generator/caches.go#L237-L256

Failed to call a cluster-local ksvc from another ksvc which has an Istio sidecar injected in OCP

I am using Knative Serving (0.19) on OpenShift (4.6) and encountered this issue.

In OCP, Kourier is used by default for Knative networking; to use a custom domain, I installed and use Istio (OCP Service Mesh) as well.
Now I have two ksvcs:
The first one is a public one; I followed the OCP documentation and it works fine:
custom domain => OCP IP => route for istio ingress => istio ingress => gateway => virtualservice => service entry => kourier => ksvc.
The second one is a cluster-local one, which can be accessed via the cluster DNS.

Now both of them work fine: the first one can be accessed from outside by its domain, and the second one can be accessed from other pods via internal DNS. Except the first ksvc cannot access the second ksvc, since the first ksvc has an Istio sidecar injected. If I disable the injection, the access works; if I enable it, I get an error message like this:

/usr/src/app $ wget http://internal-ksvc.stic-istio-demo-p4d.svc.cluster.local Connecting to internal-ksvc.stic-istio-demo-p4d.svc.cluster.local (172.30.131.80:80) wget: server returned error: HTTP/1.1 404 Not Found

When I check the logs from the 3scale-kourier-gateway pod, I only see successful access records; I am not able to find anything related to the 404 or any failure logs.

Did I miss anything?

Readiness probe failed

Readiness probes are failing in my Kourier setup.

kubectl get events -n knative-serving | grep Unhealthy
60m         Warning   Unhealthy   pod/net-kourier-controller-7df5866d8d-xcsv2   Readiness probe failed: 2021/12/17 15:49:17 failed to connect to service at ":18000": context deadline exceeded
60m         Warning   Unhealthy   pod/net-kourier-controller-7df5866d8d-xcsv2   Readiness probe failed: 2021/12/17 15:49:27 failed to connect to service at ":18000": context deadline exceeded
60m         Warning   Unhealthy   pod/net-kourier-controller-7df5866d8d-xcsv2   Readiness probe failed: 2021/12/17 15:49:37 failed to connect to service at ":18000": context deadline exceeded
60m         Warning   Unhealthy   pod/net-kourier-controller-7df5866d8d-xcsv2   Readiness probe failed: 2021/12/17 15:49:47 failed to connect to service at ":18000": context deadline exceeded
59m         Warning   Unhealthy   pod/net-kourier-controller-7df5866d8d-xcsv2   Readiness probe failed: 2021/12/17 15:49:57 failed to connect to service at ":18000": context deadline exceeded
59m         Warning   Unhealthy   pod/net-kourier-controller-7df5866d8d-xcsv2   Readiness probe failed: 2021/12/17 15:50:07 failed to connect to service at ":18000": context deadline exceeded
57m         Warning   Unhealthy   pod/net-kourier-controller-7df5866d8d-xcsv2   (combined from similar events): Readiness probe failed: 2021/12/17 15:52:17 failed to connect to service at ":18000": context deadline exceeded
18m         Warning   Unhealthy   pod/net-kourier-controller-7df5866d8d-xcsv2   Readiness probe failed:

I also noticed:

2m44s       Warning   FailedToUpdateEndpoint   endpoints/net-kourier-controller               Failed to update endpoint knative-serving/net-kourier-controller: Operation cannot be fulfilled on endpoints "net-kourier-controller": the object has been modified; please apply your changes to the latest version and try again 

All this leads to timeouts and EOF errors during requests.

3scale-kourier-gateway pod fails to start with istio sidecar injection

issue

  • When sidecar injection is enabled on 3scale-kourier-gateway, the pod is not able to start.
  • Even after adding the --base-id option, the gateway pod still does not start.

Steps to reproduce

0. Deploy Serving with Kourier

$ cd ${KNATIVE_SERVING_CODE}
$ kubectl apply -f third_party/kourier-latest/kourier.yaml
$ ko apply -f config/  --selector=networking.knative.dev/ingress-provider!=istio

1. Deploy Istio

(Please download istioctl 1.6.3 from here https://github.com/istio/istio/releases/tag/1.6.3)

$ wget https://gist.githubusercontent.com/nak3/1f533b1a8be1bdd891e9b2fd8bebb40a/raw/d6acf8e0688e046c1c115ab553fd8669a0c78688/istio-mesh.yaml
$ istioctl manifest apply -f istio-ci-mesh.yaml

2. Enable sidecar injection on kourier-gateway

$ kubectl label namespace kourier-system istio-injection=enabled
$ kubectl  delete pod -n kourier-system --all                   # Restart pods to enable sidecar injection

3. Controller can run but gateway does not.

$ kubectl  get pod -n kourier-system
NAME                                      READY   STATUS             RESTARTS   AGE
3scale-kourier-control-5485f6df9f-fxb2g   2/2     Running            0          87s
3scale-kourier-gateway-67d8847b79-x2t5k   0/2     CrashLoopBackOff   2          60s

$ kubectl  get pod -n kourier-system
NAME                                      READY   STATUS      RESTARTS   AGE
3scale-kourier-control-5485f6df9f-fxb2g   2/2     Running     0          64s
3scale-kourier-gateway-67d8847b79-x2t5k   0/2     Completed   2          37s

4. Adding base-id option

As shown by unable to bind domain socket with id=0 (see --base-id option), the base-id conflicted.

$ kubectl  -n kourier-system logs 3scale-kourier-gateway-67d8847b79-x2t5k -c istio-proxy
  ...
[2020-06-28 07:04:16.886][14][debug][init] [external/envoy/source/common/init/watcher_impl.cc:27] init manager Server destroyed
unable to bind domain socket with id=0 (see --base-id option)
2020-06-28T07:04:16.890205Z	error	Epoch 0 exited with error: exit status 1
2020-06-28T07:04:16.890235Z	info	No more active epochs, terminating

So add the --base-id option to the kourier-gateway deployment:

    spec:
      containers:
      - args:
        - -c
        - /tmp/config/envoy-bootstrap.yaml
        - --base-id
        - "1"

5. 3scale-kourier-gateway still fails with a connection refused error.

$ kubectl  -n kourier-system  describe pod 3scale-kourier-gateway-7c98464679-mtb42
  ...
  Warning  Unhealthy  58m                    kubelet, ip-172-20-35-225.ap-southeast-1.compute.internal  Readiness probe failed: Get http://172.20.35.168:15021/healthz/ready: dial tcp 172.20.35.168:15021: connect: connection refused
  Warning  Unhealthy  3m55s (x658 over 58m)  kubelet, ip-172-20-35-225.ap-southeast-1.compute.internal  Readiness probe failed: HTTP probe failed with statuscode: 500

TLS setup is not working in release 0.26?

Hello,
I think there is some kind of regression with the TLS setup. I deployed release 0.26 with Knative 0.24, and when I curl my function I get a 404 HTTP code.

Looking at the code, the route for TLS is created only if len(ingress.Spec.TLS) != 0, but I think ingressesToSync, err := getReadyIngresses(ctx, knativeClient.NetworkingV1alpha1()) is returning an ingress without any TLS specifications.

As far as I understand, we don't have to match SNI when configuring TLS with a single cert for all services, so patching the code as follows in ingress_translator.go creates the route.
Before:

if len(ingress.Spec.TLS) != 0 {
	tlsRoutes = append(tlsRoutes, envoy.NewRoute(
		pathName, matchHeadersFromHTTPPath(httpPath), path, wrs, 0, httpPath.AppendHeaders, httpPath.RewriteHost))
}

After:

if len(ingress.Spec.TLS) != 0 || useHTTPSListenerWithOneCert() {
	tlsRoutes = append(tlsRoutes, envoy.NewRoute(
		pathName, matchHeadersFromHTTPPath(httpPath), path, wrs, 0, httpPath.AppendHeaders, httpPath.RewriteHost))
}

Is this a real issue, or am I doing something wrong? Or maybe it is because I'm using Knative 0.24?

Of course, I've created a secret with a cert and key and added the required env variables.

Thanks

Access logging should be configurable

Currently, we log every single request going through Kourier. For anything non-trivial, that can quickly cause issues with the logging volume. We should at least make this behavior configurable, if not change the default to not log.

Support SDS in kourier

We tested Kourier with Knative. The result is great compared with Istio. But one blocker to adopting Kourier is that it does not support SDS, which Istio supports: https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/.

My use case: for each kube namespace (e.g. test-ns), we order an HTTPS cert for the namespace domain (e.g. *.test-ns.root-domain.com), and kn services then get application domains based on it, like app1.test-ns.root-domain.com and app2.test-ns.root-domain.com. We also have other namespaces like test-ns1 and test-ns2 in the kube cluster.

For Istio, what we do is order the HTTPS cert, generate a kube secret to store it, and update the Istio gateway knative-ingress-gateway to load the kube secret. Istio then serves HTTPS requests for the domain *.test-ns.root-domain.com.

- hosts:
    - '*.test-ns.root-domain.com'
    port:
      name: https-test-ns
      number: 443
      protocol: HTTPS
    tls:
      credentialName: test-ns
      mode: SIMPLE
      subjectAltNames: [] 

Any thoughts around this? Do we plan to support it in Kourier? Thank you.
@davidor and @jmprusi ^^^

Remove header and other custom envoy configuration

Love the simplicity of this project. However, I'm not sure how to get past this particular issue.

I want to remove the following headers from the Kourier response:

server: envoy
x-envoy-upstream-service-time: 10

Am I correct in understanding that Kourier makes this impossible? Any thoughts on adding an option to (somehow) provide custom envoy configuration?

I'm really looking for a knative serving ingress that does very little and this is almost perfect. But even at its base, envoy is doing a little too much and I feel like I need to be able to get in there and turn certain things off.

Stop using deprecated function from go-control-plane

Add a readiness probe to kourier-control

When rolling the deployments (i.e. in an upgrade), the deployment rollout cannot tell if the new control pods are really up yet. We should add a readiness probe that keys off of the gRPC server serving correctly.

/assign

Question: Are the instructions for adding a TLS certificate still accurate?

I created a k8s cluster with knative following https://knative.dev/docs/install/serving/install-serving-with-yaml/#install-a-networking-layer and it's using Kourier.

I'm following the README at https://github.com/knative-sandbox/net-kourier#setup-tls-certificate
It seems to be from the old repo 3scale-archive/kourier@99d4974#diff-b335630551682c19a781afebcf4d07bf978fb1f8ac04c6bf87428ed5106870f5

Add the following env vars to 3scale-Kourier in the "kourier" container

There is no 3scale-Kourier.

I looked at the source code and "guessed" that it should be added to the net-kourier-controller at https://github.com/knative-sandbox/net-kourier/blob/main/config/300-controller.yaml#L40-L43, but that doesn't seem to work.

I'm new to Kourier so I'm not sure where the failure is. I do have a wildcard cert. And I'm using that cert in a different cluster with nginx so I know it works.

If someone could provide some pointers on how to configure Kourier with TLS and certs, that would be appreciated.

Finalization logic seems incorrect

I'm looking at this logic here:
https://github.com/knative-sandbox/net-kourier/blob/48320f242b9a2a8fd09d4c5ca29a0f0bfd447046/pkg/reconciler/ingress/ingress.go#L95-L110

The use of finalizers here seems both overkill and buggy; I'll try to elaborate on both facets of this below.

The use of finalizers here is overkill because we're just trying to make sure that the in-memory index is updated to note the removal of the kingress. Finalizers are generally needed when state outside of the controller process needs cleaning up, and that's not the case here.

The use of finalizers here is buggy because only the "main" controller is guaranteed to process the deletion (since it removes the finalizer itself).

My recommendation here would be that since both the informer cache and the envoy configuration index are both in-memory, that it is both necessary and sufficient to directly delete the configuration in the DeleteFunc informer event and drop the use of finalizers here. We (@vaikas) made a similar change to the in-memory channel in eventing relatively recently for similar reasons.

When using net-kourier, the LoadBalancer port can't be found on the local machine

Here is the svc output:

jw@cci-network-0003:~/workspace/src/knative.dev/serving$ kubectl get svc -A
NAMESPACE         NAME                          TYPE           CLUSTER-IP       EXTERNAL-IP                         PORT(S)                                      AGE
kourier-system    kourier                       LoadBalancer   10.109.246.69    10.109.246.69                       80:30842/TCP,443:30134/TCP                   4m18s

We can see the port mapping from the local machine to the Kourier gateway.

So I use netstat -an to check the listening ports:

jw@cci-network-0003:~/workspace/src/knative.dev/serving$ netstat -an
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:49163         0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:38155         0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:49164         0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:49165         0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:49166         0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:49167         0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN
tcp        0      0 192.168.49.1:56688      192.168.49.2:8443       ESTABLISHED
tcp        0      0 192.168.0.95:22         10.173.89.158:55877     ESTABLISHED
tcp        0      0 192.168.49.1:56486      192.168.49.2:8443       ESTABLISHED
tcp        0      0 192.168.0.95:22         10.173.89.158:61216     ESTABLISHED
tcp        0      0 192.168.49.1:43848      192.168.49.2:22         ESTABLISHED
tcp        0      1 192.168.49.1:35100      10.109.246.69:80        SYN_SENT
tcp        0      0 127.0.0.1:49167         127.0.0.1:43060         ESTABLISHED
tcp        0      0 127.0.0.1:43060         127.0.0.1:49167         ESTABLISHED
tcp        0    208 192.168.0.95:22         10.173.89.158:53041     ESTABLISHED
tcp        0      0 192.168.0.95:22         10.173.89.158:64025     ESTABLISHED
tcp6       0      0 :::22                   :::*                    LISTEN
udp        0      0 127.0.0.53:53           0.0.0.0:*
udp        0      0 0.0.0.0:68              0.0.0.0:*
udp        0      0 127.0.0.1:323           0.0.0.0:*
udp6       0      0 ::1:323                 :::*

but the ports do not show up. I tried Istio and it works. Is there any reason for this?

Thanks a lot.

All Hops Encrypted: alpha Kourier support for encrypted backends

Larger description in the Feature Track document

Summary:

Kourier should support calling activator / backends with a known CA key and subject name (provided by the cluster administrator in config-network for the All Hops Encrypted alpha).

Expected config-network keys:

  • activator-ca -- contains the CA public certificate used to sign the activator TLS certificate
  • activator-name -- contains the SAN (Subject Alt Name) used to validate the activator TLS certificate

This probably involves extending the Cluster configuration with an UpstreamTlsContext and CommonTlsContext, and possibly implementing SDS for these certificates.
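
For illustration only, a sketch of how a cluster administrator might supply these keys once the feature lands, using the same kubectl patch pattern as the rest of this README (the values are placeholders):

kubectl patch configmap/config-network \
  -n knative-serving \
  --type merge \
  -p '{"data":{"activator-ca":"<PEM-encoded CA certificate>","activator-name":"<activator SAN>"}}'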

/kind feature-request

Massive amounts of resyncs due to errors publishing snapshots

c001b3fa-3333-437f-b088-3a9c044e7ce0/net-kourier-controller-f79f469fb-mq68r[controller]: {"severity":"info","timestamp":"2021-09-02T13:29:28.370Z","logger":"net-kourier-controller","caller":"ingress/controller.go:114","message":"Error pushing snapshot to gateway: code: 13 message route: unknown weighted cluster 'serving-tests-alt/domain-mapping-h-a-ruanyiku-00001'"}

This message is logged 1088 times in https://prow.knative.dev/view/gs/knative-prow/logs/ci-knative-serving-kourier-stable/1433405592860364800, for example (with differing revision names).

Kourier is not compatible with a custom Knative Service timeoutSeconds value of more than 300 seconds

When a long gRPC connection is established and no data returns within 5 minutes, the connection is interrupted. This can normally be solved by adjusting the timeoutSeconds parameter. However, when Kourier is used as the gateway, the timeoutSeconds value is fixed to 300 seconds by default and cannot be customized.

In practice, Kourier v0.26.0 does not support a custom timeoutSeconds value in a Knative Service.

It also does not honor the max-revision-timeout-seconds and revision-timeout-seconds parameters in the knative-serving ConfigMap config-defaults.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: test
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
        autoscaling.knative.dev/metric: concurrency
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "1"
    spec: 
      timeoutSeconds: 600   # kourier gw is not compatible with this custom number.
      containers:
      - image: imageurl
        imagePullPolicy: IfNotPresent
        ports:
          - name: h2c
            containerPort: 8080

Reminder: the Istio gateway is compatible with a custom timeoutSeconds value.

High memory consumption after enabling AutoTLS

We are using wildcard certificates per namespace. During our tests, after creating ~200 ksvcs in 20 namespaces, the memory usage of the Kourier gateway reaches ~3 GB:

kubectl -n kourier-system top pods
NAME                                      CPU(cores)   MEMORY(bytes)
3scale-kourier-gateway-6579576fff-dg5f5   749m         3350Mi
3scale-kourier-gateway-6579576fff-h6g2j   923m         3058Mi
3scale-kourier-gateway-6579576fff-j8nnf   946m         2987Mi
3scale-kourier-gateway-6579576fff-t5mdv   846m         3084Mi
3scale-kourier-gateway-6579576fff-txhs4   910m         3084Mi
3scale-kourier-gateway-6579576fff-vztdx   822m         3354Mi

It seems that Kourier registers a TLS context for each Knative service regardless of whether they share the same certificate data or not.

What's the point of Kourier?

I'm having trouble understanding why I should prefer Kourier over something else that I'm probably already using as the standard Kubernetes ingress implementation in my cluster, e.g. Contour/Ambassador. Why should I spend time deploying and learning Kourier? Can anybody help me understand this?

Envoy ext_authz filter: `/.well-known/acme-challenge/` path is not ignored

Hello

When enabling both autoTLS and the ext_authz filter, the /.well-known/acme-challenge/ path for the HTTP01 challenge is not ignored!
The traffic will necessarily pass through the external authorization service (for no reason).

I think it would be great if we could add typed_per_filter_config to the route created for the HTTP01 challenge in order to disable the filter at the route level, as mentioned here. For instance:

- name: >-
   (b2eac33b-ee65-404d-a0ab-fceaa6f26210/authztlsv2f98enkds-toto).Rules[1]
  domains:
   - authztlsv2f98enkds-toto.functions.fnc.dev.fr-par.scw.cloud
   - 'authztlsv2f98enkds-toto.functions.fnc.dev.fr-par.scw.cloud:*'
  routes:
   - match:
       prefix: >-
         /.well-known/acme-challenge/G84xbCs9GO2ENE_50aUBuOzs_T12Auy1AacLKcHcQOU
       headers:
         - name: K-Network-Hash
           exact_match: override
     route:
       weighted_clusters:
         clusters:
           - name: >-
               b2eac33b-ee65-404d-a0ab-fceaa6f26210/cm-acme-http-solver-gznmm
             weight: 100
       timeout: 0s
       upgrade_configs:
         - upgrade_type: websocket
           enabled: true
     request_headers_to_add:
       - header:
           key: K-Network-Hash
           value: >-
             55eb12aadbd590cfc4eccc0dda6c0f63df4cfac7bba366aec4224fc628813f78
         append: false
     typed_per_filter_config:
       envoy.filters.http.ext_authz:
         "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthzPerRoute
         disabled: true

HTTP Policy/behaviour is not supported by KSVC

Hello

I want to change the default behaviour of HTTP traffic from "enabled" to either "redirected" or "disabled".

According to the Knative documentation, we can do that in different ways:

  • Globally, by editing the config-network ConfigMap
  • Per service, by adding the networking.knative.dev/http-protocol: "<option>" annotation (see the sketch after this list)
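
As a sketch of the per-service option (whether Kourier honours it is exactly what this issue is about), the annotation could be applied to a hypothetical Knative Service named hello with:

kubectl annotate ksvc hello networking.knative.dev/http-protocol=redirected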

Also, there is an inconsistency between this doc and this thread, as the "disabled" option is not mentioned in the first doc.

When testing with Kourier, I noticed that only the global setup is supported.

Can you confirm that this is a bug?
I can open a PR if necessary.

Thank you!

use kourier for custom service

I am deploying a custom application with Knative on a kind cluster, and my application uses Ingress rules to route traffic to it with a virtual hostname. I tried doing this with the kourier ingress class, but it doesn't recognize the Ingress rule and the backend service.

Could someone help me understand whether it is possible to have other services use the same ingress?

Installation issue with pre-installed Serverless

I'm starting with an OCP cluster that has Serverless (1.12.0) installed. I am installing Kafka and then Kourier, and Kourier fails to start.

Script (nabbed from Matthias) for adding Kafka:

https://raw.githubusercontent.com/rhdemo/2021-demo-cluster-setup/master/backend/strimzi.sh

Script (nabbed from Matthias) for adding Kourier:

https://raw.githubusercontent.com/rhdemo/2021-demo-cluster-setup/master/backend/kourier.sh

The 3scale-kourier-gateway-xxx Pod hangs with:

[2021-02-12 11:24:50.340][1][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:92] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure

Resource requirements for kourier

Hey, I recently started evaluating Kourier instead of Istio for the serverless solution, and I noticed that it is labeled as lightweight, but I couldn't find any numbers around the resource requirements for the 3scale-kourier-gateway and net-kourier-controller deployments. What would be good recommended values (requests and limits)?

Poor http(s) performance

Currently we are testing HTTP performance using Kourier as the gateway. The test application basically sleeps 1s and returns a 200 response. The load is generated by vegeta.
We found that the performance is very poor compared to Istio.

Requests      [total, rate, throughput]         381197, 622.69, 96.22
Duration      [total, attack, wait]             12m53s, 10m12s, 2m41s
Latencies     [min, mean, 50, 90, 95, 99, max]  174.251ms, 1m44s, 2m0s, 2m32s, 3m15s, 3m29s, 5m30s
Bytes In      [total, mean]                     1785067, 4.68
Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           19.51%
Status Codes  [code:count]                      0:306819  200:74377  503:1  

The most significant error is read tcp 172.30.103.114:39195->168.1.225.234:443: read: connection reset by peer.
Compare with the same test suite running on Istio:

Requests      [total, rate, throughput]         2399999, 4000.00, 3780.59
Duration      [total, attack, wait]             10m1s, 10m0s, 1.02s
Latencies     [min, mean, 50, 90, 95, 99, max]  207.704ms, 2.062s, 1.007s, 1.392s, 7.745s, 27.356s, 34.768s
Bytes In      [total, mean]                     54533064, 22.72
Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           94.68%
Status Codes  [code:count]                      0:127788  200:2272211  

Both the Kourier and Istio gateways have the same fixed 10 replicas.

The ksvc has the same scale config:

"autoscaling.knative.dev/target": "150"
"autoscaling.knative.dev/targetUtilizationPercentage": "100"

So with a 4000 rps load, the ideal replica count should be >= 4000/150 ≈ 27 during the test.
But I noticed that during the test the KPA only scales up to 16 and then drops to 1 while the test is still running. It seems that Kourier stopped proxying requests to the upstream.

kubectl get kpa -w
NAME               DESIREDSCALE   ACTUALSCALE   READY   REASON
perftest-frysy-1   1              1             True
perftest-frysy-1   5              1             True
perftest-frysy-1   12             1             True
perftest-frysy-1   12             2             True
perftest-frysy-1   16             2             True
perftest-frysy-1   16             3             True
perftest-frysy-1   16             4             True
perftest-frysy-1   16             5             True
perftest-frysy-1   16             6             True
perftest-frysy-1   16             7             True
perftest-frysy-1   16             8             True
perftest-frysy-1   16             9             True
perftest-frysy-1   16             10            True
perftest-frysy-1   16             11            True
perftest-frysy-1   16             12            True
perftest-frysy-1   16             13            True
perftest-frysy-1   16             14            True
perftest-frysy-1   16             15            True
perftest-frysy-1   16             16            True
perftest-frysy-1   15             16            True
perftest-frysy-1   15             15            True
perftest-frysy-1   14             15            True
perftest-frysy-1   14             14            True
perftest-frysy-1   13             14            True
perftest-frysy-1   13             13            True
perftest-frysy-1   12             13            True
perftest-frysy-1   12             12            True
perftest-frysy-1   11             12            True
perftest-frysy-1   11             11            True
perftest-frysy-1   10             11            True
perftest-frysy-1   10             10            True
perftest-frysy-1   9              10            True
perftest-frysy-1   9              9             True
perftest-frysy-1   8              9             True
perftest-frysy-1   8              8             True
perftest-frysy-1   7              8             True
perftest-frysy-1   7              7             True
perftest-frysy-1   6              7             True
perftest-frysy-1   6              6             True
perftest-frysy-1   5              6             True
perftest-frysy-1   5              5             True
perftest-frysy-1   4              5             True
perftest-frysy-1   4              4             True
perftest-frysy-1   3              4             True
perftest-frysy-1   3              3             True
perftest-frysy-1   2              3             True
perftest-frysy-1   2              2             True
perftest-frysy-1   1              2             True
perftest-frysy-1   1              1             True
perftest-frysy-1   3              1             True
perftest-frysy-1   3              2             True
perftest-frysy-1   3              3             True
perftest-frysy-1   4              3             True
perftest-frysy-1   6              3             True
perftest-frysy-1   6              4             True
perftest-frysy-1   6              5             True
perftest-frysy-1   6              6             True
perftest-frysy-1   5              6             True
perftest-frysy-1   5              5             True
perftest-frysy-1   4              5             True
perftest-frysy-1   4              4             True
perftest-frysy-1   3              4             True
perftest-frysy-1   3              3             True
perftest-frysy-1   2              3             True
perftest-frysy-1   2              2             True
perftest-frysy-1   1              2             True
perftest-frysy-1   1              1             True

In comparison, with Istio the desired scale stays stable at 27 during the whole test:

kubectl get kpa -w  
NAME               DESIREDSCALE   ACTUALSCALE   READY   REASON
perftest-frysy-1   1              1             True
perftest-frysy-1   17             1             True
perftest-frysy-1   31             1             True
perftest-frysy-1   33             2             True
perftest-frysy-1   33             3             True
perftest-frysy-1   33             4             True
perftest-frysy-1   33             5             True
perftest-frysy-1   33             6             True
perftest-frysy-1   33             7             True
perftest-frysy-1   33             8             True
perftest-frysy-1   33             9             True
perftest-frysy-1   33             10            True
perftest-frysy-1   33             12            True
perftest-frysy-1   33             13            True
perftest-frysy-1   33             14            True
perftest-frysy-1   33             15            True
perftest-frysy-1   33             16            True
perftest-frysy-1   33             17            True
perftest-frysy-1   33             18            True
perftest-frysy-1   33             19            True
perftest-frysy-1   34             19            True
perftest-frysy-1   34             21            True
perftest-frysy-1   34             22            True
perftest-frysy-1   34             23            True
perftest-frysy-1   34             24            True
perftest-frysy-1   34             25            True
perftest-frysy-1   34             26            True
perftest-frysy-1   34             27            True
perftest-frysy-1   34             28            True
perftest-frysy-1   34             29            True
perftest-frysy-1   34             30            True
perftest-frysy-1   34             31            True
perftest-frysy-1   34             32            True
perftest-frysy-1   34             33            True
perftest-frysy-1   34             34            True
perftest-frysy-1   32             34            True
perftest-frysy-1   32             33            True
perftest-frysy-1   32             32            True
perftest-frysy-1   30             32            True
perftest-frysy-1   30             31            True
perftest-frysy-1   30             30            True
perftest-frysy-1   28             30            True
perftest-frysy-1   28             29            True
perftest-frysy-1   28             28            True
perftest-frysy-1   27             28            True
perftest-frysy-1   27             27            True
perftest-frysy-1   26             27            True
perftest-frysy-1   26             26            True
perftest-frysy-1   25             26            True
perftest-frysy-1   25             25            True
perftest-frysy-1   24             25            True
perftest-frysy-1   24             24            True
perftest-frysy-1   23             24            True
perftest-frysy-1   23             23            True
perftest-frysy-1   22             23            True
perftest-frysy-1   22             22            True
perftest-frysy-1   21             22            True
perftest-frysy-1   21             21            True
perftest-frysy-1   20             21            True
perftest-frysy-1   20             20            True
perftest-frysy-1   19             20            True
perftest-frysy-1   19             19            True
perftest-frysy-1   18             19            True
perftest-frysy-1   18             18            True
perftest-frysy-1   17             18            True
perftest-frysy-1   17             17            True
perftest-frysy-1   16             17            True
perftest-frysy-1   16             16            True
perftest-frysy-1   15             16            True
perftest-frysy-1   15             15            True
perftest-frysy-1   14             15            True
perftest-frysy-1   14             14            True
perftest-frysy-1   13             14            True
perftest-frysy-1   13             13            True
perftest-frysy-1   12             13            True
perftest-frysy-1   12             12            True
perftest-frysy-1   11             12            True
perftest-frysy-1   11             11            True
perftest-frysy-1   10             11            True
perftest-frysy-1   10             10            True
perftest-frysy-1   9              10            True
perftest-frysy-1   9              9             True
perftest-frysy-1   8              9             True
perftest-frysy-1   8              8             True
perftest-frysy-1   7              8             True
perftest-frysy-1   7              7             True
perftest-frysy-1   6              7             True
perftest-frysy-1   6              6             True
perftest-frysy-1   5              6             True
perftest-frysy-1   5              5             True
perftest-frysy-1   4              5             True
perftest-frysy-1   4              4             True
perftest-frysy-1   3              4             True
perftest-frysy-1   3              3             True
perftest-frysy-1   2              3             True
perftest-frysy-1   2              2             True
perftest-frysy-1   1              2             True
perftest-frysy-1   1              1             True

TestUpdate looks flaky

5 of the last 7 continuous build failures (as of now) are in TestUpdate.

Looking at a handful of them (example), they appear to be due to 503s:

    util.go:860: Error meeting response expectations: got unexpected status: 503, expected map[200:{}]
    util.go:860: HTTP/1.1 503 Service Unavailable
        Date: Thu, 13 Aug 2020 23:35:09 GMT
        Server: envoy
        Content-Length: 0

If you don't already, I'd recommend enabling Envoy's debug logging during e2e runs, so that you can examine the detailed logs post-mortem more easily. This was pretty helpful when I was chasing similar issues in net-contour.

Usage of RBAC:v1beta1

Installing 0.18, I get:

deployment.apps/3scale-kourier-control created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/3scale-kourier created
serviceaccount/3scale-kourier created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/3scale-kourier created

It would be good to eventually move towards the v1 APIs, like here: https://github.com/knative/serving/blob/master/config/core/200-roles/clusterrole.yaml#L16

The test sample HelloWorld go failed

Hi!

After Kourier is deployed, verification does not give me the results I want.
Checking the logs, the Service kind appears to be missing, yet it shows up in the API resources. What's the matter?

[root@k8s-master ~]# kubectl logs service.serving.knative.dev/helloworld-go
error: no kind "Service" is registered for version "serving.knative.dev/v1" in scheme "k8s.io/kubectl/pkg/scheme/scheme.go:28"
[root@k8s-master ~]# kubectl api-resources | grep knative
metrics                                           autoscaling.internal.knative.dev/v1alpha1   true         Metric
podautoscalers                    kpa,pa          autoscaling.internal.knative.dev/v1alpha1   true         PodAutoscaler
images                            img             caching.internal.knative.dev/v1alpha1       true         Image
certificates                      kcert           networking.internal.knative.dev/v1alpha1    true         Certificate
ingresses                         kingress,king   networking.internal.knative.dev/v1alpha1    true         Ingress
serverlessservices                sks             networking.internal.knative.dev/v1alpha1    true         ServerlessService
knativeeventings                                  operator.knative.dev/v1alpha1               true         KnativeEventing
knativeservings                                   operator.knative.dev/v1alpha1               true         KnativeServing
configurations                    config,cfg      serving.knative.dev/v1                      true         Configuration
revisions                         rev             serving.knative.dev/v1                      true         Revision
routes                            rt              serving.knative.dev/v1                      true         Route
services                          kservice,ksvc   serving.knative.dev/v1                      true         Service

Response results:

[root@k8s-master ~]# curl -v -H "Host: helloworld-go.default.127.0.0.1.nip.io" http://localhost:8080
* Rebuilt URL to: http://localhost:8080/
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> GET / HTTP/1.1
> Host: helloworld-go.default.127.0.0.1.nip.io
> User-Agent: curl/7.61.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< date: Thu, 18 Nov 2021 10:23:37 GMT
< server: envoy
< content-length: 0
<
* Connection #0 to host localhost left intact

kubernetes environment:

[root@k8s-master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:53:14Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}

knative environment: v0.17.4

kourier environment: v0.17.2


Investigate TestRetry failures

This test has been failing consistently enough during nightly testing that it has blocked every single nightly release thus far.

I'm opening this to accompany a TODO next to the logic that skips this.

General purpose ingress?

Hi there.

I was wondering if Kourier can be used with "normal" ingresses? I haven't been able to get this to work, and I can't find any config or documentation for or against this use case.

I have Knative + Kourier installed and configured as described in the docs, and all that works great. Then I have a simple Nginx Deployment, Service, and Ingress:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  labels:
    name: nginx
  annotations:
    "kubernetes.io/ingress.class": "kourier"
spec:
  rules:
  - host: nginx.10.111.4.115.sslip.io
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: nginx
            port: 
              number: 80

No way to change default timeout for envoy

I am using OpenShift Serverless, which is Knative + Kourier. Any long-running request sent to Knative services always times out after 5 min and returns 408 because of the default Envoy stream_idle_timeout setting:

sh-4.4# curl -i -H "x-envoy-upstream-rq-timeout-ms: 400000000" sample.knative-serving.svc.cluster.local?sleep=200000000
HTTP/1.1 408 Request Timeout
content-length: 14
content-type: text/plain
date: Mon, 01 Feb 2021 22:34:52 GMT
server: envoy

The x-envoy-upstream-rq-timeout-ms header is ignored as well for some reason; the timeout is always 5 min. It looks like it is not possible to control the timeout, and the Knative revision timeout setting is ignored as well.

It would be nice to be able to set at least a global timeout value for Envoy to override the default 5 min timeout.
