proxy's Introduction

Cilium Proxy

Envoy proxy for Cilium with minimal Envoy extensions and Cilium policy enforcement filters. Cilium uses this as its host proxy for enforcing HTTP and other L7 policies as specified in network policies for the cluster. Cilium proxy is distributed within the Cilium images.

Version compatibility matrix

The following table shows the Cilium proxy version compatibility with supported upstream Cilium versions. Other combinations may work but are not tested.

Cilium Version Envoy version
(main) v1.30.x
v1.16.0 v1.29.7
v1.15.6 v1.28.4
v1.15.5 v1.28.3
v1.15.4 v1.27.4
v1.15.3 v1.27.3
v1.15.2 v1.27.3
v1.15.1 v1.27.3
v1.15.0 v1.27.2
v1.14.12 v1.28.4
v1.14.11 v1.27.5
v1.14.10 v1.27.4
v1.14.9 v1.26.7
v1.14.8 v1.26.7
v1.14.7 v1.26.7
v1.14.6 v1.26.6
v1.14.5 v1.26.6
v1.14.4 v1.26.6
v1.14.3 v1.25.10
v1.14.2 v1.25.9
v1.14.1 v1.25.9
v1.14.0 v1.25.9

Building

Cilium proxy is best built with the provided build containers. For a local host build consult the builder Dockerfile for the required dependencies.

Container builds require Docker BuildKit, and optionally Buildx for multi-arch builds. Builds are currently only supported for amd64 and arm64 targets. For arm64, both native builds and cross-compilation on amd64 are supported. Container builds produce container images by default. These images cannot be run by themselves, as they do not contain the required runtime dependencies. To run the Cilium proxy, the binary /usr/bin/cilium-envoy needs to be copied from the image to a compatible runtime environment, such as Ubuntu 20.04 or 22.04.
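
For example, the proxy binary can be pulled out of the image with docker create and docker cp. This is a sketch; the image tag below is an assumption, so substitute the tag your build produced:

```shell
# Extract /usr/bin/cilium-envoy from a built image into the current directory.
# NOTE: the image tag is illustrative; use your own build result.
IMAGE=quay.io/cilium/cilium-envoy:latest
ctr=$(docker create "$IMAGE")
docker cp "$ctr:/usr/bin/cilium-envoy" ./cilium-envoy
docker rm "$ctr"
```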

The provided container build tools work on both Linux and macOS.

To build the Cilium proxy in a docker container for the host architecture only:

make docker-image-envoy

This will write the image to the local Docker registry.

Depending on your host CPU and memory resources, a fresh build can take an hour or more. Docker caching will speed up subsequent builds.

If your build fails due to a compiler failure, the most likely reason is the compiler running out of memory. You can mitigate this by limiting the number of concurrent build jobs, by passing the environment variable BAZEL_BUILD_OPTS=--jobs=2 to make. By default the number of jobs is the number of CPUs available for the build, which may be too many for some complex C++ sources. Note that changing the value of BAZEL_BUILD_OPTS invalidates Docker caches for the build stages.
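
For example, a memory-constrained build might look like this (a sketch; tune the job count to your machine):

```shell
# Limit Bazel to two concurrent build jobs to reduce peak memory use.
BAZEL_BUILD_OPTS=--jobs=2 make docker-image-envoy
```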

Multi-arch builds

The build target architecture can be specified by passing the ARCH environment variable to make. Supported values are amd64 (only on amd64 hosts), arm64 (on arm64 or amd64 hosts), and multi (on amd64 hosts). multi builds for all the supported architectures, currently amd64 and arm64:

ARCH=multi make docker-image-envoy

This will try to push the images to the container registry; appropriate authentication is required. (Pushing to the local Docker registry is not supported for multi-arch builds; see the Docker documentation.)

Builds will be performed concurrently when building for multiple architectures on a single machine. You most likely need to limit the number of jobs allowed for each builder, see the note above for details.

Docker builds are done using Docker Buildx by default when ARCH is explicitly passed to make. You can also force Docker Buildx to be used when building for the host platform only (by not defining ARCH) by defining DOCKER_BUILDX=1. A new buildx builder instance will be created for amd64 and arm64 cross builds if the current builder is set to default.
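
If you prefer to manage the builder instance yourself, one can be created and selected up front (a sketch; the builder name is arbitrary):

```shell
# Create and select a buildx builder capable of multi-platform builds.
docker buildx create --name cilium-envoy-builder --use
docker buildx inspect --bootstrap
```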

Buildx builds will push the build result to quay.io/cilium/cilium-envoy:<GIT_SHA> by default. You can change the first two parts of this by defining DOCKER_DEV_ACCOUNT=docker.io/me to use your own Docker Hub account. You can also request the build results to be output to a local directory instead by defining DOCKER_BUILD_OPTS=--output=out, where out is a local directory name, or use DOCKER_BUILD_OPTS="--output=type=docker" to load the image into the local Docker daemon.
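
Putting these options together (a sketch; docker.io/me and the out directory are placeholders):

```shell
# Push the result to your own registry account instead of quay.io/cilium.
DOCKER_DEV_ACCOUNT=docker.io/me ARCH=multi make docker-image-envoy

# Write the build outputs to the local directory ./out instead of pushing.
DOCKER_BUILD_OPTS=--output=out make docker-image-envoy

# Load the resulting image into the local Docker daemon.
DOCKER_BUILD_OPTS="--output=type=docker" make docker-image-envoy
```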

Building for the Raspberry Pi kernel

By default, Raspberry Pi OS and other OSes using the Raspberry Pi kernel will not be able to use Envoy, as their default CONFIG_ARM64_VA_BITS_39 kernel configuration is not compatible with tcmalloc.

A workaround is to compile the Envoy proxy with gperftools:

ARCH=arm64 BAZEL_BUILD_OPTS="--define tcmalloc=gperftools" make docker-image-envoy

This image can then be used in the Envoy DaemonSet mode.

Using custom pre-compiled Envoy dependencies

Docker build uses cached Bazel artifacts from quay.io/cilium/cilium-envoy-builder:main-archive-latest by default. You can override this by defining ARCHIVE_IMAGE=<ref>:

ARCH=multi ARCHIVE_IMAGE=docker.io/me/cilium-envoy-archive make docker-image-envoy

Bazel build artifacts contain toolchain specific data and binaries that are not compatible between native and cross-compiled builds. For now the image ref shown above is for builds on amd64 only (native amd64, cross-compiled arm64).

Define NO_CACHE=1 to clear the local build cache before the build, and NO_ARCHIVE=1 to build from scratch, but be warned that this can take a long time.
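
For example, a fully from-scratch host-architecture build (slow) would be:

```shell
# Clear the local build cache and compile all Envoy dependencies from source.
NO_CACHE=1 NO_ARCHIVE=1 make docker-image-envoy
```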

Docker caching

By default the build also tries to pull Docker build caches from docker.io/cilium/cilium-dev:cilium-envoy-cache. You can override this with your own build cache, which you can also update by defining CACHE_PUSH=1:

ARCH=multi CACHE_REF=docker.io/me/cilium-proxy:cache CACHE_PUSH=1 make docker-image-envoy

NO_CACHE=1 can be used to disable docker cache pulling.

In a CI environment it might be a good idea to push a new cache image after each main branch commit.

Updating the pre-compiled Envoy dependencies

Build and push a new version of the pre-compiled Envoy dependencies by:

ARCH=multi make docker-builder-archive

By default the pre-compiled dependencies image is tagged as quay.io/cilium/cilium-envoy-builder:main-archive-latest. You can override the first two parts of this by defining DOCKER_DEV_ACCOUNT=docker.io/me, BUILDER_ARCHIVE_TAG=my-builder-archive, or completely by defining ARCHIVE_IMAGE=<ref>.
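
For example (a sketch; docker.io/me and my-builder-archive are placeholders):

```shell
# Tag the archive under your own account with your own tag.
ARCH=multi DOCKER_DEV_ACCOUNT=docker.io/me BUILDER_ARCHIVE_TAG=my-builder-archive make docker-builder-archive

# Or override the full image reference directly.
ARCH=multi ARCHIVE_IMAGE=docker.io/me/my-archive:latest make docker-builder-archive
```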

Pre-compiled Envoy dependencies need to be updated only when the Envoy version is updated, or patched enough to increase compilation time significantly. To do this, update the Envoy version in ENVOY_VERSION and supply NO_CACHE=1 and NO_ARCHIVE=1 on the make line, e.g.:

ARCH=multi NO_CACHE=1 NO_ARCHIVE=1 BUILDER_ARCHIVE_TAG=main-archive-latest make docker-builder-archive

Updating the builder image

The required Bazel version typically changes from one Envoy release to another. To create a new builder image first update the required Bazel version at .bazelversion and then run:

ARCH=multi NO_CACHE=1 NO_ARCHIVE=1 make docker-image-builder

The builder cannot be cross-compiled, as native build tools are needed for native arm64 builds. This means that for non-native builds QEMU CPU emulation is used instead of cross-compilation. If you have an arm64 machine, you can create a Docker buildx builder to use it for native builds.

The builder image is tagged as "quay.io/cilium/cilium-envoy-builder:bazel-". Change the BUILDER_BASE ARG in Dockerfile to use the new builder and commit the result.

For testing purposes you can define DOCKER_DEV_ACCOUNT as explained above to push the builder into a different registry or account.

Running integration tests

To run Cilium Envoy integration tests in a docker container:

make docker-tests

This runs the integration tests after loading Bazel build cache for Envoy dependencies from quay.io/cilium/cilium-envoy-builder:test-main-archive-latest. Define NO_ARCHIVE=1 and NO_CACHE=1 to compile tests from scratch.
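
For example, to compile and run the tests entirely from scratch:

```shell
# Skip the pre-built archive and caches; expect a long build.
NO_ARCHIVE=1 NO_CACHE=1 make docker-tests
```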

This command fails if any of the integration tests fail, printing the failing test logs on console.

Note that cross-compiling is not supported for running tests, so specifying ARCH is only supported for the native platform. ARCH=multi will fail.

Updating the pre-compiled Envoy test dependencies

Build and push a new version of the pre-compiled test dependencies by:

make docker-tests-archive

By default the pre-compiled test dependencies image is tagged as quay.io/cilium/cilium-envoy-builder:test-main-archive-latest. You can override the first two parts of this by defining DOCKER_DEV_ACCOUNT=docker.io/me, TESTS_ARCHIVE_TAG=my-test-archive, or completely by defining ARCHIVE_IMAGE=<ref>.

Pre-compiled Envoy test dependencies need to be updated only when the Envoy version is updated, or patched enough to increase compilation time significantly. To do this, update the Envoy version in ENVOY_VERSION and supply NO_ARCHIVE=1 and NO_CACHE=1 on the make line, e.g.:

ARCH=amd64 NO_ARCHIVE=1 NO_CACHE=1 make docker-tests-archive

Updating generated API

The Cilium project vendors the Envoy xDS API, including the Cilium extensions, from this repository. To update the generated API files, run:

rm -r go/envoy/*
make api

rm is needed to clean up API files that are no longer generated for Envoy. Do not remove files under go/cilium/, as some of them are not automatically generated!

Commit the results and update Cilium to vendor this new commit.

proxy's People

Contributors

aanm, aditighag, chancez, eloycoto, ferozsalam, haiyuewa, ishantanu, itspngu, jaormx, jianlin-lv, joestringer, jrajahalme, meyskens, mhofstetter, raphink, renovate[bot], rlenglet, rueian, sayboras, tgraf, trevortaoarm, vadorovsky


proxy's Issues

Introduce release flow to follow Istio versions?

Hello team!

I am wondering if it makes sense for this repo to follow Istio's versions on releases, so that people can pull Envoy as a sidecar with Cilium integration directly by changing the Envoy image source in a friendly way.

For example: docker pull cilium/istio-proxy:1.5 / 1.6 / 1.7 etc

WDYT?

The URL of tclap-1-2-1-release-final.tar.gz is invalid

Compiling cilium-envoy-builder fails because the URL for tclap-1-2-1-release-final.tar.gz is invalid.

ERROR: /home/jianlin/.cache/bazel/_bazel_jianlin/744410ec949b116477822364b780c0bd/external/envoy/bazel/repositories.bzl:248:5: //external:tclap depends on @com_github_eile_tclap//:tclap in repository @com_github_eile_tclap which failed to fetch. no such package '@com_github_eile_tclap//': java.io.IOException: _**Error downloading [https://github.com/eile/tclap/archive/tclap-1-2-1-release-final.tar.gz]**_ to /home/jianlin/.cache/bazel/_bazel_jianlin/744410ec949b116477822364b780c0bd/external/com_github_eile_tclap/tclap-1-2-1-release-final.tar.gz: GET returned 404 Not Found
ERROR: Analysis of target '//:cilium-envoy' failed; build aborted: Analysis failed
INFO: Elapsed time: 142.164s

The envoyproxy/envoy project has fixed this issue; see:
envoyproxy/envoy#9072

I think it is best to update the version of the envoy tarball in the WORKSPACE.

Avoid empty typeURL for cilium.tls_wrapper

Description

With the new Envoy build, we faced the below error while the Envoy proxy was starting. The workaround is to add the deprecation flag envoy.reloadable_features.no_extension_lookup_by_name.

This issue is to track the long-term solution, which should register a type URL for the cilium.tls_wrapper type.

cilium/cilium@81e2491

level=error msg="[error initializing configuration '/var/run/cilium/bootstrap.pb': Didn't find a registered implementation for 'cilium.tls_wrapper' with type URL: 'envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext'" subsys=envoy-main threadID=1944
level=info msg="[exiting" subsys=envoy-main threadID=1944
level=info msg="Didn't find a registered implementation for 'cilium.tls_wrapper' with type URL: ''" subsys=envoy-main threadID=1944

CI: cilium_tls_http_integration_test failure

Description

The run below failed without any related change. This is mainly due to multiple jobs running in parallel, so there could be overlapping resources between the tests; the workaround is to run with the job limit set to 1.

bazel  test --platforms=//bazel:linux_x86_64 --config=release --jobs=1 --test_timeout=300 --test_output=errors //tests/...

https://github.com/cilium/proxy/actions/runs/4272080973/jobs/7439190389
logs_1256.zip

#14 350.1 //tests:cilium_tls_tcp_integration_test                                  PASSED in 3.6s
#14 350.1 //tests:cilium_websocket_codec_integration_test                          PASSED in 2.8s
#14 350.1 //tests:cilium_websocket_decap_integration_test                          PASSED in 0.4s
#14 350.1 //tests:cilium_websocket_encap_integration_test                          PASSED in 2.6s
#14 350.1 //tests:cilium_tls_http_integration_test                                 FAILED in 10.7s

Add CI test with cilium-cli

Add a CI test workflow where each PR, release, and master branch commit is tested like this:

After cilium/proxy build for PR, release, or master is available, do:

  • pull cilium/cilium master
  • replace the reference to the new cilium-envoy into images/cilium/Dockerfile
  • build cilium and operator images locally:
    • DOCKER_IMAGE_TAG=test make docker-cilium-image docker-operator-generic-image
  • create a kind cluster:
    • kind create cluster
  • load the locally built cilium images with the new cilium-envoy:
    • kind load docker-image quay.io/cilium/operator-generic:test quay.io/cilium/cilium:test
  • pull latest master of cilium/cilium-cli & build locally
    • could also do the latest released version, but when adding new tests to cilium-cli, pulling the latest source is useful
  • install cilium with cilium-cli, e.g. (while in the cilium/cilium directory)
    • cilium install --chart-directory install/kubernetes/cilium --config monitor-aggregation=none --helm-set loadBalancer.l7.backend=envoy --helm-set tls.secretsBackend=k8s --agent-image=quay.io/cilium/cilium:test --operator-image=quay.io/cilium/operator-generic:test --helm-set image.pullPolicy=IfNotPresent --helm-set operator.image.pullPolicy=IfNotPresent --config debug=true --config debug-verbose=envoy
    • cilium hubble enable
    • cilium hubble port-forward&
    • cilium connectivity test

Support envoy.filters.http.jwt_authn

Similar to #93. JWTs are a pretty common way to secure internal and public-facing websites/endpoints, and it would be ideal if, when using Cilium Ingress with a custom CEC, we could configure the JWT filter to handle authentication.

Request fails when sent on an existing connection on the connection pool

Cilium main using quay.io/cilium/cilium-envoy:v1.27.2-f19708f3d0188fe39b7e024b4525b75a9eeee61f

Sometimes a request fails when "using existing fully connected connection".

In a specific example, an upstream connection was created and then the request was denied by CNP and nothing was sent on the connection. Then, when the connection was reused 43 seconds later for a request that was accepted, it timed out:

2023-12-05T17:09:12.497700658Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] connecting" subsys=envoy-client threadID=290
2023-12-05T17:09:12.497809231Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] socket event: 2" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.497812487Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] write ready" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.498622410Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] raising connection event 2" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.498628973Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] connected" subsys=envoy-client threadID=290
2023-12-05T17:09:55.783386414Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] using existing fully connected connection" subsys=envoy-pool threadID=290
2023-12-05T17:09:55.783388679Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] creating stream" subsys=envoy-pool threadID=290
2023-12-05T17:09:55.783424525Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] writing 222 bytes, end_stream false" subsys=envoy-connection threadID=290
2023-12-05T17:09:55.783428022Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] encode complete" subsys=envoy-client threadID=290
2023-12-05T17:09:55.783443311Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] socket event: 2" subsys=envoy-connection threadID=290
2023-12-05T17:09:55.783446486Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] write ready" subsys=envoy-connection threadID=290
2023-12-05T17:09:55.783449202Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] write returns: 222" subsys=envoy-connection threadID=290
2023-12-05T17:10:05.784450945Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] request reset" subsys=envoy-client threadID=290
2023-12-05T17:10:05.784461874Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] closing socket: 1" subsys=envoy-connection threadID=290
2023-12-05T17:10:05.784464850Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] raising connection event 1" subsys=envoy-connection threadID=290
2023-12-05T17:10:05.784467095Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] disconnect. resetting 0 pending requests" subsys=envoy-client threadID=290
2023-12-05T17:10:05.786287775Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] client disconnected, failure reason: " subsys=envoy-pool threadID=290
2023-12-05T17:10:05.786539263Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] destroying stream: 0 remaining" subsys=envoy-pool threadID=290

Possible remedies may include to:

  • only keep connections in the pool if they were successfully used for a request at least once?

In the above case the original connection created for the denied request was connection 67, which was torn down after the denied request. It is unclear why connection 68 was created right after:

2023-12-05T17:09:12.491565522Z level=debug msg="[cilium.network: in upstream callback" subsys=envoy-filter threadID=290
2023-12-05T17:09:12.491569881Z level=debug msg="[cilium.ipcache: Looking up key: af40192, prefixlen: 32" subsys=envoy-filter threadID=290
2023-12-05T17:09:12.491573377Z level=debug msg="[cilium.ipcache: 10.244.1.146 has ID 10513" subsys=envoy-filter threadID=290
2023-12-05T17:09:12.491584558Z level=debug msg="[[Tags: \"ConnectionId\":\"66\",\"StreamId\":\"17062581073634910404\"] router decoding headers:" subsys=envoy-router threadID=290
2023-12-05T17:09:12.491631217Z level=debug msg="':authority', '10.244.1.146:8080'" subsys=envoy-router threadID=290
2023-12-05T17:09:12.491636276Z level=debug msg="':path', '/private'" subsys=envoy-router threadID=290
2023-12-05T17:09:12.491638951Z level=debug msg="':method', 'GET'" subsys=envoy-router threadID=290
2023-12-05T17:09:12.491642197Z level=debug msg="':scheme', 'http'" subsys=envoy-router threadID=290
2023-12-05T17:09:12.491644782Z level=debug msg="'user-agent', 'curl/8.4.0'" subsys=envoy-router threadID=290
2023-12-05T17:09:12.491647077Z level=debug msg="'accept', '*/*'" subsys=envoy-router threadID=290
2023-12-05T17:09:12.491649281Z level=debug msg="'x-forwarded-proto', 'http'" subsys=envoy-router threadID=290
2023-12-05T17:09:12.491651455Z level=debug msg="'x-envoy-internal', 'true'" subsys=envoy-router threadID=290
2023-12-05T17:09:12.491653879Z level=debug msg="'x-request-id', '937e2c24-0c9d-43a1-8086-52cee0efde18'" subsys=envoy-router threadID=290
2023-12-05T17:09:12.491656233Z level=debug msg="'x-envoy-expected-rq-timeout-ms', '3600000'" subsys=envoy-router threadID=290
2023-12-05T17:09:12.491658908Z level=debug msg="[queueing stream due to no available connections (ready=0 busy=0 connecting=0)" subsys=envoy-pool threadID=290
2023-12-05T17:09:12.491693553Z level=debug msg="[trying to create new connection" subsys=envoy-pool threadID=290
2023-12-05T17:09:12.491706968Z level=debug msg="[ConnPoolImplBase 0x178d3fd9fa40, ready_clients_.size(): 0, busy_clients_.size(): 0, connecting_clients_.size(): 0, connecting_stream_capacity_: 0, num_active_streams_: 0, pending_streams_.size(): 1 per upstream preconnect ratio: 1" subsys=envoy-pool threadID=290
2023-12-05T17:09:12.491710555Z level=debug msg="[creating a new connection (connecting=0)" subsys=envoy-pool threadID=290
2023-12-05T17:09:12.491721866Z level=debug msg="[Set socket (70) option SO_MARK to dc670b00 (magic mark: b00, id: 56423, cluster: 0), src: 10.244.2.139:52450" subsys=envoy-filter threadID=290
2023-12-05T17:09:12.491973634Z level=debug msg="[[C67] current connecting state: true" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.491989194Z level=debug msg="[[Tags: \"ConnectionId\":\"67\"] connecting" subsys=envoy-client threadID=290
2023-12-05T17:09:12.491992589Z level=debug msg="[[C67] connecting to 10.244.1.146:8080" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.491995205Z level=debug msg="[[C67] connection in progress" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.491997919Z level=debug msg="[not creating a new connection, shouldCreateNewConnection returned false." subsys=envoy-pool threadID=290
2023-12-05T17:09:12.492001766Z level=debug msg="[[Tags: \"ConnectionId\":\"66\",\"StreamId\":\"17062581073634910404\"] decode headers called: filter=envoy.filters.http.upstream_codec status=4" subsys=envoy-http threadID=290
2023-12-05T17:09:12.492004692Z level=debug msg="[[Tags: \"ConnectionId\":\"66\",\"StreamId\":\"17062581073634910404\"] decode headers called: filter=envoy.filters.http.router status=1" subsys=envoy-http threadID=290
2023-12-05T17:09:12.492008499Z level=debug msg="[[Tags: \"ConnectionId\":\"66\"] parsed 87 bytes" subsys=envoy-http threadID=290
2023-12-05T17:09:12.492949747Z level=debug msg="[[Tags: \"ConnectionId\":\"67\"] socket event: 2" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.492966399Z level=debug msg="[[Tags: \"ConnectionId\":\"67\"] write ready" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.492970566Z level=debug msg="[[C67] connected" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.492973612Z level=debug msg="[[Tags: \"ConnectionId\":\"67\"] raising connection event 2" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.492977980Z level=debug msg="[[Tags: \"ConnectionId\":\"67\"] connected" subsys=envoy-client threadID=290
2023-12-05T17:09:12.492980675Z level=debug msg="[[Tags: \"ConnectionId\":\"67\"] attaching to next stream" subsys=envoy-pool threadID=290
2023-12-05T17:09:12.492983610Z level=debug msg="[[Tags: \"ConnectionId\":\"67\"] creating stream" subsys=envoy-pool threadID=290
2023-12-05T17:09:12.492986356Z level=debug msg="[[Tags: \"ConnectionId\":\"66\",\"StreamId\":\"17062581073634910404\"] pool ready" subsys=envoy-router threadID=290
2023-12-05T17:09:12.492989221Z level=debug msg="[cilium.ipcache: Looking up key: af40192, prefixlen: 32" subsys=envoy-filter threadID=290
2023-12-05T17:09:12.493721885Z level=debug msg="[cilium.ipcache: 10.244.1.146 has ID 10513" subsys=envoy-filter threadID=290
2023-12-05T17:09:12.493986468Z level=debug msg="[Cilium L7 PortNetworkPolicyRules(): returning false" subsys=envoy-config threadID=290
2023-12-05T17:09:12.493991116Z level=debug msg="[cilium.l7policy: egress (56423->10513) policy lookup for endpoint 10.244.2.139 for port 8080: DENY" subsys=envoy-filter threadID=290
2023-12-05T17:09:12.493993851Z level=debug msg="[[Tags: \"ConnectionId\":\"66\",\"StreamId\":\"17062581073634910404\"] Sending local reply with details " subsys=envoy-http threadID=290
2023-12-05T17:09:12.493996998Z level=debug msg="[[Tags: \"ConnectionId\":\"66\",\"StreamId\":\"17062581073634910404\"] encode headers called: filter=cilium.l7policy status=0" subsys=envoy-http threadID=290
2023-12-05T17:09:12.494008609Z level=debug msg="[[Tags: \"ConnectionId\":\"66\",\"StreamId\":\"17062581073634910404\"] encoding headers via codec (end_stream=false):" subsys=envoy-http threadID=290
2023-12-05T17:09:12.494011665Z level=debug msg="':status', '403'" subsys=envoy-http threadID=290
2023-12-05T17:09:12.494123664Z level=debug msg="'content-length', '15'" subsys=envoy-http threadID=290
2023-12-05T17:09:12.494283161Z level=debug msg="Proxy stats not found when updating" PolicyID.L4="egress:TCP:8080:0" ciliumEndpointName=cilium-test/client2-88575dbb7-qg7c8 containerID=a28fb0bf3f containerInterface= datapathPolicyRevision=108 desiredPolicyRevision=108 endpointID=386 identity=56423 ipv4=10.244.2.139 ipv6="fd00:10:244:2::ca9e" k8sPodName=cilium-test/client2-88575dbb7-qg7c8 subsys=endpoint
2023-12-05T17:09:12.494372487Z level=debug msg="Proxy stats not found when updating" PolicyID.L4="egress:TCP:8080:0" ciliumEndpointName=cilium-test/client2-88575dbb7-qg7c8 containerID=a28fb0bf3f containerInterface= datapathPolicyRevision=108 desiredPolicyRevision=108 endpointID=386 identity=56423 ipv4=10.244.2.139 ipv6="fd00:10:244:2::ca9e" k8sPodName=cilium-test/client2-88575dbb7-qg7c8 subsys=endpoint
2023-12-05T17:09:12.495366117Z level=debug msg="'content-type', 'text/plain'" subsys=envoy-http threadID=290
2023-12-05T17:09:12.495376166Z level=debug msg="'date', 'Tue, 05 Dec 2023 17:09:12 GMT'" subsys=envoy-http threadID=290
2023-12-05T17:09:12.495433963Z level=debug msg="'server', 'envoy'" subsys=envoy-http threadID=290
2023-12-05T17:09:12.495439474Z level=debug msg="[[Tags: \"ConnectionId\":\"66\"] writing 124 bytes, end_stream false" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.495443260Z level=debug msg="[[Tags: \"ConnectionId\":\"66\",\"StreamId\":\"17062581073634910404\"] encode data called: filter=cilium.l7policy status=0" subsys=envoy-http threadID=290
2023-12-05T17:09:12.495446877Z level=debug msg="[[Tags: \"ConnectionId\":\"66\",\"StreamId\":\"17062581073634910404\"] encoding data via codec (size=15 end_stream=true)" subsys=envoy-http threadID=290
2023-12-05T17:09:12.495449632Z level=debug msg="[[Tags: \"ConnectionId\":\"66\"] writing 15 bytes, end_stream false" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.495452788Z level=debug msg="[[Tags: \"ConnectionId\":\"66\",\"StreamId\":\"17062581073634910404\"] Codec completed encoding stream." subsys=envoy-http threadID=290
2023-12-05T17:09:12.495455634Z level=debug msg="[item added to deferred deletion list (size=1)" subsys=envoy-main threadID=290
2023-12-05T17:09:12.495458619Z level=debug msg="[[Tags: \"ConnectionId\":\"66\",\"StreamId\":\"17062581073634910404\"] resetting pool request" subsys=envoy-router threadID=290
2023-12-05T17:09:12.495500206Z level=debug msg="[[Tags: \"ConnectionId\":\"67\"] request reset" subsys=envoy-client threadID=290
2023-12-05T17:09:12.495503863Z level=debug msg="[item added to deferred deletion list (size=2)" subsys=envoy-main threadID=290
2023-12-05T17:09:12.495571579Z level=debug msg="[[C67] closing data_to_write=0 type=1" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.497657408Z level=debug msg="[[Tags: \"ConnectionId\":\"67\"] closing socket: 1" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.497667897Z level=debug msg="[[Tags: \"ConnectionId\":\"67\"] raising connection event 1" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.497671574Z level=debug msg="[[Tags: \"ConnectionId\":\"67\"] disconnect. resetting 0 pending requests" subsys=envoy-client threadID=290
2023-12-05T17:09:12.497674620Z level=debug msg="[[Tags: \"ConnectionId\":\"67\"] client disconnected, failure reason: " subsys=envoy-pool threadID=290
2023-12-05T17:09:12.497677385Z level=debug msg="[item added to deferred deletion list (size=3)" subsys=envoy-main threadID=290
2023-12-05T17:09:12.497680441Z level=debug msg="[item added to deferred deletion list (size=4)" subsys=envoy-main threadID=290
2023-12-05T17:09:12.497691301Z level=debug msg="[creating a new connection (connecting=0)" subsys=envoy-pool threadID=290
2023-12-05T17:09:12.497694767Z level=debug msg="[Set socket (70) option SO_MARK to dc670b00 (magic mark: b00, id: 56423, cluster: 0), src: 10.244.2.139:52450" subsys=envoy-filter threadID=290
2023-12-05T17:09:12.497697803Z level=debug msg="[[C68] current connecting state: true" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.497700658Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] connecting" subsys=envoy-client threadID=290
2023-12-05T17:09:12.497703604Z level=debug msg="[[C68] connecting to 10.244.1.146:8080" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.497706870Z level=debug msg="[[C68] connection in progress" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.497709474Z level=debug msg="[not creating a new connection, shouldCreateNewConnection returned false." subsys=envoy-pool threadID=290
2023-12-05T17:09:12.497712029Z level=debug msg="[item added to deferred deletion list (size=5)" subsys=envoy-main threadID=290
2023-12-05T17:09:12.497715065Z level=debug msg="[item added to deferred deletion list (size=6)" subsys=envoy-main threadID=290
2023-12-05T17:09:12.497718472Z level=debug msg="[item added to deferred deletion list (size=7)" subsys=envoy-main threadID=290
2023-12-05T17:09:12.497721668Z level=debug msg="[enableTimer called on 0x178d3f2c80e0 for 3600000ms, min is 3600000ms" subsys=envoy-misc threadID=290
2023-12-05T17:09:12.497785656Z level=debug msg="[[C67] close during connected callback" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.497794262Z level=debug msg="[clearing deferred deletion list (size=7)" subsys=envoy-main threadID=290
2023-12-05T17:09:12.497797409Z level=debug msg="[[Tags: \"ConnectionId\":\"67\"] destroying stream: 0 remaining" subsys=envoy-pool threadID=290
2023-12-05T17:09:12.497800384Z level=debug msg="[[Tags: \"ConnectionId\":\"66\"] socket event: 2" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.497803229Z level=debug msg="[[Tags: \"ConnectionId\":\"66\"] write ready" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.497806165Z level=debug msg="[[Tags: \"ConnectionId\":\"66\"] write returns: 139" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.497809231Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] socket event: 2" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.497812487Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] write ready" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.497815592Z level=debug msg="[[C68] connected" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.498622410Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] raising connection event 2" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.498628973Z level=debug msg="[[Tags: \"ConnectionId\":\"68\"] connected" subsys=envoy-client threadID=290
2023-12-05T17:09:12.498632799Z level=debug msg="[[Tags: \"ConnectionId\":\"66\"] socket event: 3" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.498635595Z level=debug msg="[[Tags: \"ConnectionId\":\"66\"] write ready" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.498638711Z level=debug msg="[[Tags: \"ConnectionId\":\"66\"] read ready. dispatch_buffered_data=0" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.498641285Z level=debug msg="[[Tags: \"ConnectionId\":\"66\"] read returns: 0" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.498643830Z level=debug msg="[[Tags: \"ConnectionId\":\"66\"] remote close" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.498646705Z level=debug msg="[[Tags: \"ConnectionId\":\"66\"] closing socket: 0" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.498649591Z level=debug msg="[[Tags: \"ConnectionId\":\"66\"] raising connection event 0" subsys=envoy-connection threadID=290
2023-12-05T17:09:12.498661714Z level=debug msg="[[C66] connection on event 0" subsys=envoy-conn_handler threadID=290
2023-12-05T17:09:12.498670460Z level=debug msg="[[Tags: \"ConnectionId\":\"66\"] adding to cleanup list" subsys=envoy-conn_handler threadID=290
2023-12-05T17:09:12.498993125Z level=debug msg="[item added to deferred deletion list (size=1)" subsys=envoy-main threadID=290
2023-12-05T17:09:12.499003795Z level=debug msg="[item added to deferred deletion list (size=2)" subsys=envoy-main threadID=290
2023-12-05T17:09:12.499007392Z level=debug msg="[clearing deferred deletion list (size=2)" subsys=envoy-main threadID=290

find: ‘bazel-proxy/external/envoy/api/envoy’: No such file or directory

Odd error; what could cause this? Apart from this other issue I opened, everything has compiled (so far at least): https://github.com/istio/proxy/issues/2175

[pc@localhost cilium-proxy]$ make
tools/check_repositories.sh
tools/install_bazel.sh `cat BAZEL_VERSION`
Checking if Bazel 0.24.1 needs to be installed...
Bazel 0.24.1 already installed, skipping fetch.
rm -f ./bazel-bin/cilium-envoy ./cilium-envoy ./bazel-bin/cilium_integration_test \
	Dockerfile.istio_proxy \
	Dockerfile.istio_proxy_debug
rm -f bazel-out/k8-fastbuild/bin/_objs/cilium-envoy/external/envoy/source/common/common/version_linkstamp.o
bazel  build --jobs=3 //:cilium-envoy 2>&1 | grep -v -e "INFO: From .*:" -e "external/.*: warning: directory does not exist."
Loading: 
Loading: 0 packages loaded
Analyzing: target //:cilium-envoy (0 packages loaded, 0 targets configured)
DEBUG: Rule 'org_golang_x_tools' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "11629171a39a1cb4d426760005be6f7cb9b4182e4cb2756b7f1c5c2b6ae869fe"
INFO: Analysed target //:cilium-envoy (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
[0 / 3] [-----] BazelWorkspaceStatusAction stable-status.txt
[4 / 6] Linking cilium-envoy; 1s linux-sandbox
Target //:cilium-envoy up-to-date:
  bazel-bin/cilium-envoy
INFO: Elapsed time: 3.874s, Critical Path: 3.60s
INFO: 2 processes: 2 linux-sandbox.
INFO: Build completed successfully, 4 total actions
INFO: Build completed successfully, 4 total actions
make -f Makefile.api all
find: ‘bazel-proxy/external/envoy/api/envoy’: No such file or directory
make[1]: Entering directory `/home/pc/workarea/experiments/cilium-proxy'
make[1]: *** No rule to make target `bazel-proxy/external/envoy/api', needed by `all'.  Stop.
make[1]: Leaving directory `/home/pc/workarea/experiments/cilium-proxy'
make: *** [api] Error 2

Cilium istio-proxy 1.10.3 HTTP probes fail

We've been trying to use cilium-istioctl version 1.10.3 in tandem with Cilium v1.10.3 in order to get the benefits of Cilium-enhanced Envoy proxies on our AKS clusters, which already have Istio v1.10.3 installed (we also did a fresh re-install just to make sure nothing was being cached somehow).

But it seems the image built from this repository results in failing HTTP probes all over our cluster (TCP probes still work, and no other Istio components fail except the HTTP health checks):

2021-08-19T07:09:27.645166Z	error	Request to probe app failed: Get "http://10.1.0.147:80/": read tcp 127.0.0.6:55105->10.1.0.147:80: read: connection reset by peer, original URL path = /app-health/nginx/readyz
app URL path = /
2021-08-19T07:09:29.792212Z	error	Request to probe app failed: Get "http://10.1.0.147:80/": read tcp 127.0.0.6:35123->10.1.0.147:80: read: connection reset by peer, original URL path = /app-health/nginx/livez
app URL path = /
[2021-08-19T07:09:27.644Z] "- - -" 0 NR filter_chain_not_found - "-" 0 0 0 - "-" "-" "-" "-" "-" - - 10.1.0.147:80 127.0.0.6:55105 - -
[2021-08-19T07:09:29.791Z] "- - -" 0 NR filter_chain_not_found - "-" 0 0 0 - "-" "-" "-" "-" "-" - - 10.1.0.147:80 127.0.0.6:35123 - -

Is this a known issue or is this not fully tested/supported?

Include fault filter in cilium envoy for service-mesh? (or: use a provided envoy?)

Hello! As mentioned on Slack, I was experimenting with using the service-mesh beta's CiliumEnvoyConfig CRD as a way to do some low-level programming of the Envoy proxy, specifically looking to turn on the fault injection filter.

It looks like the Envoy in question (this one) is intentionally tailored to the specific use cases "known" to the Cilium mesh, and as part of that it disables a lot of the upstream filters (including the fault filter). That makes sense to me, but it also seems somewhat in tension with exposing the raw config as a CRD, since that led me to believe the CRD was an "escape hatch" for doing things that the mesh didn't yet support.

So, the very specific question here is: would you be amenable to updating the cilium distribution of envoy to include Just One More filter?

Alternatively (and unfortunately this may be a better discussion for cilium/service-mesh-beta), what are your thoughts on a "Bring Your Own Envoy" kind of model? It seems easy enough to override the envoy binary with a volume populated by an init container as a way to try that, but that doesn't get very far unless the cilium agent learns to pass through the portions of the config it doesn't recognize by "preserving unknown fields" (admittedly, I'm way out of my depth with protobuf/Go and how Envoy uses it, so I don't know the level of effort for what I'm describing).
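For concreteness, the init-container idea described above could be sketched roughly like this. This is a hypothetical pod-spec fragment, not a tested configuration: the image name, volume name, and the assumption that the agent looks for the binary at /usr/bin/cilium-envoy are all illustrative.

```yaml
# Sketch only: replace the bundled cilium-envoy binary via an init container.
# Image names and paths are illustrative assumptions.
initContainers:
- name: copy-custom-envoy
  image: my-registry/custom-envoy:latest   # hypothetical custom Envoy build
  command: ["cp", "/custom/envoy", "/shared/cilium-envoy"]
  volumeMounts:
  - name: envoy-bin
    mountPath: /shared
containers:
- name: cilium-agent
  volumeMounts:
  - name: envoy-bin
    mountPath: /usr/bin/cilium-envoy   # shadows the bundled binary
    subPath: cilium-envoy
volumes:
- name: envoy-bin
  emptyDir: {}
```

Even if this works mechanically, it only helps if the agent passes unrecognized config through unmodified, which is the open question above.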

Error inlined_vector.h -Werror=maybe-uninitialized

When building on CentOS 8.2 with GCC 8.3.1, I get the following error:

make PKG_BUILD=1 V=$V DESTDIR=proxy-$BUILD_CILIUM_RELEASE cilium-envoy

ERROR: /home/rpmbuild/.cache/bazel/_bazel_rpmbuild/476f2b41900750ece4c6f35941150caf/external/envoy/source/common/http/BUILD:265:17: C++ compilation of rule '@envoy//source/common/http:header_map_lib' failed (Exit 1): gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 122 argument(s) skipped)

Use --sandbox_debug to see verbose messages from the sandbox gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 122 argument(s) skipped)

Use --sandbox_debug to see verbose messages from the sandbox
In file included from external/com_google_absl/absl/container/inlined_vector.h:53,
from bazel-out/k8-opt/bin/external/envoy/include/envoy/http/virtual_includes/header_map_interface/envoy/http/header_map.h:18,
from bazel-out/k8-opt/bin/external/envoy/source/common/http/virtual_includes/header_map_lib/common/http/header_map_impl.h:9,
from external/envoy/source/common/http/header_map_impl.cc:1:
external/com_google_absl/absl/container/internal/inlined_vector.h: In member function 'void Envoy::Http::HeaderString::append(const char*, uint32_t)':
external/com_google_absl/absl/container/internal/inlined_vector.h:437:5: error: 'absl::inlined_vector_internal::Storage<char, 128, std::allocator<char>>::data_' may be used uninitialized in this function [-Werror=maybe-uninitialized]
    data_ = other_storage.data_;
    ^~~~~
cc1plus: all warnings being treated as errors
Target //:cilium-envoy failed to build

In doing some research on Envoy, I bumped into a number of issues that led me to believe this might be a false positive, and I see that they added the suppression to cc_wrapper.py:
envoyproxy/envoy#2987

How can we add -Wno-error=maybe-uninitialized or -Wno-maybe-uninitialized to the gcc compile flags?
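One possible approach, untested here: the build instructions note that BAZEL_BUILD_OPTS is forwarded to bazel by the Makefile, and Bazel's --copt adds a flag to every C/C++ compile action, so the suppression could be passed through like this:

```shell
# Sketch only: forward the warning suppression to gcc via Bazel's --copt.
# BAZEL_BUILD_OPTS is the variable the Makefile already forwards to bazel.
make PKG_BUILD=1 BAZEL_BUILD_OPTS="--copt=-Wno-maybe-uninitialized" cilium-envoy
```

Note that changing BAZEL_BUILD_OPTS invalidates Bazel/Docker caches for the affected build stages.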

Replace proxylib builder image

Description

Currently, the proxylib builder image is the upstream quay.io/cilium/cilium-builder image, which can cause some problems for automatic upgrades.

It would be better to change to something smaller that is also managed by the Renovate bot.

proxy/Dockerfile

Lines 6 to 8 in f4eb004

# Common Builder image used in cilium/cilium
# We need gcc for cgo cross-compilation at least, we can swap to something smaller later on
ARG PROXYLIB_BUILDER=quay.io/cilium/cilium-builder:832f86bb0f7c7129c1536d5620174deeec645117@sha256:6dbac9f9eba3e20f8edad4676689aa8c11b172035fe5e25b533552f42dea4e9a
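As an illustration only (the image and tag are hypothetical, not a vetted replacement), the ARG could point at a plain Go image, which still ships gcc for cgo cross-compilation and can be digest-pinned by Renovate:

```dockerfile
# Hypothetical alternative builder image; the comment in the current
# Dockerfile only requires gcc for cgo cross-compilation.
ARG PROXYLIB_BUILDER=docker.io/library/golang:1.22-bookworm
```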

Include Lua filter in cilium envoy for service-mesh

I am in the process of testing a port from AWS CNI and Istio to Cilium service mesh and CNI. One of the requirements is to enable security headers on all responses leaving the cluster. This has been achieved in Istio using an approach similar to https://gist.github.com/kabute/ef8e7198031c8a99212a629a139ac83f .

I am trying to achieve the same on the Cilium cluster mesh using CiliumEnvoyConfig and noticed that the Lua filter is not enabled. Would it be possible to activate it? I think this would simplify the migration from Istio to Cilium.
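If the Lua filter were compiled into cilium-envoy, the security-headers use case from the linked gist could be expressed roughly as below. This is a minimal sketch only: the filter's placement in the httpFilters list and the specific header are illustrative, and the config is valid only if envoy.filters.http.lua is actually built in.

```yaml
# Sketch: an HTTP Lua filter adding a security header to every response.
# Only works if envoy.filters.http.lua is compiled into cilium-envoy.
- name: envoy.filters.http.lua
  typedConfig:
    '@type': type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
    inlineCode: |
      function envoy_on_response(response_handle)
        response_handle:headers():add("X-Frame-Options", "DENY")
      end
```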

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Awaiting Schedule

These updates are awaiting their schedule. Click on a checkbox to get an update now.

  • chore(deps): update docker.io/library/ubuntu:22.04 docker digest to 58b8789 (main)
  • chore(deps): update docker.io/library/ubuntu:22.04 docker digest to 58b8789 (v1.28)
  • chore(deps): update docker.io/library/ubuntu:22.04 docker digest to 58b8789 (v1.29)

Edited/Blocked

These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox.

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

Branch main
dockerfile
Dockerfile
  • docker.io/library/ubuntu 22.04@sha256:adbb90115a21969d2fe6fa7f9af4253e16d45f8d4c1e930182610c4731962658
Dockerfile.builder
  • docker.io/library/ubuntu 22.04@sha256:adbb90115a21969d2fe6fa7f9af4253e16d45f8d4c1e930182610c4731962658
github-actions
.github/workflows/build-envoy-image-ci.yaml
  • docker/setup-buildx-action v3.6.1@988b5a0280414f521da01fcc63a27aeeb4b104db
  • actions/cache v4.0.2@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9
  • docker/login-action v3.3.0@9780b0c442fbb1117ed29e0efdff1e18412f7567
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • sigstore/cosign-installer v3.6.0@4959ce089c160fddf62f7b42464195ba1a56d382
.github/workflows/build-envoy-images-release.yaml
  • docker/setup-buildx-action v3.6.1@988b5a0280414f521da01fcc63a27aeeb4b104db
  • docker/login-action v3.3.0@9780b0c442fbb1117ed29e0efdff1e18412f7567
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • actions/cache v4.0.2@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • docker/setup-buildx-action v3.6.1@988b5a0280414f521da01fcc63a27aeeb4b104db
  • docker/login-action v3.3.0@9780b0c442fbb1117ed29e0efdff1e18412f7567
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • actions/cache v4.0.2@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • sigstore/cosign-installer v3.6.0@4959ce089c160fddf62f7b42464195ba1a56d382
.github/workflows/ci-check-format.yaml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • actions/upload-artifact v4.4.0@50769540e7f4bd5e21e526ee35c689e35e0d6874
.github/workflows/ci-tests.yaml
  • actions/setup-go v5.0.2@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • docker/setup-buildx-action v3.6.1@988b5a0280414f521da01fcc63a27aeeb4b104db
  • docker/login-action v3.3.0@9780b0c442fbb1117ed29e0efdff1e18412f7567
  • actions/cache v4.0.2@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
.github/workflows/cilium-integration-tests.yaml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • helm/kind-action v1.10.0@0025e74a8c7512023d06dc019c617aa3cf561fde
  • actions/setup-go v5.0.2@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32
  • actions/upload-artifact v4.4.0@50769540e7f4bd5e21e526ee35c689e35e0d6874
gomod
go.mod
  • go 1.22
  • github.com/census-instrumentation/opencensus-proto v0.4.1
  • github.com/cilium/checkmate v1.0.3
  • github.com/cilium/kafka v0.0.0-20180809090225-01ce283b732b@01ce283b732b
  • github.com/cncf/xds/go v0.0.0-20240905190251-b4127c9b8d78@b4127c9b8d78
  • github.com/envoyproxy/protoc-gen-validate v1.1.0
  • github.com/golang/protobuf v1.5.4
  • github.com/google/uuid v1.6.0
  • github.com/prometheus/client_model v0.6.1
  • github.com/sasha-s/go-deadlock v0.3.5
  • github.com/sirupsen/logrus v1.9.3
  • github.com/stretchr/testify v1.9.0
  • go.opentelemetry.io/proto/otlp v1.3.1
  • golang.org/x/sync v0.8.0
  • golang.org/x/sys v0.25.0
  • google.golang.org/genproto/googleapis/api v0.0.0-20240903143218-8af14fe29dc1@8af14fe29dc1
  • google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1@8af14fe29dc1
  • google.golang.org/grpc v1.66.2
  • google.golang.org/protobuf v1.34.2
  • k8s.io/klog/v2 v2.130.1
regex
WORKSPACE
  • envoyproxy/envoy v1.30.5@20d3fc67fb757d7d7a644e0e0bfc3988b1df56ab
.github/workflows/build-envoy-image-ci.yaml
  • kubernetes-sigs/bom v0.6.0
.github/workflows/build-envoy-images-release.yaml
  • kubernetes-sigs/bom v0.6.0
.github/workflows/ci-tests.yaml
  • go 1.22.7
.github/workflows/cilium-integration-tests.yaml
  • go 1.22.7
ENVOY_VERSION
  • envoyproxy/envoy 1.30.5
Dockerfile.builder
  • go 1.22.7
Branch v1.28
dockerfile
Dockerfile
  • docker.io/library/ubuntu 22.04@sha256:adbb90115a21969d2fe6fa7f9af4253e16d45f8d4c1e930182610c4731962658
Dockerfile.builder
  • docker.io/library/ubuntu 22.04@sha256:adbb90115a21969d2fe6fa7f9af4253e16d45f8d4c1e930182610c4731962658
github-actions
.github/workflows/build-envoy-image-ci.yaml
  • docker/setup-buildx-action v3.6.1@988b5a0280414f521da01fcc63a27aeeb4b104db
  • actions/cache v4.0.2@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9
  • docker/login-action v3.3.0@9780b0c442fbb1117ed29e0efdff1e18412f7567
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • sigstore/cosign-installer v3.6.0@4959ce089c160fddf62f7b42464195ba1a56d382
.github/workflows/build-envoy-images-release.yaml
  • docker/setup-buildx-action v3.6.1@988b5a0280414f521da01fcc63a27aeeb4b104db
  • docker/login-action v3.3.0@9780b0c442fbb1117ed29e0efdff1e18412f7567
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • actions/cache v4.0.2@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • docker/setup-buildx-action v3.6.1@988b5a0280414f521da01fcc63a27aeeb4b104db
  • docker/login-action v3.3.0@9780b0c442fbb1117ed29e0efdff1e18412f7567
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • actions/cache v4.0.2@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • sigstore/cosign-installer v3.6.0@4959ce089c160fddf62f7b42464195ba1a56d382
.github/workflows/ci-check-format.yaml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • actions/upload-artifact v4.4.0@50769540e7f4bd5e21e526ee35c689e35e0d6874
.github/workflows/ci-tests.yaml
  • actions/setup-go v5.0.2@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • docker/setup-buildx-action v3.6.1@988b5a0280414f521da01fcc63a27aeeb4b104db
  • docker/login-action v3.3.0@9780b0c442fbb1117ed29e0efdff1e18412f7567
  • actions/cache v4.0.2@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
.github/workflows/cilium-integration-tests.yaml
  • actions/github-script v7.0.1@60a0d83039c74a4aee543508d2ffcb1c3799cdea
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • helm/kind-action v1.10.0@0025e74a8c7512023d06dc019c617aa3cf561fde
  • actions/setup-go v5.0.2@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32
  • actions/github-script v7.0.1@60a0d83039c74a4aee543508d2ffcb1c3799cdea
  • actions/upload-artifact v4.4.0@50769540e7f4bd5e21e526ee35c689e35e0d6874
  • actions/github-script v7.0.1@60a0d83039c74a4aee543508d2ffcb1c3799cdea
gomod
go.mod
  • go 1.22
  • github.com/census-instrumentation/opencensus-proto v0.4.1
  • github.com/cilium/checkmate v1.0.3
  • github.com/cilium/kafka v0.0.0-20180809090225-01ce283b732b@01ce283b732b
  • github.com/cncf/xds/go v0.0.0-20240905190251-b4127c9b8d78@b4127c9b8d78
  • github.com/envoyproxy/protoc-gen-validate v1.1.0
  • github.com/golang/protobuf v1.5.4
  • github.com/google/uuid v1.6.0
  • github.com/prometheus/client_model v0.6.1
  • github.com/sasha-s/go-deadlock v0.3.5
  • github.com/sirupsen/logrus v1.9.3
  • github.com/stretchr/testify v1.9.0
  • go.opentelemetry.io/proto/otlp v1.3.1
  • golang.org/x/sync v0.8.0
  • golang.org/x/sys v0.25.0
  • google.golang.org/genproto/googleapis/api v0.0.0-20240903143218-8af14fe29dc1@8af14fe29dc1
  • google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1@8af14fe29dc1
  • google.golang.org/grpc v1.66.0
  • google.golang.org/protobuf v1.34.2
  • k8s.io/klog/v2 v2.130.1
regex
WORKSPACE
  • envoyproxy/envoy v1.28.5@1d7aa735c778acc1fb29f0130f2f01cc89acea6e
.github/workflows/build-envoy-image-ci.yaml
  • kubernetes-sigs/bom v0.6.0
.github/workflows/build-envoy-images-release.yaml
  • kubernetes-sigs/bom v0.6.0
.github/workflows/ci-tests.yaml
  • go 1.22.7
.github/workflows/cilium-integration-tests.yaml
  • go 1.22.7
ENVOY_VERSION
  • envoyproxy/envoy 1.28.5
Dockerfile.builder
  • go 1.22.7
Branch v1.29
dockerfile
Dockerfile
  • docker.io/library/ubuntu 22.04@sha256:adbb90115a21969d2fe6fa7f9af4253e16d45f8d4c1e930182610c4731962658
Dockerfile.builder
  • docker.io/library/ubuntu 22.04@sha256:adbb90115a21969d2fe6fa7f9af4253e16d45f8d4c1e930182610c4731962658
github-actions
.github/workflows/build-envoy-image-ci.yaml
  • docker/setup-buildx-action v3.6.1@988b5a0280414f521da01fcc63a27aeeb4b104db
  • actions/cache v4.0.2@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9
  • docker/login-action v3.3.0@9780b0c442fbb1117ed29e0efdff1e18412f7567
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • sigstore/cosign-installer v3.6.0@4959ce089c160fddf62f7b42464195ba1a56d382
.github/workflows/build-envoy-images-release.yaml
  • docker/setup-buildx-action v3.6.1@988b5a0280414f521da01fcc63a27aeeb4b104db
  • docker/login-action v3.3.0@9780b0c442fbb1117ed29e0efdff1e18412f7567
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • actions/cache v4.0.2@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • docker/setup-buildx-action v3.6.1@988b5a0280414f521da01fcc63a27aeeb4b104db
  • docker/login-action v3.3.0@9780b0c442fbb1117ed29e0efdff1e18412f7567
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • actions/cache v4.0.2@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • sigstore/cosign-installer v3.6.0@4959ce089c160fddf62f7b42464195ba1a56d382
.github/workflows/ci-check-format.yaml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • actions/upload-artifact v4.4.0@50769540e7f4bd5e21e526ee35c689e35e0d6874
.github/workflows/ci-tests.yaml
  • actions/setup-go v5.0.2@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • docker/setup-buildx-action v3.6.1@988b5a0280414f521da01fcc63a27aeeb4b104db
  • docker/login-action v3.3.0@9780b0c442fbb1117ed29e0efdff1e18412f7567
  • actions/cache v4.0.2@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
  • docker/build-push-action v6.7.0@5cd11c3a4ced054e52742c5fd54dca954e0edd85
.github/workflows/cilium-integration-tests.yaml
  • actions/github-script v7.0.1@60a0d83039c74a4aee543508d2ffcb1c3799cdea
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • helm/kind-action v1.10.0@0025e74a8c7512023d06dc019c617aa3cf561fde
  • actions/setup-go v5.0.2@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32
  • actions/github-script v7.0.1@60a0d83039c74a4aee543508d2ffcb1c3799cdea
  • actions/upload-artifact v4.4.0@50769540e7f4bd5e21e526ee35c689e35e0d6874
  • actions/github-script v7.0.1@60a0d83039c74a4aee543508d2ffcb1c3799cdea
gomod
go.mod
  • go 1.22
  • github.com/census-instrumentation/opencensus-proto v0.4.1
  • github.com/cilium/checkmate v1.0.3
  • github.com/cilium/kafka v0.0.0-20180809090225-01ce283b732b@01ce283b732b
  • github.com/cncf/xds/go v0.0.0-20240905190251-b4127c9b8d78@b4127c9b8d78
  • github.com/envoyproxy/protoc-gen-validate v1.1.0
  • github.com/golang/protobuf v1.5.4
  • github.com/google/uuid v1.6.0
  • github.com/prometheus/client_model v0.6.1
  • github.com/sasha-s/go-deadlock v0.3.5
  • github.com/sirupsen/logrus v1.9.3
  • github.com/stretchr/testify v1.9.0
  • go.opentelemetry.io/proto/otlp v1.3.1
  • golang.org/x/sync v0.8.0
  • golang.org/x/sys v0.25.0
  • google.golang.org/genproto/googleapis/api v0.0.0-20240903143218-8af14fe29dc1@8af14fe29dc1
  • google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1@8af14fe29dc1
  • google.golang.org/grpc v1.66.2
  • google.golang.org/protobuf v1.34.2
  • k8s.io/klog/v2 v2.130.1
regex
WORKSPACE
  • envoyproxy/envoy v1.29.8@81568012d317bb2a83de144bc6f268ee85fd681b
.github/workflows/build-envoy-image-ci.yaml
  • kubernetes-sigs/bom v0.6.0
.github/workflows/build-envoy-images-release.yaml
  • kubernetes-sigs/bom v0.6.0
.github/workflows/ci-tests.yaml
  • go 1.22.7
.github/workflows/cilium-integration-tests.yaml
  • go 1.22.7
ENVOY_VERSION
  • envoyproxy/envoy 1.29.8
Dockerfile.builder
  • go 1.22.7

  • Check this box to trigger a request for Renovate to run again on this repository

K8s Gateway API + cilium-envoy: curl 400

Hello! 👋

Context

I've enabled the Gateway API on Cilium, and I am using Cilium version 1.14.2.

I created a Gateway resource that has one listener, whose hostname is in a private hosted zone and uses a wildcard. The LoadBalancer Service created in the Kubernetes cluster for this Gateway resource already has an EXTERNAL-IP assigned to it (I am using aws-load-balancer-controller).

I also created an HTTPRoute that has the previous Gateway resource as its parentRef and only one hostname in its list of hostnames. The route was successfully attached to the Gateway, and the Kubernetes external-dns successfully created the A and TXT records for the route hostname.

The Problem

Now I am trying to test the route, but when I run curl <URL> inside the cluster, I get a 400 Bad Request:

HTTP/1.1 400 Bad Request
content-length: 11
content-type: text/plain
date: Fri, 29 Sep 2023 10:29:55 GMT
server: envoy
connection: close

Trying to Debug

I can't see any useful logs in the cilium-agent. I ran cilium monitor, but there is no flow trace from my app pod's identity, which I think is normal since the pod is not being reached. I do reach the app when running curl against the pod IP address directly.

I then enabled debug logs on the cilium-agent and found the following (which I hope is related):

[2023-09-29 14:11:31.346][60][trace][main] item added to deferred deletion list (size=1)
[2023-09-29 14:11:31.346][60][trace][main] clearing deferred deletion list (size=1)
[2023-09-29 14:11:32.795][60][debug][filter] tls inspector: new connection accepted
[2023-09-29 14:11:32.795][60][trace][filter] onFileEvent: 1
[2023-09-29 14:11:32.795][60][trace][filter] recv returned: 23
[2023-09-29 14:11:32.795][60][trace][filter] tls inspector: recv: 23
[2023-09-29 14:11:32.795][60][debug][misc] EGRESS POD IP: 10.4.18.60, destination IP: 10.4.38.122
[2023-09-29 14:11:32.795][60][trace][filter] cilium.ipcache: Looking up key: a04123c, prefixlen: 32
[2023-09-29 14:11:32.795][60][debug][filter] cilium.ipcache: 10.4.18.60 has ID 2
[2023-09-29 14:11:32.795][60][trace][filter] cilium.ipcache: Looking up key: a04267a, prefixlen: 32
[2023-09-29 14:11:32.795][60][debug][filter] cilium.ipcache: 10.4.38.122 has ID 1
[2023-09-29 14:11:32.795][60][trace][filter] cilium.ipcache: Looking up key: a0420d8, prefixlen: 32
[2023-09-29 14:11:32.795][60][debug][filter] cilium.ipcache: 10.4.32.216 has ID 8
[2023-09-29 14:11:32.795][60][debug][filter] Cilium SocketOption(): source_identity: 8, ingress: false, port: 30217, pod_ip: 10.4.32.216, source_addresses: /10.4.32.216:0/, mark: 80b00 (magic mark: b00, cluster: 0, ID: 8)
[2023-09-29 14:11:32.795][60][trace][misc] enableTimer called on 0x14d77f78d960 for 3600000ms, min is 3600000ms
[2023-09-29 14:11:32.795][60][debug][filter] cilium.network: onNewConnection
[2023-09-29 14:11:32.795][60][trace][connection] [C5] raising connection event 2
[2023-09-29 14:11:32.795][60][debug][conn_handler] [C5] new connection from 10.4.18.60:22978
[2023-09-29 14:11:32.795][60][trace][main] item added to deferred deletion list (size=1)
[2023-09-29 14:11:32.795][60][trace][main] clearing deferred deletion list (size=1)
[2023-09-29 14:11:32.795][60][trace][connection] [C5] socket event: 3
[2023-09-29 14:11:32.795][60][trace][connection] [C5] write ready
[2023-09-29 14:11:32.795][60][trace][connection] [C5] read ready. dispatch_buffered_data=0
[2023-09-29 14:11:32.795][60][trace][connection] [C5] read returns: 23
[2023-09-29 14:11:32.795][60][trace][connection] [C5] read returns: 0
[2023-09-29 14:11:32.795][60][trace][filter] cilium.network: onData 23 bytes, end_stream: false
[2023-09-29 14:11:32.795][60][trace][http] [C5] parsing 23 bytes
[2023-09-29 14:11:32.795][60][trace][http] [C5] message begin
[2023-09-29 14:11:32.795][60][debug][http] [C5] new stream
[2023-09-29 14:11:32.795][60][trace][misc] enableTimer called on 0x14d77f78d730 for 300000ms, min is 300000ms
[2023-09-29 14:11:32.795][60][debug][http] [C5][S706132633623396296] Sending local reply with details http1.codec_error
[2023-09-29 14:11:32.795][60][trace][misc] enableTimer called on 0x14d77f78d730 for 300000ms, min is 300000ms
[2023-09-29 14:11:32.795][60][trace][http] [C5][S706132633623396296] encode headers called: filter=cilium.l7policy status=0
[2023-09-29 14:11:32.795][60][debug][http] [C5][S706132633623396296] closing connection due to connection close header
[2023-09-29 14:11:32.795][60][debug][http] [C5][S706132633623396296] encoding headers via codec (end_stream=false):
':status', '400'
'content-length', '11'
'content-type', 'text/plain'
'date', 'Fri, 29 Sep 2023 14:11:32 GMT'
'server', 'envoy'
'connection', 'close'

[2023-09-29 14:11:32.795][60][trace][connection] [C5] writing 145 bytes, end_stream false
[2023-09-29 14:11:32.795][60][trace][misc] enableTimer called on 0x14d77f78d730 for 300000ms, min is 300000ms
[2023-09-29 14:11:32.795][60][trace][http] [C5][S706132633623396296] encode data called: filter=cilium.l7policy status=0
[2023-09-29 14:11:32.795][60][trace][http] [C5][S706132633623396296] encoding data via codec (size=11 end_stream=true)
[2023-09-29 14:11:32.795][60][trace][connection] [C5] writing 11 bytes, end_stream false
[2023-09-29 14:11:32.795][60][debug][http] [C5][S706132633623396296] doEndStream() resetting stream
[2023-09-29 14:11:32.795][60][debug][http] [C5][S706132633623396296] stream reset
[2023-09-29 14:11:32.795][60][trace][main] item added to deferred deletion list (size=1)
[2023-09-29 14:11:32.795][60][trace][misc] enableTimer called on 0x14d77f78d960 for 3600000ms, min is 3600000ms
[2023-09-29 14:11:32.795][60][trace][main] item added to deferred deletion list (size=2)
[2023-09-29 14:11:32.795][60][debug][connection] [C5] closing data_to_write=156 type=2
[2023-09-29 14:11:32.795][60][debug][connection] [C5] setting delayed close timer with timeout 1000 ms
[2023-09-29 14:11:32.796][60][debug][http] [C5] dispatch error: http/1.1 protocol error: HPE_INVALID_METHOD
[2023-09-29 14:11:32.796][60][debug][connection] [C5] closing data_to_write=156 type=2
[2023-09-29 14:11:32.796][60][debug][connection] [C5] remote close
[2023-09-29 14:11:32.796][60][debug][connection] [C5] closing socket: 0

Please note the log lines for [debug][http] and [trace][http]
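As a hedged reading of those lines: HPE_INVALID_METHOD means the HTTP/1 parser saw bytes that do not start with a valid method token, which is what happens when a client sends TLS bytes to a plaintext listener (and the tls inspector's "recv: 23" is consistent with a small TLS record). One way to narrow it down is to hit the listener with both schemes and see which one triggers the 400 (placeholder <URL> as above):

```shell
# Sketch: compare plaintext vs TLS against the same listener to see
# which request path produces the 400/codec_error.
curl -v http://<URL>/
curl -vk https://<URL>/
```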

I didn't define any CiliumClusterwideNetworkPolicy or CiliumNetworkPolicy, but when the attachment of the HTTPRoute to the Gateway completed, a CiliumEnvoyConfig was created:

apiVersion: cilium.io/v2
kind: CiliumEnvoyConfig
metadata:
  creationTimestamp: "2023-09-27T15:35:34Z"
  generation: 4
  name: cilium-gateway-test
  namespace: core
  ownerReferences:
  - apiVersion: gateway.networking.k8s.io/v1beta1
    kind: Gateway
    name: test
    uid: 39c861c8-a2ea-4ecc-a0dc-53e0e2a4ecff
  resourceVersion: "35413694"
  uid: ad3c6afd-0d5d-4ebb-85df-a3568200a62a
spec:
  backendServices:
  - name: hello-world
    namespace: core
    number:
    - "80"
  resources:
  - '@type': type.googleapis.com/envoy.config.listener.v3.Listener
    filterChains:
    - filterChainMatch:
        transportProtocol: raw_buffer
      filters:
      - name: envoy.filters.network.http_connection_manager
        typedConfig:
          '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          httpFilters:
          - name: envoy.filters.http.router
            typedConfig:
              '@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
          rds:
            routeConfigName: listener-insecure
          statPrefix: listener-insecure
          upgradeConfigs:
          - upgradeType: websocket
          useRemoteAddress: true
    listenerFilters:
    - name: envoy.filters.listener.tls_inspector
      typedConfig:
        '@type': type.googleapis.com/envoy.extensions.filters.listener.tls_inspector.v3.TlsInspector
    name: listener
    socketOptions:
    - description: Enable TCP keep-alive (default to enabled)
      intValue: "1"
      level: "1"
      name: "9"
      state: STATE_LISTENING
    - description: TCP keep-alive idle time (in seconds) (defaults to 10s)
      intValue: "10"
      level: "6"
      name: "4"
      state: STATE_LISTENING
    - description: TCP keep-alive probe intervals (in seconds) (defaults to 5s)
      intValue: "5"
      level: "6"
      name: "5"
      state: STATE_LISTENING
    - description: TCP keep-alive probe max failures.
      intValue: "10"
      level: "6"
      name: "6"
      state: STATE_LISTENING
  - '@type': type.googleapis.com/envoy.config.route.v3.RouteConfiguration
    name: listener-insecure
    virtualHosts:
    - domains:
      - <hostname>
      - <hostname>:*
      name: <hostname>
      routes:
      - directResponse:
          body:
            inlineString: ""
          status: 500
        match:
          headers:
          - name: version
            stringMatch:
              exact: "2"
          prefix: /
      - match:
          prefix: /
        requestHeadersToAdd:
        - header:
            key: my-header
            value: foo
        route:
          cluster: core/hello-world:80
          maxStreamDuration:
            maxStreamDuration: 0s
  - '@type': type.googleapis.com/envoy.config.cluster.v3.Cluster
    connectTimeout: 5s
    name: core/hello-world:80
    outlierDetection:
      splitExternalLocalOriginErrors: true
    type: EDS
    typedExtensionProtocolOptions:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        '@type': type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        commonHttpProtocolOptions:
          idleTimeout: 60s
        useDownstreamProtocolConfig:
          http2ProtocolOptions: {}
  services:
  - listener: ""
    name: cilium-gateway-test
    namespace: core
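The numeric socketOptions in the listener above are raw kernel constants. Decoding them with Python's socket module (a convenience sketch, Linux values assumed) shows they are exactly the standard TCP keep-alive knobs that the descriptions claim:

```python
import socket

# level 1 / name 9 -> SOL_SOCKET / SO_KEEPALIVE   (enable keep-alive)
# level 6 / name 4 -> IPPROTO_TCP / TCP_KEEPIDLE  (idle time, 10s)
# level 6 / name 5 -> IPPROTO_TCP / TCP_KEEPINTVL (probe interval, 5s)
# level 6 / name 6 -> IPPROTO_TCP / TCP_KEEPCNT   (max failed probes, 10)
assert socket.SOL_SOCKET == 1
assert socket.SO_KEEPALIVE == 9
assert socket.IPPROTO_TCP == 6
assert socket.TCP_KEEPIDLE == 4
assert socket.TCP_KEEPINTVL == 5
assert socket.TCP_KEEPCNT == 6
print("socketOptions decode to SO_KEEPALIVE, TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT")
```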

I was wondering if you could help me here...? 😺 🙇‍♀️

Run tests on Ubuntu 22.04

Now that Cilium 1.13 is the latest stable release, our tests started failing, as Cilium 1.13 is built against a newer glibc than is available on Ubuntu 20.04. Fix this by running tests on Ubuntu 22.04 instead. Continue building on 20.04 so that cilium-envoy keeps running on older Ubuntu as well.

As an interim fix for the CI, let's pull proxylib from Cilium 1.12 instead of the latest stable.

Question: How to use example r2d2 policy in Environment

Hi everyone!

I am trying to get the r2d2 Cilium envoy proxy go extension to work in my environment, mainly following this tutorial. Ultimately, my goal is to write a custom go extension for my own protocol and have that deployed in my environment, but I want to start by getting the example to work first.

I have a setup in my KinD cluster where requests to service A are forwarded to service B. I'm hoping to apply the r2d2 policy to capture/manipulate traffic between A->B.
Currently I have built the r2d2 image (hooking r2d2 here), and referenced that image by building Cilium from this directory with the following command:

cilium install \
--chart-directory ./install/kubernetes/cilium/ \
--set ingressController.enabled=false \
--set ingressController.loadbalancerMode=dedicated \
--set-string extraConfig.enable-envoy-config=true \
--namespace kube-system \
--set envoy.enabled=true \
--set envoy.image.repository=<r2d2-image> \
--set envoy.image.tag=<r2d2-image-tag> \
--set envoy.image.pullPolicy=IfNotPresent \
--set envoy.image.digest=<r2d2-digest> \
--set envoy.image.useDigest=false

I apply a CiliumNetworkPolicy like so:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: r2d2test
  namespace: kube-system
spec:
  description: r2d2test
  endpointSelector:
    matchLabels:
      app: service-b
  ingress:
    - fromEndpoints:
        - {}
      toPorts:
        - ports:
            - port: "8888"
              protocol: ANY
          rules:
            l7proto: r2d2
  egress:
    - toEndpoints:
        - {}
      toPorts:
        - ports:
          - port: "8888"
            protocol: ANY
          rules:
            l7proto: r2d2

With this, I expect to see logs from the extension, or at least the Envoy access logs, in the Cilium proxy pod when I make a request to service A. However, nothing shows up, even though the request does flow through.

A few questions:

  1. Is the CiliumNetworkPolicy configured correctly? When I attach l7proto: r2d2 in the rules for both egress and ingress for service B, does that mean service B's incoming and outgoing traffic should go through the Go extension running the r2d2 policy?
  2. Where can I find logs? How can I write logs related to the requests captured by the Go extension and access them? I assumed that by writing access logs with p.connection.Log(...) I would see them printed by the cilium-envoy pod.
  3. If necessary, could I use this Go extension to manipulate an L7 protocol, for example to add a custom header to HTTP? If so, what are the key entry points? OnData? Perhaps using Inject()?
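On question 3: proxylib parsers see the raw byte stream, so any manipulation has to happen at the framing level. As a rough illustration (plain Python, not the proxylib Go API; the line-oriented framing and verb names of the r2d2 example protocol are assumed), a parser must first find complete frames before it can pass, drop, or inject data:

```python
# Illustration only: splitting a byte stream into complete CRLF-terminated
# frames, the same job a proxylib parser's OnData does before deciding
# PASS/DROP/MORE for each frame.
ALLOWED_VERBS = {b"READ", b"WRITE", b"HALT", b"RESET"}

def next_frame(buf: bytes):
    """Return (frame_length, verdict) for the first complete frame,
    or (0, "MORE") if more data is needed, mirroring the (op, bytes)
    shape of a stream parser's verdicts."""
    end = buf.find(b"\r\n")
    if end < 0:
        return 0, "MORE"  # incomplete frame: ask the proxy for more data
    verb = buf[:end].split(b" ", 1)[0]
    return end + 2, ("PASS" if verb in ALLOWED_VERBS else "DROP")

print(next_frame(b"READ leia.txt\r\n"))  # -> (15, 'PASS')
print(next_frame(b"EXEC order66\r\n"))   # -> (14, 'DROP')
print(next_frame(b"WRI"))                # -> (0, 'MORE')
```

In the real Go extension this logic lives in the parser's OnData method, which is also where injected bytes would be produced.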

Details about my setup:

cilium-cli: v0.15.10 compiled
go1.21.2
linux/amd64
cilium image: 1.15.0-dev
kind v0.20.0

make docker-istio-proxy failed due to unknown flag: --load

root@builder:/src/github.com/cilium/proxy# make docker-istio-proxy
DOCKER_BUILDKIT=1 docker build --load --build-arg BAZEL_BUILD_OPTS="" -f Dockerfile.istio_proxy -t cilium/istio_proxy:1.10.6 .
unknown flag: --load
See 'docker build --help'.
Makefile.docker:205: recipe for target 'docker-istio-proxy' failed
make: *** [docker-istio-proxy] Error 125

Docker version 20.10.8, build 3967b7d

Linux builder 4.15.0-1021-aws #21-Ubuntu SMP Tue Aug 28 10:23:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Backend Connectivity Issues

I've been having issues with the Envoy Gateway and Ingress with Cilium for some months now. I'm not sure whether it's Cilium's or Envoy's fault. Envoy logs "conntrack lookup failed: Success" on every request (it always returns an error because it never receives a response from the backend), but that is only an info-level log.

My issue in cilium/cilium with infos etc.:
cilium/cilium#29406

Building without Bazel or in a multi-stage container

Bazel is not an ideal build environment for building secure binaries. In fact, this particular build process uses Go 1.13.5 (seven patch releases behind the current Go 1.13 branch), and it also fails to compile because ninja is not in the build environment (which raises the question of why two build systems are needed). In addition, Bazel pulls in a slew of unnecessary dependencies with their own collection of vulnerabilities (is Java really necessary to build a Go/C/C++ microservice?).

With the increased threat of vulnerabilities in microservice-oriented projects, CNCF projects in general should give developers who prioritize security a means to build projects such as these with only the necessary dependencies and compilers (gcc, make, go).

Can the team provide a means to build the envoy binary on the latest version of go?

Action Required: Fix Renovate Configuration

There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.

Location: .github/renovate.json5
Error type: Invalid JSON5 (parsing failed)
Message: JSON5.parse error: JSON5: invalid character '\"' at 132:7

dev: Add proto tools/proto_format/proto_format.sh in make {fix,check}

Description

After envoy 1.24.x upgrade, make fix is no longer running proto format.

vagrant@ubuntu-jammy:~/proxy$ make fix
BUILDING on amd64 for amd64 using //bazel:linux_x86_64
Using Docker Buildx builder "default" with build flags "".
tools/install_bazel.sh `cat .bazelversion`
Checking if Bazel 6.0.0 needs to be installed...
Bazel 6.0.0 already installed, skipping fetch.
bazel  build --platforms=//bazel:linux_x86_64 --config=release //:check_format.py
INFO: Build options --compilation_mode, --crosstool_top, --define, and 3 more have changed, discarding analysis cache.
INFO: Analyzed target //:check_format.py (118 packages loaded, 359 targets configured).
INFO: Found 1 target...
Target //:check_format.py up-to-date:
  bazel-bin/check_format.py
INFO: Elapsed time: 0.944s, Critical Path: 0.02s
INFO: 6 processes: 6 internal.
INFO: Build completed successfully, 6 total actions
CLANG_FORMAT=clang-format-15 BUILDIFIER=~/go/bin/buildifier BUILDOZER=~/go/bin/buildozer ./bazel-bin/check_format.py --skip_envoy_build_rule_check --add-excluded-prefixes "./linux/" "./proxylib/" --bazel_tools_check_excluded_paths="." --build_fixer_check_excluded_paths="./" fix
Please note: `tools/code_format/check_format.py` no longer checks API `.proto` files, please use `tools/proto_format/proto_format.sh` if you are making changes to the API files

Cilium build fails with "error executing shell command"

Hi,
I am on Red Hat Enterprise 8.3 and cannot build. The log below says it all. Any help much appreciated!
Thanks,
chris

$ make
tools/check_repositories.sh
bazel build //:cilium-envoy
Starting local Bazel server and connecting to it...
INFO: Analyzed target //:cilium-envoy (386 packages loaded, 15001 targets configured).
INFO: Found 1 target...
ERROR: /home/cgi/.cache/bazel/_bazel_cgi/d2e242848a4c11f578659ff4a2d64649/external/envoy/bazel/foreign_cc/BUILD:355:21: error executing shell command: '/bin/bash -c #!/usr/bin/env bash
function cleanup_function() {
local ecode=$?
if [ $ecode -eq 0 ]; then
cleanup_on_success
else
cleanup_on_failure
fi
}
set -e
function cleanup_on_success() {
printf...' failed (Exit 127): bash failed: error executing command /bin/bash -c ... (remaining 1 argument(s) skipped)

Use --sandbox_debug to see verbose messages from the sandbox bash failed: error executing command /bin/bash -c ... (remaining 1 argument(s) skipped)

Use --sandbox_debug to see verbose messages from the sandbox

rules_foreign_cc: Build failed!
rules_foreign_cc: Keeping temp build directory and dependencies directory for debug.
rules_foreign_cc: Please note that the directories inside a sandbox are still cleaned unless you specify '--sandbox_debug' Bazel command line flag.

rules_foreign_cc: Printing build logs:

_____ BEGIN BUILD LOGS _____
Bazel external C/C++ Rules #0.0.8. Building library 'zlib'
Environment:______________
EXT_BUILD_ROOT=/home/cgi/.cache/bazel/_bazel_cgi/d2e242848a4c11f578659ff4a2d64649/sandbox/linux-sandbox/2/execroot/cilium
INSTALLDIR=/home/cgi/.cache/bazel/_bazel_cgi/d2e242848a4c11f578659ff4a2d64649/sandbox/linux-sandbox/2/execroot/cilium/bazel-out/host/bin/external/envoy/bazel/foreign_cc/zlib
PWD=/home/cgi/.cache/bazel/_bazel_cgi/d2e242848a4c11f578659ff4a2d64649/sandbox/linux-sandbox/2/execroot/cilium
BUILD_TMPDIR=/tmp/tmp.lL4Slc4KH2
TMPDIR=/tmp
EXT_BUILD_DEPS=/tmp/tmp.fCjShvjgf0
SHLVL=2
BUILD_LOG=bazel-out/host/bin/external/envoy/bazel/foreign_cc/zlib/logs/CMake.log
BUILD_SCRIPT=bazel-out/host/bin/external/envoy/bazel/foreign_cc/zlib/logs/CMake_script.sh
PATH=/home/cgi/.cache/bazel/_bazel_cgi/d2e242848a4c11f578659ff4a2d64649/sandbox/linux-sandbox/2/execroot/cilium:/bin:/usr/bin:/usr/local/bin
_=/bin/env


bazel-out/host/bin/external/envoy/bazel/foreign_cc/zlib/logs/CMake_script.sh: line 36: cmake: command not found

_____ END BUILD LOGS _____
Printing build script:

_____ BEGIN BUILD SCRIPT _____
#!/usr/bin/env bash
function children_to_path() {
if [ -d $EXT_BUILD_DEPS/bin ]; then
local tools=$(find $EXT_BUILD_DEPS/bin -maxdepth 1 -mindepth 1)
for tool in $tools;
do
if [[ -d "$tool" ]] || [[ -L "$tool" ]]; then
export PATH=$PATH:$tool
fi
done
fi
}
function replace_in_files() {
if [ -d "$1" ]; then
find -L $1 -type f \( -name "*.pc" -or -name "*.la" -or -name "*-config" -or -name "*.cmake" \) -exec sed -i 's@'"$2"'@'"$3"'@g' {} ';'
fi
}
printf ""
printf "Bazel external C/C++ Rules #0.0.8. Building library 'zlib'\n"
printf ""
set -e

export EXT_BUILD_ROOT=$(pwd)
export BUILD_TMPDIR=$(mktemp -d)
export EXT_BUILD_DEPS=$(mktemp -d)
export INSTALLDIR=$EXT_BUILD_ROOT/bazel-out/host/bin/external/envoy/bazel/foreign_cc/zlib
export PATH="$EXT_BUILD_ROOT:$PATH"
mkdir -p $INSTALLDIR
printf "Environment:\n"
env
printf "\n"
children_to_path $EXT_BUILD_DEPS/bin
export PATH="$EXT_BUILD_DEPS/bin:$PATH"
cd $BUILD_TMPDIR
export INSTALL_PREFIX="zlib"
CC="/usr/bin/gcc" CXX="/usr/bin/gcc" CFLAGS="-U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 -D_FORTIFY_SOURCE=1 -DNDEBUG -ffunction-sections -fdata-sections -fno-canonical-system-headers -Wno-builtin-macro-redefined -D__DATE__="redacted" -D__TIMESTAMP__="redacted" -D__TIME__="redacted"" CXXFLAGS="-U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 -D_FORTIFY_SOURCE=1 -DNDEBUG -ffunction-sections -fdata-sections -std=c++0x -fno-canonical-system-headers -Wno-builtin-macro-redefined -D__DATE__="redacted" -D__TIMESTAMP__="redacted" -D__TIME__="redacted"" ASMFLAGS="-U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 -D_FORTIFY_SOURCE=1 -DNDEBUG -ffunction-sections -fdata-sections -fno-canonical-system-headers -Wno-builtin-macro-redefined -D__DATE__="redacted" -D__TIMESTAMP__="redacted" -D__TIME__="redacted"" cmake -DCMAKE_AR="/usr/bin/ar" -DCMAKE_SHARED_LINKER_FLAGS="-shared -fuse-ld=gold -Wl,-no-as-needed -Wl,-z,relro,-z,now -B/usr/bin -pass-exit-codes -lm -Wl,--gc-sections -l:libstdc++.a" -DCMAKE_EXE_LINKER_FLAGS="-fuse-ld=gold -Wl,-no-as-needed -Wl,-z,relro,-z,now -B/usr/bin -pass-exit-codes -lm -Wl,--gc-sections -l:libstdc++.a" -DCMAKE_CXX_COMPILER_FORCED="on" -DCMAKE_C_COMPILER_FORCED="on" -DSKIP_BUILD_EXAMPLES="on" -DBUILD_SHARED_LIBS="off" -DZLIB_COMPAT="on" -DZLIB_ENABLE_TESTS="off" -DWITH_OPTIM="on" -DWITH_SSE4="off" -DWITH_NEW_STRATEGIES="off" -DUNALIGNED_OK="off" -DCMAKE_BUILD_TYPE="Bazel" -DCMAKE_PREFIX_PATH="$EXT_BUILD_DEPS" -DCMAKE_INSTALL_PREFIX="$INSTALL_PREFIX" -DCMAKE_RANLIB="" -GNinja $EXT_BUILD_ROOT/external/net_zlib
ninja -v
ninja -v install
cp -L -r --no-target-directory "$BUILD_TMPDIR/$INSTALL_PREFIX" "$INSTALLDIR"
replace_in_files $INSTALLDIR $BUILD_TMPDIR ${EXT_BUILD_DEPS}
replace_in_files $INSTALLDIR $EXT_BUILD_DEPS ${EXT_BUILD_DEPS}
mkdir -p $EXT_BUILD_ROOT/bazel-out/host/bin/external/envoy/bazel/foreign_cc/copy_zlib/zlib
cp -L -r --no-target-directory "$INSTALLDIR" "$EXT_BUILD_ROOT/bazel-out/host/bin/external/envoy/bazel/foreign_cc/copy_zlib/zlib"
touch $EXT_BUILD_ROOT/bazel-out/host/bin/external/envoy/bazel/foreign_cc/empty_zlib.txt
cd $EXT_BUILD_ROOT
_____ END BUILD SCRIPT _____
rules_foreign_cc: Build script location: bazel-out/host/bin/external/envoy/bazel/foreign_cc/zlib/logs/CMake_script.sh
rules_foreign_cc: Build log location: bazel-out/host/bin/external/envoy/bazel/foreign_cc/zlib/logs/CMake.log

Target //:cilium-envoy failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 9.784s, Critical Path: 0.19s
INFO: 616 processes: 614 internal, 2 linux-sandbox.
FAILED: Build did NOT complete successfully
make: *** [Makefile.dev:38: envoy-default] Error 1
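The actual failure is at the bottom of the build logs: `cmake: command not found`. rules_foreign_cc shells out to cmake and ninja, so both need to be on PATH before invoking make. A small pre-flight check (a convenience sketch, not part of the repo):

```python
import shutil

def missing_tools(tools):
    """Return the subset of the given tools that is not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

# rules_foreign_cc needs at least these for the zlib step that failed above;
# an empty list means the build environment has them.
print(missing_tools(["cmake", "ninja"]))
```

On RHEL 8 the missing tools typically come from `dnf install cmake ninja-build` (package names are an assumption for that distro).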

Socket re-use in proxy causing EADDRNOTAVAIL

We are using https://docs.cilium.io/en/stable/gettingstarted/tls-visibility/ in our Kubernetes clusters to restrict outbound network access via L7 filters. We have a Kubernetes pod that uses pyarrow S3FileSystem to access data from S3 into a python computation. This library has a large upfront cost, sending a large number of requests to S3 endpoints when it is initialized.

I believe that we have found a bug related to this socket re-use. Around 1% of our requests fail regularly, and we can also get into a state where no requests to S3 succeed at all. When the Cilium L7 filters are disabled, these errors never occur.

In order to debug this issue, we started cilium-agent with --debug-verbose=envoy --envoy-log=/tmp/envoy.log to see the proxy debug logs. Within this log, it is clear that the 503s are returned from envoy immediately after it receives an EADDRNOTAVAIL on a specific socket.

[2022-03-23 12:53:23.323][401][debug][pool] creating a new connection
[2022-03-23 12:53:23.323][401][trace][filter] Set socket (137) option SO_MARK to 32030b00 (magic mark: b00, id: 12803, cluster: 0), src: 10.0.43.18:37372
[2022-03-23 12:53:23.323][401][debug][client] [C29] connecting
[2022-03-23 12:53:23.323][401][debug][connection] [C29] connecting to 52.217.196.218:443
[2022-03-23 12:53:23.323][401][debug][connection] [C29] immediate connect error: 99
[2022-03-23 12:53:23.323][401][trace][pool] not creating a new connection, shouldCreateNewConnection returned false.
[2022-03-23 12:53:23.323][401][trace][connection] [C21] socket event: 2
[2022-03-23 12:53:23.323][401][trace][connection] [C21] write ready
[2022-03-23 12:53:23.323][401][trace][connection] [C29] socket event: 3
[2022-03-23 12:53:23.323][401][debug][connection] [C29] raising immediate error
[2022-03-23 12:53:23.323][401][debug][connection] [C29] closing socket: 0
[2022-03-23 12:53:23.323][401][trace][connection] [C29] raising connection event 0
[2022-03-23 12:53:23.323][401][debug][client] [C29] disconnect. resetting 0 pending requests
[2022-03-23 12:53:23.323][401][debug][pool] [C29] client disconnected, failure reason: immediate connect error: 99
[2022-03-23 12:53:23.323][401][debug][router] [C10][S10684640357075680258] upstream reset: reset reason: connection failure, transport failure reason: immediate connect error: 99
[2022-03-23 12:53:23.323][401][debug][http] [C10][S10684640357075680258] Sending local reply with details upstream_reset_before_response_started{connection_failure,immediate_connect_error:_99}
[2022-03-23 12:53:23.323][401][trace][router] Cilium access log msg sent: timestamp: 1648040003323830591
entry_type: Response
policy_name: "10.0.43.18"
source_security_id: 12803
source_address: "10.0.43.18:37370"
destination_address: "52.217.196.218:443"
destination_security_id: 16777230
http {
  http_protocol: HTTP11
  scheme: "https"
  host: "REDACTED.s3.us-east-1.amazonaws.com"
  path: "/?REDACTED"
  method: "GET"
  headers {
    key: "x-request-id"
    value: "REDACTED"
  }
  headers {
    key: "content-length"
    value: "146"
  }
  headers {
    key: "content-type"
    value: "text/plain"
  }
  status: 503
}

Taking a deeper look at what is occurring here, we see that there is an issue with sockets re-using the same quartet of src host:port, dest host:port. Looking at a specific example of a set of log lines:

#### Connection 11 starts
[2022-03-23 12:53:22.095][401][trace][filter] Set socket (131) option SO_MARK to 32030b00 (magic mark: b00, id: 12803, cluster: 0), src: 10.0.43.18:37370
[2022-03-23 12:53:22.096][401][debug][connection] [C11] connection in progress

#### Connection 13 fails
[2022-03-23 12:53:23.046][401][trace][filter] Set socket (133) option SO_MARK to 32030b00 (magic mark: b00, id: 12803, cluster: 0), src: 10.0.43.18:37370
[2022-03-23 12:53:23.046][401][debug][client] [C13] connecting
[2022-03-23 12:53:23.046][401][debug][connection] [C13] connecting to 52.217.196.218:443
[2022-03-23 12:53:23.046][401][debug][connection] [C13] immediate connect error: 99

#### Retries again for [C14-16]

[2022-03-23 12:53:23.068][401][trace][filter] Set socket (133) option SO_MARK to 32030b00 (magic mark: b00, id: 12803, cluster: 0), src: 10.0.43.18:37370
[2022-03-23 12:53:23.068][401][debug][client] [C14] connecting
[2022-03-23 12:53:23.068][401][debug][connection] [C14] connecting to 52.217.196.218:443
[2022-03-23 12:53:23.068][401][debug][connection] [C14] immediate connect error: 99

[2022-03-23 12:53:23.088][401][trace][filter] Set socket (133) option SO_MARK to 32030b00 (magic mark: b00, id: 12803, cluster: 0), src: 10.0.43.18:37370
[2022-03-23 12:53:23.088][401][debug][client] [C15] connecting
[2022-03-23 12:53:23.088][401][debug][connection] [C15] connecting to 52.217.196.218:443
[2022-03-23 12:53:23.088][401][debug][connection] [C15] immediate connect error: 99

[2022-03-23 12:53:23.124][401][trace][filter] Set socket (133) option SO_MARK to 32030b00 (magic mark: b00, id: 12803, cluster: 0), src: 10.0.43.18:37370
[2022-03-23 12:53:23.124][401][debug][client] [C16] connecting
[2022-03-23 12:53:23.124][401][debug][connection] [C16] connecting to 52.217.196.218:443
[2022-03-23 12:53:23.124][401][debug][connection] [C16] immediate connect error: 99

#### Returns a 503 (see matching timestamps from C16, no other requests at this time)

[2022-03-23 12:53:23.124][401][trace][router] Cilium access log msg sent: timestamp: 1648040003124779592
entry_type: Response
policy_name: "10.0.43.18"
source_security_id: 12803
source_address: "10.0.43.18:37372"
destination_address: "52.217.196.218:443"
destination_security_id: 16777230
http {
  http_protocol: HTTP11
  <DELETED_FOR_BREVITY>
  status: 503
}

#### Connection 11 is still established after these failed attempts
[2022-03-23 12:53:23.304][401][trace][http] [C11] parsed 5 bytes

Tracing the code seems to identify #53 as the issue here. Every time we get this 503 response, it is preceded by 4 connection attempts that fail with this connect error. Looking at the envoy config, we've got 3 retries set on 5xx responses, so the connection attempts align with that setting as well.
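The `immediate connect error: 99` lines above are errno 99, which on Linux is EADDRNOTAVAIL: connect() fails because the (src ip, src port, dst ip, dst port) quartet is already taken by an established connection. Quick confirmation (Linux errno values assumed):

```python
import errno
import os

# Error 99 from the Envoy logs is EADDRNOTAVAIL on Linux: the kernel refuses
# connect() because the source ip:port is already in use toward the same
# destination ip:port.
assert errno.EADDRNOTAVAIL == 99
print(os.strerror(errno.EADDRNOTAVAIL))
```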

$ curl --unix-socket /var/run/cilium/envoy-admin.sock http://localhost/config_dump?include_eds
...
                 "route": {
                  "cluster": "egress-cluster-tls",
                  "timeout": "3600s",
                  "retry_policy": {
                   "retry_on": "5xx",
                   "num_retries": 3,
                   "per_try_timeout": "0s"
                  },
...

We have additional data that we can share if desired, please feel free to reach out on the cilium slack or through this issue.

Support envoy.filters.http.oauth2

OAuth2 is a pretty common way to secure public-facing websites/endpoints, and it would be ideal if, when using Cilium Ingress with a custom CEC, we could configure the oauth2 filter to handle authentication.

Building cilium proxy from latest release branch of envoy

I noticed a PR updating cilium-envoy to Envoy 1.15.0, but Envoy is now at the 1.16.1 release.

The latest build of cilium-envoy looks like it's pulling Envoy 1.14.5. Would it be possible to modify this repo to allow developers to pull specific release branches of Envoy and add the Cilium filters, or just use the native Envoy proxy?

Proxylib with ubuntu 22.04

Cilium master currently has a breaking change that causes proxylib built
from there to not work on Ubuntu 20.04:
"cilium.network: Cannot load go module 'proxylib/libcilium.so': /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by proxylib/libcilium.so)"

as mentioned in #87

Support envoy.http.stateful_header_formatters.preserve_case

Envoy defaults to normalizing the casing in HTTP headers: https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/header_casing#http-1-1-header-casing

While this is compliant with the HTTP specification and doesn't usually cause issues in practice, there are cases where third-party software does not abide by the standard and treats HTTP headers as case-sensitive.

Compiling envoy with the envoy.http.stateful_header_formatters.preserve_case extension enabled would allow users to disable this behaviour when using cilium+envoy for network policies or L7 routing. Currently trying to do so results in the following errors being logged by the cilium agent:

{"level":"info","msg":"[libprotobuf WARNING external/com_google_protobuf/src/google/protobuf/text_format.cc:2157] Can't print proto content: proto type type.googleapis.com/envoy.extensions.http.header_formatters.preserve_case.v3.PreserveCaseFormatterConfig not found","subsys":"envoy-upstream","threadID":"46"}
{"level":"info","msg":"[libprotobuf WARNING external/com_google_protobuf/src/google/protobuf/text_format.cc:2157] Can't print proto content: proto type type.googleapis.com/envoy.extensions.http.header_formatters.preserve_case.v3.PreserveCaseFormatterConfig not found","subsys":"envoy-upstream","threadID":"46"}
{"level":"warning","msg":"NACK received for versions after 40 and up to 41; waiting for a version update before sending again","subsys":"xds","xdsAckedVersion":"40","xdsClientNode":"host~127.0.0.1~no-id~localdomain","xdsDetail":"Error adding/updating listener(s) thingy/thingy-router/thingy: Didn't find a registered implementation for 'envoy.http.stateful_header_formatters.preserve_case' with type URL: 'envoy.extensions.http.header_formatters.preserve_case.v3.PreserveCaseFormatterConfig'\n","xdsNonce":"41","xdsStreamID":2,"xdsTypeURL":"type.googleapis.com/envoy.config.listener.v3.Listener"}
{"ciliumEnvoyConfigName":"thingy-router","error":"NACK received: Error adding/updating listener(s) thingy/thingy-router/thingy: Didn't find a registered implementation for 'envoy.http.stateful_header_formatters.preserve_case' with type URL: 'envoy.extensions.http.header_formatters.preserve_case.v3.PreserveCaseFormatterConfig'\n","k8sApiVersion":"","k8sNamespace":"thingy","k8sUID":"4717a906-b971-4532-bdf2-a02e95459bbf","level":"warning","msg":"Failed to update CiliumEnvoyConfig","subsys":"k8s-watcher"}
{"level":"warning","msg":"[gRPC config for type.googleapis.com/envoy.config.listener.v3.Listener rejected: Error adding/updating listener(s) thingy/thingy-router/thingy: Didn't find a registered implementation for 'envoy.http.stateful_header_formatters.preserve_case' with type URL: 'envoy.extensions.http.header_formatters.preserve_case.v3.PreserveCaseFormatterConfig'","subsys":"envoy-config","threadID":"46"}
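For reference, if the extension were compiled into cilium-envoy, enabling it in a CiliumEnvoyConfig would look roughly like this fragment inside the HttpConnectionManager config (a sketch based on upstream Envoy's documented preserve_case configuration):

```yaml
http_protocol_options:
  header_key_format:
    stateful_formatter:
      name: preserve_case
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.http.header_formatters.preserve_case.v3.PreserveCaseFormatterConfig
```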
