

Service Mesh and Proxies: Examples for Kafka

1. Docker Examples

The examples at ./docker-examples can be run locally using Docker. They use Envoy proxies with Kafka filters. Note that Kafka filters are an experimental feature and are only included in the contrib images of Envoy.
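Because the Kafka filters only ship in the contrib distribution, the compose files have to reference a contrib image rather than the standard Envoy image. A minimal sketch of what such a service entry could look like (the image tag, ports, and paths here are illustrative assumptions, not copied from the repository):

```yaml
# Hypothetical docker-compose fragment: the Kafka filters require
# an envoy-contrib image, not the standard envoyproxy/envoy image.
services:
  envoy-proxy:
    image: envoyproxy/envoy-contrib:v1.21-latest
    ports:
      - "19090:19090"   # listener exposed to Kafka clients
      - "9901:9901"     # Envoy admin interface
    volumes:
      - ./envoy.yaml:/etc/envoy/envoy.yaml
```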

1.1. Prerequisites

  • Docker and docker-compose


1.2. Double-Proxy with TLS

In a service mesh, communication is usually routed through a double proxy. Istio configures the sidecars automatically, and in simple use cases no additional proxy configuration is needed. To get a better understanding of how the sidecars in Istio are configured, this double-proxy example builds the below architecture in Docker:

envoy double proxy kafka

No Kafka filters are needed for the routing itself, since it takes place at layer 4. Nevertheless, the Kafka Broker filter can be applied additionally to make use of its metrics.
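As an illustration, a filter chain could combine the Kafka Broker filter (for metrics) with the TCP proxy filter (for the actual forwarding). This is a sketch only: the listener name, port, stat prefixes, and cluster name are assumptions, not taken from the repository's envoy.yaml.

```yaml
# Hypothetical Envoy listener: the Kafka Broker filter parses the
# Kafka protocol to emit metrics; the TCP proxy filter at the end
# of the chain performs the layer 4 routing.
static_resources:
  listeners:
    - name: kafka_listener
      address:
        socket_address: { address: 0.0.0.0, port_value: 19090 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.kafka_broker
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.kafka_broker.v3.KafkaBroker
                stat_prefix: kafka
            - name: envoy.filters.network.tcp_proxy
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
                stat_prefix: tcp
                cluster: kafka_broker_cluster
```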

Istio also enables automatic mTLS. In Envoy, TLS authentication can also be configured manually. In this example, the broker authenticates itself to the client and therefore needs to provide certificates. Certificates and keys are mounted into the containers of the client proxy and the broker proxy.

The proxies re-route requests to the configured ports. This functionality can also be used when the location of a service has changed or should remain hidden.
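To sketch how the TLS side of this could look in Envoy, the broker proxy's listener might carry a downstream TLS context along these lines (the file paths, names, and cluster are hypothetical, not the repository's actual configuration):

```yaml
# Hypothetical TLS termination on the broker proxy: the broker side
# presents a certificate chain and key that the client proxy verifies.
filter_chains:
  - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: tcp
          cluster: kafka_cluster
    transport_socket:
      name: envoy.transport_sockets.tls
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
        common_tls_context:
          tls_certificates:
            - certificate_chain: { filename: /etc/envoy/certs/broker-proxy.crt }
              private_key: { filename: /etc/envoy/certs/broker-proxy.key }
```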

The example can be executed using the following steps:

1) Change into the directory and start all containers:

cd ./docker-examples/double-proxy-kafka-tls
docker-compose up -d

2) Start an additional container as a client to test the proxies:

docker run --rm -it --network host ueisele/apache-kafka-server:2.8.0 bash

3) Test different commands using the tools kafka-topics, kafka-console-producer or kafka-console-consumer. The client proxy is available at localhost:19090. As an example:

kafka-topics.sh --bootstrap-server localhost:19090 --list

1.3. Kafka Mesh Filter Example

The Kafka Mesh filter was merged into Envoy’s contrib image in September 2021. It makes it possible to expose a single endpoint that proxies multiple clusters. Messages are forwarded based on topic prefixes; in this example, the prefixes a, b, or c can be used.

The filter was developed by Adam Kotwasinski, who is also the main contributor to the Kafka Broker filter. He has published a detailed blog post about it. The Envoy configuration in this repository is based on his example in that article.

The docker-compose file builds the following setup:

kafka mesh filter

Note that the Kafka Mesh filter can be combined with the Kafka Broker filter; it cannot, however, be combined with the TCP proxy filter. Nevertheless, the Kafka Mesh filter already enables basic routing to different clusters. Further constraints of this filter are described in the documentation.
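To illustrate the prefix routing, a Kafka Mesh filter configuration roughly follows this shape (hosts, ports, and cluster names are illustrative, loosely modeled on the pattern from the blog post rather than copied from this repository):

```yaml
# Hypothetical Kafka Mesh filter: topics starting with "a" go to the
# first cluster, "b" to the second, "c" to the third.
- name: envoy.filters.network.kafka_mesh
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.network.kafka_mesh.v3alpha.KafkaMesh
    advertised_host: "localhost"
    advertised_port: 19090
    upstream_clusters:
      - cluster_name: kafka_a
        bootstrap_servers: "kafka-a:9092"
        partition_count: 1
      - cluster_name: kafka_b
        bootstrap_servers: "kafka-b:9092"
        partition_count: 1
      - cluster_name: kafka_c
        bootstrap_servers: "kafka-c:9092"
        partition_count: 1
    forwarding_rules:
      - target_cluster: kafka_a
        topic_prefix: a
      - target_cluster: kafka_b
        topic_prefix: b
      - target_cluster: kafka_c
        topic_prefix: c
```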

To test this example, execute the below steps:

1) Change into the directory and start all containers:

cd ./docker-examples/proxy-kafka-mesh
docker-compose up -d

2) Create a Kafka client container to test the proxy:

docker run --rm -it --network host ueisele/apache-kafka-server:2.8.0 bash

3) Produce some messages to different topics. Note that the Kafka Mesh filter only supports producing messages; consuming messages or using the functionality of the kafka-topics tool is not possible. The proxy is available at localhost:19090. As topic prefixes, a, b, or c can be used. For example:

kafka-console-producer.sh --bootstrap-server localhost:19090 --topic apples
> apple
> twoapples
kafka-console-producer.sh --bootstrap-server localhost:19090 --topic berries
> berry
> twoberries
kafka-console-producer.sh --bootstrap-server localhost:19090 --topic cherries
> cherry
> twocherries

4) To verify that the messages were delivered successfully, you can use the kafka-console-consumer tool. Since the Kafka Mesh filter can only be used for producing, the brokers need to be accessed directly, without the proxy:

kafka-console-consumer.sh --bootstrap-server localhost:19091 --topic apples --from-beginning
kafka-console-consumer.sh --bootstrap-server localhost:29091 --topic berries --from-beginning
kafka-console-consumer.sh --bootstrap-server localhost:39091 --topic cherries --from-beginning

2. Kubernetes Examples

The examples for Kubernetes show how to configure and use Istio with Kafka. Since Istio automatically sets most of the required configuration, only small adjustments need to be made.

2.1. Prerequisites

  • a local minikube installation or access to a Kubernetes cluster

  • installation of istioctl


2.2. Kafka Metrics in Istio

Currently, the functionality of Istio is limited when it is used with Kafka. The reason for this is that Istio mainly supports HTTP/HTTPS at layer 7. For the Kafka protocol, it is only possible to add metrics and to use layer 4 features (e.g., mTLS).

To ensure that the Prometheus metrics are picked up by Istio, one important property is added to all pod configurations:

template:
    metadata:
      annotations:
        proxy.istio.io/config: |-
          proxyStatsMatcher:
            inclusionRegexps:
            - ".*"

This configures the Istio proxy to record additional statistics. The regular expression can be adjusted for specific needs; more details can be found in the official documentation.
You can also use the collected Prometheus metrics to build your own Grafana dashboards.
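For instance, to record only Kafka-related statistics rather than everything, the regular expression could be narrowed like this (a hypothetical adjustment, not the one used in the examples):

```yaml
# Hypothetical narrower matcher: include only Kafka filter statistics
# instead of all proxy statistics.
proxy.istio.io/config: |-
  proxyStatsMatcher:
    inclusionRegexps:
    - ".*kafka.*"
```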

Istio will build the environment below:

istio service mesh kafka

1) Start minikube:

minikube start

2) Activate Istio in your minikube Kubernetes cluster:

istioctl install --set profile=demo -y

3) To activate the monitoring tools, navigate to <istioctl-installation-directory>/samples and install Kiali, Prometheus, Jaeger, Zipkin, and Grafana:

cd <istioctl-installation-directory>/samples
kubectl apply -f ./addons

4) List all Istio services to get an overview of the tools that can be used:

kubectl get svc -n istio-system

Note: It can take a few minutes until the services are fully available.

5) Port-forward the service you want to access:

kubectl port-forward svc/kiali -n istio-system 20001

The service can be accessed via your browser, e.g. http://localhost:20001 for Kiali.

6) Install Kafka, Zookeeper, Consumer and Producer:

cd ./k8s-examples/k8s
./install.sh

It can take a few seconds to minutes until all pods are in a running state.

7) Check the status of the pods. They can also be monitored using the tools from step 5.

kubectl get pods -n kafka

2.3. mTLS Authentication

Istio uses automatic mTLS, which means that communication between the sidecars is automatically secured with mTLS. However, the default mode for automatic mTLS is PERMISSIVE, meaning mTLS is only used where possible: if one communication partner cannot use mTLS, unencrypted communication is also allowed.
The authentication policy in ./k8s-examples/istio/istio-peer-authentication.yaml sets the mTLS mode to STRICT.
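Such a policy typically has the following shape (a sketch of a standard Istio PeerAuthentication resource; the actual file in the repository may differ in details):

```yaml
# Hypothetical PeerAuthentication enforcing mTLS for the kafka namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: kafka
spec:
  mtls:
    mode: STRICT
```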

Kiali should also display mTLS as enabled for the kafka namespace. To test whether mTLS works, you can deploy kafka-sample-other-namespace.yaml in another namespace where Istio is not (!) enabled. The pod will then not contain sidecars, and its containers will therefore not be able to communicate using mTLS. Examining the logs of kafka-consumer or kafka-producer will show that the communication fails.

If the mTLS mode is changed to PERMISSIVE, communication with and without mTLS will be allowed again and producing/consuming will succeed. For more details, refer to the official documentation.

3. Envoy Admin Interface in Docker and Istio

Envoy offers an admin interface which provides access to Envoy’s logs and statistics.

Docker:
To increase the log level, the configuration has to be changed while the container is running:

docker exec -it <envoy-container-name> bash
apt-get update
apt-get install curl
curl -X POST localhost:9901/logging?level=debug

Istio:

kubectl exec -it <pod-name> -c istio-proxy -n kafka -- /bin/bash
curl -X GET localhost:15000/stats
# for increased log level:
curl -X POST localhost:15000/logging?level=DEBUG

Further details about the admin interface and its functionality can be found in the Envoy documentation.

4. Envoy Contrib Images

Envoy moved experimental filters to a separate Docker image in release 1.20. For Istio, this means that some of these filters can only be used up to Istio version 1.11: Istio version 1.12 will already include Envoy 1.20 and therefore only officially supported features. On GitHub, it is being discussed whether Istio should also maintain official and contrib images.

5. References and Further Resources

To build these examples, mainly the following references were used:

For more information about Kafka, Istio and Service Meshes, consider these articles and videos:


5.1. Aeraki Framework

As already explained above, Istio currently mainly supports HTTP/HTTPS. However, there are many other protocols where the functionality of Istio would be beneficial. To support a new protocol, it is currently necessary to implement new filters for Envoy. The Aeraki framework aims to make this process easier for other layer 7 protocols in Istio.

At the moment, Aeraki provides the most benefits for the Dubbo and Thrift protocols. Nevertheless, the project appears to be under active development and aims to support more protocols in the future. It is also possible to implement an interface to add new filter functionality.

However, for Kafka, Aeraki currently only supports metrics, which are already easy to configure without it.
