rajeshkarnena / kandalf

This project forked from hellofresh/kandalf


RabbitMQ to Kafka bridge

Home Page: https://www.hellofresh.com

License: MIT License



Kandalf


Note

As of version 0.7, Docker images have been migrated to Docker Hub.


RabbitMQ to Kafka bridge

The main idea is to read messages from the configured exchanges in RabbitMQ and publish them to Kafka.

The application uses an intermediate persistent storage to keep messages it has already read in case Kafka is unavailable.

The service is written in Go and can be built with Go compiler version 1.14 or above.

Configuring

Application configuration

The application is configured with environment variables or with config files in one of several formats: JSON, TOML, YAML, HCL, or Java properties.

By default the application tries to read its config file from /etc/kandalf/conf/config.<ext> and ./config.<ext>. You can change the path with the -c <file_path> or --config <file_path> application parameter. If no file is found, the config loader falls back to reading config values from environment variables.
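The lookup order above can be sketched in Go. This is an illustrative sketch, not the project's actual loader: the function name `findConfigFile` and the hard-coded `.yml` extension are assumptions (the real loader also accepts other extensions).

```go
package main

import (
	"fmt"
	"os"
)

// findConfigFile mimics the documented lookup order: an explicit path from
// -c/--config wins, otherwise the default locations are probed in order.
// Illustrative sketch only; the real loader supports multiple extensions.
func findConfigFile(explicit string) (string, bool) {
	candidates := []string{"/etc/kandalf/conf/config.yml", "./config.yml"}
	if explicit != "" {
		candidates = []string{explicit}
	}
	for _, p := range candidates {
		if _, err := os.Stat(p); err == nil {
			return p, true
		}
	}
	// Not found: at this point the real loader falls back to env variables.
	return "", false
}

func main() {
	if path, ok := findConfigFile(""); ok {
		fmt.Println("using config file:", path)
	} else {
		fmt.Println("no config file found, falling back to environment variables")
	}
}
```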

Environment variables

  • RABBIT_DSN - RabbitMQ server DSN
  • STORAGE_DSN - Permanent storage DSN, where the scheme is the storage type. The following storage types are currently supported:
    • Redis - requires a key query parameter in the DSN, used as the Redis storage key, e.g. redis://localhost:6379/?key=kandalf
  • LOG_* - Logging settings, see hellofresh/logging-go for details
  • KAFKA_BROKERS - Kafka brokers comma-separated list, e.g. 192.168.0.1:9092,192.168.0.2:9092
  • KAFKA_MAX_RETRY - Total number of times to retry sending a message to Kafka (default: 5)
  • KAFKA_PIPES_CONFIG - Path to RabbitMQ-Kafka bridge mappings config, see details below (default: /etc/kandalf/conf/pipes.yml)
  • STATS_DSN - Stats host, see hellofresh/stats-go for usage details.
  • STATS_PREFIX - Stats prefix, see hellofresh/stats-go for usage details.
  • STATS_PORT - Stats port, used only for prometheus metrics, metrics are exposed on localhost:<port>/metrics (default: 8080).
  • WORKER_CYCLE_TIMEOUT - Main application bridge worker cycle timeout to avoid CPU overload, must be valid duration string (default: 2s)
  • WORKER_CACHE_SIZE - Max messages number that we store in memory before trying to publish to Kafka (default: 10)
  • WORKER_CACHE_FLUSH_TIMEOUT - Max amount of time we store messages in memory before trying to publish to Kafka, must be valid duration string (default: 5s)
  • WORKER_STORAGE_READ_TIMEOUT - Timeout between attempts of reading persisted messages from storage, to publish them to Kafka, must be at least 2x greater than WORKER_CYCLE_TIMEOUT, must be valid duration string (default: 10s)
  • WORKER_STORAGE_MAX_ERRORS - Max storage read errors in a row before the worker stops reading in the current read cycle. The next read cycle starts after the WORKER_STORAGE_READ_TIMEOUT interval. (default: 10)
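The STORAGE_DSN format above can be parsed with Go's standard net/url package. A minimal sketch, assuming only the documented redis://host/?key=... shape; the function name `parseStorageDSN` is illustrative, not part of the project:

```go
package main

import (
	"fmt"
	"net/url"
)

// parseStorageDSN extracts the storage type (URL scheme) and the Redis
// "key" query parameter from a STORAGE_DSN value.
func parseStorageDSN(dsn string) (scheme, key string, err error) {
	u, err := url.Parse(dsn)
	if err != nil {
		return "", "", err
	}
	return u.Scheme, u.Query().Get("key"), nil
}

func main() {
	scheme, key, err := parseStorageDSN("redis://localhost:6379/?key=kandalf")
	if err != nil {
		panic(err)
	}
	fmt.Println("storage type:", scheme, "redis key:", key)
}
```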

Config file (YAML example)

Config should have the following structure:

logLevel: "info"                                    # same as env LOG_LEVEL
rabbitDSN: "amqp://user:password@rmq"               # same as env RABBIT_DSN
storageDSN: "redis://redis.local/?key=storage:key"  # same as env STORAGE_DSN
kafka:
  brokers:                                          # same as env KAFKA_BROKERS
    - "192.0.0.1:9092"
    - "192.0.0.2:9092"
  maxRetry: 5                                       # same as env KAFKA_MAX_RETRY
  pipesConfig: "/etc/kandalf/conf/pipes.yml"        # same as env KAFKA_PIPES_CONFIG
stats:
  dsn: "statsd.local:8125"                          # same as env STATS_DSN
  prefix: "kandalf"                                 # same as env STATS_PREFIX
worker:
  cycleTimeout: "2s"                                # same as env WORKER_CYCLE_TIMEOUT
  cacheSize: 10                                     # same as env WORKER_CACHE_SIZE
  cacheFlushTimeout: "5s"                           # same as env WORKER_CACHE_FLUSH_TIMEOUT
  storageReadTimeout: "10s"                         # same as env WORKER_STORAGE_READ_TIMEOUT
  storageMaxErrors: 10                              # same as env WORKER_STORAGE_MAX_ERRORS

You can find sample config file in assets/config.yml.
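The worker duration settings must be valid Go duration strings, and the docs above require storageReadTimeout to be at least twice cycleTimeout. A hedged sketch of that check using time.ParseDuration; `validateWorkerTimeouts` is an illustrative helper, not part of the codebase:

```go
package main

import (
	"fmt"
	"time"
)

// validateWorkerTimeouts checks the documented constraint that the storage
// read timeout is at least 2x the worker cycle timeout. Both arguments are
// duration strings such as "2s" or "10s".
func validateWorkerTimeouts(cycle, storageRead string) error {
	c, err := time.ParseDuration(cycle)
	if err != nil {
		return fmt.Errorf("invalid cycleTimeout: %w", err)
	}
	s, err := time.ParseDuration(storageRead)
	if err != nil {
		return fmt.Errorf("invalid storageReadTimeout: %w", err)
	}
	if s < 2*c {
		return fmt.Errorf("storageReadTimeout (%s) must be at least 2x cycleTimeout (%s)", s, c)
	}
	return nil
}

func main() {
	// Defaults from the config above: cycleTimeout 2s, storageReadTimeout 10s.
	if err := validateWorkerTimeouts("2s", "10s"); err != nil {
		fmt.Println("config error:", err)
		return
	}
	fmt.Println("worker timeouts OK")
}
```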

Pipes configuration

The rules defining which messages should be sent to which Kafka topics are defined in the Kafka Pipes Config file and are called "pipes". Each pipe has the following structure:

- kafkaTopic: "loyalty"                                # name of the topic in Kafka where message will be sent
  rabbitExchangeName: "customers"                      # name of the exchange in RabbitMQ
  rabbitTransientExchange: false                       # determines if the exchange should be declared as durable or transient
  rabbitRoutingKey: "badge.received"                   # routing key for exchange
  rabbitQueueName: "kandalf-customers-badge.received"  # the name of RabbitMQ queue to read messages from
  rabbitDurableQueue: true                             # determines if the queue should be declared as durable
  rabbitAutoDeleteQueue: false                         # determines if the queue should be declared as auto-delete

You can find sample Kafka Pipes Config file in assets/pipes.yml.
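In Go, one pipe entry from the config above maps naturally onto a tagged struct. A sketch under the assumption that field names follow the YAML keys one-to-one; the `Pipe` type here is illustrative and not necessarily the project's actual definition:

```go
package main

import "fmt"

// Pipe mirrors one entry of the pipes config shown above.
type Pipe struct {
	KafkaTopic              string `yaml:"kafkaTopic"`              // Kafka topic to publish to
	RabbitExchangeName      string `yaml:"rabbitExchangeName"`      // RabbitMQ exchange to bind to
	RabbitTransientExchange bool   `yaml:"rabbitTransientExchange"` // declare exchange transient instead of durable
	RabbitRoutingKey        string `yaml:"rabbitRoutingKey"`        // routing key for the binding
	RabbitQueueName         string `yaml:"rabbitQueueName"`         // queue to consume messages from
	RabbitDurableQueue      bool   `yaml:"rabbitDurableQueue"`      // declare queue durable
	RabbitAutoDeleteQueue   bool   `yaml:"rabbitAutoDeleteQueue"`   // declare queue auto-delete
}

func main() {
	// The example pipe from the snippet above.
	p := Pipe{
		KafkaTopic:         "loyalty",
		RabbitExchangeName: "customers",
		RabbitRoutingKey:   "badge.received",
		RabbitQueueName:    "kandalf-customers-badge.received",
		RabbitDurableQueue: true,
	}
	fmt.Printf("route %s/%s -> kafka topic %s\n",
		p.RabbitExchangeName, p.RabbitRoutingKey, p.KafkaTopic)
}
```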

How to build a binary on a local machine

  1. Make sure you have go and the make utility installed on your machine;
  2. Run make to install all required dependencies and build the binaries;
  3. Binaries for Linux and macOS will be placed in ./dist/.

How to run service in a docker environment

For testing and development you can use the docker-compose file with all the required services.

For production you can use the minimalistic prebuilt hellofresh/kandalf image as a base image, or mount a pipes configuration volume to /etc/kandalf/conf/.

Todo

  • Handle dependencies in a proper way (gvt, glide, or similar)
  • Tests

Contributing

To start contributing, please check CONTRIBUTING.

Contributors

vgarvardt, endeveit, italolelis, startnow65, asemt, aj-vrod, awurster, iuriinedostup, lucasmdrs, nsimaria, rafaeljesus, siad007, mend-for-github-com[bot]
