raystack / raccoon

Raccoon is a high-throughput, low-latency service to collect events in real-time from your web, mobile apps, and services using multiple network protocols.

Home Page: https://raystack.github.io/raccoon/

License: Apache License 2.0

clickstream kafka eventsourcing dataops

raccoon's Introduction

Raccoon


Raccoon is a high-throughput, low-latency service that provides an API to ingest clickstream data from mobile apps and websites and publish it to Kafka. Raccoon uses the WebSocket protocol for long-running persistent connections and Protobuf as the serialization format. It provides an event-type-agnostic API that accepts a batch (array) of events in Protobuf format. Refer here for the proto definition format that Raccoon accepts.

Key Features

  • Event Agnostic - The Raccoon API is event agnostic, so you can push any event with any schema.
  • Event Distribution - Events are distributed to Kafka topics based on the event metadata.
  • High Performance - Long-running persistent connections reduce connection setup overhead, and WebSocket reduces battery consumption for mobile apps (based on usage statistics).
  • Guaranteed Event Delivery - The server acknowledges successes and failures of delivery. It can be augmented for zero-data-loss or at-least-once guarantees.
  • Reduced Payload Sizes - Protobuf-based serialization keeps payloads compact.
  • Metrics - Built-in monitoring includes latency and active connections.

To know more, follow the detailed documentation

Use cases

Raccoon can be used as an event collector, event distributor, and forwarder of events generated from mobile/web/IoT front ends, as it provides a high-volume, high-throughput, low-latency, event-agnostic API. Raccoon can serve the needs of near-real-time data ingestion. Some domains where Raccoon could be used are listed below:

  • Adtech streams: Where digital marketing data from external sources can be ingested into the organization backends
  • Clickstream: Where user behavior data can be streamed in real-time
  • Edge systems: Where devices (say in the IoT world) need to send data to the cloud.
  • Event Sourcing: Such as Stock updates dashboards, autonomous/self-drive use cases

Resources

Explore the following resources to get started with Raccoon:

  • Guides provides guidance on deployment and client sample.
  • Concepts describes all important Raccoon concepts.
  • Reference contains details about configurations, metrics and other aspects of Raccoon.
  • Contribute contains resources for anyone who wants to contribute to Raccoon.

Run with Docker

Prerequisite

  • Docker installed

Run Docker Image

Raccoon provides a Docker image as part of each release. Make sure you have Kafka running locally, then run the following.

# Download docker image from docker hub
$ docker pull raystack/raccoon

# Run the following docker command with minimal config.
$ docker run -p 8080:8080 \
  -e SERVER_WEBSOCKET_PORT=8080 \
  -e SERVER_WEBSOCKET_CONN_ID_HEADER=X-User-ID \
  -e PUBLISHER_KAFKA_CLIENT_BOOTSTRAP_SERVERS=host.docker.internal:9093 \
  -e EVENT_DISTRIBUTION_PUBLISHER_PATTERN=clickstream-%s-log \
  raystack/raccoon

Run with Docker Compose

You can also use the docker-compose setup in this repo, which provides Raccoon along with a Kafka setup. Run the following commands.

# Run raccoon along with kafka setup
$ make docker-run
# Stop the docker compose
$ make docker-stop

You can consume the published events from the host machine by using localhost:9094 as the Kafka broker. Mind the topic routing when you consume the events.

Running locally

Prerequisite:

  • You need to have Go 1.18 or above installed
  • You need protoc installed
# Clone the repo
$ git clone https://github.com/raystack/raccoon.git

# Build the executable
$ make

# Configure env variables
$ vim .env

# Run Raccoon
$ ./out/raccoon

Note: Read the details of each configuration here.

Running tests

# Running unit tests
$ make test

# Running integration tests
$ cp .env.test .env
$ make docker-run
$ INTEGTEST_BOOTSTRAP_SERVER=localhost:9094 INTEGTEST_HOST=localhost:8080 INTEGTEST_TOPIC_FORMAT="clickstream-%s-log" GRPC_SERVER_ADDR="localhost:8081" go test ./integration -v

Contribute

Development of Raccoon happens in the open on GitHub, and we are grateful to the community for contributing bugfixes and improvements. Read below to learn how you can take part in improving Raccoon.

Read our contributing guide to learn about our development process, how to propose bugfixes and improvements, and how to build and test your changes to Raccoon.

To help you get your feet wet and get you familiar with our contribution process, we have a list of good first issues that contain bugs which have a relatively limited scope. This is a great place to get started.

This project exists thanks to all the contributors.

License

Raccoon is Apache 2.0 licensed.

raccoon's People

Contributors: akbaralishaikh, chakravarthyvp, jensoncs, nncrawler, prakharmathur82, punit-kulal, rajathbk, ramey, ravisuhag, riteeksrivastav


raccoon's Issues

Incorrect max connection error code and empty reason is reported to the client

Description
Incorrect max connection error code and empty reason is reported to the client

To Reproduce
Step 1: Set the env variable SERVER_WEBSOCKET_MAX_CONN=1
Step 2: Create multiple connections.

Expected vs actual behavior
expected: Code_CODE_MAX_CONNECTION_LIMIT_REACHED
actual: Code_CODE_MAX_USER_LIMIT_REACHED

Expected behavior
When the max connection threshold is reached, the client should receive the Code_CODE_MAX_CONNECTION_LIMIT_REACHED error code along with a reason message instead of an empty one.

Make raccoon imports standardized

Problem

Raccoon's module name is currently raccoon instead of github.com/odpf/raccoon, which does not follow standard Go module naming. As a result, all packages are imported as raccoon/config, raccoon/metrics, raccoon/server, etc.

What is the impact?
Raccoon packages cannot be imported into any other package.

Which version was this found?
NA

Solution
Change the module name to github.com/odpf/raccoon and update all internal imports accordingly.
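The core of the fix is declaring the full module path in go.mod (the go directive version below is illustrative):

```
module github.com/odpf/raccoon

go 1.18
```

After this, an internal import like raccoon/config becomes github.com/odpf/raccoon/config, and external projects can import Raccoon's packages.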

support connection type as unique identifier along with id

Problem
Raccoon supports unique WebSocket connections per user by specifying the SERVER_WEBSOCKET_CONN_UNIQ_ID_HEADER configuration. There is a requirement to support a connection type carried in another header. The connection type combined with the id will then be used to maintain unique connections.

Which version was this found?
v0.1.0

Is there any workaround?
An alternative approach is to make user id globally unique.

What is the impact?
The current behavior is still supported. However, if SERVER_WEBSOCKET_CONN_TYPE_HEADER config is provided, Raccoon will use the header value together with id for uniqueness.

Solution

  • Add another configuration to specify the connection type header key and use that header value as part of the connection identifier.
  • Add a conn_type tag to each relevant metric.

Disk-backed Persistent queue for channels in Raccoon

Summary
Currently, Raccoon uses channels for the intermediate processing of EventRequests, which are then forwarded to the message broker. But this does not solve the problem of losing events that are still in the channel and could not be forwarded to Kafka when the server dies.

Proposed solution
Implement disk-backed queueing for intermediate persistence, similar to this project:
https://github.com/jhunters/bigqueue

Raccoon needs to add ingestion time to every event

Problem

We (GoJek) currently use Raccoon to source clickstream events from the gojek app. The concrete product proto contains an event_timestamp field which downstream systems such as the DWH can use to partition the data. However, we see some data arrive in partitions for future dates, while other data arrives on different days for the same event timestamp date. Two scenarios cause this issue:

  1. The time/clock in the mobile app is reset by the user to a future date
  2. The app was inactive and those events were sent at a later point in time by the mobile SDK

Is there any workaround?
The DWH can partition based on a field that acts as an ingestion time into the warehouse. However, this requires backfills and repartitioning of existing data, and upstream applications may need to change the way they query.

What is the impact?
Upstream applications' and services' queries return erroneous results.

Which version was this found?
NA

Solution
Raccoon needs to record an ingestion time for each event, defined as the time the event was ingested into Raccoon. This enables the DWH to partition data based on ingestion time as an alternative to event_timestamp.

Documentation for protocol agnostic Raccoon

Problem
New features have been added to Raccoon that allow clients to send data using HTTP/gRPC, along with support for multiple data formats such as JSON. This has also resulted in a new code design, which is likewise missing from the existing documentation.

Is there any workaround?
NA

What is the impact?
NA

Which version was this found?
NA

Solution
Add the missing documentation or change the existing one where required.

Inconsistent JSON <> Protobuf API standard

Bug

  • JSON messages aren't serialised with the correct Protobuf-based JSON encoding standard.
  • This results in failures when sending JSON requests built from valid protobuf-encoded JSON strings.

Context:

  1. Protobuf style guide states that JSON keys should be in camelCase, and protobuf keys/field names should be in snake_case.
  2. The standard for encoding Timestamp is to convert it into a string of RFC 3339.
  3. Thus, when a request payload is serialised, it uses camelCase keys and converts the timestamp/sent_time into a string.
  4. However, since Raccoon uses the standard encoding/json package to deserialise, it does not correctly deserialise camelCase JSON keys.
  5. It also fails to deserialise the date string.

Fix

  1. Start using protobuf's official encoding/protojson package for deserialisation.
  2. It adheres to the protobuf style guide, which supports deserialisation of both snake_case and camelCase keys in JSON.
  3. However, the existing JSON contract will break, since the new contract expects sent_time to be a string instead of an object {seconds: number, nanos: number}. This can be fixed by updating existing clients to use protobuf's JSON encoders instead of the language's default JSON encoders.

Event type should be added to metrics to enable dashboards group by the "event type"

Problem
Currently, the metric dashboards provide useful information on event throughput, but they do not show the throughput per the type set in the Event proto.
Having a type tag on the metrics will help slice the throughput by event type. Other relevant metrics could also add type as a tag in statsd.

Is there any workaround?
NA

What is the impact?
NA

Which version was this found?
All versions

Solution
Add an additional type tag to the existing metrics.
Metrics such as total Kafka messages delivered and events lost should support aggregation by type.

Perf(Websockets): Changes to improve WebSockets Performance

Problem
It has been observed that WebSocket performance has degraded compared to previous versions.

Is there any workaround?
N/A

What is the impact?
Performance issues can create bottlenecks if throughput is high.

Solution
Code changes to improve performance

Local setup not straightforward

Problem

Setting up Raccoon locally is not straightforward. Anyone trying to set up Raccoon locally either has to manage dependencies like Kafka, Telegraf, etc. themselves or needs to perform multiple steps to make it work.

Is there any workaround?
NA

What is the impact?
NA

Which version was this found?
All versions

Solution
Update the docker-compose.yml file so that setup is a single command.

Allow server acknowledgements after events are published

Problem Summary

Currently, Raccoon sends acknowledgments to clients after pushing events to the BufferChannel, not when they are published to Kafka. As a result, clients can't retry or resend events in case of downtime or producer failures.

Proposed solution

Add a configuration parameter EVENT_ACK that allows Raccoon to run in different acknowledgment modes. The following modes are proposed:

  • 0 - events are acknowledged after pushing to BufferChannel
  • 1 - events are acknowledged after publishing to Kafka

Impact

Clients are aware of publishing failures and can retry publishing of events.

Which version was this found?
NA

Additional context

End-to-end latency increases with EVENT_ACK = 1, as acknowledgments are sent only after dequeuing from the buffer channel and publishing to Kafka.

Support for other HTTP based protocols like GRPC

Problem

Currently, Raccoon only supports data ingestion using WebSockets, with protobuf as the only supported serialization format. The idea is to add support for other protocols like gRPC and HTTP/1.1 (REST), with other data serialization formats like JSON.

Is there any workaround?
NA

What is the impact?

Upstream services can ingest data into Raccoon using various transport protocols which makes it easier to adopt.

Which version was this found?

NA

Solution

Use the existing server that exposes the WebSocket endpoint and add support for the POST method to allow HTTP/1.1. We can use the same API to support various serialization formats like JSON/protobuf based on the Content-Type header.
gRPC can be served by another server on a different port.

Add support for HTTP

Raccoon only supports data ingestion through WebSockets. It can be extended to support HTTP interface as well. Raccoon should provide an interface to extend its APIs to any protocol.

One possible suggestion is a per-protocol package layout:

    api/
      http/
      websocket/
      grpc/
      ...

Websocket Checkorigin is wrongly implemented

Problem

SERVER_WEBSOCKET_CHECK_ORIGIN should skip the origin check when set to false, and should check the origin and reject connections that violate CORS when set to true.
Currently, it rejects every connection when set to false and accepts every connection when set to true.

Root Cause

In this line, the checkOrigin function is overridden to return true or false depending on the SERVER_WEBSOCKET_CHECK_ORIGIN value.

Solution

The toggle should be used to switch the CheckOrigin function. A false value should map to a custom CheckOrigin function that always returns true. A true value should map to a nil CheckOrigin, which leads to using the default origin check.

Add benchmark documentation

Problem

As Raccoon is a high-throughput application, users don't have visibility into the throughput levels it can sustain.

Is there any workaround?
NA

What is the impact?
NA

Which version was this found?
All versions

Solution

Add benchmarking documentation.

Raccoon Client - Go

Summary
This describes how the raccoon-go-client can be used to communicate with the Raccoon service using the different communication protocols.

Proposed solution

The idea is to keep the initialisations of the WebSocket, HTTP, and GRPC clients distinct so that users can select the one they are interested in using.

Message Encoding:
The client will take care of serialization/deserialization of JSON and Proto messages; the user does not have to worry about it.

Observability:

Client Stats using StatsD:
The client also intends to provide stats for API calls. The following stats are planned for export:

sent_tt_ms
ack_tt_ms
total_bytes_sent
total_bytes_received
total_conn_err

Logging:

The client provides a logging interface and a default console logger, which the user can use, disable, or replace with their own implementation.

Client Info (version.go):

The client will emit the following information about itself, provided at build time via go build's -ldflags:
Name
Version
BuildDate

Request GUID:

The client will auto-generate the request_guid, and will also allow the caller to pass one in.

Serialization:

The user can pass an array of high-level proto/JSON payloads to the send API, and the client will internally serialize the user data and wrap it in a Raccoon request.
The client ships with JSON/Proto serializers built in, and users can configure any serialization.

Websocket

Event Ack:

In the case of WebSocket, the client will also provide an async delivery channel that the user can subscribe to in order to receive the event acknowledgements.

Ping/Pong:

The client will internally handle the ping/pong communication with Raccoon and will provide a setting for the interval.

Retry

The user can configure retry options, based on which the client will retry.

This proposal gives the user the flexibility to use any serialization for converting the event bytes before handing them to the client; the default will be Proto.

The client will ship the Json/Proto serializer as part of the package.

    // Request message
    type Event struct {
        Type string
        Data interface{}
    }

    // Response message
    type Response struct {
        Status   string
        SentTime int64
        Reason   string
        Data     map[string]string
    }

    type Client interface {
        Send([]*Event) (string, *Response, error)
        SetUserClientInfo(*App)
    }

    // Protocol-specific interfaces
    type WebSocket interface {
        Client
        // WebSocket specific methods
    }

    type Grpc interface{ Client }
    type Rest interface{ Client }

    // Serializer converts user data into event bytes.
    type Serializer func(interface{}) ([]byte, error)

    type Options struct {
        Ser Serializer
    }

    type ClientOption interface {
        Apply(*Options)
    }

    // serializerOption implements ClientOption.
    type serializerOption struct{ ser Serializer }

    func (s serializerOption) Apply(op *Options) {
        op.Ser = s.ser
    }

    func WithSerializer(ser Serializer) ClientOption {
        return serializerOption{ser: ser}
    }

    type RestClient struct {
        url       string
        serialize Serializer
    }

    // NewRest creates the REST client.
    func NewRest(url string, opts ...ClientOption) *RestClient {
        var o Options
        for _, opt := range opts {
            opt.Apply(&o)
        }
        return &RestClient{
            url:       url,
            serialize: o.Ser,
        }
    }

    // SetAppInfo sets the app info.
    func (*RestClient) SetAppInfo(app *App) {}

    // Send sends the events to the raccoon service.
    func (c *RestClient) Send(events []*Event) (string, *Response, error) {
        e := []*pb.Event{}
        for _, ev := range events {
            // serialize the event bytes using the configured serializer
            b, err := c.serialize(ev.Data)
            if err != nil {
                return "", nil, err
            }
            e = append(e, &pb.Event{
                EventBytes: b,
                Type:       ev.Type,
            })
        }
        reqId := uuid.NewString()
        raccoonReq := &pb.SendEventRequest{
            ReqGuid: reqId,
            Events:  e,
        }
        // log & send raccoonReq to the raccoon service
        log.Print(raccoonReq)

        return reqId, &Response{}, nil
    }

    // Client usage example
    func ClientExample() {
        JSON := func(i interface{}) ([]byte, error) {
            return json.Marshal(i)
        }

        rest := NewRest("http://localhost:8080",
            WithSerializer(JSON),
        )

        rest.SetAppInfo(&App{
            Id:      "goplay-123",
            Name:    "goplay-backend",
            Version: "1.0",
        })

        reqId, resp, err := rest.Send([]*Event{
            {
                Type: "page",
                Data: &struct {
                    Name string `json:"name"`
                }{"page-name"},
            },
        })
        if err != nil {
            log.Printf("%v, %s", err, reqId)
            return
        }

        log.Print(resp.Status)
    }

Buf lint changes

Problem

buf lint was breaking on the Raccoon protos, so changes (PR #126) were made in the proton repo to fix those linting issues. Those changes will break the Raccoon code base.

Is there any workaround?
NA

What is the impact?
NA

Which version was this found?
All versions

Solution
Update the Raccoon code base w.r.t. the changes in the Raccoon protos.

Refactor to simplify http package

Problem

The http package is where we put HTTP-based servers and handlers. Currently, the entry point of this package (server.go) contains the initialization logic for all three servers. This complicates the initialization logic, and dependencies specific to each server leak into it, potentially making future changes harder.
For example, table and ping are specific to WebSocket, which adds complexity.

Is there any workaround?
N/A

What is the impact?
Complexity grows as we add more protocols.

Solution
Make the initialization logic independent of each other.

Improve the ack changes for synchronous

Summary
Due to the current ack changes, connections are disconnected as soon as the server read deadline is reached, because the server cannot read the next message until the current message has been acknowledged.

Proposed solution
The solution is to create an ack goroutine to handle the response messages for a connection.
Once a message is received, it is put into the acknowledge channel, and the response is returned once the message has been successfully published to Kafka.
The next message on the connection is then not blocked; reading continues.
There will be a single goroutine for all ack message handling.

performance degradation because of additional data in collector

Problem

In Raccoon following buffered channel is used to collect events coming from clients.

bufferChannel := make(chan collection.CollectRequest, config.Worker.ChannelSize)

CollectRequest struct is defined as follows-

type CollectRequest struct {
	ConnectionIdentifier identification.Identifier
	TimeConsumed         time.Time
	TimePushed           time.Time
	*pb.SendEventRequest
}

SendEventRequest is part of the auto-generated Go code from the proto. The generated code carries a lot of extra data through the channel, including objects of type sync.Mutex and unsafe.Pointer.

After changing the function definition of ProduceBulk method for publisher.KafkaProducer from

ProduceBulk(events []*pb.Event, deliveryChannel chan kafka.Event) error

to

ProduceBulk(request collection.CollectRequest, deliveryChannel chan kafka.Event)

There was an increase in event_processing_duration_milliseconds

What is the impact?

Intermittent latency spikes.

Which version was this found?

Issue was observed in v0.1.3

Solution

Change the definition of CollectRequest to pass only the data the worker needs to process the event, such as EventBytes, Type, and SentTime.
