Comments (2)
Hi,
The primary difference is that this consumer uses the Kafka Low Level Consumer API, whereas the spark-streaming-kafka package's consumer uses the Kafka High Level Consumer API.
In Spark 1.1.x, the Spark-provided High Level Consumer had a data-loss issue on Receiver failure, which prompted me to write this consumer.
In Spark 1.2 a reliable version of the High Level Consumer was provided, which solves the data-loss issue, but it has other problems related to recovery from failure: it stops intermittently, is not available once it has failed, etc. The Kafka High Level API itself also has known issues, which you can find here: https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Client+Re-Design
So far, this Low Level Consumer is the most reliable way to consume from Kafka without any data loss, and it can recover from underlying failures (Spark worker failures, Kafka broker failures, Spark internal BlockManager failures, etc.). We at Pearson have been using it for months without any downtime.
Just to mention, Spark 1.3 has another version of the reliable consumer API (this time using the Low Level API), which promises exactly-once semantics of message processing. But this has yet to be tested in production scenarios.
from kafka-spark-consumer.
Thank you very much for publishing your work and the detailed explanation!
Kind regards,
Alexander.
Related Issues (20)
- Why does this exception information appear? HOT 8
- Not working with Spark 2.2.0 HOT 11
- How to use in kerberized context ? HOT 3
- AbstractMethodError with Spark 1.6.0 and Kafka 0.10.2 HOT 9
- Exception: Could not compute split, block not found HOT 6
- Hello, Compilation failed after changing Kafka version 0.10.0.0 HOT 5
- Kafka Headers Support HOT 7
- After running for a long time, the processing time of "ProcessedOffsetManager.persists(partitonOffset_stream, props)" keeps increasing. HOT 13
- How to recover the failed receiver on a partition which has exception of " Offsets out of range with no configured reset policy for partitions:" HOT 18
- May I have a Scala sample of messageHandler to filter out payloads which include certain strings? HOT 20
- The Spark Streaming can not read kafka message HOT 8
- It works well in local mode, but when I submit it in cluster mode, the fixed rate is too small HOT 5
- Does this support spark structured streaming HOT 1
- java.lang.NoClassDefFoundError: kafka/api/OffsetRequest HOT 22
- Offset is still updated when exception occurs during processing HOT 28
- Manipulation of offsetRanges in each batch
- example doesn't build HOT 1
- Trying to fetch multiple topics in local mode, but it is showing a warning like this HOT 2
- Can a higher version of kafka be supported HOT 2
- Can a higher version of Spark be supported? Spark 3.2.0 for example. HOT 1