nodefluent / kafka-connect
equivalent to kafka-connect :wrench: for nodejs :sparkles::turtle::rocket::sparkles:
License: MIT License
coveralls was updated from 3.0.2 to 3.0.3.
This version is covered by your current version range, and after updating it in your project the build failed.
coveralls is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.
As suggested by NPM and Snyk.
The new version differs by 1 commit.
aa2519c
dependency security audit fixes from npm & snyk (#210)
See the full diff
There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
debug was updated from 4.1.0 to 4.1.1.
This version is covered by your current version range, and after updating it in your project the build failed.
debug is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.
The new version differs by 4 commits.
68b4dc8
4.1.1
7571608
remove .coveralls.yaml
57ef085
copy custom logger to namespace extension (fixes #646)
d0e498f
test: only run coveralls on travis
See the full diff
There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
I am evaluating the nodefluent kafka-connect framework for an ETL project. While testing the framework with the native consumer/producer, I noticed that SourceConfig.js calls the NProducer.buffer() method with possibly incorrect parameters. This is the call:
return this.producer.buffer(
this.config.topic,
record.key,
record,
this.config.produceCompressionType || 0
);
The NProducer buffer API looks like this:
async buffer(topic, identifier, payload, partition = null, version = null, partitionKey = null)
It looks like compressionType is a valid parameter for the non-native Producer, but not for NProducer.
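For comparison, a call that lines up with the quoted NProducer.buffer() signature would look roughly like this (the trailing arguments are placeholders for illustration, not values that SourceConfig actually computes):
return this.producer.buffer(
    this.config.topic, // topic
    record.key,        // identifier
    record,            // payload
    null,              // partition (placeholder)
    null,              // version (placeholder)
    null               // partitionKey (placeholder)
);
Passing this.config.produceCompressionType as the fourth argument would instead be interpreted as a partition number.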
I am using "kafka-connect": "^3.6.0". Is this a real bug or am I not reading the code correctly?
To see what happens to your code in Node.js 10, Greenkeeper has created a branch with the following changes:
.travis.yml
If you’re interested in upgrading this repo to Node.js 10, you can open a PR with these changes. Please note that this issue is just intended as a friendly reminder and the PR as a possible starting point for getting your code running on Node.js 10.
Greenkeeper has checked the engines key in any package.json file, the .nvmrc file, and the .travis.yml file, if present.
- engines was only updated if it defined a single version, not a range.
- .nvmrc was updated to Node.js 10.
- .travis.yml was only changed if there was a root-level node_js key that didn't already include Node.js 10, such as node or lts/*. In this case, the new version was appended to the list. We didn't touch job or matrix configurations because these tend to be quite specific and complex, and it's difficult to infer what the intentions were.
For many simpler .travis.yml configurations, this PR should suffice as-is, but depending on what you're doing it may require additional work or may not be applicable at all. We're also aware that you may have good reasons to not update to Node.js 10, which is why this was sent as an issue and not a pull request. Feel free to delete it without comment, I'm a humble robot and won't feel rejected 🤖
There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
bluebird was updated from 3.5.3 to 3.5.4.
This version is covered by your current version range, and after updating it in your project the build failed.
bluebird is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.
The new version differs by 6 commits.
e0222e3
Release v3.5.4
4b9fa33
missing --expose-gc flag (#1586)
63b15da
docs: improve and compare Promise.each and Promise.mapSeries (#1565)
9dcefe2
.md syntax fix for coming-from-other-languages.md (#1584)
b97c0d2
added proper version check supporting VSCode (#1576)
499cf8e
Update jsdelivr url in docs (#1571)
See the full diff
There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
The test is failing because node-rdkafka is not part of the dependencies in the package.json file.
It works now, after installing node-rdkafka.
My question is: why wasn't this added to the package.json dependencies? This way a developer has to install node-rdkafka again and again wherever he/she wants to use this package.
sinek was updated from 6.22.0 to 6.22.1.
This version is covered by your current version range, and after updating it in your project the build failed.
sinek is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.
The new version differs by 5 commits.
6c803a5
6.22.1
8262b76
Merge pull request #73 from nodefluent/add-auto-option-to-type-defintion
6276c7f
Merge pull request #74 from nodefluent/optional-identifier-in-nproducer-type-definition
fff9268
The identifier parameter in NProducer is optional
c3338a2
Add "auto" as value for defaultPartitionCount in NProducer
See the full diff
There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
Hi,
I have a small question: I'm trying to understand how batching works here. Does it only mean that the fetch will (potentially) retrieve multiple messages? What about committing them all together (so we can have an at-least-once policy)? Is that something that exists?
For instance, here it looks like only one offset will be committed at a time. And if we look at the gcloud pubsub sink implementation, it seems like all messages are stored in memory and only get processed when buffer > batchSize, so if the server goes down for some reason, all of those messages will be lost because their offsets were already committed (so no at-least-once policy here).
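What I would expect for at-least-once is roughly the pattern sketched below; this is a generic illustration under my assumptions, not the actual sink code (putRecords and commitOffset are hypothetical stand-ins for the sink's processing step and the consumer's commit call):
// generic at-least-once sketch: process the whole fetched batch first, then commit
async function handleBatch(messages, putRecords, commitOffset) {
    // if processing throws, nothing is committed and the batch will be redelivered
    await putRecords(messages);
    // commit only the highest offset, and only after every record was handled,
    // so a crash before this point means reprocessing instead of data loss
    await commitOffset(messages[messages.length - 1].offset);
}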
Also, does config.batch only have a guarantee on minimum bytes, or something like that?
Thanks!
prom-client was updated from 11.1.1 to 11.1.2.
This version is covered by your current version range, and after updating it in your project the build failed.
prom-client is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.
The new version differs by 9 commits.
25255c3
11.1.2
707bff9
prepare for 11.1.2
45e0c3e
Fix histogram scrape performance (#219)
6a71f47
chore: upgrade prettier (#218)
7fd5477
Remove Sinon dev dependency (#217)
00f1b79
Allow setting Gauge values to NaN. (#202)
d5d8b7d
adding prefix to DefaultMetricsCollectorConfiguration, related to the release v11.1.0. (#208)
8d792aa
Add getMetricsAsArray to index.d.ts (#211)
f651c41
Updated README.md (#204)
See the full diff
There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
Hi there.
I'm building a monitoring system for my kafka-connect instances, which are spawned by pm2. But I guess there is no support for prom-client cluster mode to gather the metrics of all instances. Could you look into how prom-client cluster mode could be implemented in this kafka-connect, or may I contribute the code? I'll try to do this.
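Something along these lines is what I have in mind; this is only a rough sketch using prom-client's AggregatorRegistry and Node's cluster module, and it assumes the instances are forked workers (pm2's cluster mode may need a different bridge):
const cluster = require("cluster");
const http = require("http");
const { AggregatorRegistry } = require("prom-client");

if (cluster.isMaster) {
    const aggregatorRegistry = new AggregatorRegistry();
    // expose a single /metrics endpoint that aggregates the metrics of all workers
    http.createServer((req, res) => {
        aggregatorRegistry.clusterMetrics((err, metrics) => {
            if (err) {
                res.writeHead(500);
                return res.end(err.message);
            }
            res.writeHead(200, { "Content-Type": aggregatorRegistry.contentType });
            res.end(metrics);
        });
    }).listen(9100); // hypothetical metrics port
} else {
    // each worker runs its kafka-connect instance and registers metrics as usual
}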
Thank you.
bluebird was updated from 3.5.2 to 3.5.3.
This version is covered by your current version range, and after updating it in your project the build failed.
bluebird is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.
Bugfixes:
The new version differs by 7 commits.
a5a5b57
Release v3.5.3
c8a7714
update packagelock
8a765fd
Update getting-started.md (#1561)
f541801
deps: update acorn and add acorn-walk (#1560)
247e512
Update promise.each.md (#1555)
e2756e5
fixed browser cdn links (#1554)
7cfa9f7
Changed expected behaviour when promisifying (#1545)
See the full diff
There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
class SinkConfig
    static _messageToRecord(message) {

        //check if a converter has already turned this message into a record
        if (message && typeof message.value === "object" &&
            message instanceof SinkRecord) {
            return message;
        }

        try {
            const record = new SinkRecord();
            record.kafkaOffset = message.offset;
            record.key = message.key;
            record.partition = message.partition;
            record.keySchema = message.value.keySchema;
            record.timestamp = message.value.timestamp;
            record.value = message.value.value;
            record.valueSchema = message.value.valueSchema;
            return record;
        } catch (error) {
            debug("Failed to turn message into sink record.", error.message);
            super.emit("error", "Failed to turn message into SinkRecord: " + error.message);
            return message;
        }
    }
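From the property accesses above, the message shape that _messageToRecord expects seems to be roughly the following (the concrete values are made up for illustration):
// hypothetical message shape implied by the field accesses in _messageToRecord
const message = {
    offset: 42,
    key: "some-key",
    partition: 0,
    value: {
        keySchema: null,
        timestamp: 1546300800000,
        value: { /* the actual payload */ },
        valueSchema: null
    }
};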
I am using multiple instances of the sink objects, and I can refresh those instances at 24-hour intervals, so I have multiple questions.
In file index.d.ts (line 146 in 73d7b50), I think the correct type definition for the middlewares field should be:
middlewares: ((req, res, next) => void)[];
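That shape would match how Express-style middleware functions are typically written, for example (a hypothetical logging middleware):
// example middleware matching the proposed (req, res, next) => void element type
const requestLogger = (req, res, next) => {
    console.log(req.method + " " + req.url);
    next();
};

const middlewares = [requestLogger];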
I have a Kafka topic that is partitioned into 32 partitions.
I've tried to write a simple sink that calls a REST API once per Kafka message.
But it seems like it's processing my Kafka messages in sequence.
I've set maxTasks: 32 in my config, without any result.
How can I make it either process n partitions in parallel, and/or process multiple messages from each partition? It seems the put method gets an array, but for me it's always an array with one element.
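For context, the only parallelism-related setting I have touched is roughly this; everything apart from maxTasks is a placeholder for whatever the connector already configures:
// sketch of the relevant config fragment; only maxTasks is the setting in question
const config = {
    topic: "my-topic", // hypothetical topic with 32 partitions
    maxTasks: 32,      // set to the partition count, yet put() still receives single-element arrays
    // ...rest of the connector/consumer configuration
};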