
event-streams-samples Issues

kafka-nodejs-console-sample does not build on MacOS Catalina 10.15.6

After updating to node-rdkafka 2.9.1, npm install is successful.

The master branch, however, fails to build with:

../src/binding.cc:134:12: error: no matching member function for call to 'Set'
  exports->Set(Nan::New("errorCodes").ToLocalChecked(), errorCodes);
  ~~~~~~~~~^~~
/Users/jurajnyiri/Library/Caches/node-gyp/14.9.0/include/node/v8.h:3670:37: note: candidate function not viable: requires 3 arguments, but 2 were provided
  V8_WARN_UNUSED_RESULT Maybe<bool> Set(Local<Context> context,
                                    ^
/Users/jurajnyiri/Library/Caches/node-gyp/14.9.0/include/node/v8.h:3673:37: note: candidate function not viable: requires 3 arguments, but 2 were provided
  V8_WARN_UNUSED_RESULT Maybe<bool> Set(Local<Context> context, uint32_t index,
                                    ^
../src/binding.cc:145:12: error: no matching member function for call to 'Set'
  exports->Set(Nan::New("topic").ToLocalChecked(), topicConstants);
  ~~~~~~~~~^~~
/Users/jurajnyiri/Library/Caches/node-gyp/14.9.0/include/node/v8.h:3670:37: note: candidate function not viable: requires 3 arguments, but 2 were provided
  V8_WARN_UNUSED_RESULT Maybe<bool> Set(Local<Context> context,
                                    ^
/Users/jurajnyiri/Library/Caches/node-gyp/14.9.0/include/node/v8.h:3673:37: note: candidate function not viable: requires 3 arguments, but 2 were provided
  V8_WARN_UNUSED_RESULT Maybe<bool> Set(Local<Context> context, uint32_t index,
                                    ^
../src/binding.cc:147:12: error: no matching member function for call to 'Set'
  exports->Set(Nan::New("err2str").ToLocalChecked(),
  ~~~~~~~~~^~~
/Users/jurajnyiri/Library/Caches/node-gyp/14.9.0/include/node/v8.h:3670:37: note: candidate function not viable: requires 3 arguments, but 2 were provided
  V8_WARN_UNUSED_RESULT Maybe<bool> Set(Local<Context> context,
                                    ^
/Users/jurajnyiri/Library/Caches/node-gyp/14.9.0/include/node/v8.h:3673:37: note: candidate function not viable: requires 3 arguments, but 2 were provided
  V8_WARN_UNUSED_RESULT Maybe<bool> Set(Local<Context> context, uint32_t index,
                                    ^
../src/binding.cc:150:12: error: no matching member function for call to 'Set'
  exports->Set(Nan::New("features").ToLocalChecked(),
  ~~~~~~~~~^~~
/Users/jurajnyiri/Library/Caches/node-gyp/14.9.0/include/node/v8.h:3670:37: note: candidate function not viable: requires 3 arguments, but 2 were provided
  V8_WARN_UNUSED_RESULT Maybe<bool> Set(Local<Context> context,
                                    ^
/Users/jurajnyiri/Library/Caches/node-gyp/14.9.0/include/node/v8.h:3673:37: note: candidate function not viable: requires 3 arguments, but 2 were provided
  V8_WARN_UNUSED_RESULT Maybe<bool> Set(Local<Context> context, uint32_t index,
                                    ^
../src/binding.cc:155:3: warning: 'AtExit' is deprecated: Use the three-argument variant of AtExit() or AddEnvironmentCleanupHook() [-Wdeprecated-declarations]
  AtExit(RdKafkaCleanup);
  ^
/Users/jurajnyiri/Library/Caches/node-gyp/14.9.0/include/node/node.h:838:1: note: 'AtExit' has been explicitly marked deprecated here
NODE_DEPRECATED(
^
/Users/jurajnyiri/Library/Caches/node-gyp/14.9.0/include/node/node.h:108:20: note: expanded from macro 'NODE_DEPRECATED'
    __attribute__((deprecated(message))) declarator
                   ^
../src/binding.cc:162:12: error: no matching member function for call to 'Set'
  exports->Set(Nan::New("librdkafkaVersion").ToLocalChecked(),
  ~~~~~~~~~^~~
/Users/jurajnyiri/Library/Caches/node-gyp/14.9.0/include/node/v8.h:3670:37: note: candidate function not viable: requires 3 arguments, but 2 were provided
  V8_WARN_UNUSED_RESULT Maybe<bool> Set(Local<Context> context,
                                    ^
/Users/jurajnyiri/Library/Caches/node-gyp/14.9.0/include/node/v8.h:3673:37: note: candidate function not viable: requires 3 arguments, but 2 were provided
  V8_WARN_UNUSED_RESULT Maybe<bool> Set(Local<Context> context, uint32_t index,
                                    ^
1 warning and 5 errors generated.
make: *** [Release/obj.target/node-librdkafka/src/binding.o] Error 1
rm 11a9e3388a67e1ca5c31c1d8da49cb6d2714eb41.intermediate
gyp ERR! build error 
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack     at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:194:23)
gyp ERR! stack     at ChildProcess.emit (events.js:314:20)
gyp ERR! stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:276:12)
gyp ERR! System Darwin 19.6.0
gyp ERR! command "/usr/local/Cellar/node/14.9.0/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /Users/jurajnyiri/repos/event-streams-samples/kafka-nodejs-console-sample/node_modules/node-rdkafka
gyp ERR! node -v v14.9.0
gyp ERR! node-gyp -v v5.1.0
gyp ERR! not ok 
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] install: `node-gyp rebuild`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the [email protected] install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
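
For anyone hitting the same thing: the errors above show that only the three-argument v8::Object::Set overloads still exist in the Node 14 headers, and only newer node-rdkafka releases compile against them, which matches the fix the reporter describes. A minimal sketch of the dependency bump in the sample's package.json (the rest of the file is assumed unchanged and omitted here):

{
  "dependencies": {
    "node-rdkafka": "^2.9.1"
  }
}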

Clarification regarding SASL Authentication with Event Streams

Hello,
I just wanted to clarify which fields we should fill out and where we get that information from.

In my event streams instance, I have credentials as follows:

{
  "api_key": "<API_KEY>",
  "apikey": "<API_KEY>",
  "iam_apikey_description": "Auto-generated for key <ID>",
  "iam_apikey_name": "event-streams-credentials",
  "iam_role_crn": "crn:v1:bluemix:public:iam::::serviceRole:Writer",
  "iam_serviceid_crn": "crn:v1:bluemix:public:iam-identity::a/<ID>::serviceid:ServiceId-<SERVICE_ID>",
  "instance_id": "<ID>",
  "kafka_admin_url": "<ADMIN_URL>",
  "kafka_brokers_sasl": [
    "broker-3-url",
    "broker-2-url",
    "broker-4-url",
    "broker-1-url",
    "broker-0-url",
    "broker-5-url"
  ],
  "kafka_http_url": "<URL>",
  "password": "<API_KEY>",
  "user": "token"
}

My Kafka Producer is authenticated as follows:

const Kafka = require('node-rdkafka');

const hosts = [
    "broker-3-url",
    "broker-2-url",
    "broker-4-url",
    "broker-1-url",
    "broker-0-url",
    "broker-5-url"
  ]; 

var producer = new Kafka.Producer({
    'debug' : 'security',
    'metadata.broker.list': hosts.join(','),
    'dr_cb': true, //delivery report callback
    'security.protocol': 'sasl_plaintext',
    // 'ssl.ca.location': '/etc/ssl/certs/',
    'sasl.mechanisms': 'PLAIN',
    'sasl.username': 'token',
    'sasl.password': '<API_KEY>',
    'retries': 10,
    'retry.backoff.ms': 10000
});

But when I try to connect I get the following:

{ Error: Local: Broker transport failure
    at Function.createLibrdkafkaError [as create] (/Users/user/node_modules/node-rdkafka/lib/error.js:334:10)
    at /Users/user/node_modules/node-rdkafka/lib/client.js:339:28
  origin: 'local',
  message: 'broker transport failure',
  code: -195,
  errno: -195,
  stack:
   'Error: Local: Broker transport failure\n    at Function.createLibrdkafkaError [as create] (/Users/user/node_modules/node-rdkafka/lib/error.js:334:10)\n    at /Users/user/node_modules/node-rdkafka/lib/client.js:339:28' }

I was wondering if I am not properly including the authentication params or if it is an issue on node-rdkafka's side of things. I couldn't find clear documentation regarding this configuration so any help would be appreciated, thanks!
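
For what it's worth, the config above asks for 'sasl_plaintext', but the kafka_brokers_sasl endpoints in these credentials are TLS listeners (port 9093), so a plaintext SASL connection gets dropped with exactly this kind of transport failure. A sketch of the security settings that usually work against Event Streams, assuming the same hosts list as above (the CA bundle path is OS-dependent):

var producer = new Kafka.Producer({
    'metadata.broker.list': hosts.join(','),
    'dr_cb': true,                         // delivery report callback
    'security.protocol': 'sasl_ssl',       // the brokers require TLS, not plaintext
    'ssl.ca.location': '/etc/ssl/certs/',  // OS-dependent path to trusted CAs
    'sasl.mechanisms': 'PLAIN',
    'sasl.username': 'token',              // the literal string "token"
    'sasl.password': '<API_KEY>',          // the api_key from the credentials
    'retries': 10,
    'retry.backoff.ms': 10000
});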

Error: Invalid value for configuration property "security.protocol" for local installation

Hi,
I'm trying the sample locally on OS X and got this issue:
Error: Invalid value for configuration property "security.protocol"
at KafkaConsumer.Client (/Users/btong/MyWork/github/message-hub-samples/kafka-nodejs-console-sample/node_modules/node-rdkafka/lib/client.js:54:18)
at new KafkaConsumer (/Users/btong/MyWork/github/message-hub-samples/kafka-nodejs-console-sample/node_modules/node-rdkafka/lib/kafka-consumer.js:118:10)
at Object.exports.buildConsumer (/Users/btong/MyWork/github/message-hub-samples/kafka-nodejs-console-sample/consumerLoop.js:38:16)
at runLoops (/Users/btong/MyWork/github/message-hub-samples/kafka-nodejs-console-sample/app.js:205:33)
at /Users/btong/MyWork/github/message-hub-samples/kafka-nodejs-console-sample/app.js:165:9
at _fulfilled (/Users/btong/MyWork/github/message-hub-samples/kafka-nodejs-console-sample/node_modules/q/q.js:854:54)
at self.promiseDispatch.done (/Users/btong/MyWork/github/message-hub-samples/kafka-nodejs-console-sample/node_modules/q/q.js:883:30)
at Promise.promise.promiseDispatch (/Users/btong/MyWork/github/message-hub-samples/kafka-nodejs-console-sample/node_modules/q/q.js:816:13)
at /Users/btong/MyWork/github/message-hub-samples/kafka-nodejs-console-sample/node_modules/q/q.js:624:44
at runSingle (/Users/btong/MyWork/github/message-hub-samples/kafka-nodejs-console-sample/node_modules/q/q.js:137:13)

I'm not sure if I generated the .pem key correctly. Could you please point me to instructions on how to do so?
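
One common cause of "Invalid value for configuration property "security.protocol"" is a node-rdkafka build without SSL support: librdkafka rejects ssl/sasl_ssl values if OpenSSL was not found at compile time, which is easy to hit on OS X. A sketch of a rebuild against Homebrew's OpenSSL, assuming the default Homebrew paths:

# Rebuild node-rdkafka with OpenSSL available so SSL support is compiled in.
# The include/lib paths assume a default Homebrew install of openssl.
brew install openssl
export CPPFLAGS=-I/usr/local/opt/openssl/include
export LDFLAGS=-L/usr/local/opt/openssl/lib
npm install node-rdkafka

The .pem question then mostly goes away: the samples point ssl.ca.location at the system certificate directory (e.g. /etc/ssl/certs) rather than at a downloaded key.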

Is there an alternative to set java.security.auth.login.config?

I know it seems to be necessary to set the property "java.security.auth.login.config" to the path of the jaas.conf in order to authenticate with the Message Hub service, based on the Java code example. However, is there an alternative way to set the credentials in the Java code without loading them from the jaas.conf file? I already have all the credentials for the Message Hub service and am wondering if there is another way to load them.
Thank you.
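
For reference, Kafka clients at 0.10.2 and later accept the JAAS configuration as an ordinary client property, sasl.jaas.config, which removes the need for both the jaas.conf file and the java.security.auth.login.config system property. A sketch of the relevant properties with placeholder credentials (the same values can also be set programmatically on the Properties object passed to the producer/consumer):

security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<USER>" password="<PASSWORD>";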

offset: smallest that haven't been processed?

Hello,

I'm trying to leverage message hub for a project and I'm running into an issue that I am hoping someone can help with:

{ 'auto.offset.reset': 'smallest' } -- brings back all messages for the topic in the given time period
{ 'auto.offset.reset': 'largest' } -- brings back all new messages

The issue is if my consumer goes down, I want to pull back the "smallest", but only the smallest that I haven't yet processed. This seems like it should be very simple, but I'm not having much luck finding anything anywhere.

Thanks,

--d
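
For reference, the usual pattern for "only what I haven't processed yet" is to rely on committed offsets rather than on auto.offset.reset: give the consumer a stable group.id and commit offsets as messages are handled; auto.offset.reset then only applies the first time the group connects, when no committed offset exists yet. A minimal node-rdkafka sketch, where the broker list, topic name, and handleMessage() are placeholders:

var Kafka = require('node-rdkafka');

// With a stable group.id the broker remembers the last committed offset,
// so a restarted consumer resumes where it left off. 'auto.offset.reset'
// is only consulted when the group has no committed offset at all.
var consumer = new Kafka.KafkaConsumer({
    'group.id': 'my-app-group',           // hypothetical group name
    'metadata.broker.list': brokers,      // assumed configured elsewhere
    'enable.auto.commit': false           // commit explicitly after processing
}, {
    'auto.offset.reset': 'smallest'       // used only on the group's first run
});

consumer.connect();
consumer.on('ready', function() {
    consumer.subscribe(['my-topic']);
    consumer.consume();
});
consumer.on('data', function(msg) {
    handleMessage(msg);                   // hypothetical processing step
    consumer.commitMessage(msg);          // mark the message as processed
});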

MirrorMaker Kafka 0.9 connection to Kafka Brokers 0.10 (IBM Message Hub)

Hi,
I am trying to connect my MirrorMaker Kafka 0.9 to the Kafka Brokers 0.10 (IBM Message Hub) without success.
The links I have followed are the following, but they are mostly for Kafka 0.10 clients:
https://console.bluemix.net/docs/services/MessageHub/messagehub050.html#kafka_using
https://console.bluemix.net/docs/services/MessageHub/messagehub063.html#kafka_connect

Do you know the steps for Kafka clients 0.9 and how to use the MessageHubLoginModule and the jaas creation?

Thank you,
Edoardo

node-rdkafka 0.8.0 fails to connect because of SASL Handshake not supported by broker

I wanted to upgrade the node.js sample to the most recent version (0.8.0) of node-rdkafka, but it seems there has been an incompatible change in the way SASL authentication is handled. It produces the following error:

{ severity: 3, fac: 'FAIL', message: '[thrd:sasl_ssl://kafka05-prod01.messagehub.services.us-south.bluemix.]: sasl_ssl://kafka05-prod01.messagehub.services.us-south.bluemix.net:9093/bootstrap: Failed to initialize SASL authentication: SASL Handshake not supported by broker (required by mechanism PLAIN): try api.version.request=true' }
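
The suggestion embedded in the error text is the usual fix: newer librdkafka has to be told to probe the broker's API versions (or be given a version fallback) before it attempts the SASL handshake. A sketch of the options that typically restore the old behaviour, merged into the existing producer/consumer config:

var opts = {
    'security.protocol': 'sasl_ssl',
    'sasl.mechanisms': 'PLAIN',
    'api.version.request': true,          // what the error message suggests
    'broker.version.fallback': '0.10.0'   // fallback if the probe is unsupported
};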

any way to avoid the ca_location flag?

The code currently uses the ca_location flag to set the ssl certificate path:
self.opts['ca_location'] = '/etc/ssl/certs'

Is there any way to avoid this step?
I tried passing False, None, and '', but each failed in various ways.
Any help would be welcome.

Thanks, Allon.

Getting an SSL Error. Please help

Hi,

I'm getting this error. Can somebody help me please?

[2/16/18 12:20:49:702 EST] 00000039 SystemOut O 12:20:49.701 [kafka-producer-network-thread | cpqi-messagehub] DEBUG o.a.k.c.network.SslTransportLayer - SSLEngine.closeInBound() raised an exception.
javax.net.ssl.SSLException: Inbound closed before receiving peer's close_notify: possible truncation attack?
at sun.security.ssl.Alerts.getSSLException(Alerts.java:208) ~[na:1.8.0_151]
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1666) ~[na:1.8.0_151]
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1634) ~[na:1.8.0_151]
at sun.security.ssl.SSLEngineImpl.closeInbound(SSLEngineImpl.java:1561) ~[na:1.8.0_151]
at org.apache.kafka.common.network.SslTransportLayer.handshakeFailure(SslTransportLayer.java:797) [kafka-clients-1.0.0.jar:na]
at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:257) [kafka-clients-1.0.0.jar:na]
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:79) [kafka-clients-1.0.0.jar:na]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:460) [kafka-clients-1.0.0.jar:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:398) [kafka-clients-1.0.0.jar:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460) [kafka-clients-1.0.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239) [kafka-clients-1.0.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163) [kafka-clients-1.0.0.jar:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]
[2/16/18 12:20:49:703 EST] 00000039 SystemOut O 12:20:49.703 [kafka-producer-network-thread | cpqi-messagehub] DEBUG o.a.kafka.common.network.Selector - [Producer clientId=cpqi-messagehub] Connection with kafka02-prod02.messagehub.services.eu-gb.bluemix.net/159.8.179.150 disconnected due to authentication exception
org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
javax.net.ssl.SSLHandshakeException: General SSLEngine problem
at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1478) ~[na:1.8.0_151]
at sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:535) ~[na:1.8.0_151]
at sun.security.ssl.SSLEngineImpl.writeAppRecord(SSLEngineImpl.java:1214) ~[na:1.8.0_151]
at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:1186) ~[na:1.8.0_151]
at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:469) ~[na:1.8.0_151]
at org.apache.kafka.common.network.SslTransportLayer.handshakeWrap(SslTransportLayer.java:435) ~[kafka-clients-1.0.0.jar:na]
at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:301) ~[kafka-clients-1.0.0.jar:na]
at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:255) ~[kafka-clients-1.0.0.jar:na]
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:79) ~[kafka-clients-1.0.0.jar:na]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:460) [kafka-clients-1.0.0.jar:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:398) [kafka-clients-1.0.0.jar:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460) [kafka-clients-1.0.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239) [kafka-clients-1.0.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163) [kafka-clients-1.0.0.jar:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192) ~[na:1.8.0_151]
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1728) ~[na:1.8.0_151]
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:304) ~[na:1.8.0_151]
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296) ~[na:1.8.0_151]
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1514) ~[na:1.8.0_151]
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216) ~[na:1.8.0_151]
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1026) ~[na:1.8.0_151]
at sun.security.ssl.Handshaker$1.run(Handshaker.java:966) ~[na:1.8.0_151]
at sun.security.ssl.Handshaker$1.run(Handshaker.java:963) ~[na:1.8.0_151]
at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_151]
at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1416) ~[na:1.8.0_151]
at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:389) ~[kafka-clients-1.0.0.jar:na]
at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:469) ~[kafka-clients-1.0.0.jar:na]
at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:328) ~[kafka-clients-1.0.0.jar:na]
... 8 common frames omitted

Issue while deploying to bluemix

I am trying to push the example code to Bluemix but it is failing. Please find the output below.

-----> Downloaded app package (12K)

-----> IBM SDK for Node.js Buildpack v3.12-20170505-0656
Based on Cloud Foundry Node.js Buildpack v1.5.24
-----> Creating runtime environment
NPM_CONFIG_LOGLEVEL=error
NPM_CONFIG_PRODUCTION=true
NODE_ENV=production
NODE_MODULES_CACHE=true
-----> Installing binaries
engines.node (package.json): 6.10
engines.npm (package.json): unspecified (use default)
Resolving node version 6.10 via 'node-version-resolver'
Installing IBM SDK for Node.js (6.10.2) from cache
Using default npm version: 3.10.10
-----> Restoring cache
Skipping cache restore (new runtime signature)
-----> Checking and configuring service extensions before installing dependencies
-----> Building dependencies
Installing node modules (package.json)
> [email protected] install /tmp/staged/app/node_modules/node-rdkafka
> node-gyp rebuild
make: Entering directory '/tmp/staged/app/node_modules/node-rdkafka/build'
ACTION configuring librdkafka... deps/librdkafka/config.h
checking for OS or distribution... ok (Ubuntu)
checking for C compiler from CC env... failed
checking for gcc (by command)... ok
checking for C++ compiler from CXX env... failed
checking for C++ compiler (g++)... ok
checking executable ld... ok
checking executable nm... ok
checking executable strip... ok
checking for pkgconfig (by command)... ok
checking for install (by command)... ok
checking for PIC (by compile)... ok
checking for GNU-compatible linker options... ok
checking for GNU linker-script ld flag... ok
checking for __atomic_32 (by compile)... ok
checking for __atomic_64 (by compile)... ok
checking for socket (by compile)... ok
parsing version '0x000905ff'... ok (0.9.5)
checking for libpthread (by pkg-config)... failed
checking for libpthread (by compile)... ok
checking for zlib (by pkg-config)... ok
checking for zlib (by compile)... ok
checking for libcrypto (by pkg-config)... ok
checking for libcrypto (by compile)... ok
checking for liblz4 (by pkg-config)... failed
checking for liblz4 (by compile)... failed (disable)
checking for libssl (by pkg-config)... ok
checking for libssl (by compile)... ok
checking for libsasl2 (by pkg-config)... failed
checking for libsasl2 (by compile)... failed (disable)
checking for libsasl (by pkg-config)... failed
checking for libsasl (by compile)... failed (disable)
checking for regex (by compile)... ok
checking for librt (by pkg-config)... failed
checking for librt (by compile)... ok
checking for strndup (by compile)... ok
checking for nm (by env NM)... ok (cached)
checking for python (by command)... ok
Generated Makefile.config
Generated config.h
Configuration summary:
  prefix                   /usr/local
  ARCH                     x86_64
  CPU                      generic
  GEN_PKG_CONFIG           y
  ENABLE_DEVEL             n
  ENABLE_VALGRIND          n
  ENABLE_REFCNT_DEBUG      n
  ENABLE_SHAREDPTR_DEBUG   n
  ENABLE_LZ4_EXT           y
  ENABLE_SSL               y
  ENABLE_SASL              y
  MKL_APP_NAME             librdkafka
  MKL_DISTRO               Ubuntu
  LD                       ld
  NM                       nm
  OBJDUMP                  objdump
  LIB_LDFLAGS              -shared -Wl,-soname,$(LIBFILENAME)
  CFLAGS
  MKL_APP_DESC_ONELINE     The Apache Kafka C/C++ library
  CC                       gcc
  CXX                      g++
  STRIP                    strip
  CPPFLAGS                 -g -O2 -fPIC -Wall -Wsign-compare -Wfloat-equal -Wpointer-arith -Wcast-align
  PKG_CONFIG               pkg-config
  INSTALL                  install
  LDFLAG_LINKERSCRIPT      -Wl,--version-script=
  RDKAFKA_VERSION_STR      0.9.5
  MKL_APP_VERSION          0.9.5
  LIBS                     -lpthread -lz -lz -lcrypto -lcrypto -lssl -lcrypto -lssl -lrt
  CXXFLAGS                 -Wno-non-virtual-dtor
  SYMDUMPER                $(NM) -D
  exec_prefix              /usr/local
  bindir                   /usr/local/bin
  sbindir                  /usr/local/sbin
  libexecdir               /usr/local/libexec
  datadir                  /usr/local/share
  sysconfdir               /usr/local/etc
  sharedstatedir           /usr/local/com
  localstatedir            /usr/local/var
  libdir                   /usr/local/lib
  includedir               /usr/local/include
  infodir                  /usr/local/info
  mandir                   /usr/local/man
Generated config.cache
Now type 'make' to build
TOUCH Release/obj.target/deps/librdkafka_config.stamp
CC(target) Release/obj.target/librdkafka/deps/librdkafka/src/rdkafka_range_assignor.o
CC(target) Release/obj.target/librdkafka/deps/librdkafka/src/rdstring.o
... (CC(target) lines for the remaining librdkafka sources) ...
CC(target) Release/obj.target/librdkafka/deps/librdkafka/src/rdkafka_sasl.o
../deps/librdkafka/src/rdkafka_sasl.c: In function 'rd_kafka_sasl_io_event':
../deps/librdkafka/src/rdkafka_sasl.c:129:23: error: 'rd_kafka_transport_t' has no member named 'rktrans_sasl'
        return rktrans->rktrans_sasl.recv(rktrans,
../deps/librdkafka/src/rdkafka_sasl.c: In function 'rd_kafka_sasl_client_new':
../deps/librdkafka/src/rdkafka_sasl.c:157:32: error: 'rd_kafka_conf_t' has no member named 'sasl'
        if (!strcmp(rk->rk_conf.sasl.mechanisms, "GSSAPI")) {
(the same error is repeated many times for this line)
../deps/librdkafka/src/rdkafka_sasl.c:168:40: error: 'rd_kafka_conf_t' has no member named 'sasl'
                    rk->rk_conf.sasl.mechanisms,
../deps/librdkafka/src/rdkafka_sasl.c:181:31: error: 'rd_kafka_conf_t' has no member named 'sasl'
                    rk->rk_conf.sasl.service_name, hostname,
../deps/librdkafka/src/rdkafka_sasl.c:182:31: error: 'rd_kafka_conf_t' has no member named 'sasl'
                    rk->rk_conf.sasl.mechanisms);
../deps/librdkafka/src/rdkafka_sasl.c: In function 'rd_kafka_sasl_conf_validate':
../deps/librdkafka/src/rdkafka_sasl.c:231:24: error: 'rd_kafka_conf_t' has no member named 'sasl'
        if (strcmp(rk->rk_conf.sasl.mechanisms, "GSSAPI"))
(the same error is repeated many times for this line)
../deps/librdkafka/src/rdkafka_sasl.c: In function 'rd_kafka_sasl_io_event':
../deps/librdkafka/src/rdkafka_sasl.c:135:1: warning: control reaches end of non-void function [-Wreturn-type]
make: *** [Release/obj.target/librdkafka/deps/librdkafka/src/rdkafka_sasl.o] Error 1
make: Leaving directory '/tmp/staged/app/node_modules/node-rdkafka/build'
gyp ERR! build error
gyp ERR! stack Error: make failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/tmp/staged/app/vendor/node/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:276:23)
gyp ERR! stack at emitTwo (events.js:106:13)
gyp ERR! stack at ChildProcess.emit (events.js:191:7)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:215:12)
gyp ERR! System Linux 4.4.0-45-generic
gyp ERR! command "/tmp/staged/app/vendor/node/bin/node" "/tmp/staged/app/vendor/node/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /tmp/staged/app/node_modules/node-rdkafka
gyp ERR! node -v v6.10.2
gyp ERR! node-gyp -v v3.4.0
gyp ERR! not ok
[email protected] /tmp/staged/app
└─┬ [email protected]
└── [email protected]
npm ERR! Linux 4.4.0-45-generic
npm ERR! argv "/tmp/staged/app/vendor/node/bin/node" "/tmp/staged/app/vendor/node/bin/npm" "install" "--unsafe-perm" "--userconfig" "/tmp/staged/app/.npmrc"
npm ERR! node v6.10.2
npm ERR! npm v3.10.10
npm ERR! code ELIFECYCLE
npm ERR! [email protected] install: node-gyp rebuild
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] install script 'node-gyp rebuild'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the node-rdkafka package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node-gyp rebuild
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs node-rdkafka
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls node-rdkafka
npm ERR! There is likely additional logging output above.
npm ERR! Please include the following file with any support request:
npm ERR! /tmp/staged/app/npm-debug.log
-----> Build failed
Check our support community to get help on common issues:
http://ibm.biz/bluemixcommunitysupport
If you need additional help and your subscription includes support, submit a ticket so we can help:
http://ibm.biz/bluemixsupport
Staging failed: Buildpack compilation step failed

Server returned HTTP response code: 403 for URL

Hi Team,
I have set up the sample code by following the instructions in the documentation link. When I run the producer command, I see the error below.

INFO Creating the topic kafka-java-console-sample-topic (com.eventstreams.samples.EventStreamsConsoleSample)
ERROR REST POST request failed with exception: java.io.IOException: Server returned HTTP response code: 403 for URL: https://xxxx.containers.appdomain.cloud/admin/topics (com.eventstreams.samples.rest.RESTRequest)
java.io.IOException: Server returned HTTP response code: 403 for URL: https://xxxx.containers.appdomain.cloud/admin/topics
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection$10.run(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection$10.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at sun.net.www.protocol.http.HttpURLConnection.getChainedException(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(Unknown Source)
at com.eventstreams.samples.rest.RESTRequest.post(RESTRequest.java:162)
at com.eventstreams.samples.rest.RESTAdmin.createTopic(RESTAdmin.java:46)
at com.eventstreams.samples.EventStreamsConsoleSample.main(EventStreamsConsoleSample.java:176)

After doing some research on the internet, I also tried adding the following line to RESTRequest, but still no luck:
connection.setRequestProperty("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.67 Safari/537.36");

I get a response from the browser, but not from the Java code. Please help and let me know what I am missing here.

Message hub service goes down everyday

I am using the Message Hub service for a project. I am running into an issue where the Message Hub service goes down every day and the app needs to be restarted every day. How can I handle this?

Unable to consume topic: Error: Request returned status code 502 but it was not in the accepted list.
2016-10-20T10:32:13.768+0530[App/0]OUTgot error: { [Error: Request returned status code 502 but it was not in the accepted list. ] statusCode: 502 }

Can someone who is using Message Hub in production share how they are handling this kind of failure?
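
Transient 5xx responses from the REST endpoint generally have to be absorbed by the client. A sketch of a simple promise-based retry wrapper, where consumeTopic() is a hypothetical stand-in for whichever call is returning the 502:

// Retry a promise-returning call a fixed number of times with a delay.
// consumeTopic() below is a hypothetical stand-in for the failing call.
function withRetry(fn, attempts, delayMs) {
    return fn().catch(function(err) {
        if (attempts <= 1) throw err;
        return new Promise(function(resolve) { setTimeout(resolve, delayMs); })
            .then(function() { return withRetry(fn, attempts - 1, delayMs); });
    });
}

// usage: withRetry(function() { return consumeTopic('mytopic'); }, 5, 3000)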

login library on a maven repo

It would be good to publish messagehub.login-1.0.0.jar to a Maven repository so that applications can reference it directly instead of having to get it from here.

Docker image fails node-rdkafka install - nodeJS sample

Hi guys. I get an error when I run docker build with the current Dockerfile in the nodeJS sample.

The error:

/usr/src/app/node_modules/bindings/bindings.js:88
throw e
^

Error: /usr/src/app/node_modules/node-rdkafka/build/Release/node-librdkafka.node: invalid ELF header
at Error (native)
at Object.Module._extensions..node (module.js:604:18)
at Module.load (module.js:494:32)
at tryModuleLoad (module.js:453:12)
at Function.Module._load (module.js:445:3)
at Module.require (module.js:504:17)
at require (internal/module.js:20:19)
at bindings (/usr/src/app/node_modules/bindings/bindings.js:81:44)
at Object.<anonymous> (/usr/src/app/node_modules/node-rdkafka/librdkafka.js:10:32)
at Module._compile (module.js:577:32)

My current fixes are the following two steps:

  1. Boron without "extensions" seems to install an older version of node, so I use node:8.9.0-slim; it also works well with alpine, thanks to "webmastersteave".

  2. There seems to be a problem with installing node-rdkafka from the package file, so I have had success putting the npm command directly in the Dockerfile.

Does this make sense?

[Question] Out of curiosity, is `ssl.ca.location` now necessary to connect with Event Streams?

I used kafka-node (which now does not work) to connect with IBM Event Streams. The full configuration was

{
      kafkaHost: kafka_brokers_sasl,
      sslOptions: { rejectUnauthorized: false },
      sasl: {
        mechanism: "plain",
        username: user,
        password: password
      }
};

However, looking at the nodejs sample that uses node-rdkafka, we need to pass in an extra value, ssl.ca.location, compared to kafka-node.

I saw this answer, which said that ssl.ca.location is needed to enable the Kafka client to verify the broker keys.

Was the certificate unnecessary for Event Streams before, and has it recently become necessary?

Out of curiosity, I am also wondering why kafka-node stopped working with Event Streams.

Thanks
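
For context: node-rdkafka wraps librdkafka, which is OpenSSL-based and does not pick up a trust store automatically, so ssl.ca.location must point at a CA bundle for broker verification. kafka-node, by contrast, rode on Node's built-in TLS, and the config above actually switched verification off with rejectUnauthorized: false, so no CA was ever needed there. A sketch of the equivalent node-rdkafka options, where the CA path is OS-dependent:

var opts = {
    'metadata.broker.list': kafka_brokers_sasl.join(','),
    'security.protocol': 'sasl_ssl',
    'sasl.mechanisms': 'PLAIN',
    'sasl.username': user,
    'sasl.password': password,
    'ssl.ca.location': '/etc/ssl/certs'   // OS-dependent, e.g. /etc/pki/tls/cert.pem on RHEL
};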

Docker build fails for kafka-nodejs-console-sample due to ubuntu:latest base image

The kafka-nodejs-console-sample docker build currently fails due to apt packages no longer being available on the ubuntu:latest container image. Since nothing else has changed in this folder of the repository in some time, I figured this was an issue with the underlying Ubuntu image versioning rather than code.

The output of the docker build as run today is below. You can see the issues with no candidates for nodejs-dev and libssl1.0-dev available. I have found that specifying the previous LTS version of Ubuntu 18.04 as the base image resolves this issue. I have built, tested, and verified the new container image based on Ubuntu 18.04 locally.

Error:

$ docker build -t eventstreams-samples-nodejs .
Sending build context to Docker daemon  58.37kB
Step 1/8 : FROM ubuntu
latest: Pulling from library/ubuntu
a4a2a29f9ba4: Pull complete
127c9761dcba: Pull complete
d13bf203e905: Pull complete
4039240d2e0b: Pull complete
Digest: sha256:35c4a2c15539c6c1e4e5fa4e554dac323ad0107d8eb5c582d6ff386b383b7dce
Status: Downloaded newer image for ubuntu:latest
 ---> 74435f89ab78
Step 2/8 : RUN  apt-get update -qqy   && apt-get install -y --no-install-recommends      build-essential      node-gyp      nodejs-dev      libssl1.0-dev      liblz4-dev      libpthread-stubs0-dev      libsasl2-dev      libsasl2-modules      make      python      nodejs npm ca-certificates   && rm -rf /var/cache/apt/* /var/lib/apt/lists/*
 ---> Running in f3379cc61eb1
Reading package lists...
Building dependency tree...
Reading state information...
Package libssl1.0-dev is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

Package nodejs-dev is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
  libnode-dev

E: Package 'nodejs-dev' has no installation candidate
E: Package 'libssl1.0-dev' has no installation candidate
The command '/bin/sh -c apt-get update -qqy   && apt-get install -y --no-install-recommends      build-essential      node-gyp      nodejs-dev      libssl1.0-dev      liblz4-dev      libpthread-stubs0-dev      libsasl2-dev      libsasl2-modules      make      python      nodejs npm ca-certificates   && rm -rf /var/cache/apt/* /var/lib/apt/lists/*' returned a non-zero code: 100

Fix:
Change https://github.com/ibm-messaging/event-streams-samples/blob/master/kafka-nodejs-console-sample/Dockerfile#L8 to:
FROM ubuntu:18.04

Message hub docker for local testing

Hi there,

I'm using node-rdkafka to connect to Message Hub following this example: https://github.com/ibm-messaging/message-hub-samples/tree/master/kafka-nodejs-console-sample

Now, I would like to have a Docker image of Message Hub for local testing purposes. I have tried with this Kafka Docker image but couldn't create a Message Hub locally.
I wonder if there is a Message Hub Docker image for local testing (something like we have for mqlight)?
Do you have any suggestions please?

Is running a standalone Thread from Liberty Servlet the right approach?

Hi there -- I've been looking at the Liberty sample that shows how to dequeue messages from MessageHub.
It looks very similar to the JSE sample, but wrapped up in a Servlet.
I was surprised to see the Servlet create a new Thread inside the init() method; is there no better way of doing this?
E.g.
http://stackoverflow.com/questions/4691132/how-to-run-a-background-task-in-a-servlet-based-web-application

For example, if there was a JMS Resource Adapter for MessageHub then JEE applications could integrate via an MDB.

Any plans to offer a RAR/MDB style approach?

App fails to start when using cups service

Hi,

I'm trying to use a cf cups command to link to a public MH service from a private environment to test out the sample code and connectivity. Unfortunately, the app fails to start as it can't find the messagehub service from a "user-provided" service type. The error is in the npm package called message-hub-rest. Could you please take a look?

Thanks!

2018-09-11T23:49:39.93-0700 [CELL/0] OUT Creating container
2018-09-11T23:49:40.50-0700 [CELL/0] OUT Successfully created container
2018-09-11T23:49:45.33-0700 [APP/PROC/WEB/0] OUT > [email protected] start /home/vcap/app
2018-09-11T23:49:45.33-0700 [APP/PROC/WEB/0] OUT > node ./app.js
2018-09-11T23:49:45.48-0700 [APP/PROC/WEB/0] ERR /home/vcap/app/node_modules/message-hub-rest/lib/messagehub.js:76
2018-09-11T23:49:45.48-0700 [APP/PROC/WEB/0] ERR throw new Error(serviceNamePrefix + '* is not provided in the services environment variable. ' +
2018-09-11T23:49:45.48-0700 [APP/PROC/WEB/0] ERR at new Client (/home/vcap/app/node_modules/message-hub-rest/lib/messagehub.js:76:13)
2018-09-11T23:49:45.48-0700 [APP/PROC/WEB/0] ERR at Object.Module._extensions..js (module.js:663:10)
2018-09-11T23:49:45.48-0700 [APP/PROC/WEB/0] ERR at Object.<anonymous> (/home/vcap/app/app.js:50:29)
2018-09-11T23:49:45.48-0700 [APP/PROC/WEB/0] ERR at Module.load (module.js:565:32)
2018-09-11T23:49:45.48-0700 [APP/PROC/WEB/0] ERR at tryModuleLoad (module.js:505:12)
2018-09-11T23:49:45.48-0700 [APP/PROC/WEB/0] ERR at bootstrap_node.js:612:3
2018-09-11T23:49:45.48-0700 [APP/PROC/WEB/0] ERR ^
2018-09-11T23:49:45.48-0700 [APP/PROC/WEB/0] ERR at Module._compile (module.js:652:30)
2018-09-11T23:49:45.48-0700 [APP/PROC/WEB/0] OUT Using VCAP_SERVICES to find credentials.
2018-09-11T23:49:45.48-0700 [APP/PROC/WEB/0] ERR Error: messagehub* is not provided in the services environment variable. Make sure you have bound the Message Hub service to your Bluemix application
2018-09-11T23:49:45.48-0700 [APP/PROC/WEB/0] ERR at Function.Module._load (module.js:497:3)
2018-09-11T23:49:45.48-0700 [APP/PROC/WEB/0] ERR at Function.Module.runMain (module.js:693:10)
2018-09-11T23:49:45.48-0700 [APP/PROC/WEB/0] ERR at startup (bootstrap_node.js:191:16)
2018-09-11T23:49:45.50-0700 [APP/PROC/WEB/0] ERR npm ERR! code ELIFECYCLE
2018-09-11T23:49:45.50-0700 [APP/PROC/WEB/0] ERR npm ERR! errno 1
2018-09-11T23:49:45.50-0700 [APP/PROC/WEB/0] ERR npm ERR! [email protected] start: node ./app.js
2018-09-11T23:49:45.50-0700 [APP/PROC/WEB/0] ERR npm ERR! Exit status 1
2018-09-11T23:49:45.50-0700 [APP/PROC/WEB/0] ERR npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
2018-09-11T23:49:45.50-0700 [APP/PROC/WEB/0] ERR npm ERR!
2018-09-11T23:49:45.50-0700 [APP/PROC/WEB/0] ERR npm ERR! Failed at the [email protected] start script.
2018-09-11T23:49:45.66-0700 [APP/PROC/WEB/0] ERR npm ERR! /home/vcap/app/.npm/_logs/2018-09-12T06_49_45_507Z-debug.log
2018-09-11T23:49:45.66-0700 [APP/PROC/WEB/0] ERR npm ERR! A complete log of this run can be found in:
2018-09-11T23:49:45.71-0700 [CELL/0] OUT Stopping instance 67d705a8-bf01-44e6-6683-7dfa

Connectors not listed

Hi,

We tried deploying the connectors on Kafka Connect, following the instructions given in the GitHub README.
The connector is not listed when we run the commands below:

$ curl http://localhost:8083
{"version":"2.5.0","commit":"66563e712b0b9f84","kafka_cluster_id":"jYm3VyL3RgS92suAs1GRXg"}
$ curl http://localhost:8083/connectors
[]

Could you help resolve the issue?

Thanks,
Balaji

Kafka Connect REST API Security

I was trying to follow the documentation here to set up SSL encryption for Kafka Connect REST API. Has anyone successfully configured https with SSL certificates as explained in the documentation? I tried many combinations of possible values for the following parameters, but none of them worked.

listeners=https://myhost:8443
rest.advertised.listener=https
rest.advertised.host.name=<localhost>
rest.advertised.host.port=8083
listeners.https.ssl.client.auth=requested
listeners.https.ssl.truststore.location=/var/ssl/private/kafka.server.truststore.jks
listeners.https.ssl.truststore.password=test1234
listeners.https.ssl.keystore.location=/var/ssl/private/kafka.server.keystore.jks
listeners.https.ssl.keystore.password=test1234
listeners.https.ssl.key.password=test1234

improve UX for kubernetes samples by creating secret using --from-file

When using the IBM Cloud Console to get the service credentials, the credentials are available as JSON and there is a multiline copy button.

It is easier to save the copied data into a file like credentials.json and then run kubectl create secret using --from-file than to expect the user to collapse the multiline JSON into a single string, especially on Windows terminals.

kubectl create secret generic eventstreams-binding --from-file=binding=credentials.json

I can submit a PR to update the samples.

Error: Cannot create 1 partition(s) for topic kafka-nodejs-console-sample-topic

C:\projects\event-streams-samples\kafka-nodejs-console-sample>docker run -e VCAP_SERVICES="%VCAP_SERVICES%" nodejs-console-sample
Using VCAP_SERVICES to find credentials.
Kafka Endpoints: broker-5-1bbc2mfrltrtt1b7.kafka.svc02.us-south.eventstreams.cloud.ibm.com:9093,broker-4-1bbc2mfrltrtt1b7.kafka.svc02.us-south.eventstreams.cloud.ibm.com:9093,broker-1-1bbc2mfrltrtt1b7.kafka.svc02.us-south.eventstreams.cloud.ibm.com:9093,broker-0-1bbc2mfrltrtt1b7.kafka.svc02.us-south.eventstreams.cloud.ibm.com:9093,broker-2-1bbc2mfrltrtt1b7.kafka.svc02.us-south.eventstreams.cloud.ibm.com:9093,broker-3-1bbc2mfrltrtt1b7.kafka.svc02.us-south.eventstreams.cloud.ibm.com:9093
Creating the topic kafka-nodejs-console-sample-topic with AdminClient
AdminClient connected
{ Error: Cannot create 1 partition(s) for topic kafka-nodejs-console-sample-topic.
The maximum number of partitions allowed (1) would be exceeded as the current count is 1
at Function.createLibrdkafkaError [as create] (/usr/src/app/node_modules/node-rdkafka/lib/error.js:334:10)
at /usr/src/app/node_modules/node-rdkafka/lib/admin.js:132:28
message: 'Cannot create 1 partition(s) for topic kafka-nodejs-console-sample-topic.\nThe maximum number of partitions allowed (1) would be exceeded as the current count is 1',
code: 44,
errno: 44,
origin: 'local' }

starting node sample on windows

The instructions at https://github.com/ibm-messaging/event-streams-samples/blob/master/kafka-nodejs-console-sample/docs/Docker_Local.md aren't working on Windows.

In the DOS command line you have to put everything on a single line and escape the quotes. I don't think single quotes worked:

SET VCAP_SERVICES = "{ "instance_id": "...", "mqlight_lookup_url": "...", "api_key": "...","kafka_admin_url": "....","kafka_rest_url": "...","kafka_brokers_sasl": [ ... ],"user": "...",
"password": "..."}"

Interpolating the environment variable is also different:
docker run -e VCAP_SERVICES="%VCAP_SERVICES%" nodejs-console-sample

Java sample doesn't build with Gradle 7, which is the latest version as of Apr 09, 2021

It builds with 6.8.3, but the Java sample's Gradle file uses the compile configuration, which is removed from Gradle 7. Note that the 6.8.3 install will give this message:

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

As of Gradle 7.0, you simply get:

* What went wrong:
A problem occurred evaluating root project 'kafka-java-console-sample'.

Could not find method compile() for arguments [org.apache.kafka:kafka-clients:2.4.0] on object of type org.gradle.api.internal.artifacts.dsl.dependencies.DefaultDependencyHandler.

There isn't a brew target for gradle 6.8.3, making it more difficult to install the older version needed to build the sample now.
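
The change itself is small; a sketch of the build.gradle dependency block after swapping the removed configuration for its replacement (the artifact is the one named in the error; the rest of the file is assumed unchanged):

dependencies {
    // 'compile' was removed in Gradle 7; 'implementation' is the replacement
    implementation 'org.apache.kafka:kafka-clients:2.4.0'
}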

NodeJS documentation for local use confusing when using Cloud Pak for Integration

Could you please provide an example of the credentials needed when using Event Streams from a deployed Cloud Pak for Integration?
I am getting different errors and I am not sure if they are due to the credentials I am using. This would be very useful for us NodeJS developers; I spent a few days sorting things out and still haven't figured it out.

Update samples to support Kubernetes

  1. Change "Bluemix" references to IBM Cloud.
  2. Existing checks to see if app is running in Bluemix should be changed to CloudFoundry
  3. Add code to read credentials passed in as Kube secrets -> env variable.

I'm happy to work with a developer on this or send a PR.
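
On point 3, a sketch of what the credential lookup could look like in the Node.js sample, keeping VCAP_SERVICES as the Cloud Foundry path; EVENTSTREAMS_BINDING is a hypothetical name for an env variable populated from a Kubernetes secret:

var credentials;
if (process.env.VCAP_SERVICES) {
    // existing Cloud Foundry path
    credentials = JSON.parse(process.env.VCAP_SERVICES);
} else if (process.env.EVENTSTREAMS_BINDING) {
    // Kubernetes path: secret injected as an env variable
    // (or read the same JSON from a mounted secret file instead)
    credentials = JSON.parse(process.env.EVENTSTREAMS_BINDING);
}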

Use in Spark as a Service on Bluemix

Error when I run the example

When I run the node.js rdkafka example as per the instructions, I get the following error.

Error: Invalid value for configuration property "security.protocol"

Where Do You Download the PEM Certificate in IBM Cloud

I'm trying to connect to IBM Cloud Event Streams using Node.JS. The example code uses the path to a PEM Certificate. However, I cannot find the link to download the PEM certificate for my Event Streams instance. I do see where I can use the API key and username/password combination.

Next, I attempted to use the parameters as per the example on IBM cloud (without any PEM Certificate) and the code errors with no error message. Could you please assist?

Constructor parameters:
var producer = new Kafka.Producer({
    'broker.version.fallback': '0.10.0',
    'bootstrap.servers': targetServers,
    'security.protocol': 'SASL_SSL',
    'log.connection.close': false,
    'ssl.protocol': 'TLSv1.2',
    'sasl.mechanism': 'PLAIN',
    'sasl.username': username,
    'sasl.password': apiKey,
    'client.id': 'testme'
});

I also attempted to use the constructor here as per your example. Both failed:
producer = new Kafka.Producer(producer_opts, topicOpts);

It seems as though there is no way to download a certificate for IBM Cloud's Event Streams.
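
For reference, the public Event Streams brokers present certificates signed by a well-known CA, so there is normally nothing to download: pointing ssl.ca.location at the operating system's CA bundle is enough for verification. A sketch of the constructor above with that one addition (the path varies by OS):

var producer = new Kafka.Producer({
    'broker.version.fallback': '0.10.0',
    'bootstrap.servers': targetServers,
    'security.protocol': 'SASL_SSL',
    'ssl.ca.location': '/etc/ssl/certs',  // OS trust store; no per-instance PEM needed
    'log.connection.close': false,
    'ssl.protocol': 'TLSv1.2',
    'sasl.mechanism': 'PLAIN',
    'sasl.username': username,
    'sasl.password': apiKey,
    'client.id': 'testme'
});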

Getting SSL error

{ severity: 7,
fac: 'TERM',
message: '[thrd:sasl_ssl://"kafka03-prod02.messagehub.services.us-south.bluemix]: sasl_ssl://"kafka03-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: Received TERMINATE op in state DOWN: 1 refcnts, 0 toppar(s), 0 active toppar(s), 0 outbufs, 0 waitresps, 0 retrybufs' }
{ severity: 7,
fac: 'TERMINATE',
message: '[thrd:main]: Purging reply queue' }
{ severity: 7,
fac: 'TERMINATE',
message: '[thrd:sasl_ssl://"kafka03-prod02.messagehub.services.us-south.bluemix]: sasl_ssl://"kafka03-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: Handle is terminating: failed 0 request(s) in retry+outbuf' }
{ severity: 7,
fac: 'TERMINATE',
message: '[thrd:main]: Decommissioning internal broker' }
{ severity: 7,
fac: 'TERMINATE',
message: '[thrd:sasl_ssl://kafka04-prod02.messagehub.services.us-south.bluemix.]: sasl_ssl://kafka04-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: Handle is terminating: failed 0 request(s) in retry+outbuf' }
{ severity: 7,
fac: 'TERMINATE',
message: '[thrd:sasl_ssl://kafka05-prod02.messagehub.services.us-south.bluemix.]: sasl_ssl://kafka05-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: Handle is terminating: failed 0 request(s) in retry+outbuf' }
{ severity: 7,
fac: 'TERMINATE',
message: '[thrd:sasl_ssl://kafka02-prod02.messagehub.services.us-south.bluemix.]: sasl_ssl://kafka02-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: Handle is terminating: failed 0 request(s) in retry+outbuf' }
{ severity: 7,
fac: 'BROKERFAIL',
message: '[thrd:sasl_ssl://"kafka03-prod02.messagehub.services.us-south.bluemix]: sasl_ssl://"kafka03-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: failed: err: Local: Broker handle destroyed: (errno: Bad address)' }
{ severity: 7,
fac: 'TERMINATE',
message: '[thrd:main]: Join 5 broker thread(s)' }
{ severity: 7,
fac: 'BROKERFAIL',
message: '[thrd:sasl_ssl://kafka04-prod02.messagehub.services.us-south.bluemix.]: sasl_ssl://kafka04-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: failed: err: Local: Broker handle destroyed: (errno: Operation now in progress)' }
{ severity: 7,
fac: 'TERM',
message: '[thrd::0/internal]: :0/internal: Received TERMINATE op in state UP: 1 refcnts, 0 toppar(s), 0 active toppar(s), 0 outbufs, 0 waitresps, 0 retrybufs' }
{ severity: 7,
fac: 'BROKERFAIL',
message: '[thrd:sasl_ssl://kafka05-prod02.messagehub.services.us-south.bluemix.]: sasl_ssl://kafka05-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: failed: err: Local: Broker handle destroyed: (errno: Operation now in progress)' }
{ severity: 7,
fac: 'BROKERFAIL',
message: '[thrd:sasl_ssl://kafka02-prod02.messagehub.services.us-south.bluemix.]: sasl_ssl://kafka02-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: failed: err: Local: Broker handle destroyed: (errno: Operation now in progress)' }
{ severity: 7,
fac: 'BUFQ',
message: '[thrd:sasl_ssl://"kafka03-prod02.messagehub.services.us-south.bluemix]: sasl_ssl://"kafka03-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: Purging bufq with 0 buffers' }
{ severity: 7,
fac: 'BUFQ',
message: '[thrd:sasl_ssl://"kafka03-prod02.messagehub.services.us-south.bluemix]: sasl_ssl://"kafka03-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: Updating 0 buffers on connection reset' }
{ severity: 7,
fac: 'STATE',
message: '[thrd:sasl_ssl://kafka02-prod02.messagehub.services.us-south.bluemix.]: sasl_ssl://kafka02-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: Broker changed state CONNECT -> DOWN' }
{ severity: 7,
fac: 'STATE',
message: '[thrd:sasl_ssl://kafka04-prod02.messagehub.services.us-south.bluemix.]: sasl_ssl://kafka04-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: Broker changed state CONNECT -> DOWN' }
{ severity: 7,
fac: 'BROADCAST',
message: '[thrd:sasl_ssl://kafka02-prod02.messagehub.services.us-south.bluemix.]: Broadcasting state change' }
{ severity: 7,
fac: 'STATE',
message: '[thrd:sasl_ssl://kafka05-prod02.messagehub.services.us-south.bluemix.]: sasl_ssl://kafka05-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: Broker changed state CONNECT -> DOWN' }
{ severity: 7,
fac: 'BROADCAST',
message: '[thrd:sasl_ssl://kafka04-prod02.messagehub.services.us-south.bluemix.]: Broadcasting state change' }
{ severity: 7,
fac: 'BUFQ',
message: '[thrd:sasl_ssl://kafka02-prod02.messagehub.services.us-south.bluemix.]: sasl_ssl://kafka02-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: Purging bufq with 0 buffers' }
{ severity: 7,
fac: 'BROADCAST',
message: '[thrd:sasl_ssl://kafka05-prod02.messagehub.services.us-south.bluemix.]: Broadcasting state change' }
{ severity: 7,
fac: 'BUFQ',
message: '[thrd:sasl_ssl://kafka04-prod02.messagehub.services.us-south.bluemix.]: sasl_ssl://kafka04-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: Purging bufq with 0 buffers' }
{ severity: 7,
fac: 'BUFQ',
message: '[thrd:sasl_ssl://kafka02-prod02.messagehub.services.us-south.bluemix.]: sasl_ssl://kafka02-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: Updating 0 buffers on connection reset' }
{ severity: 7,
fac: 'BUFQ',
message: '[thrd:sasl_ssl://kafka05-prod02.messagehub.services.us-south.bluemix.]: sasl_ssl://kafka05-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: Purging bufq with 0 buffers' }
{ severity: 7,
fac: 'BUFQ',
message: '[thrd:sasl_ssl://kafka04-prod02.messagehub.services.us-south.bluemix.]: sasl_ssl://kafka04-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: Updating 0 buffers on connection reset' }
{ severity: 7,
fac: 'TERM',
message: '[thrd:sasl_ssl://kafka02-prod02.messagehub.services.us-south.bluemix.]: sasl_ssl://kafka02-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: Received TERMINATE op in state DOWN: 1 refcnts, 0 toppar(s), 0 active toppar(s), 0 outbufs, 0 waitresps, 0 retrybufs' }
{ severity: 7,
fac: 'BUFQ',
message: '[thrd:sasl_ssl://kafka05-prod02.messagehub.services.us-south.bluemix.]: sasl_ssl://kafka05-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: Updating 0 buffers on connection reset' }
{ severity: 7,
fac: 'TERM',
message: '[thrd:sasl_ssl://kafka04-prod02.messagehub.services.us-south.bluemix.]: sasl_ssl://kafka04-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: Received TERMINATE op in state DOWN: 1 refcnts, 0 toppar(s), 0 active toppar(s), 0 outbufs, 0 waitresps, 0 retrybufs' }
{ severity: 7,
fac: 'TERM',
message: '[thrd:sasl_ssl://kafka05-prod02.messagehub.services.us-south.bluemix.]: sasl_ssl://kafka05-prod02.messagehub.services.us-south.bluemix.net:9093/bootstrap: Received TERMINATE op in state DOWN: 1 refcnts, 0 toppar(s), 0 active toppar(s), 0 outbufs, 0 waitresps, 0 retrybufs' }
{ severity: 7,
fac: 'TERMINATE',
message: '[thrd::0/internal]: :0/internal: Handle is terminating: failed 0 request(s) in retry+outbuf' }
{ severity: 7,
fac: 'BROKERFAIL',
message: '[thrd::0/internal]: :0/internal: failed: err: Local: Broker handle destroyed: (errno: Operation timed out)' }
{ severity: 7,
fac: 'STATE',
message: '[thrd::0/internal]: :0/internal: Broker changed state UP -> DOWN' }
{ severity: 7,
fac: 'BROADCAST',
message: '[thrd::0/internal]: Broadcasting state change' }
{ severity: 7,
fac: 'BUFQ',
message: '[thrd::0/internal]: :0/internal: Purging bufq with 0 buffers' }
{ severity: 7,
fac: 'BUFQ',
message: '[thrd::0/internal]: :0/internal: Updating 0 buffers on connection reset' }
{ severity: 7,
fac: 'TERMINATE',
message: '[thrd:main]: Main background thread exiting' }
{ severity: 7,
fac: 'TERMINATE',
message: '[thrd:app]: Destroying op queues' }
{ severity: 7,
fac: 'TERMINATE',
message: '[thrd:app]: Destroying SSL CTX' }
{ severity: 7,
fac: 'TERMINATE',
message: '[thrd:app]: Termination done: freeing resources' }

Does Event Streams support Spark Kafka Streaming?

Currently, I am trying to use the Spark Kafka Streaming API to receive messages from IBM Event Streams:

 JavaInputDStream<ConsumerRecord<String, String>> directKafkaStream =
                KafkaUtils.createDirectStream(
                    jssc,
                    LocationStrategies.PreferConsistent(),
                    ConsumerStrategies.<String, String>Subscribe(topics, getKafkaReaderParams()));
            directKafkaStream.foreachRDD(rdd -> {
                // doing something
            });

But once my Spark job runs, I find it cannot receive messages; the logs look like below:

2020-06-09 07:11:13 INFO AppInfoParser:117 - Kafka version: 2.3.0
2020-06-09 07:11:13 INFO AppInfoParser:118 - Kafka commitId: fc1aaa116b661c8a
2020-06-09 07:11:13 INFO AppInfoParser:119 - Kafka startTimeMs: 1591686673522
2020-06-09 07:11:13 INFO KafkaConsumer:964 - [Consumer clientId=consumer-1, groupId=test-1] Subscribed to topic(s): device.test.1
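
Event Streams does work with standard Kafka clients, including the consumer that Spark's Kafka integration creates; a common cause of a silently idle consumer is missing SASL_SSL settings in the map returned by getKafkaReaderParams(). A sketch of the minimum consumer properties, with placeholders for the credentials:

bootstrap.servers=<kafka_brokers_sasl, comma-separated>
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="token" password="<API_KEY>";
ssl.protocol=TLSv1.2
group.id=test-1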
