mhart / kinesalite
An implementation of Amazon's Kinesis built on LevelDB
License: MIT License
Right now I can publish data to the stream, but I am not able to consume it; the same code works against real Kinesis.
It would be great to enable debug logging in order to check/monitor messages in and out of kinesalite.
Hi!
I am trying to use kinesalite and dynalite for integration-testing purposes, but I can't figure out how to set everything up.
First of all, I'm using:
Java 8
amazon-kinesis-client 1.8.8
amazon-kinesis-producer 0.12.5
I start kinesalite and dynalite with:
kinesalite --ssl true --port 4567
dynalite --port 4568
In my /etc/hosts file I have added:
127.0.0.1 kinesalite
I disable CBOR with an environment variable:
AWS_CBOR_DISABLE: true
I create the dynamoClient like this:
dynamoClient = AmazonDynamoDBClientBuilder
    .standard()
    .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
        "http://localhost:4568",
        "eu-central-1"
    ))
    .build();
I create the kinesisClient like this:
kinesisClient = AmazonKinesisClientBuilder
    .standard()
    .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
        "kinesalite:4567",
        "eu-central-1"
    ))
    .build();
Then I create the config and worker:
KinesisClientLibConfiguration config =
    new KinesisClientLibConfiguration(
        CONFIG.applicationName,
        CONFIG.streamName,
        credentialsProvider,
        CONFIG.workerId
    )
    .withInitialPositionInStream(InitialPositionInStream.LATEST);

final Worker worker = new Worker.Builder()
    .recordProcessorFactory(processorFactory)
    .config(config)
    .kinesisClient(kinesisClient)
    .dynamoDBClient(dynamoClient)
    .metricsFactory(new NullMetricsFactory())
    .build();
But I get errors and can't figure out what I'm missing:
INFO [2017-12-18 15:25:39,847] com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker: Initialization attempt 1
INFO [2017-12-18 15:25:39,847] com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker: Initializing LeaseCoordinator
INFO [2017-12-18 15:25:39,866] com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker: Syncing Kinesis shard info
ERROR [2017-12-18 15:25:40,247] com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShardSyncTask: Caught exception while sync'ing Kinesis shards and leases
! sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
...
! Causing: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
! at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:397) ~[na:1.8.0_151]
...
! Causing: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
When I run kinesalite, I get the following
17:19:47 kinesalite.1 | started with pid 14538
17:19:48 kinesalite.1 | Listening at http://:::10500
The port is right, and it is running on localhost. (There doesn't seem to be a functional problem, just an aesthetic one.)
The listShards.js method returns all shards, ignoring the optional ExclusiveStartShardId parameter.
This forces clients to exclude shard IDs manually. Example from the Flink codebase:
https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/proxy/KinesisProxy.java#L457
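For reference, the expected ExclusiveStartShardId semantics can be sketched as a simple filter over the sorted shard list (illustrative helper, not kinesalite's actual code):

```javascript
// Sketch of the expected ListShards behaviour: only shard IDs strictly
// greater than ExclusiveStartShardId should be returned.
function listShards(shards, exclusiveStartShardId) {
  const sorted = shards.slice().sort((a, b) => a.ShardId.localeCompare(b.ShardId))
  if (!exclusiveStartShardId) return sorted
  return sorted.filter(s => s.ShardId.localeCompare(exclusiveStartShardId) > 0)
}

const shards = [
  { ShardId: 'shardId-000000000000' },
  { ShardId: 'shardId-000000000001' },
  { ShardId: 'shardId-000000000002' },
]
console.log(listShards(shards, 'shardId-000000000000').map(s => s.ShardId))
// → ['shardId-000000000001', 'shardId-000000000002']
```

With no ExclusiveStartShardId supplied, all three shards come back; with one supplied, everything up to and including that shard ID is skipped.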
There are a couple of new API methods (IncreaseStreamRetentionPeriod and DecreaseStreamRetentionPeriod) to change a stream's retention period.
I'm not personally using them for anything, but figured I'd mention it just in case.
I'm hoping to use Kinesalite to test my web app locally (using the AWS JS SDK, I post data to Kinesis from the browser).
I've started Kinesalite by running kinesalite 4567 in my terminal, and in my browser config I have:
var ep = new AWS.Endpoint('https://127.0.0.1:4567');
var kinesis = new AWS.Kinesis({endpoint: ep});
But the SDK of course complains about missing credentials and region settings. Is what I'm trying to achieve possible?
Thanks
I'm trying to mock a firehose stream to test a java lambda. Does Kinesalite support firehose streams? If so, any samples on how to? When I attempt to create the firehose stream I get: Service: AmazonKinesisFirehose; Status Code: 400; Error Code: UnknownOperationException; Request ID: 7a4815a0-84cf-11e6-9ba9-0d4e8c099c0b
Much Thanks!
I am using the python amazon-kclpy client. Kinesalite worked for a while, but once I deleted the local Kinesis stream and local DynamoDB table to reinitialize the environment, I could no longer get the client to work again.
Can someone please help? I have tried restarting kinesalite, the stream, and the DynamoDB table multiple times, but I still get the exact same error.
com.amazonaws.services.kinesis.clientlibrary.exceptions.internal.KinesisClientLibIOException: Shard [shardId-000000000000] is not closed. This can happen if we constructed the list of shards while a reshard operation was in progress.
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShardSyncer.assertClosedShardsAreCoveredOrAbsent(ShardSyncer.java:206)
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShardSyncer.cleanupLeasesOfFinishedShards(ShardSyncer.java:652)
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShardSyncer.syncShardLeases(ShardSyncer.java:141)
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShardSyncer.checkAndCreateLeasesForNewShards(ShardSyncer.java:88)
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShardSyncTask.call(ShardSyncTask.java:68)
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.MetricsCollectingTaskDecorator.call(MetricsCollectingTaskDecorator.java:49)
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker.initialize(Worker.java:427)
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker.run(Worker.java:356)
at com.amazonaws.services.kinesis.multilang.MultiLangDaemon.call(MultiLangDaemon.java:114)
at com.amazonaws.services.kinesis.multilang.MultiLangDaemon.call(MultiLangDaemon.java:61)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
at java.lang.Thread.run(Thread.java:748)
Please add support for the ListShards API:
https://aws.amazon.com/about-aws/whats-new/2018/02/10x-higher-api-call-rates-for-amazon-kinesis-client-library-kcl-applications/
https://docs.aws.amazon.com/kinesis/latest/APIReference/API_ListShards.html
https://github.com/awslabs/amazon-kinesis-client/releases/tag/v1.9.0
I'm using Windows, calling the AWS .NET API in C#. Apologies if this is really an issue with the AWS .NET API.
I wanted to ensure there was nothing in my testing streams before each test, by deleting then recreating them. I assumed that if I got an OK response from my delete request, the stream would be deleted. But if I don't introduce a pause between deleting and creating a stream of the same name, I get a ResourceInUseException.
My C# code, to give you an idea:
public void CreateCleanStreams(string streamName)
{
    if (_amazonKinesisClient.ListStreams().StreamNames.Contains(streamName))
    {
        var deleteStreamResponse = _amazonKinesisClient.DeleteStream(new DeleteStreamRequest
        {
            StreamName = streamName
        });
        Assert.AreEqual(HttpStatusCode.OK, deleteStreamResponse.HttpStatusCode);
    }
    Thread.Sleep(500); // see exception handling below
    try
    {
        var createStreamResponse = _amazonKinesisClient.CreateStream(new CreateStreamRequest
        {
            StreamName = streamName,
            ShardCount = 1
        });
        Assert.AreEqual(HttpStatusCode.OK, createStreamResponse.HttpStatusCode);
    }
    catch (ResourceInUseException e)
    {
        Assert.Fail("even though we told kinesalite to delete the stream, and it says it did, it needs a bit more time to really delete it");
    }
}
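Rather than a fixed sleep, one option is to poll until the stream has really been deleted before recreating it. A minimal generic polling sketch (plain JavaScript, not tied to the .NET SDK above; names are illustrative):

```javascript
// Generic polling helper: retries `check` until it returns true or the
// attempts run out. In the scenario above, `check` would stand in for
// "ListStreams no longer contains the stream name".
async function waitUntil(check, attempts = 20, delayMs = 100) {
  for (let i = 0; i < attempts; i++) {
    if (await check()) return true
    await new Promise(resolve => setTimeout(resolve, delayMs))
  }
  return false
}
```

In the test above, `check` would call ListStreams and return true once the deleted stream no longer appears, and only then would CreateStream be attempted.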
I tried sending a buffer of size 50268; the size cap is 51200. I get this:
com.amazonaws.AmazonServiceException: 1 validation error detected: Value 'java.nio.HeapByteBuffer[pos=0 lim=50268 cap=50268]' at 'data' failed to satisfy constraint: Member must have length less than or equal to 51200 (Service: AmazonKinesis; Status Code: 400; Error Code: ValidationException; Request ID: bc34b080-ce51-11e4-a42f-631ec68de648)
The code works correctly in production and does not give this error.
The Kinesis docs specify that the timestamp parameter of the getShardIterator call, when using the "AT_TIMESTAMP" ShardIteratorType, should be a timestamp in milliseconds. However, the Kinesalite server multiplies the supplied parameter by 1000. Since it is already expecting milliseconds, there is no need to multiply.
This should be a simple fix: remove line 84 in getShardIterator.js and touch up the corresponding unit tests. If you agree, I'd love to get a pull request in for you to take a look at.
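The off-by-1000 described above can be illustrated with plain arithmetic (the timestamp value is made up for illustration):

```javascript
// As described in this issue: the client supplies an AT_TIMESTAMP value
// already in milliseconds, so multiplying it by 1000 (as if it were seconds)
// lands the iterator far beyond any real record.
const suppliedMs = 1519000000000   // example millisecond epoch timestamp (Feb 2018)
const misread = suppliedMs * 1000  // treated as seconds and scaled: wildly in the future
const correct = suppliedMs         // already milliseconds: no conversion needed

console.log(new Date(correct).getUTCFullYear()) // 2018
console.log(misread > Date.now())               // true: the scaled value is far in the future
```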
I tried to put the content of a file larger than 260 KB, but I got an error.
The put-record command looks like: aws kinesis put-record --stream-name kinesis_stream_name --data "$(cat file.xml)" --partition-key "test" --endpoint-url http://127.0.0.1:4567 --no-verify-ssl --profile local
And the error is: -bash: /usr/local/bin/aws: Argument list too long.
Files smaller than 260 KB seem to work fine. I removed some content from file.xml and after that it worked properly.
Example code:
var kinesalite = require('kinesalite'),
    kinesaliteServer = kinesalite({path: './mydb', createStreamMs: 50});

kinesaliteServer.listen(4567, function(err) {
  if (err) {
    console.log(err);
    throw err;
  } else {
    console.log('Kinesalite started on port 4567');
    var AWS = require('aws-sdk');
    var kinesis = new AWS.Kinesis({endpoint: 'http://localhost:4567'});
    kinesis.listStreams(function(err, data) {
      if (err) {
        console.log("Error listing streams");
        console.log(err);
      } else {
        console.log(data);
      }
    });
  }
});
Console output:
Kinesalite started on port 4567
[AWS kinesis undefined 0.002s 0 retries] listStreams({})
Error listing streams
message=Missing region in config, code=ConfigError, time=Mon Dec 14 2015 19:12:08 GMT+0000 (GMT)
I've read the closed issues about people having the same problem, but it seems I am following your example and the Amazon SDK here (@2.2.9).
The implementation for split-shard seems to deviate from the behavior of AWS Kinesis. After splitting a shard (and waiting for it to get back into state ACTIVE), we end up with the following configuration:
{
  "StreamDescription": {
    "HasMoreShards": false,
    "RetentionPeriodHours": 24,
    "StreamName": "test-stream",
    "Shards": [
      {
        "ShardId": "shardId-000000000000",
        "HashKeyRange": {
          "EndingHashKey": "340282366920938463463374607431768211455",
          "StartingHashKey": "0"
        },
        "SequenceNumberRange": {
          "EndingSequenceNumber": "49563335997431456098495849915443705260510755470006288386",
          "StartingSequenceNumber": "49563335997420305725896584603874146327194044269160562690"
        }
      },
      {
        "ShardId": "shardId-000000000001",
        "HashKeyRange": {
          "EndingHashKey": "170141183459999999999999999999999999999",
          "StartingHashKey": "0"
        },
        "ParentShardId": "shardId-000000000000",
        "SequenceNumberRange": {
          "StartingSequenceNumber": "49563336006362904550507364483629969354526038951045627922"
        }
      },
      {
        "ShardId": "shardId-000000000002",
        "HashKeyRange": {
          "EndingHashKey": "340282366920938463463374607431768211455",
          "StartingHashKey": "170141183460000000000000000000000000000"
        },
        "ParentShardId": "shardId-000000000000",
        "SequenceNumberRange": {
          "StartingSequenceNumber": "49563336006385205295705895106771505072798687312551608354"
        }
      }
    ],
    "StreamARN": "arn:aws:kinesis:us-east-1:000000000000:stream/test-stream",
    "StreamStatus": "ACTIVE"
  }
}
That is, both the old shard (shardId-000000000000) as well as the two new child shards (shardId-000000000001, shardId-000000000002) exist in the configuration. On AWS Kinesis, splitting a shard has the effect that the parent shard gets removed from the configuration. I believe the idea is that at each point in time, the shard hash ranges should be disjoint sets, i.e., shard ranges should never overlap.
(Side note: The consequence of course is that ParentShardId points to a non-existing shard, but that is the behavior we see with AWS. I believe the ParentShardId is kept only for reference, to determine which pairs of shards have been created from which parent shard.)
This is a critical deviation from the AWS API, which makes proper testing of shard splitting/merging difficult. I'd be willing to create a PR, unless this is already in the pipeline or in progress. @mhart Please let me know what you think. Thanks
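The disjointness invariant described above can be checked mechanically; a sketch using BigInt for the 128-bit hash keys (illustrative, not kinesalite's actual validation code):

```javascript
// Checks that a set of shards' hash-key ranges do not overlap.
// BigInt is needed because Kinesis hash keys exceed Number's safe range.
function rangesAreDisjoint(shards) {
  const ranges = shards
    .map(s => [BigInt(s.HashKeyRange.StartingHashKey), BigInt(s.HashKeyRange.EndingHashKey)])
    .sort((a, b) => (a[0] < b[0] ? -1 : 1))
  for (let i = 1; i < ranges.length; i++) {
    if (ranges[i][0] <= ranges[i - 1][1]) return false // overlap with previous range
  }
  return true
}

// The three shards from the DescribeStream output above: the parent's full
// range overlaps both children, so the combined configuration is invalid.
const parent = { HashKeyRange: { StartingHashKey: '0', EndingHashKey: '340282366920938463463374607431768211455' } }
const childA = { HashKeyRange: { StartingHashKey: '0', EndingHashKey: '170141183459999999999999999999999999999' } }
const childB = { HashKeyRange: { StartingHashKey: '170141183460000000000000000000000000000', EndingHashKey: '340282366920938463463374607431768211455' } }

console.log(rangesAreDisjoint([parent, childA, childB])) // false
console.log(rangesAreDisjoint([childA, childB]))         // true
```

Dropping the parent from the shard list, as the issue suggests AWS does, restores the invariant.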
Hello,
Is it possible to add triggers to a stream? My infrastructure is based on AWS Lambda and Kinesis; more precisely, my Lambda function gets triggered from an AWS Kinesis stream. Is it possible to add these triggers to these Kinesis streams?
Thanks in advance.
Hi Michael, thanks for your effort. I am new to Amazon Kinesis. I am trying to set up Kinesalite locally to assist development, but I got an error with the following simple Java client code.
AmazonKinesisClient client = new AmazonKinesisClient(new BasicAWSCredentials("akid", "secret"));
client.setEndpoint("http://localhost:4567");
String streamName = "myStream1";
client.createStream(new CreateStreamRequest().withStreamName(streamName).withShardCount(2));
ByteBuffer data = ByteBuffer.wrap("myData".getBytes("UTF-8"));
PutRecordResult result = client.putRecord(streamName, data, "myPartKey");
The server is started by:
kinesalite --port 4567 --path ./kinesalite
Then the Kinesalite server emitted the following error and stopped:
BigNumber Error: new BigNumber() not a base 16 number: 2000000NaN0000000000000000000000000NaN000000002
at raise (/usr/local/lib/node_modules/kinesalite/node_modules/bignumber.js/bignumber.js:1177:25)
at /usr/local/lib/node_modules/kinesalite/node_modules/bignumber.js/bignumber.js:1165:33
at new BigNumber (/usr/local/lib/node_modules/kinesalite/node_modules/bignumber.js/bignumber.js:212:28)
at Object.stringifySequence (/usr/local/lib/node_modules/kinesalite/db/index.js:168:12)
at /usr/local/lib/node_modules/kinesalite/actions/putRecord.js:57:23
at /usr/local/lib/node_modules/kinesalite/db/index.js:71:7
at /usr/local/lib/node_modules/kinesalite/node_modules/level-sublevel/shell.js:102:12
at /usr/local/lib/node_modules/kinesalite/node_modules/level-sublevel/nut.js:121:19
I tried to look into the source code of Kinesalite and print out some debug information. I inserted a console.log(...) into the first line of the method stringifySequence(obj) in index.js.
function stringifySequence(obj) {
console.log(obj, ('00000000' + Math.floor(obj.shardCreateTime / 1000).toString(16)).slice(-9));
...
}
And I got the following debug information.
{ shardCreateTime: undefined,
shardIx: undefined,
seqIx: 0,
seqTime: NaN }
'000000NaN'
I wonder if there is any step I did not do correctly? Kinesalite is a very helpful local Kinesis server for development. Thanks for your contribution.
Thanks,
Cheng
When a language like Java validates the SSL cert, it expects the hostname to match the CN. Unfortunately, a domain name cannot include a space, so this cert cannot ever validate.
If the CN were "kinesalite", we could alias kinesalite to localhost in /etc/hosts, and the cert would validate.
I have created a stream with kinesalite. I have run PutRecords and GetRecords operations through the HTTP REST API.
Now, if I want to connect this stream's events to my Serverless Node.js app, how do I do that? Note that I am running everything locally; serverless uses serverless-offline.
The kinesalite service is running on localhost:4567. How do I add an event in my serverless.yml file so that my function can read stream events?
Currently I am writing something like this.
stream:
  handler: functions/stream.handler
  events:
    - stream:
        arn: arn:aws:kinesis:region:us-east-1:stream/stream1
        batchSize: 100
        startingPosition: LATEST
        enabled: true
From the Kinesis documentation on the NextShardIterator response element:
"The next position in the shard from which to start sequentially reading data records. If set to null, the shard has been closed and the requested iterator will not return any more data."
I'm trying to iterate through a lineage of shards, but a closed shard iterates forever after reaching the end of its data. From the code, it looks like a new iterator is returned that keeps repeating the last sequence number of the shard in its internal encoding.
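Per the documentation quoted in this issue, a consumer loop over a closed shard should terminate once NextShardIterator comes back null; a sketch against a stand-in getRecords function (all names illustrative):

```javascript
// Drains a shard to the end, stopping when NextShardIterator is null,
// which is the documented signal that a closed shard is exhausted.
async function drainShard(getRecords, iterator) {
  const records = []
  while (iterator != null) {
    const resp = await getRecords(iterator)
    records.push(...resp.Records)
    iterator = resp.NextShardIterator // null once the closed shard is exhausted
  }
  return records
}

// Stand-in for the API: two pages of data, then the end of a closed shard.
const pages = {
  'it-1': { Records: ['a', 'b'], NextShardIterator: 'it-2' },
  'it-2': { Records: ['c'], NextShardIterator: null },
}
drainShard(async it => pages[it], 'it-1').then(r => console.log(r)) // ['a', 'b', 'c']
```

With the behaviour reported here, the loop never sees a null iterator against kinesalite, so it spins forever on the last sequence number.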
Hi, could you provide some tips on using Kinesalite to set up integration tests for Java projects?
Thanks for this development!
Since I need to use the KPL, I am trying to run kinesalite with SSL (i.e., kinesalite --ssl).
Running:
$ aws kinesis list-streams --endpoint=https://localhost:4567
gives the error:
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
Any ideas why this may be failing, or how to debug the issue? In case it matters, I'm running OS X 10.10.5.
Hello,
I use kinesalite and I want to create a Java producer which writes to a kinesalite stream.
To do that, I started kinesalite on the default port and wrote this code:
BasicAWSCredentials awsCreds = new BasicAWSCredentials("FAKE", "FAKE");
kinesisClient = new AmazonKinesisAsyncClient(awsCreds);
kinesisClient.setRegion(Region.getRegion(Regions.US_EAST_1));
kinesisClient.setEndpoint("http://localhost:4567");
kinesisClient.createStream("mystream", 1);
But the application throws an exception:
Caused by: com.amazonaws.services.kinesis.model.AmazonKinesisException: null (Service: AmazonKinesis; Status Code: 403; Error Code: null; Request ID: 8f0a2180-add3-11e7-8534-49848585881a)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1303)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1055)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.kinesis.AmazonKinesisClient.doInvoke(AmazonKinesisClient.java:2219)
at com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:2195)
at com.amazonaws.services.kinesis.AmazonKinesisClient.executeCreateStream(AmazonKinesisClient.java:458)
at com.amazonaws.services.kinesis.AmazonKinesisClient.createStream(AmazonKinesisClient.java:434)
at com.amazonaws.services.kinesis.AmazonKinesisClient.createStream(AmazonKinesisClient.java:470)
When I look closer, I notice that the response is about credentials:
HTTP/1.1 403 Forbidden [x-amzn-RequestId: db224e20-add4-11e7-8534-49848585881a, x-amz-id-2: IEeIw2SzHkV5cnKy2q8rBGdm2Ik0NyEvLAqIk4ex5XVuzI7W3nNw/Mmpr1AdLZ5uIx6r4AOQK/pXq8pe2rhBvUOn0skAScJX, Content-Length: 130, Date: Tue, 10 Oct 2017 16:05:37 GMT, Connection: keep-alive] ResponseEntityProxy{[Content-Length: 130,Chunked: false]}
And the request I made to kinesalite is:
POST http://localhost:4567/ HTTP/1.1
Headers [Host: localhost:4567, Authorization: AWS4-HMAC-SHA256 Credential=FAKE/20171010/us-east-1/kinesis/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-date;x-amz-target, Signature=cbbd5f388018a2d8c36adabf72914398a551b2c9f5bb12ff7a59ab224df45ed6, X-Amz-Date: 20171010T155620Z, User-Agent: aws-sdk-java/1.11.198 Mac_OS_X/10.12.1 Java_HotSpot(TM)_64-Bit_Server_VM/25.111-b14 java/1.8.0_111 scala/2.10.0, amz-sdk-invocation-id: 7da5b848-f2be-b9c6-16a3-5ebd04a7d02b, amz-sdk-retry: 0/0/500, X-Amz-Target: Kinesis_20131202.CreateStream, Content-Type: application/x-amz-cbor-1.1]
Do you have an idea how to fix this? Do I need to set credentials when I try to connect to kinesalite?
Many thanks
I am experiencing problems connecting to kinesalite using newer versions of the AWS Java SDK. The problem started after updating to version 1.11.0. The new versions of the SDK have no problem connecting to AWS Kinesis.
Kinesalite is still working fine with the 1.10 versions.
The requests definitely get to kinesalite, but the responses are malformed now.
Here is the error I get whenever I try to interact with kinesalite:
com.amazonaws.AmazonServiceException: Unable to parse HTTP response content (Service: AmazonKinesis; Status Code: 404; Error Code: null;
I ended up finding the old npm module via Google, which was a bit confusing. Maybe delete it, or update the readme to explain that it's been renamed?
Now I am using Kinesis Client Library v2 and I have a problem:
[aws-java-sdk-NettyEventLoop-2-7] DEBUG software.amazon.awssdk.http.nio.netty.internal.http2.SdkHttp2FrameLogger - OUTBOUND SETTINGS: ack=false settings={INITIAL_WINDOW_SIZE=1048576, MAX_HEADER_LIST_SIZE=8192}
[aws-java-sdk-NettyEventLoop-2-7] DEBUG software.amazon.awssdk.http.nio.netty.internal.http2.SdkHttp2FrameLogger - OUTBOUND WINDOW_UPDATE: streamId=0 windowSizeIncrement=1966082
[aws-java-sdk-NettyEventLoop-2-0] ERROR software.amazon.awssdk.http.nio.netty.internal.RunnableRequest - Failed to create connection to http://0.0.0.0:4568/
Does this currently support Kinesis Client Library v2? If yes, how do I solve this?
According to the Kinesis docs, PutRecords calls can partially succeed, and Kinesis will respond with a message telling you how many records succeeded:
Adding Multiple Records with PutRecords
By default, failure of individual records within a request does not stop the processing of subsequent records in a PutRecords request. This means that a response Records array includes both successfully and unsuccessfully processed records. You must detect unsuccessfully processed records and include them in a subsequent call.
Kinesalite does not seem to match this behavior. Instead, if any record in the set is invalid, it rejects the entire batch.
Since the logic around plucking out which records are valid and which are invalid (and why!) is finicky, I'd really love it if Kinesalite matched Kinesis's behavior here.
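The documented partial-success contract can be sketched as: inspect FailedRecordCount and each response entry's ErrorCode, then resubmit only the failures (the field names come from the PutRecords API; the helper itself is illustrative):

```javascript
// Given the original batch and a PutRecords-style response, returns the
// records that must be retried (those whose response entry has an ErrorCode).
// The response Records array is positionally aligned with the request batch.
function failedRecords(batch, response) {
  if (!response.FailedRecordCount) return []
  return batch.filter((_, i) => response.Records[i].ErrorCode != null)
}

const batch = [{ Data: 'a' }, { Data: 'b' }, { Data: 'c' }]
const response = {
  FailedRecordCount: 1,
  Records: [
    { SequenceNumber: '49…', ShardId: 'shardId-000000000000' },
    { ErrorCode: 'ProvisionedThroughputExceededException', ErrorMessage: '…' },
    { SequenceNumber: '49…', ShardId: 'shardId-000000000000' },
  ],
}
console.log(failedRecords(batch, response)) // [{ Data: 'b' }]
```

Under the reported kinesalite behaviour, this path never executes: the whole batch is rejected instead of returning a mixed Records array.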
Recently found out that Kinesalite doesn't seem to support one of the describeStream() variants:
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/kinesis/AmazonKinesisClient.html#describeStream-java.lang.String-java.lang.String-
The API response is supposed to only include shard IDs larger than the exclusiveStartShardId, but it doesn't seem to be working as expected with Kinesalite.
This is almost certainly user error or a config problem on my machine, but having little experience with Node, I figured I'd ask here.
I installed using the instructions:
rcobb@rcobb-x250: /usr/local/bin$ sudo npm install -g kinesalite
npm ERR! not a package /home/rcobb/tmp/npm-21039-pyzsYb2e/1467417729931-0.7627380455378443/tmp.tgz
npm http GET https://registry.npmjs.org/kinesalite
npm http 304 https://registry.npmjs.org/kinesalite
npm http GET https://registry.npmjs.org/async
...
npm http 304 https://registry.npmjs.org/es6-iterator
> [email protected] install /usr/local/lib/node_modules/kinesalite/node_modules/leveldown
> prebuild --install
/usr/local/bin/kinesalite -> /usr/local/lib/node_modules/kinesalite/cli.js
[email protected] /usr/local/lib/node_modules/kinesalite
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected] ([email protected])
├── [email protected]
├── [email protected] ([email protected], [email protected], [email protected], [email protected])
├── [email protected] ([email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected])
├── [email protected] ([email protected], [email protected], [email protected], [email protected], [email protected], [email protected])
└── [email protected] ([email protected], [email protected], [email protected], [email protected], [email protected])
But when I run I get:
rcobb@rcobb-x250: /usr/local/bin$ kinesalite
module.js:340
throw err;
^
Error: Cannot find module 'memdown'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (/usr/local/lib/node_modules/kinesalite/db/index.js:4:15)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
I'm using ubuntu and:
rcobb@rcobb-x250: /usr/local/bin$ uname -a
Linux rcobb-x250 4.2.0-38-generic #45~14.04.1-Ubuntu SMP Thu Jun 9 09:27:51 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
rcobb@rcobb-x250: /usr/local/bin$ node -v
v0.10.25
rcobb@rcobb-x250: /usr/local/bin$ npm -v
1.3.10
Any tips?
Thanks.
[email protected] node_modules/kinesalite
├── [email protected]
├── [email protected] ([email protected])
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected] ([email protected], [email protected], [email protected], [email protected])
├── [email protected] ([email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected])
├── [email protected] ([email protected], [email protected], [email protected], [email protected], [email protected], [email protected])
└── [email protected] ([email protected], [email protected], [email protected], [email protected], [email protected])
root@4328213c05f9:/code# /usr/local/bin/kinesalite
module.js:340
throw err;
^
Error: Cannot find module 'memdown'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (/usr/local/lib/node_modules/kinesalite/db/index.js:4:15)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
I was able to patch this by removing all of the ^ in the dependencies (which, to my understanding, means "this version or higher"), and kinesalite started to work.
You may want to look into finding which versions of e.g. memdown and so on are breaking, and put in an upper bound or so.
After the latest commit, the following little script fails with:
/Users/r/tmp/node_modules/kinesalite/node_modules/level-sublevel/nut.js:196
return wrapIterator((db.db || db).iterator(opts))
^
TypeError: Object LevelUP has no method 'iterator'
at Object.iterator (/Users/r/tmp/node_modules/kinesalite/node_modules/level-sublevel/nut.js:196:43)
at EventEmitter.emitter.readStream.emitter.createReadStream (/Users/r/tmp/node_modules/kinesalite/node_modules/level-sublevel/shell.js:137:18)
at EventEmitter.emitter.valueStream.emitter.createValueStream (/Users/r/tmp/node_modules/kinesalite/node_modules/level-sublevel/shell.js:152:20)
at Object.sumShards (/Users/r/tmp/node_modules/kinesalite/db/index.js:249:29)
at /Users/r/tmp/node_modules/kinesalite/actions/createStream.js:20:10
at /Users/r/tmp/node_modules/kinesalite/node_modules/level-sublevel/shell.js:101:15
at /Users/r/tmp/node_modules/kinesalite/node_modules/level-sublevel/nut.js:120:19
at dispatchError (/Users/r/tmp/node_modules/kinesalite/node_modules/levelup/lib/util.js:77:7)
at maybeError (/Users/r/tmp/node_modules/kinesalite/node_modules/levelup/lib/levelup.js:169:5)
at LevelUP.get (/Users/r/tmp/node_modules/kinesalite/node_modules/levelup/lib/levelup.js:202:7)
var kinesalite = require('kinesalite')({ createStreamMs: 0, deleteStreamMs: 0, ssl: false });
var AWS = require('aws-sdk');

var client = new AWS.Kinesis({
  region: 'fake',
  accessKeyId: 'fake',
  secretAccessKey: 'fake',
  endpoint: 'http://localhost:4567'
});

kinesalite.listen(4567, function() {
  client.createStream({ ShardCount: 1, StreamName: 'one' }, function() {
    kinesalite.close(function() {
      kinesalite.listen(4567, function() {
        client.createStream({ ShardCount: 1, StreamName: 'one' }, function(err) {
          if (err) throw err; // <---- this'll throw
          kinesalite.close(function() {
            console.log('made it!');
          });
        });
      });
    });
  });
});
I tried the following
final ProfileCredentialsProvider provider = new ProfileCredentialsProvider("default");
final KinesisProducerConfiguration conf = new KinesisProducerConfiguration()
.setCredentialsProvider(provider)
.setKinesisEndpoint("localhost:7010")
.setRegion("");
return new KinesisProducer(conf);
But it doesn't work, because it says that KinesisProducer is not able to verify the Kinesis endpoint. It fails because I can't set the port: KinesisProducerConfiguration runs a regular expression to check that the endpoint is valid.
Expected pattern: ^([A-Za-z0-9-\.]+)?$
So no protocol or port is valid in the endpoint.
Any idea how I can set the endpoint with the port? Or which port do I have to use when I start kinesalite?
thanks
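The quoted pattern can be exercised directly, which confirms why host:port endpoints are rejected:

```javascript
// The endpoint validation pattern quoted in the error above: only letters,
// digits, hyphens and dots are allowed, so ':' (port) and '//' (protocol)
// can never pass.
const endpointPattern = /^([A-Za-z0-9-\.]+)?$/

console.log(endpointPattern.test('localhost'))        // true: bare hostname passes
console.log(endpointPattern.test('localhost:7010'))   // false: ':' is not allowed
console.log(endpointPattern.test('http://localhost')) // false: protocol is not allowed
```

This suggests the endpoint must be a bare hostname, with the port configured separately if the KPL version in use exposes a port setting (a later report on this page uses config.setKinesisPort alongside setKinesisEndpoint).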
To reproduce problem:
kinesalite --shardLimit 3 --ssl --port 11501
aws --no-verify-ssl --endpoint-url=https://localhost:11501 kinesis create-stream \
--shard-count 1 --stream-name test1
aws --no-verify-ssl --endpoint-url=https://localhost:11501 kinesis create-stream \
--shard-count 1 --stream-name test2
aws --no-verify-ssl --endpoint-url=https://localhost:11501 kinesis create-stream \
--shard-count 1 --stream-name test3
aws --no-verify-ssl --endpoint-url=https://localhost:11501 kinesis create-stream \
--shard-count 1 --stream-name test4
When adding the fourth stream, this error occurs:
An error occurred (LimitExceededException) when calling the CreateStream operation: This request would exceed the shard limit for the account 000000000000 in us-east-1. Current shard count for the account: 4. Limit: 3. Number of additional shards that would have resulted from this request: 1.
I was trying to figure out a way to run tests without any creds. It seems I needed this in my file:
AWS.config.update({accessKeyId: 'FAKE', secretAccessKey: 'FAKE'})
Could be useful for others.
These strings can be seemingly anything except empty.
Hi.
I believe I encountered an issue with the NextShardIterator returned from GetRecords, which seems to happen under two circumstances:
1. GetRecords returns empty records, and
2. GetRecords is called more often than every second.
A test showing this behaviour is here: studzien@078e994
I apologise if it's not clear enough, I haven't used Javascript for a while :)
From a very quick overview of the code, it looks to me that the root cause of the problem is rounding the sequence time down to the floor of a second (1000 millis). The sequence is more or less as follows: GetRecords is done with the iterator pointing at the beginning of the stream (now + 1000); gte, we go back to 3, except we just fetch the last record.
I feel like the best approach to fix the above mismatch would be to get rid of the rounding to 1000 milliseconds, and use 1 as an offset instead of 1000.
What do you think? I can try to fix it if this approach makes sense.
If my reasoning is incorrect, please let me know, what I've missed :)
Thanks and best regards!
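The suspected rounding mismatch can be illustrated with plain arithmetic (the helper names are hypothetical; this is not kinesalite's code):

```javascript
// Rounding the sequence time down to the floor of a second makes two
// iterators created within the same wall-clock second indistinguishable,
// while millisecond precision keeps them ordered.
function roundedSeqTime(ms) { return Math.floor(ms / 1000) * 1000 }
function preciseSeqTime(ms) { return ms }

const t1 = 1500000000250 // two calls 300 ms apart, same wall-clock second
const t2 = 1500000000550

console.log(roundedSeqTime(t1) === roundedSeqTime(t2)) // true: both collapse to the same second
console.log(preciseSeqTime(t2) - preciseSeqTime(t1))   // 300: still distinguishable at full precision
```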
It would be great if you could pass some initial streams configuration as an option to the kinesalite server.
For instance: kinesalite --streams='{"stream1": 1, "stream2": 2}'.
I have written a wrapper script that provides this feature, but it would be nice to support it natively.
(I'll be happy to share the script, which relies on the aws sdk to create the streams.)
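A sketch of how such a --streams option could be parsed into CreateStream parameters (the flag and its JSON shape are this issue's proposal, not an existing kinesalite option):

```javascript
// Parses the proposed --streams JSON ({name: shardCount, ...}) into a list
// of {StreamName, ShardCount} objects ready to feed to CreateStream calls.
function parseInitialStreams(json) {
  const spec = JSON.parse(json)
  return Object.keys(spec).map(name => ({ StreamName: name, ShardCount: spec[name] }))
}

console.log(parseInitialStreams('{"stream1": 1, "stream2": 2}'))
// → [{ StreamName: 'stream1', ShardCount: 1 }, { StreamName: 'stream2', ShardCount: 2 }]
```

On startup the server would loop over this list and create each stream before accepting client requests.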
Sample code may be:
AmazonDynamoDB dynamoClient = new AmazonDynamoDBClient();
dynamoClient.setEndpoint("http://localhost:7000");
AmazonKinesis kinesisClient = new AmazonKinesisClient();
kinesisClient.setEndpoint("http://localhost:7001");
Do you have any sample Java code? Also, is there a way to start kinesalite in embedded mode?
Hi, we use kinesalite to run our integration tests locally. We're having problems with the iterator returned in the GetRecords response (the NextShardIterator value). Our code fails (apparently randomly) with kinesalite, although it runs perfectly against Kinesis.
The following gist contains a go program (using https://github.com/aws/aws-sdk-go), that reproduces the problem, with the output of a typical successful and failure execution.
https://gist.github.com/jriquelme/144816586e3f421d00af
BTW, thanks for developing kinesalite, it's very useful to us.
Hi Team,
I am using kinesalite with the KPL and getting the error message below:
com.amazonaws.services.kinesis.producer.LogInputStreamReader - [2018-04-04 18:11:54.990304] [0x00000694] [error] [shard_map.cc:152] Shard map update for stream "sample" failed. Code: Message: Unable to connect to endpoint;
I am using the configuration below:
amazon-kinesis-client 1.8.8
amazon-kinesis-producer 0.12.8
System.setProperty(SDKGlobalConfiguration.AWS_CBOR_DISABLE_SYSTEM_PROPERTY, "true");
System.setProperty(SDKGlobalConfiguration.DISABLE_CERT_CHECKING_SYSTEM_PROPERTY, "true");
System.setProperty("USE_SSL", "true");
KinesisProducerConfiguration config = new KinesisProducerConfiguration();
config.setRegion(Region.getRegion(Regions.US_EAST_1).getName());
config.setCredentialsProvider(new EnvironmentVariableCredentialsProvider());
config.setKinesisEndpoint("192.168.70.107");
config.setKinesisPort(4567);
config.setLogLevel("warning");
config.setVerifyCertificate(false);
producer = new com.amazonaws.services.kinesis.producer.KinesisProducer(config);
I am able to put records using the Kinesis REST API. Please let me know what configuration I am missing.
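One thing worth checking (an assumption on my part): the KPL talks to Kinesis over HTTPS only, so the kinesalite server itself has to be started with SSL enabled for the endpoint and port in the configuration above to accept connections, along the lines of:

```shell
# Start kinesalite with SSL enabled (the KPL only uses HTTPS);
# the port must match config.setKinesisPort(4567) above.
kinesalite --ssl --port 4567
```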
https://docs.aws.amazon.com/kinesis/latest/APIReference/API_UpdateShardCount.html
It doesn't have to do anything; it just needs to respond generically, not actually implement any shard increases or decreases.
This library is a part of Localstack, which mocks AWS services. When using Localstack with Terraform, Terraform will call UpdateShardCount.
I know that this is more of an aws-sdk question than a kinesalite one, though I would like to raise it again. It was discussed already in #4, but the recipes from #4 don't work.
If I do var kinesis = AWS.Kinesis({endpoint: new AWS.Endpoint('http://localhost:4567')});
then I get a 'Missing region in config' error, but if I add a region like this: var kinesis = AWS.Kinesis({endpoint: new AWS.Endpoint('http://localhost:4567'), region: 'eu-west-1'});
then my stream gets created in the AWS cloud, not in the kinesalite instance on my machine.
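As a sketch (I haven't verified this against every SDK version): passing the region alongside an explicit endpoint should satisfy the SDK's region check without sending traffic to AWS, since an explicit endpoint takes precedence over the region-derived one; combined with the dummy credentials mentioned in another issue here, the client options would look like:

```javascript
// Sketch of options for AWS.Kinesis: the region only satisfies the SDK's
// "Missing region in config" validation and request signing -- the explicit
// endpoint keeps all requests on localhost rather than the real AWS region.
var kinesisOptions = {
  endpoint: 'http://localhost:4567',
  region: 'eu-west-1',
  accessKeyId: 'FAKE',      // dummy credentials, anything non-empty
  secretAccessKey: 'FAKE'
}
// var kinesis = new AWS.Kinesis(kinesisOptions)
```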
Hi,
I've gotten a kinesalite stream working and can post to the stream using standard AWS node js kinesis APIs.
I now want to consume the same stream in an offline environment using kcl-bootstrap (the KCL MultiLangDaemon), whereby Kinesis credentials are passed in via a properties file, and the Node.js consumer is passed to the kcl-bootstrap jar via that same properties file configuration, along with the AWS credentials.
Any hints you may have on this would be most helpful.
Thanks
Samir
First off, thanks for https://github.com/mhart/kinesalite and https://github.com/mhart/kinesis - they open up a lot of opportunities.
My issue is with the callback being outside of the setTimeout in certain operations, e.g.:
store.deleteStreamDb(key, function(err) {
  if (err) return cb(err)
  setTimeout(function() {
    metaDb.del(key, function(err) {
      if (err && !/Database is not open/.test(err)) console.error(err.stack || err)
    })
  }, store.deleteStreamMs)
  cb()
})
I'm forced to explicitly pair a lowered operation delay with an extra wait of my own:
kinesaliteServer = kinesalite({path: './build/mydb', deleteStreamMs: 0})
//...
kinesis.deleteStream({ StreamName: 'testStream' })
  .promise()
  .then(() => { // <-- I would expect the API to return a meaningful resolved promise
    return Promise.delay(10);
  })
  .then(done);
Even when promises aren't involved, there is a race condition here. Placing the cb() inside the setTimeout makes this more manageable in my case.
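For comparison, a sketch of the variant I'd expect (a hypothetical wrapper, not the actual kinesalite code), with cb() moved inside the setTimeout so the callback only fires once the delayed metaDb.del has completed:

```javascript
// Hypothetical reworked version: call back only after the delayed metadata
// delete has finished, so callers (and promises built on them) observe the
// stream as fully gone -- no race with the deleteStreamMs timer.
function deleteStream(store, metaDb, key, cb) {
  store.deleteStreamDb(key, function(err) {
    if (err) return cb(err)
    setTimeout(function() {
      metaDb.del(key, function(err) {
        if (err && !/Database is not open/.test(err)) return cb(err)
        cb()
      })
    }, store.deleteStreamMs)
  })
}
```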
Am I using it incorrectly or not understanding the intent?
Noticed this issue when building a fresh Docker image:
Repro steps:
Stacktrace is:
/usr/lib/node_modules/kinesalite/node_modules/levelup/lib/util.js:42
throw new LevelUPError(
^
LevelUPError: Installed version of LevelDOWN (1.2.2) does not match required version (~1.1.0)
at getLevelDOWN (/usr/lib/node_modules/kinesalite/node_modules/levelup/lib/util.js:42:11)
at LevelUP.open (/usr/lib/node_modules/kinesalite/node_modules/levelup/lib/levelup.js:114:37)
at new LevelUP (/usr/lib/node_modules/kinesalite/node_modules/levelup/lib/levelup.js:87:8)
at LevelUP (/usr/lib/node_modules/kinesalite/node_modules/levelup/lib/levelup.js:47:12)
at Object.create (/usr/lib/node_modules/kinesalite/db/index.js:31:12)
at kinesalite (/usr/lib/node_modules/kinesalite/index.js:24:26)
at Object.<anonymous> (/usr/lib/node_modules/kinesalite/cli.js:26:35)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
Changing the leveldown dependency from "^1.0.6" to "1.1" fixes the runtime error:
sed -i "s/\"leveldown\": \"\^1.0.6\"/\"leveldown\": \"1.1\"/g" /git/kinesalite/package.json