Comments (38)
WOOT, our initial version of the C++ client, with authentication, events, and presence. Thank you @ChristophConrads-deepstreamHub !
https://github.com/deepstreamIO/deepstream.io-client-cpp
from deepstream.io.
I think we should implement a C client first before creating any other bindings.
That will help other clients reach production readiness more easily, since all they need to do is create bindings to the C library.
Any chance we can discuss this over Gitter?
Btw, you guys really need a Gitter channel.
Agreed regarding the C client.
Just created a room called deepstreamIO on Gitter. We currently use Slack and will continue using it as our official community channel so we don't spread discussion across too many places.
@thedrow I also agree regarding the C client first. Could I invite you to join our Slack channel, though? https://deepstream-slack.herokuapp.com/ That way we keep all communication in one place.
Gitter is more github community oriented. We can use slack for dev discussions.
Both are viable alternatives with their own strengths and weaknesses, but the important thing is not to split communication across two independent channels. I perfectly understand why you'd prefer Gitter. Maybe have a look at https://sameroom.io/ to get the best of both worlds?
Discussed a possible implementation of a message parse algorithm using prefix trees with @thedrow:

1) while not is_end_of_message and delimiters_count < 2
   1.1) for each character in buffer
      1.1.1) copy it to topic_action_buffer
      1.1.2) if character == '|'
         1.1.2.1) increase delimiters_count
2) perform a lookup in the prefix tree and assign the results to possible_asts
3) if number of possible_asts == 0
   3.1) report an error
4) if number of possible_asts == 1
   4.1) return possible_asts[0]
5) while not is_end_of_message
   5.1) for each character in buffer
   5.2) copy it to msg_buffer
   5.3) look it up in the prefix tree
   5.4) if possible_asts == 1
      5.4.1) return possible_asts[0]
   5.5) parse msg_buffer normally

Given that:
- 5.5 exists because the message is not cached.
- 5.3 should be done only if there are enough characters; "enough" can be determined by benchmarking.
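The two phases above can be sketched with a plain dict-based trie. Everything here (the helper names, the sample message templates) is illustrative, not taken from any deepstream codebase:

```python
# Phase 1 reads up to the second '|' (topic + action) and narrows the
# candidate message templates via a trie lookup; phase 2 would keep
# consuming characters until exactly one candidate remains.

def trie_insert(trie, key, value):
    """Insert key into a nested-dict trie, storing value at the leaf."""
    node = trie
    for ch in key:
        node = node.setdefault(ch, {})
    node["$"] = value  # "$" marks end-of-key

def trie_lookup(trie, prefix):
    """Return the values of all keys starting with prefix (the possible_asts)."""
    node = trie
    for ch in prefix:
        if ch not in node:
            return []
        node = node[ch]
    results, stack = [], [node]
    while stack:
        n = stack.pop()
        for k, v in n.items():
            if k == "$":
                results.append(v)
            else:
                stack.append(v)
    return results

# Hypothetical message templates, just for illustration.
trie = {}
trie_insert(trie, "E|S|", "event-subscribe")
trie_insert(trie, "E|EVT|", "event-emit")
trie_insert(trie, "P|RES|", "rpc-response")

def parse(buffer):
    delimiters_count, topic_action_buffer = 0, ""
    for ch in buffer:                                      # step 1
        topic_action_buffer += ch
        if ch == "|":
            delimiters_count += 1
            if delimiters_count == 2:
                break
    possible_asts = trie_lookup(trie, topic_action_buffer)  # step 2
    if not possible_asts:                                   # step 3
        raise ValueError("unknown message: %r" % topic_action_buffer)
    if len(possible_asts) == 1:                             # step 4
        return possible_asts[0]
    return possible_asts  # step 5 (further disambiguation) not shown
```

The payoff of step 4 is that most messages resolve after reading only the topic and action, without scanning the full payload.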
I reviewed the algorithm a little further; we need to test it with both a trie (prefix tree) and a Patricia tree (radix tree).
Here's a specialized Radix tree for IP addresses: https://github.com/postwait/node-iptrie
The resource links indicated in the first post are not working:
Great resources to get started:
Writing a deepstream client
Deepstream Messaging structure
Detailed Cucumber Specs for Incoming and Outgoing messages (and associated behaviors)
Updated
Are you looking to build a C client? =D
I'll be happy to work with someone on this. I don't have the time to do this on my own right now.
I have contributed a working build though :)
Are you looking to build a C client? =D
Yes and No.
As of now, I'm trying to understand the complexity of the message structure and evaluating the ROI (in terms of features) of building a new C client.
My primary concern is performance. We recently built an Autobahn-based JavaScript/browser to C/C++ pipeline. The crossbar.io server is a very lightweight, low-footprint server. For example, as one study shows, a simple $25 Raspberry Pi (700MHz with 512 MB) can push out 6000 events/sec at roughly 60% CPU on Autobahn.
If, at the extra cost of throwing more memory and CPU at it, we end up achieving the same or similar performance, then it becomes a questionable effort.
Any performance benchmarks (and memory/IO requirements) of deepstream can help here.
My second concern is the features. Once again taking the example of WAMP/Autobahn: it offers browser-based RPC providers, session meta events, router-to-router clustering, a REST HTTP bridge, etc.
From the deepstream docs I am struggling to see if these are possible.
As for the C client, it looks like some effort is already going on at https://github.com/deepstreamIO/libdeepstream - I could not find much code inside it, though. What is its status?
As I said, I just contributed a build file and the CI process. Nothing else was really done.
Interesting use case. If I understand correctly, you want to know how many events a Raspberry Pi can broadcast per minute while running the deepstream server. I honestly have no clue, but we just ordered a Raspberry Pi to try that out.
All of our performance tests are currently against data-sync, which is quite a bit more load intensive since it has to interact with cache/storage and do a few merges. Events, on the other hand, are extremely lightweight, since an event is just an echo in a way. We are working on a performance harness and can provide more details after that.
Our current results are based on deepstream using ws and node 0.12. We have now changed to uws ( a C++ websocket server ) and node LTS, which should bring performance improvements; the benchmarks are also automated to run per release.
Installation is around ~15mb and does not require any external dependencies.
Regarding features, what we currently provide ( in broad strokes ) is:
- data-sync, so that you can use tiny JSON snippets to store and share state, only requiring deltas to be sent on the wire.
- events, including listening in case of active providers.
- RPCs, which work by trying to find a client connected to the server that can provide the RPC; if none is found, the server routes the RPC to other nodes within the cluster.
- authentication, either via custom logic, a HTTP endpoint for user login, or a file-based name/password config file. All of these also provide custom user metadata.
- permissioning, where you can permission each message depending on its content, the user that performed the event, and other record states.
- fully customisable plugins: you can use any type of storage/cache/messaging/logger/auth/permission plugin or write your own.
Things like session meta events can be achieved using the features above, and an HTTP bridge has not yet been implemented, since deepstream is a realtime server, meaning connections are generally two-way - websocket or TCP. However, it would literally be a combination of setting up a connection, sending data, and closing it off/replying to the request once you get data.
In conclusion, it would be interesting to know your exact use case and which features you would want to use. It would also be good to know why crossbar.io doesn't fulfill your use case.
WIP: https://github.com/CenacleResearch/deepstream-cpp-client
Where can I find the exact message serialization format used for RPC parameters?
https://deepstream.io/info/specs/rpc/
https://deepstream.io/info/protocol/all-messages/
Look for rpc and you'll see all the individual messages!
Those links just cover the connectivity and publishing part, not how the actual parameters are serialized by the server/client.
For example, for a method such as function countKeys(obj, len),
how are the parameters obj and len
to be serialized over the wire (and deserialized at the other end on the client)? When a client makes a request, what exact format of arguments does the server expect?
Does deepstream use JSON / MessagePack / similar, or is any custom serialization (specified by the method provider) fine?
@KrishnaPG We use a proprietary type serialization mechanism.
https://deepstream.io/docs/common/constants/#data-types
Generally, JSON objects are serialized using normal JSON serialization, and fields whose data types are dynamic are prefixed with 'O'.
For example, if an RPC was to return a value of 'ABC' it would be a message like:
P|RES|toUppercase|<UID>|SABC+
If it was a number it would be:
P|RES|toUppercase|<UID>|N1+
True:
P|RES|toUppercase|<UID>|T+
False:
P|RES|toUppercase|<UID>|F+
Object:
P|RES|toUppercase|<UID>|O{"result":{"parts":["This","totally","worked","!"]}}+
( substituting the actual separators, ASCII char 31 and char 30, with | and + for readability )
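The prefix scheme above is easy to sketch. This is an illustrative Python sketch, not the actual client implementation; the helper names are mine, and the 'L'-for-null prefix is an assumption taken from the data-types constants page:

```python
import json

PART_SEP = chr(31)  # shown as '|' above for readability
MSG_SEP = chr(30)   # shown as '+' above

def encode_value(value):
    """Prefix a payload with its type marker, per the examples above."""
    if value is True:
        return "T"
    if value is False:
        return "F"
    if value is None:
        return "L"  # assumption: 'L' for null, per the data-types constants
    if isinstance(value, (int, float)):
        return "N" + repr(value)
    if isinstance(value, str):
        return "S" + value
    return "O" + json.dumps(value)  # objects and arrays fall back to JSON

def decode_value(payload):
    prefix, rest = payload[0], payload[1:]
    if prefix == "T":
        return True
    if prefix == "F":
        return False
    if prefix == "L":
        return None
    if prefix == "N":
        return float(rest)
    if prefix == "S":
        return rest
    if prefix == "O":
        return json.loads(rest)
    raise ValueError("unknown type prefix: " + prefix)

def rpc_response(rpc_name, uid, value):
    """Build a P|RES|... message like the examples above."""
    return PART_SEP.join(["P", "RES", rpc_name, uid, encode_value(value)]) + MSG_SEP
```

So rpc_response("toUppercase", uid, "ABC") yields the SABC example from above, with the real separator characters in place of | and +.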
Thanks. That illustrates the return values, which is fine, since the return value is usually a single entity.
I am more interested in how to serialize the function arguments/parameters, which are usually more than one, when making a call.
For example, consider this function:
function find_intersection(int array1[15], object array2[20], function intFn)
This method accepts two arrays and one callback function. How do we serialize these array objects and send them across the wire?
I suppose a simple JSON would do?
What happens if we use some other format (say FlatBuffers) to send parameters and receive results? Would the deepstream server care about the particular format of the parameters or return values?
For example, what is preventing me from sending a binary result as:
P|RES|toUppercase|<UID>|O{"result":"XDFUKLKJFDAILK2392LKJALSF=="}+
where the XDFUKLKJFDAILK2392LKJALSF
is actually, say, the base64-encoded value of some complex data structure, and the client knows how to decode it?
Nothing would stop you from doing that ( other than the base64 encoding being larger than the maximum message size allowed ). You can send it either as a string or as content within a JSON field.
We generally have a text protocol ( not binary ), and you can use it in any way as long as messages don't get too big ;)
Thank you. Is there a special symbol for arrays and binary data?
To double-confirm: the function parameters can be serialized in JSON format and sent across with the O symbol, correct?
Nope, arrays are also serialised via JSON:
O[ "a", "b"]
Binary data ( if base64-encoded ) can just use the string prefix:
SXDFUKLKJFDAILK2392LKJALSF
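Putting both answers together, a small illustrative sketch (the helper names are mine) of an array payload behind the O prefix and a base64 blob behind the S prefix:

```python
import base64
import json

def encode_array(values):
    # Arrays are serialized as JSON behind the 'O' prefix.
    return "O" + json.dumps(values)

def encode_binary(blob):
    # Binary data is base64-encoded and sent behind the string prefix 'S';
    # the receiving client is responsible for decoding it again.
    return "S" + base64.b64encode(blob).decode("ascii")

def decode_binary(payload):
    assert payload.startswith("S")
    return base64.b64decode(payload[1:])
```

The server never inspects the base64 content, so the round-trip is entirely a client-side convention.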
Are there deviations from the message formats described in the docs? Here, for example, it says to expect A|A+
upon successful authorization. However, I am seeing the below from the server:
A|A|O{"favouriteColor":"red"}+
What is that O{"favouriteColor":"red"} part, and how should I deal with it? Is it safe to ignore?
Similarly for Invalid auth, the doc says to expect
A|E|INVALID_AUTH_DATA|Sinvalid authentication data+
but I am seeing the below returned from the server:
A|E|INVALID_AUTH_DATA|U+
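For what it's worth, a client can treat the third field of the auth ack as an optional typed payload (server-provided client data). An illustrative sketch, with '|' and '+' standing in for the ASCII 31 and 30 separators:

```python
import json

def parse_auth_ack(message):
    """Parse an A|A[|payload]+ auth response whose optional third field is a
    typed payload. Returns (ok, data); data is the decoded 'O' payload on
    success, or the error code on failure."""
    parts = message.rstrip("+").split("|")
    if parts[:2] == ["A", "A"]:
        data = None
        if len(parts) > 2 and parts[2].startswith("O"):
            data = json.loads(parts[2][1:])  # server-provided client data
        return True, data
    if parts[:2] == ["A", "E"]:
        return False, parts[2] if len(parts) > 2 else None  # error code
    raise ValueError("not an auth response: " + message)
```

Under that reading, the extra O{...} field is safe to ignore if the client has no use for the login data.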
Good catch! We upgraded the auth API and the docs haven't been updated yet.
The other spec that is out of date is listening. Otherwise it's up to date!
One problem I am seeing is the server being too impatient with long-running RPC calls.
When the RPC provider does not give back a result within a few seconds or so, the server sends "RESPONSE_TIMEOUT" to the other client (the caller).
How do I mitigate this? RPC calls (especially those provided by C/C++ back ends) can go on for quite some time, and as long as the caller and provider are OK with that, the deepstream server should not have a problem with it.
I do not see these timeouts mentioned much in the docs for RPCs.
I would expect the server not to decide by itself, and rather to wait for the provider to either reject the call or for the connection to terminate.
Also, does deepstream support 'progress updates' from the RPC provider? Scenarios are:
- Streamed results: a continuous stream of data emitted over a long period as the result of a call. Example: lines from a file, one by one.
- Progress updates: intermittent data (not the actual result) sent before the actual result. Example: the progress percentage of a word count over a file.
There is a timeout you can set on the server specifically for how long it should wait for an RPC to complete:
https://deepstream.io/docs/server/configuration/
rpcProviderQueryTimeout
Sets how long deepstream will wait for responses after querying for RPC providers.
Default: 1000
If you want the server to continue listening forever, you can put in a huge timeout. I'm not certain whether the server actually acknowledges that a client it is querying has dropped midway and rejects the RPC, since that is precisely what the timeout is for ( plus other edge cases where the connection doesn't drop but becomes super unresponsive ).
Also, does DeepStream support 'progress updates' from the RPC provider? Scenarios are:
No, RPCs are not streams. What you can do for these scenarios is have the RPC return a unique event/record name that the client can subscribe to for those updates, which carries the state.
client:
ds.rpc.make( 'file-loader', 'filename1.txt', ( err, result ) => {
  if( !err ) {
    ds.event.subscribe( result, ( percentage ) => {
      console.log( `file has loaded by: ${percentage}` )
    } )
  }
} )
provider:
// assumes the progress-stream npm package: var progress = require('progress-stream')
ds.rpc.provide( 'file-loader', ( filename, response ) => {
  var eventUID = ds.getUid();
  var stat = fs.statSync(filename);
  var str = progress({
    length: stat.size,
    time: 100
  });
  str.on('progress', function( p ) {
    ds.event.emit( `file-progress/${eventUID}`, p.percentage )
  });
  fs.createReadStream(filename)
    .pipe(str)
    .pipe(fs.createWriteStream(output)); // output: destination path, defined elsewhere
  response.success( `file-progress/${eventUID}` );
});
The RPC call acknowledgement definition mismatches between these two docs:
Here it indicates:
P|A|REQ|rpcName|rpcID+
whereas here it indicates:
P|A|rpcName|<UID>+
P|A|rpcName|<UID>+ looks like the correct one (based on the server's responses).
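A client-side check against the corrected ack shape could look like this sketch ('|' and '+' again stand in for the ASCII 31/30 separators):

```python
def parse_rpc_ack(message):
    """Accept the corrected ack format P|A|rpcName|<UID>+ and return (name, uid).
    The older documented shape with an extra REQ part is rejected."""
    parts = message.rstrip("+").split("|")
    if len(parts) == 4 and parts[:2] == ["P", "A"]:
        return parts[2], parts[3]
    raise ValueError("unexpected RPC ack: " + message)
```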
@KrishnaPG You're right! This has just been fixed in 2.0: https://github.com/deepstreamIO/deepstream.io/
Hey everyone! I was wondering if anyone has any updates or links to what they may have been working on in C/C++?
Hi guys, any plan for this?
The repo has not been updated for 3 months.
hey @zsinryu, this has been put on hold because of the tremendous amount of time it takes and the very little traction it received =(. If anyone at all would like to contribute, we would be more than happy to pair/collaborate.
@yasserf did anything change in the protocol? I downloaded the latest Windows version of deepstream (deepStream-Windows-3.00.zip) and it works fine with the browser client on port 6200 over websockets.
But with a custom C++ client (based on libuv) over a TCP connection, the socket is getting closed after sending the auth message (A|REQ|{"username":"XXX","password":"YYY"}+).
Earlier (with 2.0 in Nov 2016) the same C++ code over TCP was receiving the login challenge/response etc. But with the latest version, the deepstream server is not even receiving the auth details (no debug log in the console on the server side).
Is the TCP protocol still supported?
TCP protocol support was dropped as of 1.0; currently we only support websockets (which is a bit of a headache in C++ land with POCO).
https://github.com/tnie/deepstream.io-client-cpp
@yasserf was TCP support ever re-added (e.g., with the recent 4.0 changes)?
hey @codyaray
TCP was never re-added, no. You could add a custom connection endpoint that would allow TCP to be used, but you won't be able to use them side by side ( it's hard to get a single server to do normalisation between protocols and survive ).
4.0 did, however, introduce a new protocol that uses binary frames and sends all header fields in a JSON object. This makes writing clients much easier, although a slight performance hit is incurred due to the JSON parsing.
I'm closing this issue since the messages within it relate to V3 and the text protocol, which are outdated, and would probably just confuse anyone starting to look into it.