ricejasonf / nbdl
Network Based Data Layer: C++ Framework for Managing Application State Across Network
License: Boost Software License 1.0
Apparently this was not a valid use of capturing values in a parameter pack:
In file included from /opt/src/test/nbdl/core/delta.cpp:8:
/opt/src/include/nbdl/delta.hpp:69:42: warning: lambda capture 'e1_arg' is not used
[-Wunused-lambda-capture]
return nbdl::bind_sequence(e2, [&e1_arg ...](auto const& ...e2_arg)
... or the warning is erroneous.
Currently a store matching the tag nbdl::uninitialized will cause nbdl::context to trigger a read action. This is used by map_store when the key is not in the map.
The name of the tag should be more descriptive (like nbdl::trigger_read). For basic_map_store the default tag could be nbdl::not_in_set.
Even though this project is in a state of flux, the Readme should have a brief explanation with a diagram showing the basic set of components and the flow of data. A summary of goals and reach goals would also be useful.
The test for the context dispatch stuff currently takes about 15s and scales at about 1s for every message added. Some of this could be because of the overhead of using catch.hpp, which is another issue, but the use of function overloading for the logic is probably a large factor. The C++17 if constexpr feature would not only provide a significant performance improvement but also drastically improve the readability of the message dispatching logic. It needs some refactoring anyway, so this is a good opportunity.
This implies that Nbdl will be exclusive to clang trunk for a while.
Sending messages is currently "fire and forget", which can require unnecessary queueing and can also create message flooding issues.
It would probably be best to add a new function called apply_message_promise. This depends on switching to the external full_duplex library for promises.
The use of pseudo-concepts in Boost.Hana is a tremendous factor in code organization, documentation, and early error detection, as well as in cleaning up SFINAE. Concepts could use, or possibly replace, all of the traits defined in TypeTraits.hpp. Concepts could also be used as blueprints for user-defined components.
The following concepts should be created in nbdl:
MessageReceiver
Message
UpstreamMessage
DownstreamMessage
Store - currently known as StoreContainer in definitions
BindableSequence - means that the object can be converted using a binder
BindableMap
BindableMapKey
Binder, BindReader, BindWriter (each for BindableMap and BindableSequence)
Entity
Variant
DefaultConstructible
Replace nbdl::promise and friends with the facilities provided in the external FullDuplex library.
When an entity is created on the client side it does not yet have an id assigned to it by the persistence layer. If the application keeps a record of the entity before it has an id, then a random 128-bit UID will be used as a temporary identifier.
For these types of entities the path will be a variant of a UID and the key specified by the user. This does not apply to the Create Message, as it will use a placeholder and the Provider will use the Message's UID.
Since C++ libraries typically use identifiers that are lower-case and underscore delimited, it would make sense to have Nbdl do the same. The Boost style guidelines appear to be an appropriate standard to follow for the most part. If template parameters remain capitalized and camel case, it would also prevent collisions with the types they might shadow, which is really annoying.
There is existing code to allow an additional payload attached to upstream create messages. Currently, this information is not part of the access point meta object, so it can never be set. It would probably be best to put the definition for it in the AccessPoint itself and never store any information in Actions.
Something like this:
AccessPoint:
EntityName: "foo"
Actions: [ Create, Read, Update, Delete ]
PrivatePayload: my_ns::my_private_payload
Consumers should be defined in the same way that Providers are, but instead of being stored as a map they can just be a tuple.
Figure out how providers and consumers can push messages to the context. Ideally, providers should be limited to creating downstream messages, and consumers should be limited to creating upstream messages.
Something like this only maybe less object oriented:
template <
  typename ProviderMap,
  typename Consumers,
  typename StoreMap>
class Context
{
  ProviderMap providers;
  Consumers consumers;
  StoreMap stores;

  // should be callable only by providers
  template <typename Message>
  auto pushDownstream(Message&&)
  {
    // access providers, consumers and stores
  }

  // should be callable only by consumers
  template <typename Message>
  auto pushUpstream(Message&&)
  {
    // access providers, consumers and stores
  }

public:
};
For applications, the typical use case is such that the user makes a request (upstream message) and expects to know its status at any given moment. For access points that don't opt out of this, an additional store should be created that tracks requests by the message's UID. It's also possible to just use a single store to track all possible requests. It depends on how much we want to rely on the uniqueness of the locally generated UID.
This request tracking store should contain a variant that will likely have the following possible states:
Note that if the local state changes and would emit a downstream message, the request should be marked as resolved.
Note that there needs to be a way to know when to delete the tracking object from the store.
Note that the need for request tracking only applies to Create, Read and Update actions.
Right now Stores have the responsibility of emitting changes to listeners via the StoreEmitter. When a message is given to a store, it would probably be better to have it simply indicate whether that message should be propagated further upstream/downstream and let the Context pass it on.
This would make pub/sub requirements (if any) completely up to the Consumers, which eliminates the need to create custom StoreEmitters that would probably have logic specific to a given Consumer anyway.
Given a tuple of compile-time strings, a static hash map of run-time strings to indices could be generated to use as a lookup to get the appropriate compile-time string. With that it would be possible to convert a map with run-time strings as keys to something like a hana::Struct.
This would be useful for stuff like converting config files to C++ objects.
Such a function would likely return an nbdl::variant.
Since a Store may contain local changes, it would need to queue downstream deltas and squash them into one once it receives confirmation from the root of all the actions before it. nbdl::apply_action just returns a bool, so this will not be enough to propagate the squashed delta.
A new function, nbdl::apply_action_downstream_delta, should return an optional delta. It should be the basis for a new concept called LocalStore, which implies that the store applies state changes before they are confirmed by the root server.
Communicating updates with the entire state of an object will typically be more IO intensive and creates the possibility of unnecessary conflicts between competing changes, resulting in loss of data.
In addition to possibly having a Delta concept, there should be a template that can take an Entity and store a vector of the fields that have changed.
It is currently difficult to get the type of an endpoint in advance, which is critical because the user needs to be able to store it and have access to it to send messages.
There is also the idea of the user owning the send queue and triggering an event for the endpoint to process the queue when it has messages to send (which is currently done internally).
Additionally, endpoints need to be easy to compose so the user can add application-level handshake events that must complete before message send/receive events start processing.