
Comments (5)

adrai commented on June 11, 2024

Hi Steven,
all of these approaches can "work".

About the proposed solution of having only 1 aggregate:
You could have 1 import command (containing all shippings) and then generate various shipping events. MongoDB will then store only the individual events, and the aggregate's business rules only need to verify everything in memory (no complete shipping list is persisted).
To optimize the loading of past shipping events from the eventstore you can make use of the snapshot mechanism (by saving a list of shipping numbers only); that would speed up the loading when handling a new import command.
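A rough sketch of that idea, assuming node-cqrs-domain's `defineEvent` helper (the event name `shippingImported` and the `shippingNumbers` attribute are illustrative, not from this thread): the aggregate keeps only the list of shipping numbers in its state, so the snapshot the library persists periodically contains just that list.

```js
// Event handler: the only state kept on the aggregate is the list of
// shipping numbers, so snapshots stay small and replay stays cheap.
// ('shippingImported' and 'shippingNumbers' are illustrative names.)
module.exports = require('cqrs-domain').defineEvent({
  name: 'shippingImported'
}, function (data, aggregate) {
  var numbers = aggregate.get('shippingNumbers') || [];
  numbers.push(data.shippingNumber);
  aggregate.set('shippingNumbers', numbers);
});
```

A pre-condition or business rule for a new import command can then check the incoming shipping numbers against this in-memory list.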

About the proposed solution of solving this outside of the domain (the read model):
If your system can live with a minimal risk of duplication (due to concurrency), you can solve this outside of the domain.
If you want to eliminate even that risk, you can create a saga that checks, after the domain has processed the import, whether there are any duplicates, and fixes them by sending fixing commands.
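A hedged sketch of such a saga, assuming the companion node-cqrs-saga module; the event name, the fixing command, and the `findDuplicate` read-model helper are all hypothetical:

```js
// Runs after the domain has processed an import: if the read model shows
// a duplicate shipping number, send a fixing command. All names here
// (event, command, findDuplicate) are hypothetical.
module.exports = require('cqrs-saga').defineSaga({
  name: 'shippingImported' // reacts to this event
}, function (evt, saga, callback) {
  findDuplicate(evt.payload.shippingNumber, function (err, duplicate) {
    if (err) return callback(err);
    if (duplicate) {
      saga.addCommandToSend({
        name: 'removeDuplicateShipping', // hypothetical fixing command
        aggregate: { id: duplicate.aggregateId },
        payload: { shippingNumber: evt.payload.shippingNumber }
      });
    }
    saga.commit(callback);
  });
});
```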

And finally always remember: "You don't have to solve everything with CQRS." ;-)

I hope this helps a bit,
Adriano


nanov commented on June 11, 2024

@adrai: we were responding concurrently; apparently it took me more than 10min to write my response. :)

Another approach that could work, using an aggregate per shipment, would be to do this check inside a business rule. Since business rules can be asynchronous, you can maintain an external collection with all shipping numbers, validate whether one is already present using Mongo's atomic operators (update with upsert), and, in case it was already there, reject the command with a BusinessRuleError. Be careful to run this check only for the create/import command (a sketch follows below).
This way you don't have to tie your shipping (aggregate) ID to the shipping number. One thing to take into consideration is that, in theory, there is no way to ensure that the first of two shipments with the same number will be accepted and the second refused; it could go the other way around.
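A minimal sketch of that atomic check with the official MongoDB Node.js driver (the `shippingNumbers` collection and its fields are assumptions): the upsert inserts the document only if the number is not yet present, so exactly one command wins the race.

```js
// Atomically "claim" a shipping number: $setOnInsert only writes when the
// upsert actually inserts, and upsertedCount tells us whether we were first.
async function claimShippingNumber(db, shippingNumber, aggregateId) {
  const result = await db.collection('shippingNumbers').updateOne(
    { _id: shippingNumber },
    { $setOnInsert: { aggregateId: aggregateId, claimedAt: new Date() } },
    { upsert: true }
  );
  return result.upsertedCount === 1; // false => number already claimed
}
```

An asynchronous business rule would call this and pass an error to its callback when it returns false, rejecting the command.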

If there is a need to reject the whole import when any of the shipments fails, then this should be handled with a saga. If not, an import could simply consist of a service that fires a create/import command for each shipment; load balancing can be handled either at the message-bus level or on the sending side.
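For the simple case, the fan-out service is trivial; `sendCommand` below stands in for whatever command bus or hub you use, and the command shape is an assumption:

```js
// Splits a bulk import into one create/import command per shipment.
// sendCommand is a placeholder for your command bus.
function importShipments(shipments, sendCommand) {
  shipments.forEach(function (shipment) {
    sendCommand({
      name: 'importShipping',          // hypothetical command name
      aggregate: { name: 'shipping' }, // one aggregate per shipping
      payload: shipment
    });
  });
}
```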


nanov commented on June 11, 2024

Hi,

you have some misconceptions about how ES works; I suggest you dive a little deeper into the theory of Event Sourcing. There is plenty of information readily available on the web, presented in many different ways.

A few things to note:

  1. There is no collection of aggregates; each aggregate has its own event stream, and this stream is applied every time a new command arrives in order to build the current state of the aggregate.

  2. The state of the aggregate is modeled inside the event handlers; there you can call aggregate.set and affect the state. Again, those handlers are applied in their respective order each time a new command arrives, in order to build the current state of the aggregate (see the sketch after this list).

  3. Pre-conditions do not query anything (in fact, no queries are done on the write side, i.e. the domain). Pre-conditions are executed after the state has been rebuilt, and in them you can perform checks against the state of the aggregate (having both the command and the state data) and reject the command if some condition is not satisfied.

  4. Each event is stored in a separate record (in MongoDB's case, a separate document for each event), which consists of the stream id (aggregate id), some generic event-store data, plus the specific event's data (payload, metadata, etc.) and only that (no state information whatsoever). This means that MongoDB (or any other DB) document size limits do not apply to the aggregate, but only at the event level (very unlikely to be a problem, and if it is, that is a sign of bad design).
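To illustrate points 2 and 3, a minimal sketch using node-cqrs-domain's helpers (the event, command, and attribute names are made up for the example):

```js
// Point 2: event handlers build the state; they run on every replay.
module.exports = require('cqrs-domain').defineEvent({
  name: 'shippingImported'
}, function (data, aggregate) {
  aggregate.set('shippingNumber', data.shippingNumber);
  aggregate.set('status', 'imported');
});
```

```js
// Point 3: pre-conditions run after the state has been rebuilt and can
// reject the command using only command data and aggregate state.
module.exports = require('cqrs-domain').definePreCondition({
  name: 'importShipping',
  description: 'shipping must not already exist'
}, function (data, aggregate) {
  if (aggregate.has('shippingNumber')) {
    return new Error('shipping already imported');
  }
});
```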

That being said, your specific question, as I understand it, has nothing to do with "bulk import", as the same logic applies to each individual shipping, no matter whether it arrives through a bulk or a single operation. Whether you should model your domain with a shippings (many) or a shipping (one) aggregate is hard to say; it depends on what business logic you will perform with those afterwards. In both scenarios you can denormalize into separate read models, so that shouldn't be a consideration.

  • If you choose to go with the shipping (one aggregate per shipping) approach, you can control the uniqueness of the shippings via their ids. Combining the defineCommandAwareAggregateIdGenerator aggregate option and the existing: true command option, you could prevent a shipping from being created twice (see the sketch after this list).

  • If you choose to go with the shippings (one aggregate for all shippings) approach, you should maintain a list of all shipping ids and check against this list inside a pre-condition. Bear in mind that if you have a huge number of shippings, you may run into memory issues here because, as said, state is rebuilt in memory.
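For the first option, a hedged sketch of deriving the aggregate id from the shipping number with `defineCommandAwareAggregateIdGenerator` (the command payload shape is an assumption):

```js
// Derive the aggregate id deterministically from the shipping number, so
// two create commands for the same number target the same event stream.
module.exports = require('cqrs-domain').defineCommandAwareAggregateIdGenerator(
  function (cmd, callback) {
    // cmd.payload.shippingNumber is an assumed payload field
    callback(null, 'shipping-' + cmd.payload.shippingNumber);
  }
);
```

As noted above, combined with the existing: true command option, the domain can then detect a second create for the same number and reject it.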

I personally wouldn't choose the one-aggregate-for-all approach, as you would probably need to maintain some other state for each shipping (position, status, delivered, for example), and with such an approach that would be much more complicated and less performant, not to mention the scaling and distribution limitations.


blissi commented on June 11, 2024

@adrai / @nanov Thanks a lot for your thorough explanations! I will try the following approach now: add an additional collection with the shipping number as a unique key. There will be one import command for each shipping, and in the business rule I will add the shipping number to this collection. If it is already there, the business rule will fail because of the duplicate key.

That saves me from the problem of the shipping-number list growing so large that the aggregate document in the snapshots collection can no longer be saved because of the 16 MB limit. Plus, I don't need to fetch the whole list from the DB server to evaluate whether the shipping number is allowed.
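A sketch of this approach with the MongoDB Node.js driver (the collection name is an assumption): using the shipping number as `_id` gives the unique key for free, and a duplicate insert surfaces as error code 11000, which the business rule turns into a failure.

```js
// Try to register a shipping number; MongoDB's unique _id index makes the
// second insert of the same number fail with a duplicate-key error (11000).
async function registerShippingNumber(db, shippingNumber) {
  try {
    await db.collection('shippingNumbers').insertOne({ _id: shippingNumber });
    return true; // number was free, the import may proceed
  } catch (err) {
    if (err.code === 11000) return false; // duplicate => fail the rule
    throw err;
  }
}
```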


adrai commented on June 11, 2024

sounds good

