
Comments (6)

krasserm commented on June 29, 2024

If you want to achieve that a message is redelivered only to that external system that has been unavailable during the initial delivery, use a separate channel for each destination. For example (with cardinalities in parentheses)

processor(1) -> router(1) -> channel(n) -> destination(n)

ensures that delivery is done for each destination individually and independently. This can already be done with eventsourced without any further additions. In my previous post I was referring more to the following scenario.

processor(1) -> channel(1) -> router(1) -> destination(n)

In this case, if one or more destinations do not confirm, the messages would be re-delivered (re-routed) to all destinations. This would, however, require an addition to the router (or the multicast processor).

As far as I understood, you'd like to have support for the first scenario. Would you also like to have support for the second scenario?
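The first scenario can be sketched with a small, self-contained example. This is a hypothetical illustration in plain Scala, not the eventsourced API: because each destination has its own channel, only the unavailable destination accumulates messages for redelivery.

```scala
// Hypothetical sketch (names are illustrative, not the eventsourced API):
// scenario 1, one delivery queue per destination, so redelivery is
// independent per destination.
object PerDestinationDelivery {
  // A destination either confirms a message (true) or fails (false).
  type Destination = String => Boolean

  // Deliver `messages` to each destination independently; return, per
  // destination, the messages still pending (to be redelivered later).
  def deliver(messages: List[String],
              destinations: Map[String, Destination]): Map[String, List[String]] =
    destinations.map { case (name, dest) =>
      name -> messages.filterNot(dest) // keep only unconfirmed messages
    }

  def main(args: Array[String]): Unit = {
    val dests = Map(
      "db"     -> ((_: String) => true),  // always confirms
      "remote" -> ((_: String) => false)  // unavailable: never confirms
    )
    val pending = deliver(List("e1", "e2"), dests)
    println(pending("db"))     // List()      -- nothing to redeliver
    println(pending("remote")) // List(e1, e2) -- redelivered only here
  }
}
```

In the second scenario there would be a single pending list shared by all destinations, so one unconfirmed destination would trigger redelivery to everyone.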

from eventsourced.

rocketraman commented on June 29, 2024

Currently I am interested in the first scenario. But as I understand it, if I create n separate channels, then the message will be written to disk n times, which is what I am trying to avoid. Is my understanding correct?


krasserm commented on June 29, 2024

Ok now it's clear to me what you want. However, the Multicast processor of the lib serves a different purpose: it allows several event-sourced processors to 'share' the same entry in the journal.

To get the same optimization for n reliable channels (such that they 'share' the message to be delivered in the journal) does indeed require an addition to the library. You cannot use Multicast for that. I think it makes sense to implement that optimization.

In the meantime, I recommend using n reliable channels as proposed in my previous message. This will involve more disk IO but won't require more disk space long-term, as messages written by reliable channels get deleted after delivery.


rocketraman commented on June 29, 2024

My events can be quite large (they contain a bunch of binary data), which is why I was trying to avoid writing them to the journal twice.

I will do some performance testing with the dual channel approach you describe and see if it is ok.


krasserm commented on June 29, 2024

Maybe I should mention a further alternative: you could also use a reliable channel to a destination that sends the message to a message broker (such as RabbitMQ or whatever), and the message broker is then responsible for optimized storage on disk, distribution to multiple destinations, and dealing with destination failures. This was actually one of the primary use cases for implementing the ReliableChannel in eventsourced; eventsourced is not meant to be a message broker. Nevertheless, I still think the discussed optimization should be implemented. WDYT?

Just out of interest, are you using only the reliable channel of eventsourced or also its event-sourcing features?


rocketraman commented on June 29, 2024

Currently, I'm only interested in the reliable channel part of eventsourced, as it does all the hard work for me in terms of journaling as well as integration with my Akka actors. I may very well need the pure event-sourcing part later.

Thanks for the suggestion of using a broker. At this time I want to avoid the hassle of another large component in my application for a relatively simple requirement. What I am really trying to achieve at this point is to defer the persistence of certain data that is quite large and takes a long time to transfer to my database (and therefore creates locks in the DB that slow down other operations as well). By journaling the data to disk locally, I can transfer the data to the DB asynchronously from the main application thread, while still maintaining the overall persistence guarantee.
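A minimal sketch of that deferred-persistence idea (hypothetical names, plain Scala rather than the eventsourced API): the main thread only appends to a local journal and returns immediately, and a background worker later drains the journal into the slow database.

```scala
import java.util.concurrent.LinkedBlockingQueue

// Hypothetical write-behind sketch, not the eventsourced API.
object WriteBehindJournal {
  // In-memory stand-in for the durable on-disk journal.
  private val journal = new LinkedBlockingQueue[String]()

  // Fast path, called on the main application thread: append to the
  // local journal and return immediately -- no DB locks are taken here.
  def persist(event: String): Unit = journal.put(event)

  // Slow path, meant for a background thread: drain the journal into
  // the database, removing each entry once its DB write has completed.
  // Returns the number of events transferred.
  def drain(writeToDb: String => Unit): Int = {
    var n = 0
    while (!journal.isEmpty) { writeToDb(journal.take()); n += 1 }
    n
  }
}
```

Since LinkedBlockingQueue is thread-safe, `persist` can keep running on the application thread while a single background consumer calls `drain`; the durability guarantee in the real setup would come from the journal being on disk rather than in memory.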

So one actor will persist the data for long-term access and the other actor will take some other action on the data, in this case involving a remote system. Hence my need for a ReliableChannel that can feed two actors.

I definitely think it would be a good optimization -- it would be useful in any use case where the same data needs to be sent to multiple external systems.

