
Comments (7)

uglybugger commented on June 27, 2024

Hi Ashley,

This is something that's been on our radar for a while as it's a key part of allowing offline behaviour. I agree with the local queue approach in principle and you'll see recent refactorings to encapsulate message senders and receivers to allow pretty much precisely that.

The one sticking point is that there are lots of cases where you want both local and remote concurrently. Consider this scenario: A FooCommand is both sent and handled by a bunch of machines. The commands tend to need to be done in batches (think image encoding) so any machine in the cluster can generate many commands in one hit. If we have completely local message queuing then the sending machine will be swamped but other machines in the farm will sit idle. Likewise, if we push all the commands to a central queue first then 1) we don't work offline and 2) we pay a latency penalty every single time.

One way I was thinking about doing it was effectively having an outgoing message pump for each queue as well as an incoming one, i.e.:

[Bus] -> [Sender] -> Local outbound queue -> [Local Dispatcher Message Pump] -> [Dispatcher]
                                          -> [Local Outbound Message Pump] -> Azure queue

This way would allow us to have local handlers compete by popping messages off the local outbound queue directly, provided they were idle, but there'd be another pump competing to push them to the Azure queue. We'd get the best of both worlds, then: the highest possible throughput locally, whilst offloading any overflow to the cloud.
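The competing-consumer idea above can be sketched in a few lines. This is a minimal illustration in Python, assuming a single in-memory outbound queue; the pump names are hypothetical stand-ins for Nimbus's message pumps, and the cloud send is just a list append.

```python
import queue
import threading

# Hypothetical sketch: two pumps compete for the same local outbound queue.
# Whichever pump is idle wins each message, so every message is consumed
# exactly once, either dispatched locally or forwarded to the remote queue.

outbound = queue.Queue()
handled_locally, pushed_to_cloud = [], []

def local_dispatcher_pump():
    # Pops messages and dispatches them in-process.
    while True:
        msg = outbound.get()
        if msg is None:          # sentinel: shut down
            break
        handled_locally.append(msg)

def local_outbound_pump():
    # Competes for the same messages and forwards them to the remote queue.
    while True:
        msg = outbound.get()
        if msg is None:
            break
        pushed_to_cloud.append(msg)  # stand-in for an Azure queue send

pumps = [threading.Thread(target=local_dispatcher_pump),
         threading.Thread(target=local_outbound_pump)]
for p in pumps:
    p.start()

for i in range(100):
    outbound.put(f"FooCommand-{i}")

outbound.put(None)  # one sentinel per pump to shut them down
outbound.put(None)
for p in pumps:
    p.join()
```

The key property is that the queue hands each message to exactly one consumer, so local and remote delivery never duplicate work, and a busy local pump simply loses the race more often.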

Thoughts?

from nimbus.

KodrAus commented on June 27, 2024

First off, can you point me at whatever is responsible for creating message pumps, and tell me what their multiplicity relationship is with queues?

I see your point, and I think the outgoing pump is a good solution. So we post our message to the F# (it doesn't have to be F#, I just like it) outbound queue, and we then have two pumps pulling messages out of it: one passes them on to the local incoming queue for that handler, and then to the dispatcher; the other pushes them out to the Azure queue. I think I'll stick a configurable throttle (maybe a 2ms wait or something) on the outbound pump for testing, so we can minimise messages that go all the way out to Azure when they don't need to.
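That throttle could be as simple as a deliberate pause before each poll on the outbound pump. A sketch of the idea in Python, assuming the same in-memory outbound queue; the function name, parameters, and the `send_to_azure` callback are all hypothetical:

```python
import queue
import threading
import time

# Hypothetical sketch of the configurable throttle: the outbound pump
# sleeps briefly before each poll, giving the local dispatcher pump first
# pick, so only overflow tends to reach the (stubbed) Azure send.

THROTTLE_SECONDS = 0.002  # the "maybe a 2ms wait" mentioned above

def throttled_outbound_pump(outbound, send_to_azure, stop):
    """Forward messages to the remote queue, pausing before each poll so
    local handlers can win the race for messages that can stay local."""
    while not stop.is_set():
        time.sleep(THROTTLE_SECONDS)       # deliberate head start for local pumps
        try:
            msg = outbound.get(timeout=0.01)
        except queue.Empty:
            continue                       # nothing waiting; poll again
        send_to_azure(msg)
```

Because the throttle only delays polling (rather than filtering messages), the cloud path still drains the queue whenever local handlers fall behind.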

Another potential scenario is that you want the physical machine/process to be a boundary, ensuring messages are only delivered to microservices within your microservice (metamicroservices?) that are implicitly tied together, without having to tightly couple them. An example might be some kind of performance-metric service, or access to a local data source. Or perhaps an application that you decide is best deployed on a one-machine-per-customer basis.

So who should be responsible for the configuration that determines whether to use local messaging and the outbound Azure pump? Since the glue that holds it all together is the message, I thought maybe an optional attribute could be applied to message contracts, looking something like:

[DeliveryPreference(DeliveryMethods.Local)]
[DeliveryPreference(DeliveryMethods.External)]
[DeliveryPreference(DeliveryMethods.Any)]   // the default if the attribute isn't set

I have mixed feelings about Attributes, but I think this way is the least intrusive for folks who already have a lot of messages.

We can then wrap a Mailbox.Scan call in a function on the outbound queue that gets the first message flagged with either DeliveryMethods.Any or the matching DeliveryMethods.External/DeliveryMethods.Local, so the message pumps don't touch messages that are explicitly flagged for the other delivery method.

The devil is in the details. But I'll start fleshing things out over the next week.


KodrAus commented on June 27, 2024

Just about got a proof of concept together for the local messaging. I'm supporting two scenarios: handlers that exist in-process (handled by direct access to the queue) and handlers that exist in different processes on the same machine (handled by access via pipes). I've written a simple non-blocking library that uses named pipes to emulate Azure topics. When everything is initially wired up, the manager works out where a message belongs:

1. If the local queue is known to the manager, it's an in-process message.
2. If not, the manager checks a machine-wide resource where each application declares its existence (at this stage I've got a Windows Service). If it finds the reference there, it's a cross-process message.
3. If the application is not found there (or fails to respond to a handshake), it's an Azure message.

I've also handled the scenario of local services that aren't all started at the same time, so the manager will pick up local services that start after the initial setup.
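The resolution order above boils down to a three-way fallback. A sketch in Python, assuming simple set-based registries; the names (`resolve`, `handshake`, the registry structures) are hypothetical stand-ins for the in-process queue table, the machine-wide Windows Service registry, and the named-pipe handshake:

```python
# Hypothetical sketch of the handler-resolution order: in-process first,
# then cross-process via the machine-wide registry (guarded by a
# handshake), then fall back to the Azure queue.

IN_PROCESS, CROSS_PROCESS, AZURE = "in-process", "cross-process", "azure"

def resolve(queue_name, in_process_queues, machine_registry, handshake):
    if queue_name in in_process_queues:
        return IN_PROCESS                 # direct access to the local queue
    if queue_name in machine_registry and handshake(queue_name):
        return CROSS_PROCESS              # deliver over named pipes
    return AZURE                          # not local (or dead): go to the cloud

# Example: "encoder" runs in-process, "metrics" in another local process,
# "stale-service" is registered but dead, and "billing" is nowhere local.
local = {"encoder"}
registry = {"metrics", "stale-service"}
alive = lambda name: name != "stale-service"   # stand-in handshake

print(resolve("encoder", local, registry, alive))        # in-process
print(resolve("metrics", local, registry, alive))        # cross-process
print(resolve("stale-service", local, registry, alive))  # azure (failed handshake)
print(resolve("billing", local, registry, alive))        # azure
```

The handshake guard matters for exactly the late-start scenario mentioned: a stale registry entry degrades gracefully to Azure delivery rather than sending messages into a dead pipe.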

Any thoughts?


jhashemi commented on June 27, 2024

Check out the queues here: http://wacel.codeplex.com/documentation. I think they would work amazingly well for this.


uglybugger commented on June 27, 2024

Argh. Derp. I moved the wrong issue into 3.0 by mistake. Moving it back out for now. I still think this idea is worth discussing, though.


KodrAus commented on June 27, 2024

Definitely. Last time I thought about this I was trying to come up with a unified way to do in-process/cross-process messaging with named pipes. I'm not so sure that's really the right approach now.


KodrAus commented on June 27, 2024

#33

