Comments (7)
Hi Ashley,
This is something that's been on our radar for a while as it's a key part of allowing offline behaviour. I agree with the local queue approach in principle and you'll see recent refactorings to encapsulate message senders and receivers to allow pretty much precisely that.
The one sticking point is that there are lots of cases where you want both local and remote concurrently. Consider this scenario: A FooCommand is both sent and handled by a bunch of machines. The commands tend to need to be done in batches (think image encoding) so any machine in the cluster can generate many commands in one hit. If we have completely local message queuing then the sending machine will be swamped but other machines in the farm will sit idle. Likewise, if we push all the commands to a central queue first then 1) we don't work offline and 2) we pay a latency penalty every single time.
One way I was thinking about doing it was effectively having an outgoing message pump for each queue as well as an incoming one, i.e.:
```
[Bus] -> [Sender] -> Local outbound queue -> [Local Dispatcher Message Pump] -> [Dispatcher]
                                          -> [Local Outbound Message Pump]   -> Azure queue
```
This would allow local handlers to compete by popping messages off the local outbound queue directly whenever they were idle, while another pump competed to push them to the Azure queue. We'd get the best of both worlds: the highest possible throughput locally, while offloading any overflow to the cloud.
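The competing-consumer idea above can be sketched in a few lines. This is a minimal, language-agnostic illustration (Python here, not Nimbus's actual C# types): two pumps race to pop from one shared outbound queue, and whichever is idle takes the next message.

```python
import queue
import threading

def run_competing_pumps(messages):
    """Sketch of two pumps competing on one local outbound queue:
    the local dispatcher and the Azure-bound pump. All names are
    illustrative, not actual Nimbus APIs."""
    outbound = queue.Queue()
    for m in messages:
        outbound.put(m)

    handled_locally, pushed_to_azure = [], []

    def pump(sink):
        # Keep popping until the queue is drained; whichever pump is
        # free grabs the next message, so an idle consumer never waits.
        while True:
            try:
                msg = outbound.get_nowait()
            except queue.Empty:
                return
            sink.append(msg)

    local = threading.Thread(target=pump, args=(handled_locally,))
    remote = threading.Thread(target=pump, args=(pushed_to_azure,))
    local.start(); remote.start()
    local.join(); remote.join()
    return handled_locally, pushed_to_azure
```

Every message ends up in exactly one of the two sinks; the split between them depends purely on which pump was idle at the time, which is the load-balancing behaviour described above.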
Thoughts?
from nimbus.
First off, can you point me to the code responsible for creating Message Pumps, and clarify their multiplicity relationship with queues?
I see your point, and I think the outgoing pump is a good solution. So we post our message to the F# (it doesn't have to be F#, I just like it) outbound queue, and we then have two pumps pulling messages out of it: one passes messages on to the local incoming queue for that handler, and then to the dispatcher; the other pushes them out to the Azure queue. I think I'll put a configurable throttle (maybe a 2ms wait or something) on the outbound pump for testing, so we can minimise the number of messages that go all the way out to Azure when they don't need to.
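A minimal sketch of that throttle idea, with a plain deque standing in for the outbound queue and a list standing in for the Azure queue (names are illustrative, not Nimbus APIs):

```python
import time
from collections import deque

def outbound_pump(outbound, azure, throttle_ms=2):
    """Illustrative throttled Azure-bound pump: it waits a configurable
    interval before each pop, so an idle local dispatcher gets first
    crack at any message sitting in the outbound queue."""
    while outbound:
        time.sleep(throttle_ms / 1000.0)  # configurable bias toward local delivery
        if outbound:  # a local dispatcher may have drained it during the wait
            azure.append(outbound.popleft())
```

The throttle doesn't change correctness, only placement: with busy local handlers most messages never leave the machine, and the Azure pump only picks up the overflow.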
Another potential scenario is that you want the physical machine/process to be a boundary, ensuring messages are only delivered to microservices within your microservice (metamicroservices?) that are implicitly tied together, without tightly coupling them. Examples might be some kind of performance-metric service, access to a local datasource, or an application that you decide is best deployed on a one-machine-per-customer basis.
So who should be responsible for configuring whether to use local messaging and the outbound Azure pump? Since the glue that holds it all together is the message, maybe an optional attribute could be applied to messaging contracts, looking something like:
```csharp
[DeliveryPreference(DeliveryMethods.Local)]
[DeliveryPreference(DeliveryMethods.External)]
[DeliveryPreference(DeliveryMethods.Any)]      // the default if the attribute isn't set
```
I have mixed feelings about Attributes, but I think this way is the least intrusive for folks who already have a lot of messages.
We can then wrap a Mailbox.Scan call in a function on the outbound queue that returns the first message flagged with either DeliveryMethods.Any or the delivery method appropriate to that pump (DeliveryMethods.External or DeliveryMethods.Local), so the message pumps don't touch messages that are explicitly flagged for the other delivery method.
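The scan behaviour described above could look something like this sketch (Python rather than F#'s Mailbox.Scan, and all names are illustrative): each pump asks for the first message it is allowed to deliver, leaving the rest in place.

```python
# Stand-ins for the proposed DeliveryMethods values.
ANY, LOCAL, EXTERNAL = "Any", "Local", "External"

def scan_for(queue_contents, wanted):
    """Return (and remove) the first message whose delivery preference is
    either Any or this pump's own method, skipping messages explicitly
    flagged for the other delivery method. Mimics what a Mailbox.Scan
    wrapper on the outbound queue would do."""
    for i, (preference, payload) in enumerate(queue_contents):
        if preference in (ANY, wanted):
            return queue_contents.pop(i)
    return None  # nothing this pump may take; leave the queue untouched
```

Note that a Local-only message left at the head of the queue doesn't block the external pump from reaching an Any message behind it, which is exactly why a scan is needed rather than a plain pop.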
The devil is in the details. But I'll start fleshing things out over the next week.
I've just about got a proof of concept together for the local messaging. I'm supporting two scenarios: handlers that exist in-process (handled by direct access to the queue) and handlers that exist in different processes on the same machine (handled by access via pipes). I've written a simple non-blocking library that uses named pipes to emulate Azure Topics. The way it'll work is that when everything is initially wired up, the manager checks whether the local queue is known to it. If it is, it's an in-process message. If not, it checks a machine-wide resource where each application declares its existence (at this stage I've got a Windows Service). If it finds the reference there, it's a cross-process message. If the application isn't found there (or fails to respond to a handshake), it's an Azure message. I've also handled the scenario of local services that aren't all started at the same time, so the manager will pick up local services that start after the initial setup.
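The routing decision described above boils down to a three-way fallback. A minimal sketch, with illustrative names (the registry lookup and handshake are stand-ins for the Windows Service and named-pipe handshake, not the actual PoC types):

```python
def route(queue_name, in_process_queues, machine_registry, handshake):
    """Decide how a message should be delivered:
    - in-process if the manager already knows the local queue,
    - cross-process (named pipes) if a machine-wide registry lists the
      application and it answers a handshake,
    - otherwise out to Azure."""
    if queue_name in in_process_queues:
        return "in-process"
    if queue_name in machine_registry and handshake(queue_name):
        return "cross-process"
    return "azure"
```

The handshake check is what makes the scheme degrade gracefully: a stale registry entry for a dead local service falls through to Azure rather than losing the message.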
Any thoughts?
Check out the queues here: http://wacel.codeplex.com/documentation - I think they would work amazingly well for this.
Argh. Derp. I moved the wrong issue into 3.0 by mistake. Moving it back out for now. I still think this idea is worth discussing, though.
Definitely. Last time I thought about this I was trying to come up with a unified way to do internal/cross-process messaging with named pipes. I'm not so sure that's really the right approach now.