Comments (8)
Ok great, thanks for the help. I'll play around with it a bit and let you know how it works out.
from xtra.
Could you elaborate on what kind of blocking task this is? Is it CPU bound? Is it waiting for something to happen? Also, is this a vast majority of what the actor does, or does it happen less frequently?
Two options:
- If it's short, you can maybe use `tokio::task::block_in_place` or `spawn_blocking`. I am not an expert on these functions and I'm not 100% sure what the wider implications are for tokio runtime performance.
- If it's a lot of what the actor does, you can spawn the entire actor on its own dedicated thread, e.g. with `thread::spawn(|| block_on(actor_future))`.

The `ScopedTask` approach would not really help, I think, given that a blocking task cannot be cancelled.
Yes, it's CPU bound: basically it does some fancy parsing that takes around 2-5 seconds of CPU time. I would like to keep the latency as low as possible and not risk waiting that long.
I'll take a look at the second option, it seems to be a bit simpler.
In this case, how would I go about running multiple instances of the actor in parallel? Would I implement a `ManagerActor` that has a `Vec<Address<MyActor>>` and does round-robin load distribution between multiple instances? Or is there a simpler way to associate multiple instances with one address to do this transparently?
> In this case, how would I go about running multiple instances of the actor in parallel? Would I implement a `ManagerActor` that has a `Vec<Address>` and does round-robin load distribution between multiple instances? Or is there a simpler way to associate multiple instances with one address to do this transparently?
xtra has this built in :) You can clone the `Context` and call `run` on it again. This will make the new actor run on the same address as the `Context`. Therefore, whichever actor is available first will take the message to handle. See the message stealing actors example. This is not guaranteed to be scheduled fairly, as it is done for load distribution, as you said. I.e. if one actor is just always ready, the other actors may never have a chance to handle a message. Since this feature is intended for load distribution, this shouldn't be a problem, though: it just means that you have spawned too many actors compared to the workload and are providing extra capacity that is going unused.

In practice, this might actually be accidentally fair, given that waiting actors go into a queue 🤔 It's not guaranteed by the API, though.
> I'll take a look at the second option, it seems to be a bit simpler.
I've done this in the past. The effect is you get one actor thread that basically just parks until a message comes along. This should be ok in most instances.
With a CPU-bound task, does it make sense to have multiple actors running on the same `Context`? In case the thread is 100% busy with one actor handling a message, what is the point of a 2nd or 3rd actor?
I think that may be worth benchmarking before you spawn N of those actors on the same thread. You can of course spawn a new thread for each `run` future.
It may be worth utilizing tokio's `spawn_blocking` here to spawn those `run` futures and have tokio do the thread management.
> In case the thread is 100% busy with one actor handling a message, what is the point of a 2nd or 3rd actor?
If you have multiple messages coming in, then a 2nd actor can work on them. If it's just one message at a time which takes 100% CPU for however long, then it wouldn't be worth it, I agree.
> I think that may be worth benchmarking before you spawn N of those actors on the same thread. You can of course spawn a new thread for each `run` future.
Yea, this is what I meant - otherwise you are essentially running a mini runtime on one thread, right?
> It may be worth utilizing tokio's `spawn_blocking` here to spawn those `run` futures and have tokio do the thread management.
I asked about this on the tokio discord, and it was recommended against, given that the actor will last for a long time, whereas spawn_blocking tasks should be short and for one piece of work. So, you end up using up one of the blocking threads from the pool for a significant amount of time. In that case, it is better to just spawn your own thread. This is for the 1 thread per actor case.
> It may be worth utilizing tokio's `spawn_blocking` here to spawn those `run` futures and have tokio do the thread management.

> I asked about this on the tokio discord, and it was recommended against, given that the actor will last for a long time, whereas `spawn_blocking` tasks should be short and for one piece of work. So, you end up using up one of the blocking threads from the pool for a significant amount of time. In that case, it is better to just spawn your own thread. This is for the 1 thread per actor case.
That is interesting, thanks for sharing :)