Comments (8)
Yes. If your IO resource relies on futures internally, then the call to poll_flush that finishes the flush future for the N bytes must not return Ready. Instead, that call to poll_flush should start another flush operation for the remaining X bytes. You don't return Ready until the X bytes have also been flushed.
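That contract can be modeled with a minimal, std-only sketch (SlowWriter and its per-poll chunk limit are invented for illustration; this is not tokio code, and a real implementation would go through AsyncWrite with a Context): the sink only drains a few bytes per poll, so poll_flush must keep returning Pending, including for bytes written between flush attempts, until everything is drained.

```rust
use std::task::Poll;

/// Hypothetical writer whose underlying sink accepts at most `chunk`
/// bytes per flush poll, mimicking a flush future that needs several
/// polls to complete.
struct SlowWriter {
    buffered: usize, // bytes written but not yet flushed
    chunk: usize,    // bytes the inner sink accepts per poll
}

impl SlowWriter {
    fn write(&mut self, n: usize) {
        self.buffered += n;
    }

    /// Correct behavior: keep returning Pending (while draining) until
    /// *everything* buffered so far has been flushed.
    fn poll_flush(&mut self) -> Poll<()> {
        let drained = self.buffered.min(self.chunk);
        self.buffered -= drained;
        if self.buffered == 0 {
            Poll::Ready(())
        } else {
            Poll::Pending
        }
    }
}

fn main() {
    let mut w = SlowWriter { buffered: 0, chunk: 4 };
    w.write(6);                                  // N = 6 bytes
    assert_eq!(w.poll_flush(), Poll::Pending);   // 4 drained, 2 left
    w.write(5);                                  // X = 5 more bytes
    assert_eq!(w.poll_flush(), Poll::Pending);   // 4 drained, 3 left
    assert_eq!(w.poll_flush(), Poll::Ready(())); // all N + X bytes drained
    assert_eq!(w.buffered, 0);
}
```

The key point is the last flush assertion: Ready is only returned once every byte written so far, across both write bursts, has been handed off.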
from tokio.
I could fix the problem by copying this code here, just before the loop. This could be a fix, but I think that ideally it should be implemented in such a way that reads into the buffer do not stop while the flush future is pending, as long as there is room in the buffer.
Reading before writing, to include more data in the write, makes sense to me. I'll close this issue, but if you want to adapt the PR to optimize something, that's fine with me.
I could fix the problem by copying this code here, just before the loop. This could be a fix, but I think that ideally it should be implemented in such a way that reads into the buffer do not stop while the flush future is pending, as long as there is room in the buffer.

I agree, that would definitely be more efficient. As I imagine it, the solution would look like this:
loop {
    // If our buffer is empty, then we need to read some data to
    // continue.
    if self.pos == self.cap && !self.read_done {
        // This code remains the same
    }

    // If a flush future is in progress, do not write until it is finished
    if self.need_flush {
        ready!(writer.as_mut().poll_flush(cx))?;
        #[cfg(any(
            feature = "fs",
            feature = "io-std",
            feature = "net",
            feature = "process",
            feature = "rt",
            feature = "signal",
            feature = "sync",
            feature = "time",
        ))]
        coop.made_progress();
        self.need_flush = false;
    }

    // Remaining code is untouched
}
Note that any error during flushing is ignored in my code, since it is also ignored elsewhere in the function. The flushing code could probably be factored out, since it is the same as the one used in the pending-read case.
Besides, I think we can fix the other, similar problem arising from this optimization (the read future may never be properly polled upon termination) by changing the condition at the entry of the loop:
loop {
    // If our buffer is empty, then we need to read some data to
    // continue. Otherwise, if we can read a bit more data, do it
    // to improve the chances of a large write.
    let is_readable = (self.pos == self.cap) || (self.cap < self.buf.len());
    if is_readable && !self.read_done {
        // This code remains the same
    }

    // Remaining code is untouched
}
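As a quick sanity check of the condition above, here is a std-only sketch (should_read is a hypothetical extraction of the snippet's condition into a free function, not tokio code):

```rust
/// Read-condition from the snippet above: read when the buffer is
/// empty (pos == cap), or when there is spare capacity (cap < buf_len)
/// to improve the chances of a large write; never read after EOF.
fn should_read(pos: usize, cap: usize, buf_len: usize, read_done: bool) -> bool {
    !read_done && (pos == cap || cap < buf_len)
}

fn main() {
    // Empty buffer: must read to make progress.
    assert!(should_read(0, 0, 2048, false));
    // Partially filled buffer with spare room: read more before writing.
    assert!(should_read(0, 512, 2048, false));
    // Buffer full: go write instead.
    assert!(!should_read(0, 2048, 2048, false));
    // EOF reached: never read again.
    assert!(!should_read(0, 0, 2048, true));
}
```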
While digging into the first issue I had (with just a poll_write() implementation), I could confirm that this optimization is partially meaningless for now. Indeed, the current implementation will not poll a pending read future again before the next call to poll_write(), which is not fatal, but makes the whole thing useless if the reader is not immediately ready.
Hence, I've reworked my fixes and implemented them in the related PR. With those fixes, everything works as expected, both in the real project where I spotted the issue and in the minimal example provided here.
The fix related to the flush future is definitely the most important, since the current behavior breaks the poll_flush contract, but the modification of the read condition also seemed to improve overall performance a bit in my case (a high workload with a lot of data flowing mostly from one side).
If poll_flush returns Ready and you haven't flushed everything, then your IO resource is incorrect.
If poll_flush returns Ready and you haven't flushed everything, then your IO resource is incorrect.

So if I write N bytes of data, then start a flush (thus getting Poll::Pending), then write another X bytes of data, and eventually flush again and get Poll::Ready(Ok(())), does that necessarily mean that all N + X bytes of data have been flushed properly?
Ok, thank you very much, I did not understand it this way. The documentation is not very explicit in this regard in my opinion, but maybe I'm the only one who thinks so and I've misunderstood its wording.
Hence, I guess the only thing that still makes sense in this issue and the related PR is the optimization. It's a minor detail, but with the current implementation, if the read operation is pending, the next write might occur before the next read, without giving the read operation a chance to be polled a second time. In other words, we could potentially get larger writes by trying the read a second time before writing again when both operations are pending. What do you think @Darksonn?
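For what it's worth, the effect of that suggestion can be sketched with a toy, std-only model (Reader and the byte counts are invented for illustration, not tokio internals): re-polling a pending read before writing lets late-arriving data join the next write instead of waiting a full iteration.

```rust
use std::task::Poll;

// Toy reader: Pending on the first poll, then Ready with 256 bytes.
struct Reader {
    polls: u32,
}

impl Reader {
    fn poll_read(&mut self) -> Poll<usize> {
        self.polls += 1;
        if self.polls >= 2 { Poll::Ready(256) } else { Poll::Pending }
    }
}

fn main() {
    // 128 bytes already sit in the copy buffer; the last read was Pending.
    let mut r = Reader { polls: 0 };
    let mut buffered = 128usize;

    // Current behavior would write the 128 bytes now and only retry the
    // read on the next iteration, producing two small writes.
    // Suggested behavior: give the pending read one more try first.
    if let Poll::Ready(n) = r.poll_read() { buffered += n; } // still Pending
    assert_eq!(buffered, 128);
    if let Poll::Ready(n) = r.poll_read() { buffered += n; } // Ready(256)
    assert_eq!(buffered, 384); // one larger 384-byte write instead of 128 + 256
}
```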