
Comments (9)

nihohit avatar nihohit commented on July 29, 2024

You could try this branch to see if it helps

It helps! Thanks for the quick implementation.
Actually, not only does it help, but it's also faster on the first call, not only on the repeat calls. Amazing work!

from combine.

Marwes avatar Marwes commented on July 29, 2024

I think there may be accidentally quadratic behavior here: when you run it against an actual IO device the data arrives in chunks, and every time we read a new chunk through extend_buf_sync we end up initializing the buffer again before passing it to std::io::Read.

Hypothesis

When reading the first value we end up incrementally increasing the size of the buffer, extending it by 8 * 1024 bytes every time it needs to grow as a new chunk of data arrives. But for any value following that, the buffer is already huge! While we don't need to grow the buffer again, we still end up initializing the entire uninitialized part of the buffer (it is uninitialized as far as the code is aware, but it was actually initialized while reading the previous value). So when the data again comes in chunks, we get quadratic behaviour from repeatedly initializing the huge buffer.
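The suspected pattern can be sketched roughly like this (all names here are illustrative, not combine's actual internals), together with a reader that delivers data in small chunks the way an IO device would:

```rust
use std::io::{self, Read};

// Hypothetical sketch of the suspected pattern: before every read we
// zero the whole requested region, even though only a small chunk of
// data will arrive. With an already-huge buffer and many small chunks,
// the zeroing is O(buffer size) per chunk, i.e. quadratic over a value.
fn extend_buf_naive<R: Read>(
    reader: &mut R,
    buf: &mut Vec<u8>,
    want: usize,
) -> io::Result<usize> {
    let old_len = buf.len();
    buf.resize(old_len + want, 0); // re-zeroes `want` bytes on every call
    let n = reader.read(&mut buf[old_len..])?;
    buf.truncate(old_len + n); // keep only the bytes actually read
    Ok(n)
}

// A reader that returns at most `chunk` bytes per call, mimicking an
// actual IO device rather than an in-memory byte buffer.
struct ChunkedReader {
    data: Vec<u8>,
    pos: usize,
    chunk: usize,
}

impl Read for ChunkedReader {
    fn read(&mut self, out: &mut [u8]) -> io::Result<usize> {
        let n = out.len().min(self.chunk).min(self.data.len() - self.pos);
        out[..n].copy_from_slice(&self.data[self.pos..self.pos + n]);
        self.pos += n;
        Ok(n)
    }
}
```

With a 3-byte chunk size, every loop iteration re-zeroes the full 8 KiB request to pull in only 3 bytes, which is where the quadratic cost comes from.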

This doesn't show up when reading straight from a byte buffer because the data is available in full, so when we get to the second value we only do a single call to extend_buf_sync/read, and that fills the buffer with the entire value all at once.

Fixes

I can think of three off the top of my head.

  • Cap how much we initialize (and thus are able to read) in each call to extend_buf_sync/read
    • Very easy, but will still have some overhead
  • Track how much of the buffer was initialized by previous calls and avoid initializing it again
    • More or less ideal given the APIs std gives us at this time, but it does involve more unsafe, since we must give std::io::Read::read an initialized buffer.
  • Wait for https://doc.rust-lang.org/std/io/trait.Read.html#method.read_buf to stabilize
    • This basically mirrors the API that tokio (and consequently redis-rs and combine's async version) uses. It allows the initialization step to be omitted entirely without unsafe at this level (at lower levels there is still unsafe code).

I'd probably prefer point 2, but I may need a little while before I have time for it. If you want to try something easy, 1 would be easy to patch in to test the hypothesis. If you are able to produce a minimized example I can add to combine's tests or benchmarks, that would be appreciated; the partial-io crate used in combine can be used to simulate reading data in chunks while still using in-memory data.
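For a repro, a stdlib-only stand-in for what partial-io provides can look like this (partial-io's real API is built around PartialRead driven by PartialOp values; this sketch just captures the idea): wrap any reader so each read call returns at most a fixed number of bytes, simulating chunked IO while the data stays in memory.

```rust
use std::io::{self, Read};

// Wraps a reader so every `read` call returns at most `limit` bytes,
// simulating an IO device that delivers data in chunks.
struct Chunked<R> {
    inner: R,
    limit: usize,
}

impl<R: Read> Read for Chunked<R> {
    fn read(&mut self, out: &mut [u8]) -> io::Result<usize> {
        let n = self.limit.min(out.len());
        self.inner.read(&mut out[..n])
    }
}
```

Feeding such a reader to the parser should reproduce the chunk-by-chunk buffer growth described above while keeping the whole test in memory.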


nihohit avatar nihohit commented on July 29, 2024

from combine.

Marwes avatar Marwes commented on July 29, 2024

I don't know if it helps, but the initial report was about async redis-rs.

Didn't see that. The async version wouldn't call extend_buf_sync, so perhaps it is a similar issue, but the posted flamegraphs don't apply to it.


Marwes avatar Marwes commented on July 29, 2024

You could try this branch to see if it helps (implements point 1) https://github.com/Marwes/combine/tree/perf_extend_buf_sync


nihohit avatar nihohit commented on July 29, 2024

Didn't see that. The async version wouldn't call extend_buf_sync, so perhaps it is a similar issue, but the posted flamegraphs don't apply to it.

No, you're right - this doesn't happen in the async non-TLS runs, only in async TLS runs. Thanks for correcting me!

edit: or at least I can't repro it in async non-TLS. Here redis-rs/redis-rs#1128 (comment) the original reporter claims that he sees the same.


artemrakov avatar artemrakov commented on July 29, 2024

@Marwes Can I create pr with your change (point 1) with additional testing using partial-io or you think it is more appropriate to implement point 2 solution?


Marwes avatar Marwes commented on July 29, 2024

@artemrakov Go ahead


Marwes avatar Marwes commented on July 29, 2024

Actually, not only does it help, but it's also faster on the first call, not only on the repeat calls. Amazing work!

It will likely be a performance regression if the reader could actually fill the entire buffer that was initialized, but that would just be a constant factor, so especially for a temporary workaround that is fine.


