
libcurvecpr's People

Contributors

habnabit, impl, kostko


libcurvecpr's Issues

Contiguous sent bytes is not updated correctly under packet reordering

The messager variable their_contiguous_sent_bytes, which tracks the number of contiguous bytes that have been successfully received, is not updated correctly when packets are reordered.

Imagine the following scenario:

  • Currently bytes 0 - 12480 have been successfully received, ACKed (their_contiguous_sent_bytes = 12480) and distributed (removed from the receive queue).
  • Because of packet reordering, block 14528 - 15552 is received next and is SACKed and marked as acknowledged in the receive queue. Since some blocks in front of it are missing, the block is not yet distributed.
  • Blocks 12480 - 14528 are received next and are SACKed. Bytes 12480 - 15552 are distributed and removed from the receive queue. The first acknowledgement range now covers bytes 0 - 14528 and their_contiguous_sent_bytes is updated to 14528. From this point on, the counter is stuck at 14528 and will never be incremented again.

This incorrect behavior causes two issues:

  • Selective ACKs will be unnecessarily fragmented, as one SACK range will always cover 0 - 14528 (due to their_contiguous_sent_bytes being stuck at 14528).
  • EOF will not be handled correctly, as the messager will assume that some bytes still need to be received.

A workaround that I currently use in my Boost.ASIO C++ bindings is to update the their_contiguous_sent_bytes counter whenever bytes are distributed, as follows:

      // Update the number of contiguous sent bytes
      if (messager_.their_contiguous_sent_bytes < recvmarkq_distributed_)
        messager_.their_contiguous_sent_bytes = recvmarkq_distributed_;

Timeouts handled with real time clock instead of monotonic clock

The timeout calculations are based on time read from the curvecpr_util_nanoseconds() call, which by default uses the CLOCK_REALTIME source. This means that if a timeout is set and the wall clock is then stepped (by NTP or a manual adjustment, for example), the timeout will not fire when expected: a backward jump delays it by the size of the adjustment, and a forward jump fires it prematurely.

I would strongly recommend using CLOCK_MONOTONIC instead to avoid this pitfall.
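
As a minimal sketch, a monotonic timestamp helper could look like the following, assuming a POSIX clock_gettime() with CLOCK_MONOTONIC support is available; this is only an illustration, not the library's actual implementation:

      #include <time.h>

      /* Sketch of a monotonic nanosecond timestamp helper, analogous in purpose to
         curvecpr_util_nanoseconds() but immune to wall-clock adjustments. Assumes a
         POSIX system where clock_gettime() supports CLOCK_MONOTONIC. */
      static long long util_monotonic_nanoseconds (void)
      {
          struct timespec ts;

          if (clock_gettime(CLOCK_MONOTONIC, &ts) != 0)
              return -1; /* caller decides how to handle an unavailable clock */

          return (long long)ts.tv_sec * 1000000000LL + (long long)ts.tv_nsec;
      }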

If a message is successfully sent but the sendq_move_to_sendmarkq(...) operation fails, it will be resent indefinitely

This is because sendq_move_to_sendmarkq(...) will never be reinvoked, so the message remains at the head of the sendq.

Instead, sendq_move_to_sendmarkq(...) should be invoked every time a message with a valid block is sent, even if it is a retry. This would slightly change the semantics of the operation: implementations would have to check whether the block passed in is actually at the head of the sendq.
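
To make the proposed semantics concrete, here is a hedged sketch of what an application-side callback might look like; the prototype mirrors the spirit of libcurvecpr's sendq glue, but the exact signature and the my_* helpers are assumptions for illustration only:

      /* Hedged sketch of an application-side callback under the proposed semantics.
         The prototype mirrors the spirit of libcurvecpr's sendq glue; the exact
         signature and the my_* helpers are assumptions, not library API. */
      struct curvecpr_messager;
      struct curvecpr_block;

      int my_sendq_block_is_head (const struct curvecpr_block *block);
      int my_sendmarkq_insert (const struct curvecpr_block *block);
      void my_sendq_remove_head (void);

      static int my_move_to_sendmarkq (struct curvecpr_messager *messager,
                                       const struct curvecpr_block *block)
      {
          (void)messager;

          /* Under the new semantics this can be invoked again for a block that was
             already moved on an earlier attempt; only act if the block is still at
             the head of our send queue. */
          if (!my_sendq_block_is_head(block))
              return 0;

          /* If the move fails, keep the block on the sendq; the next send attempt
             will invoke this callback again rather than retrying forever. */
          if (my_sendmarkq_insert(block) != 0)
              return -1;

          my_sendq_remove_head();
          return 0;
      }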

Release?

Can you do an initial release tarball that includes a pregenerated ./configure script? That'd be super ;)

I took a stab at #6 today and it would be a lot easier if there were a release tarball I could grab.

Also releases can't hurt when encouraging people to use your software ;)

Proposal: Add additional reliability guarantees

I'd like to propose adding two additional types of reliability guarantees to CurveCPR, which could be negotiated using two extensions.

The first type would be a fully unreliable connection, where no attempt at reliability, acknowledgement, or congestion control is made.

The second is a sequenced stream, where, in the face of packet reordering, only the latest packet is kept and older ones are discarded; again, no attempt at extra reliability, acknowledgement, or congestion control is made. The idea is that the data is so time-critical that resending would only yield outdated data, as fresher data would already have been sent.
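
To make the sequenced-stream behavior concrete, here is a small sketch of the receive-side drop logic; nothing in it is existing libcurvecpr API, and all names are hypothetical:

      #include <stdint.h>

      /* Sketch of the receive side of a sequenced stream: deliver a datagram only if
         it is newer than anything seen so far, otherwise silently drop it. Nothing
         here is existing libcurvecpr API; all names are hypothetical. */
      struct sequenced_recv {
          uint64_t highest_seq; /* highest sequence number delivered so far */
          int have_any;         /* nonzero once the first datagram has arrived */
      };

      /* Returns nonzero if the datagram should be delivered, zero if it should be
         dropped. */
      static int sequenced_recv_should_deliver (struct sequenced_recv *s, uint64_t seq)
      {
          if (s->have_any && seq <= s->highest_seq)
              return 0; /* stale or duplicate: discard, never request a retransmit */

          s->highest_seq = seq;
          s->have_any = 1;
          return 1;
      }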

I was hoping for some guidance on where the code changes for this would go, as well as feedback on whether or not this belongs in curveCPR itself.
I investigated implementing it purely in the callbacks, but I found no way to do it cleanly without interfering with chicago or the reliability code.

Total bytes sent by the remote side is incorrectly tracked

The current implementation simply sets total bytes sent to received_block->offset + received_block->data_len, which, because UDP provides no ordering guarantees, can result in the total received byte count decreasing over time. Indeed, in many cases (e.g. when a message is received with 0-length data), this will set the total bytes sent to 0.

This should instead be computed by tracking how much contiguous data we've received, plus whatever is left over in blocks waiting to be acknowledged, or something similar.
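
As a sketch of that approach (hypothetical names only, not a patch against the actual source), the total could be derived from the contiguous prefix plus any received-but-undistributed blocks:

      #include <stddef.h>

      /* Sketch of the proposed accounting, not a patch against libcurvecpr: the remote
         side has sent at least the contiguous prefix we've received, extended by any
         received-but-undistributed blocks beyond it. The pending_block type and all
         names are hypothetical. */
      struct pending_block {
          unsigned long long offset; /* stream offset of the block */
          size_t data_len;           /* bytes of data in the block */
      };

      static unsigned long long estimate_their_total_bytes (
          unsigned long long contiguous_received_bytes,
          const struct pending_block *pending, size_t num_pending)
      {
          unsigned long long total = contiguous_received_bytes;
          size_t i;

          for (i = 0; i < num_pending; i++) {
              unsigned long long end = pending[i].offset + pending[i].data_len;

              /* The estimate only grows, so reordered or zero-length messages can no
                 longer drag it backwards. */
              if (end > total)
                  total = end;
          }

          return total;
      }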
