Comments (38)
FYI I'm working through an implementation right now. I've gone a different route in refactoring the UsedChunkList and am leveraging the mpmc_loffli to make the UsedChunkList basically a slot map (since ordering doesn't matter). It seems like this might accidentally also make publishers and subscribers thread safe, but we'll see.
from iceoryx.
> Not sure what you mean by iterating over tombstone values? The FixedPositionContainer skips removed elements on iteration.

I meant for implementations using iox::vector or MpmcLoffli. If we adapt the FixedPositionContainer, though, we could support both use cases well.
My intention is to try to run Iceperf to see the impact of the synchronization. If it is too problematic we can always fall back to a simpler slot map without the mpmc part.
Took another look. There doesn't seem to be any issue with removing a chunk when we know its index in the UsedChunkList. That would essentially just skip to this line of code:
Yes, so after my first quick pass I'm thinking we can update the APIs in the following way to achieve the goal of eliminating this linear search:

- For `UsedChunkList::insert`, instead of returning a `bool`, return an `expected<uint32_t, InsertChunkError>`, where the `uint32_t` is the index into `m_listData` where the chunk was inserted.
- In `ChunkReceiver::tryGet()`, instead of returning `expected<const mepoo::ChunkHeader*, ChunkReceiveResult>`, it should return `expected<UsedChunk, ChunkReceiveResult>`, where `UsedChunk` is a struct containing a `const mepoo::ChunkHeader*` and the `uint32_t` index where that chunk lives in the used chunk list.
- This should get propagated out to `SubscriberPortUser::tryGetChunk()`.
- And again to `BaseSubscriber::takeChunk()`.
- In `Subscriber::take()`, the deleter around the sample unique pointer should call `releaseChunk()` with the modified `UsedChunk` object.
- `SubscriberPortUser::releaseChunk` should get updated to take the `UsedChunk` data structure instead of the chunk header.
- `ChunkReceiver::release` gets updated to take `UsedChunk` instead of the chunk header.
- `UsedChunkList::remove` gets updated to take the `UsedChunk` instead of the chunk header. It then skips the linear search and directly looks up the data in `m_listData` given the index stored in the `UsedChunk` struct. To check our logic is sound, we add a contract check validating that `m_listData[usedChunk.index].getChunkHeader() == usedChunk.header`.
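To make the shape of this proposal concrete, here is a rough sketch. The names `UsedChunk` and `slotMatchesHandle` and the simplified `ChunkHeader` are illustrative stand-ins, not the actual iceoryx types:

```cpp
#include <cstdint>

// Hypothetical stand-in for the real mepoo::ChunkHeader.
namespace mepoo { struct ChunkHeader {}; }

// What the receiving side would carry around instead of a bare header pointer.
struct UsedChunk
{
    const mepoo::ChunkHeader* chunkHeader{nullptr};
    uint32_t index{0U}; // position in m_listData, enabling O(1) removal later
};

// The proposed contract check on removal: the slot the index points at must
// still hold the same chunk header that the caller handed back.
inline bool slotMatchesHandle(const UsedChunk& usedChunk,
                              const mepoo::ChunkHeader* const* listData)
{
    return listData[usedChunk.index] == usedChunk.chunkHeader;
}
```

The check catches the case where a stale or corrupted handle points at a slot that has since been reused.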
@gpalmer-latai This approach could work for the good case. But keep in mind, the UsedChunkList is a lock-free construct, and they often have unpleasant surprises for you that may need weeks to fix.

But the bad case still has some problems. When an application crashes and someone has to iterate once over 100,000 chunks to clean them up manually, you will see a hiccup in the system where suddenly, due to a crash, the runtime increases. This is maybe unacceptable for a real-time system and for a safety-critical one (freedom from interference).

The point I am trying to make is that we can for sure apply your solution, but in my opinion it maybe solves a problem that should not exist in its current form. I do not doubt the use case, but rather the messaging pattern used (publish-subscribe). Could you tell us a bit more about the setup, especially:
- How many subscribers do you have? (I assume just 1 or 2)
- How many publishers do you have? (I assume many)
- And the overall task? (I assume some kind of logging)
In iceoryx2 we have also planned additional messaging patterns like pipeline & blackboard, and those could easily be implemented manually in iceoryx since all the right building blocks are already here.
> But keep in mind, the UsedChunkList is a lock-free construct

This isn't relevant, as the data structure is not meant to be used concurrently. It is even explicitly NOT thread safe. It exists in shared memory only to allow RouDi to clean up when the subscriber process crashes.

> But the bad case still has some problems. When an application crashes and someone has to iterate once over 100,000 chunks to clean them up manually, you will see a hiccup in the system where suddenly, due to a crash, the runtime increases

Not so. While RouDi will not have the index, it doesn't really matter. RouDi doesn't need to release the samples in any particular order, so it can simply iterate forward through the list, releasing everything with O(n) time complexity. For 100,000 samples this is probably just a ~10 ms operation.
The problem described in this issue has to do with random-access time complexity. But the solution is the same as it is for a `std::list`: simply allow a version of `remove` which takes an index/iterator to the element's position, known when it is inserted.
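The `std::list` analogy can be shown directly: `insert` hands back an iterator, and erasing through that iterator is O(1) with no search, which is exactly the role the stored index would play for the `UsedChunkList`:

```cpp
#include <cstddef>
#include <list>

// Remember the position at insertion time; removal then needs no search.
std::size_t insertThenErase()
{
    std::list<int> chunks{1, 2, 3};
    auto it = chunks.insert(chunks.end(), 42); // iterator = "index known at insert"
    chunks.erase(it);                          // O(1) removal via the saved position
    return chunks.size();
}
```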
As for our setup.
- There are just a few subscribers per topic, usually, yes. But there could be more depending on the use case.
- The problem described here applies to just a single publisher/subscriber (as I've shown in my example). In general though we might only have one publisher on a topic, or we might have many.
- The important information about the task is that we maintain a buffer of recent history, whose size correlates to the overall publish rate of the topic.
As a mitigation we are already exploring batching to reduce publisher sample count, but that can only help to an extent and is not always possible.
It is also worth noting that we can't rely on our access pattern here to solve the problem. For example, when we drain samples oldest-first, we hit the worst-case scenario, because the oldest samples are at the back of the used chunk list. You could solve that particular problem by somehow allowing reverse iteration of the list, but we will still have other release patterns touching relatively new and in-the-middle samples.
> This isn't relevant as the data structure is not meant to be used concurrently. It is even explicitly NOT thread safe. It exists in shared memory only to allow RouDi to clean up when the subscriber process crashes.

I think it is used concurrently. Whenever a publisher delivers a sample, it calls `UsedChunkList::insert()` in `ChunkSender::tryAllocate()`, and the subscriber calls `UsedChunkList::remove()` when releasing the chunk in `ChunkReceiver::release()`. This is somewhat hidden, but `getMembers()` in both constructs should return the data to the same underlying construct.

But it is possible that I got this wrong, since I am not so familiar with the source code anymore. @elBoberido I think you are the one who implemented it; maybe you can shine some light on it.

Addendum: I was wrong, the list just has to be synced with RouDi.
No worries 🙂
I just took a closer look, and there is a slight complication to the approach I outlined. In true forward list fashion, list removal does unfortunately require knowing the index of the previous element:
Will have to do some more brainstorming to figure out what additional info needs to be stored where to make this work.
The first obvious solution that pops out to me, though, is that we simply return the following struct from insert and take it as an argument on removal:

```cpp
struct UsedChunk
{
    const mepoo::ChunkHeader* chunkHeader;
    uint32_t current;
    uint32_t previous;
};
```
Note that this information basically gets immediately plumbed into the deleter of a `unique_ptr` here:
Ah, but the untyped subscriber doesn't so neatly encode a destructor: https://github.com/gpalmer-latai/iceoryx/blob/e97d209e13c36d71237c99e0246310b8029f8f26/iceoryx_posh/include/iceoryx_posh/internal/popo/untyped_subscriber_impl.inl#L54, so we'd have to store this information elsewhere...
@gpalmer-latai Instead of using some kind of list, couldn't you fill the `m_listData` with `optional`s and set them to `nullopt` when it is no longer set?
> @gpalmer-latai Instead of using some kind of list, couldn't you fill the `m_listData` with `optional`s and set them to `nullopt` when it is no longer set?

The performance hit isn't in the removal of elements; it is in locating where they are. Setting aside the quirks of the implementation (every operation has to be on fewer than 64 bits because of torn writes), the UsedChunkList is more or less just a typical forward list. Removal is O(1) provided you already know the location of the data in the list. That is the part that is missing here.
So, following along my line of thought here: using

```cpp
struct UsedChunk
{
    const mepoo::ChunkHeader* chunkHeader;
    uint32_t current;
    uint32_t previous;
};
```

in the UsedChunkList APIs would be sufficient to facilitate O(1) removal. It is similar to how `std::list::insert` yields you an iterator.

The problem then becomes plumbing this through the rest of the stack. For the typed APIs it is relatively simple: you just bubble this data structure up to where the sample is created and plumb it into the deleter of the `iox::unique_ptr`: https://github.com/gpalmer-latai/iceoryx/blob/e97d209e13c36d71237c99e0246310b8029f8f26/iceoryx_posh/include/iceoryx_posh/internal/popo/subscriber_impl.inl#L49
The catch is the untyped APIs. They currently only return `void*` pointers and have separate `release(void*)`/`publish(void*)`, etc. methods. I can think of a few options to handle these:

- Instead of returning `void*`, return something like the `UsedChunk` but with the pointer being to the payload and the indices possibly being private with friend access to the publisher/subscriber classes. This is quite ugly, and users would have to adapt code to handle these ugly things. It might be less ugly if you add some `operator->`, but still...
- Instead of returning `void*`, return directly (or wrapped) an `iox::unique_ptr<void>` with the same custom deleter logic as the typed APIs. Remove the `release` methods and change the untyped APIs to use the same RAII logic as the typed ones. Users then have to call `sample.get()` to get the `void*` pointer, which they can then use the same way as before. We could even add some convenience methods to make this easier, such as:

```cpp
template <typename T>
T* UntypedSample::getTyped()
{
    return static_cast<T*>(m_ptr.get());
}
```
If it is preferable to you I'm happy to speak about this problem synchronously via some video chat. It is a bit of a high priority issue for us and I intend to implement some solution immediately.
There is also a compromise solution. We could add the plumbing to allow typed endpoints to have fast removal by referencing the insertion index in those custom deleters, but leave the untyped endpoints alone and fall back to linear search.

I'm not a fan of this solution because of the general complexity of supporting two different paths, and also because it would require us to replace our usage of the untyped subscriber with typed ones. That won't be trivial, because we actually rely on the type-erased nature of samples received this way, extracting the size from the header. Using typed APIs instead would require some ugly hackery. I think it is doable, but still...
Ah shoot, realization just dawned on me about a flaw in my proposed solution here.
We cannot simply store the previous element's index in the returned data structure, because the previous element could itself be removed, invalidating that index. Instead, what we probably need to do is make this a "doubly linked list" by adding an `m_listIndicesBackwards` array.
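A hypothetical sketch of that idea (not the iceoryx implementation): with a second index array, the predecessor is looked up at removal time rather than captured at insert time, so it can never go stale:

```cpp
#include <cstdint>
#include <vector>

// Index-based doubly linked list: each slot knows both neighbours, so
// remove() needs only the slot's own index.
struct IndexedList
{
    static constexpr uint32_t NIL = UINT32_MAX;
    std::vector<uint32_t> next; // forward links (the existing m_listIndices)
    std::vector<uint32_t> prev; // backward links (the proposed m_listIndicesBackwards)
    uint32_t head{NIL};

    explicit IndexedList(uint32_t capacity) : next(capacity, NIL), prev(capacity, NIL) {}

    void insertFront(uint32_t slot)
    {
        next[slot] = head;
        prev[slot] = NIL;
        if (head != NIL) { prev[head] = slot; }
        head = slot;
    }

    // O(1): the predecessor is found via prev[] at removal time, so it is
    // always current, unlike a 'previous' index stored at insertion time.
    void remove(uint32_t slot)
    {
        if (prev[slot] != NIL) { next[prev[slot]] = next[slot]; }
        else                   { head = next[slot]; }
        if (next[slot] != NIL) { prev[next[slot]] = prev[slot]; }
    }
};
```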
Hah, cool. `iox::unique_ptr` supports type erasure, unlike `std::unique_ptr`, because of the non-templated deleter. Neat. Seems like we really could return a `Sample<void, H>` for untyped endpoints.
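A minimal sketch of why the non-templated deleter enables this (illustrative, not the real `iox::unique_ptr` interface): because the deleter is a plain `void(*)(void*)`, instantiating the pointer with `void` is well-formed, whereas `std::unique_ptr<void>` with its default deleter cannot `delete` a `void*`:

```cpp
// Counter so the deleter firing can be observed from outside.
inline int g_deleteCount = 0;
inline void countingDeleter(void*) { ++g_deleteCount; }

// Type-erased smart pointer sketch: the deleter type does not depend on T.
template <typename T>
class ErasedUniquePtr
{
  public:
    using Deleter = void (*)(void*);
    ErasedUniquePtr(T* ptr, Deleter deleter) : m_ptr(ptr), m_deleter(deleter) {}
    ~ErasedUniquePtr()
    {
        if (m_ptr != nullptr) { m_deleter(m_ptr); }
    }
    ErasedUniquePtr(const ErasedUniquePtr&) = delete;
    ErasedUniquePtr& operator=(const ErasedUniquePtr&) = delete;
    T* get() const { return m_ptr; }
  private:
    T* m_ptr;
    Deleter m_deleter;
};
```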
Can confirm that using the mpmc_loffli works. Updating the used chunk list unit tests to use size 100,000 results in a hang on master, and completes in the snap of a finger on my experimental branch.
Working through the other API layers now.
Sorry for the late response. The mpmc_loffli approach should work, but it might add a higher cost than necessary due to memory synchronization. Have you thought about using the FixedPositionContainer? It's basically a hop slot map (I found the name only after implementing it). This is only a detail, though.
As you noted, the more critical part is the API breakage for the untyped API which is also used by the C Binding. We have to be careful here since we might break quite a lot of code out there.
I think I might also be able to revert my changes to the untyped APIs to use the SmartChunk, in favor of returning a struct with the slot map token directly. It is ugly, but if that is what C requires 🤷
There is another option: leave the current API as a legacy API and create a new one in the experimental folder. It is a bit more work, but it also gives us the option to experiment with the best approach, especially if you encounter more issues with your specific setup of 100k samples.
We could use this chance to rethink the untyped API a bit. Instead of using `void*` as the data type, we could reuse the typed API and just request a dynamic `std::byte` array with a user-defined alignment. After all, that's essentially the use case for the untyped API. We would have the additional benefit of type safety, and this could also be extended to types other than `std::byte`.
If you can read some Rust, it could look similar to this https://github.com/eclipse-iceoryx/iceoryx-rs/blob/master/examples/publisher_untyped.rs#L20-L30
With the experimental API, I assume we still have to maintain the changes to the middleware layers (`chunk_receiver`, etc.) to support keeping track of the slot map iterator?

But then we can maintain a raw-pointer path and an iterator path separately, and fall back to iteration when removing from the slot map with a pointer.
By the way, I've got Iceperf building and running in my branch.
It looks like the average RTT latency went from 4.2 to 5.9 microseconds.
I'll try now with the FixedPositionContainer.
With the FixedPositionContainer I get back down to 5.5. I suspect most of the added latency therefore comes from the friction I've added in the upper API layers by passing around a struct instead of a pointer. Also, if Iceperf uses the untyped API, I suspect the logic of creating a smart chunk might contribute.
> Have you thought about using the FixedPositionContainer?

A question about this, actually: does the container still satisfy this constraint?

> In order to always be able to access the used chunks, neither a vector or list can be used, because these container could be corrupted when the application dies in the wrong moment.

I think that since there is no way to just "iterate over the data array", RouDi couldn't safely release all the shared chunks.

So I AM forced to use a separate free list, though instead of the mpmc_loffli I could just use an iox::vector of indices.
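A sketch of that free-list variant (hypothetical; fixed-size arrays and `std::vector` stand in for the real members and `iox::vector`): the payload stays in a flat array that RouDi can always walk, while a vector of free indices replaces the forward list's link chasing:

```cpp
#include <cstdint>
#include <vector>

struct SlotMap
{
    static constexpr uint32_t CAPACITY = 8U;
    const void* data[CAPACITY]{};   // flat array: safe to iterate even after a crash
    std::vector<uint32_t> freeList; // stand-in for an iox::vector of free indices

    SlotMap()
    {
        for (uint32_t i = CAPACITY; i > 0U; --i) { freeList.push_back(i - 1U); }
    }

    uint32_t insert(const void* chunk)
    {
        uint32_t slot = freeList.back(); // pop a free slot
        freeList.pop_back();
        data[slot] = chunk;
        return slot; // the O(1) removal handle
    }

    void remove(uint32_t slot)
    {
        data[slot] = nullptr;
        freeList.push_back(slot); // slot becomes reusable
    }
};
```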
It's too bad, because the iteration properties of the FixedPositionContainer would make it a suitable choice for backwards compatibility, allowing the existing untyped API to continue releasing with the raw pointer by just iterating over the container to match against that pointer.
I need to look closer at the FixedPositionContainer, but why do you think it is not suitable?

Btw, those benchmark numbers are quite high. Did you test in release mode?
> I need to look closer at the FixedPositionContainer but why do you think it is not suitable?

Because of this invariant the UsedChunkList needs to uphold:

> In order to always be able to access the used chunks, neither a vector or list can be used, because these container could be corrupted when the application dies in the wrong moment.

In order for RouDi to clean up the shared chunks, it needs to iterate over the actual data array. The FixedPositionContainer does not expose this directly (though I suppose you could always add some `friend` accessor for RouDi), and the iterators could be left in an undefined state if the subscriber crashed at the wrong moment.
> Btw, those benchmark numbers are quite high. Did you test in release mode?

I don't know. I'm not sure it matters too much for comparison's sake, though I could go back and fiddle around with it some more. The point is that my changes altered the Iceperf average under the default build configuration from 4.2 microseconds RTT to 5.9 for the mpmc_loffli implementation with altered APIs, 5.6 for the FixedPositionContainer implementation with altered APIs, and 4.6 when ONLY swapping out the forward list for the FixedPositionContainer but leaving the APIs unchanged.
Right now I'm doing another pass using a simple `iox::vector` as the free list and making some lighter API changes. I'll see what the performance there is when I'm done. For our immediate purposes the increase in latency isn't really a major concern, but we do want to get this to a state where we can at least propose a reasonable experimental implementation upstream.
From my gut feeling I think it will be worse for a large number of samples, especially when the ones from the beginning of the container are removed.
It will be worse if you have to perform a linear search, yes. That would be the case if, for example, we wished to maintain the legacy API, since over time you may end up iterating over large swaths of tombstone values (unless we can adapt the FixedPositionContainer to allow bypassing iteration during cleanup, in which case we have the best of both worlds: a slot map with O(1) removal and efficient iteration).

However, for the typed API and the experimental new untyped subscriber, removal will always be O(1), because you directly pass the slot handle, with the index allocated from the free list, as an argument to remove.
When you call `FixedPositionContainer::clear`, by the way, it does circumvent the possibly-corrupted iterators and reset everything. But it does so by calling the destructor on all the data, which is insufficient for the UsedChunkList: rather than calling the destructor on the elements, it needs to call `releaseToSharedChunk()` instead. Perhaps there is a workaround there, either by adding a destructor to `ShmSafeUnmanagedChunk` or by allowing a custom callback on the clear method which defaults to just calling the destructor on the element.
Not sure what you mean by iterating over tombstone values? The FixedPositionContainer skips removed elements on iteration.

The FixedPositionContainer would need some adaptations, but I think they would not be too intrusive. There are basically two options: either adding a custom callback to the `clear` method or having a `drain` method with a custom callback.
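Such a `drain` could look roughly like this (hypothetical shape, not the actual FixedPositionContainer API): each still-occupied element is handed to a user-supplied callback, e.g. one calling `releaseToSharedChunk()`, instead of merely being destructed:

```cpp
#include <cstdint>

// Toy fixed-capacity container; the 'used' flags stand in for the real
// container's internal slot state.
template <typename T, uint64_t Capacity>
struct ToyContainer
{
    T data[Capacity]{};
    bool used[Capacity]{};

    // Hypothetical drain(): pass every live element to the callback, then
    // mark the slot free, regardless of possibly-corrupted iterators.
    template <typename Callback>
    void drain(Callback&& callback)
    {
        for (uint64_t i = 0U; i < Capacity; ++i)
        {
            if (used[i])
            {
                callback(data[i]);
                used[i] = false;
            }
        }
    }
};
```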
So, just as a quick update: switching from MpmcLoffli to `iox::vector` works fairly well. I haven't had a chance to run Iceperf yet.

I have incorporated a much more polished version of master...gpalmer-latai:iceoryx:iox-2221-constant-time-chunk-release into our fork of Iceoryx, one which backtracks the changes to the untyped APIs to use smart wrappers and instead returns and takes the slot map handle (renamed from `UsedChunk` to `UsedChunkHandle`) directly.

Unfortunately I will have to context switch to another task for the time being and don't have an upstream PR / design proposal to share as of yet. Once I am able to free up some time to do so, though, I would propose something along these lines:
- Replace the forward list in the `UsedChunkList` with the `FixedPositionContainer`.
- Adapt the `FixedPositionContainer` to allow custom cleanup of the elements, s.t. RouDi would be able to release a bunch of `ShmSafeUnmanagedChunk`s contained by one.
- Create a `UsedChunkHandle` which acts as a sort of slot map handle to the `UsedChunkList` (which could arguably be renamed to `UsedChunkSlotMap`). It could have conveniences like a `*` and `->` operator to the underlying chunk header.
- Propagate this handle up and down all inner layers of the stack. For allocating the chunk this would replace existing calls returning a naked pointer. For releasing the chunk it would be an additional overload.
- Adapt the `SmartChunk` class to work with this handle, piping it through to release callbacks upon destruction and releasing it explicitly when `release()` is called.
- Update the typed endpoints as necessary to create the `SmartChunk` with these changes.
- Leave the untyped endpoints alone for now: call `chunkHandle->userPayload()` when handing the allocated pointer to the user, and continue using the existing release/publish paths that take a pointer.
- When releasing a chunk from the `UsedChunkList`, there will be both an O(1) overload that takes the handle and removes the corresponding element, and an O(n) overload that takes a pointer and does a linear search for the corresponding element. The latter will still be used for the untyped endpoints and C APIs.
- Create a new set of untyped endpoints under the `experimental` folder returning variants of the `SmartChunk` which expose dynamic spans instead of typed data. These will also use the `UsedChunkHandle` under the hood and therefore will support efficient release of the chunks.
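The `UsedChunkHandle` in that sketch could look something like this (names and members hypothetical, with a simplified `ChunkHeader` stand-in):

```cpp
#include <cstdint>

// Simplified stand-in for mepoo::ChunkHeader.
struct ChunkHeader
{
    uint64_t payloadSize{0U};
};

// Slot-map handle: carries the header pointer plus its slot index, and
// behaves like a pointer to the header for convenience.
class UsedChunkHandle
{
  public:
    UsedChunkHandle(const ChunkHeader* header, uint32_t index)
        : m_header(header), m_index(index)
    {
    }
    const ChunkHeader& operator*() const { return *m_header; }
    const ChunkHeader* operator->() const { return m_header; }
    uint32_t index() const { return m_index; } // enables O(1) removal
  private:
    const ChunkHeader* m_header;
    uint32_t m_index;
};
```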
@gpalmer-latai that sounds reasonable. Go ahead.