Hi everyone,

Yesterday we released iceoryx2 v0.6, an ultra-low-latency inter-process communication framework for Rust, C and C++. Python support is also on the horizon. The main new feature is Request-Response-Stream, but there’s much more.
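
To give a feel for the API, here is a minimal publisher sketch, loosely based on the publish-subscribe example in the repository; the service name and payload type are placeholders, and builder names may differ slightly between versions:

```rust
use iceoryx2::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a node, the entry point into iceoryx2.
    let node = NodeBuilder::new().create::<ipc::Service>()?;

    // Open (or create) a publish-subscribe service; "Announcement/Demo" and
    // the u64 payload are placeholders for this sketch.
    let service = node
        .service_builder(&"Announcement/Demo".try_into()?)
        .publish_subscribe::<u64>()
        .open_or_create()?;

    let publisher = service.publisher_builder().create()?;

    // Loan a sample from shared memory, write the payload, and send it
    // zero-copy to all connected subscribers.
    let sample = publisher.loan_uninit()?;
    let sample = sample.write_payload(1234);
    sample.send()?;

    Ok(())
}
```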

If you are into robotics, embedded real-time systems (especially safety-critical ones), autonomous vehicles, or just want to hack around, iceoryx2 is built with you in mind.

Check out our release announcement for more details: https://ekxide.io/blog/iceoryx2-0-6-release

And the link to the project: https://github.com/eclipse-iceoryx/iceoryx2

  • KRAW@linux.community · 1 day ago

    Can you explain on a high level how iceoryx2 is able to achieve low latency? Is it as simple as using shared memory or are there other tricks in the background? Are there different transfer methods depending on the payload size?

    • elBoberido@programming.dev (OP) · 1 day ago

      Hi KRAW,

      It’s basically shared memory and lock-free queues. That alone helps a lot with latency, but we have been working on these topics for almost eight years and there are a ton of things one can do wrong. For comparison, the first incarnation of iceoryx has a latency of around 1 microsecond in polling mode, while with iceoryx2 we achieve around 100 nanoseconds on some systems.

      What travels through the queues is always just 8 bytes: an offset into a shared memory segment where the actual payload lives, regardless of how large that payload is.
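
      Not real iceoryx2 code, but a toy, single-process sketch of that idea: the (here simulated) shared memory segment holds the payload, and the queue only ever carries an 8-byte offset, no matter how big the payload struct is.

      ```rust
      use std::collections::VecDeque;

      // Hypothetical payload type; much larger than 8 bytes.
      struct SensorData {
          timestamp: u64,
          values: [f32; 16],
      }

      fn main() {
          // Stand-in for the shared memory data segment both processes would map.
          let mut segment: Vec<Option<SensorData>> = (0..8).map(|_| None).collect();

          // The queue itself only ever carries 8-byte offsets into the segment.
          let mut queue: VecDeque<u64> = VecDeque::new();

          // "Publisher": write the payload in place, push only its offset.
          let offset = 3u64;
          segment[offset as usize] = Some(SensorData { timestamp: 42, values: [0.5; 16] });
          queue.push_back(offset);

          // "Subscriber": pop the 8-byte offset and read the payload in place.
          if let Some(offset) = queue.pop_front() {
              let sample = segment[offset as usize].as_ref().unwrap();
              println!(
                  "received sample with timestamp {} and first value {}",
                  sample.timestamp, sample.values[0]
              );
          }
      }
      ```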

      The trick is to reduce contention as much as possible and to keep cache locality. With iceoryx classic, we used MPMC queues to support multiple publishers on the same topic and reference counting across process boundaries to free the memory chunks once they were no longer used. With iceoryx2, we moved to SPSC queues, mainly to improve robustness, and solved the multi-publisher problem differently. Instead of reference counting across process boundaries for the lifetime handling of the memory, we use SPSC completion queues to send the freed chunks back to the producer process. This massively reduced memory contention and made the whole transport mechanism simpler. There is a ton of other stuff going on to make all of this safe and to be able to recover memory from crashed applications.
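
      Again not the actual implementation, just a toy sketch of that ownership scheme, with std channels standing in for the shared-memory SPSC queues (each end is used by exactly one thread): the consumer hands every finished chunk's offset back over a completion queue, so only the producer ever touches the free list and no cross-process reference counting is needed.

      ```rust
      use std::sync::mpsc::sync_channel;
      use std::thread;

      const CHUNKS: usize = 4; // chunks in the producer-owned data segment

      fn main() {
          // Data queue: producer -> consumer, carries offsets of filled chunks.
          let (data_tx, data_rx) = sync_channel::<usize>(CHUNKS);
          // Completion queue: consumer -> producer, returns offsets of freed chunks.
          let (done_tx, done_rx) = sync_channel::<usize>(CHUNKS);

          let consumer = thread::spawn(move || {
              // Consumer: take an offset, "read" the chunk, hand the offset back.
              for offset in data_rx {
                  println!("consumer processed chunk at offset {offset}");
                  done_tx.send(offset).expect("producer gone");
              }
          });

          // Producer: sole owner of the free list, so no shared reference counting.
          let mut free_list: Vec<usize> = (0..CHUNKS).collect();
          for sample in 0..16 {
              // Reclaim every chunk the consumer has finished with.
              while let Ok(offset) = done_rx.try_recv() {
                  free_list.push(offset);
              }
              // Loan a chunk, "write" the sample into it, publish only its offset.
              if let Some(offset) = free_list.pop() {
                  println!("producer wrote sample {sample} into chunk at offset {offset}");
                  data_tx.send(offset).expect("consumer gone");
              }
              // If the free list is empty, all chunks are still in flight; a real
              // implementation would block, retry, or report back-pressure here.
          }

          drop(data_tx); // close the data queue so the consumer loop ends
          while done_rx.recv().is_ok() {} // drain remaining completions
          consumer.join().unwrap();
      }
      ```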

      • KRAW@linux.community · 22 hours ago

        Thanks for the details! I have done MPI work in the past, so I was curious how an MPI implementation and iceoryx2 might be similar/different regarding local IPC transfers. It’d be interesting to do a detailed review of the two to see if they can benefit from each other.