
Comments (10)

Tradias commented on August 23, 2024

I have implemented the above-mentioned asio::defer-style code on a separate branch: https://github.com/Tradias/asio-grpc/tree/grpc_context-poll

Still needs some more thought on the API design. Usage for now is:

#include <agrpc/pollContext.hpp>

// Must stay alive until grpc_context stops
agrpc::PollContext context{io_context.get_executor()};
context.poll(grpc_context);

io_context.run();

Performance on an otherwise idle io_context seems to be almost identical to running GrpcContext on its own thread, which is great news.

name               | req/s | avg. latency | 90 % in  | 95 % in  | 99 % in  | avg. cpu | avg. memory
cpp_asio_grpc_run  | 38882 | 25.58 ms     | 27.32 ms | 27.78 ms | 29.00 ms | 101.16%  | 20.86 MiB
cpp_asio_grpc_poll | 38430 | 25.89 ms     | 27.57 ms | 28.09 ms | 29.26 ms | 102.35%  | 22.19 MiB

from asio-grpc.

Tradias commented on August 23, 2024

grpc::CompletionQueue is already an event loop. Some thread must repeatedly invoke its Next function or (for performance reasons) remain suspended in a call to it. Yes, it can be invoked with an immediate deadline, in which case it behaves like a poll and might be suitable for integration with an io_context. I could imagine something like the following as well:

asio::defer(io_context, [&] {
  grpc_context.poll();
  asio::defer(io_context, [&] {
    grpc_context.poll();
    // ... and so on, recursively
  });
});

Like I said, I will try it out and see what can be done.


Tradias commented on August 23, 2024

Hi, I am glad to hear that my library is helpful.

When I started writing the library I tried doing something like:

while(!io_context.stopped() && !grpc_context.stopped()) {
  io_context.run_one();
  grpc_context.run_one();
}

but the performance was very bad. I could try it again and see whether I can speed it up. I will put that on the agenda for v1.5.0.


If you have multiple execution contexts you could try to declare one of them as the "main" context where all your business logic runs. Let's assume you have created a tcp::socket with an io_context:

asio::co_spawn(
  grpc_context,
  [&]() -> asio::awaitable<void> {
    // ... some business logic that will be performed in the thread of the grpc_context

    // Interaction with the io_context is thread-safe as long as you do not use one of the 
    // concurrency hints.
    co_await socket.async_wait(asio::ip::tcp::socket::wait_read, asio::use_awaitable);
    // async_wait will automatically dispatch back to the grpc_context when it completes

    // ... some more business logic that will be performed in the thread of the grpc_context

    // It is also possible to explicitly switch to the grpc_context. By using asio::dispatch 
    // it will be a no-op if we already are on the grpc_context.
    co_await asio::dispatch(asio::bind_executor(grpc_context, asio::use_awaitable));
  },
  asio::detached);

I have recently added an example that uses grpc_context and io_context, maybe that can give you some more ideas: file-transfer-client and file-transfer-server


ashtum commented on August 23, 2024

Thanks for your response,
What is the reason for not using asio::io_context? is it because it needs to work with libunifex too?


ashtum commented on August 23, 2024

Thanks for your work,
I was thinking about what it would take for multiple async libraries to use the same event loop (e.g. somebody else is working on a C++ async MySQL lib). It seems we need some sort of standard I/O scheduler in the STL, which is far from happening anytime soon.
But I think it is necessary for efficient single-threaded concurrent programs, where we could use all sorts of synchronization primitives like async_mutex, when_all, when_any, channels and condition_variables without any locks or atomic operations.


CaptainTrunky commented on August 23, 2024

Hi, thanks for sharing your awesome solution with us!

It looks like the snippet below is exactly what I need. I'm working on a project that requires both a gRPC and a REST-like API at the same time: imagine that I accept REST requests for authorization and use the given credentials to execute gRPC calls. Am I correct that I may use this approach to run a coroutine that executes a gRPC call? At the moment I plan to use something like grpc-gateway, but I'm considering other options.

(quoting Tradias's earlier reply in full: the run_one interleaving attempt and the co_spawn "main"-context snippet)


Tradias commented on August 23, 2024

Correct. I assume in your case you would want the io_context to be the "main" context. In that case all you need to change is:

asio::co_spawn(
  io_context,
  [&]() -> asio::awaitable<void> {
    // Some REST API logic:
    co_await socket.async_wait(asio::ip::tcp::socket::wait_read, asio::use_awaitable);

    // A client streaming RPC, just an example
    grpc::ClientContext client_context;
    example::v1::Response response;
    std::unique_ptr<grpc::ClientAsyncWriter<example::v1::Request>> writer;
    // Must use asio::bind_executor because asio::this_coro::executor does not refer to a GrpcExecutor
    co_await agrpc::request(&example::v1::Example::Stub::AsyncClientStreaming, stub, client_context,
                            writer, response, asio::bind_executor(grpc_context, asio::use_awaitable));
    // Now executing in the thread that called grpc_context.run(). We can still interact with the asio IoObjects,
    // like the socket, from here since they are thread-safe (unless you have set certain concurrency hints).
  },
  asio::detached);

And then run the io_context and grpc_context:

agrpc::GrpcContext grpc_context{std::make_unique<grpc::CompletionQueue>()};
auto guard = asio::make_work_guard(grpc_context);

asio::io_context io_context{1};

asio::co_spawn(io_context, ...);

std::thread grpc_context_thread{[&] { grpc_context.run(); }};
io_context.run();
guard.reset();
grpc_context_thread.join();


vangork commented on August 23, 2024

(quoting Tradias's PollContext announcement and benchmark table from above)

Will it support an io_context with a thread pool enabled (io_context.run() called from multiple threads)?


Tradias commented on August 23, 2024

@vangork yes it should, although I would expect the performance to be slightly worse than running it on a single thread.


Tradias commented on August 23, 2024

I pushed a client and server example showing how to run io_context and grpc_context in the same thread with the new PollContext. Here are some performance numbers from my machine:

  • Idle io_context, loaded grpc_context sharing one thread: ~2.5% slower RPC performance compared to grpc_context.run() on its own thread
  • Loaded io_context, idle grpc_context sharing one thread: Seemingly no slowdown of the io_context
  • Loaded io_context, loaded grpc_context sharing one thread: ~2.5% slower RPC performance compared to grpc_context.run() on its own thread and 88% slower io_context performance.

I used grpc_bench to load the grpc_context and repeated calls to asio::post(io_context) to load the io_context, so real-world io_context usage may show different performance characteristics.

Another important thing to note is that the PollContext will bring CPU consumption of the shared thread to 100% even while io_context and grpc_context are idle.

