C++ – Blocking Function Call with Asynchronous Content

asynchronous-programming, blocking-function-call, c++

I am sure that this is a common design pattern, but I seem to have a blind-spot.


I have a requirement that the function call from Application to Service be blocking, but service needs to do something asynchronous.

Well, I can handle that, BUT, there can be multiple applications and there is only one service.

While the service should block the application, it should also be able to handle multiple parallel requests from different applications.

I suppose I need to spawn a new thread for each application's request, but how can that thread then return control to the application when it finishes?

I don't mind an assembler insert to store the application’s return address & pop it later, but is that really necessary?

Do I need to do anything special in C++ to mark the service function as re-entrant?

Can someone please lift my blindfold?


[Update] There is no correlation between the applications.
Also, only the service is allowed to spawn threads, not the applications. So I am thinking of a service main spawning a pool of service threads, but I don't see how to handle the blocking function call and its return.

Best Answer

You have to use a blocking construct on a per-request basis.

This construct is known as a Future (see "Futures and promises").

Each request will have its own Future. Once the request processing is started (possibly in a separate pool of threads, as you have described), the caller will be blocked on the Future's fulfillment, or failure.

When the result arrives, the Future is unblocked, and the call will return to the application.

When using this pattern, one must take great care to prevent deadlocks and zombies. In other words, the Future needs to be unblocked at some point, whether the result is a success or a failure (and in the failure case, throwing an exception back to the application is fair game) - the call must not simply hang there indefinitely.
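As a rough illustration, here is a minimal sketch of the pattern using `std::promise` / `std::future` (the function name and strings are invented for the example): a worker thread fulfils the promise, or stores an exception, while the caller blocks on `get()` until one of those happens.

```cpp
#include <future>
#include <stdexcept>
#include <string>
#include <thread>

// Hypothetical blocking entry point: each call creates its own
// promise/future pair, so concurrent requests do not interfere.
std::string blocking_request(const std::string& input) {
    std::promise<std::string> promise;
    std::future<std::string> result = promise.get_future();

    // The asynchronous part: a worker thread does the real work.
    std::thread worker([p = std::move(promise), input]() mutable {
        try {
            // ... real asynchronous work would happen here ...
            p.set_value("processed: " + input);
        } catch (...) {
            // On failure, propagate the exception to the blocked caller
            // instead of leaving it hanging forever.
            p.set_exception(std::current_exception());
        }
    });
    worker.detach();

    // The caller blocks here until set_value or set_exception runs;
    // get() rethrows any stored exception in the caller's thread.
    return result.get();
}
```

Note that a stored exception surfaces exactly where the application made the blocking call, which matches the "throwing an exception back to the application" behaviour described above.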

As to thread-pool design:

  • There will be at least one thread per simultaneous requester (i.e. application), which will be blocked during the request.
    • If the applications and the service are inside the same process, these threads are the same as the application threads, which will become blocked.
  • There will be a separate thread pool with as many threads as are necessary to do the work between the service and the internet. These threads are in addition to the request-accepting threads.
    • The exact number depends on how the work is done. It is possible to use an asynchronous pattern (e.g. the Reactor pattern) here, which might reduce the number of worker threads needed, but that will not affect the number of request-accepting threads.
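To make the thread-pool layout concrete, here is a hypothetical sketch (the class, method names, and result strings are invented): a fixed pool of worker threads drains a shared task queue, and each request-accepting thread enqueues a `std::packaged_task` and blocks on the matching future, so the two sets of threads stay decoupled.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>
#include <future>

class Service {
public:
    explicit Service(std::size_t workers) {
        for (std::size_t i = 0; i < workers; ++i)
            pool_.emplace_back([this] { run(); });
    }

    ~Service() {
        {
            std::lock_guard<std::mutex> lock(m_);
            stop_ = true;
        }
        cv_.notify_all();
        for (auto& t : pool_) t.join();
    }

    // Blocking call: enqueue the work, then wait on its future.
    std::string request(std::string input) {
        std::packaged_task<std::string()> task(
            [in = std::move(input)] { return "done: " + in; });
        std::future<std::string> f = task.get_future();
        {
            std::lock_guard<std::mutex> lock(m_);
            queue_.push(std::move(task));
        }
        cv_.notify_one();
        return f.get();  // the requester blocks here
    }

private:
    void run() {
        for (;;) {
            std::packaged_task<std::string()> task;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return stop_ || !queue_.empty(); });
                if (stop_ && queue_.empty()) return;
                task = std::move(queue_.front());
                queue_.pop();
            }
            task();  // fulfils the future, unblocking the requester
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::packaged_task<std::string()>> queue_;
    std::vector<std::thread> pool_;
    bool stop_ = false;
};
```

With, say, `Service s(4);`, any number of application threads can call `s.request(...)` concurrently; each blocks only on its own future while four workers process the queue.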

If the request from Application to Service occurs over a network, there is a decoupling between Application threads and Service threads (in other words, blocking an Application request does not tie up a Service thread at all; it is just a not-yet-answered network connection). In that case the Service can also use the Reactor pattern, which means you can further reduce the number of threads.
