Multithreading – Why It Improves Performance

concurrency, multithreading, x86

I have a question about why programmers seem to love concurrency and multi-threaded programs in general.

I'm considering two main approaches here:

  • an async approach, essentially based on signals (or simply "async", as many papers and languages call it, the new C# 5.0 for example), with a "companion thread" that manages the policy of your pipeline
  • a concurrent, or multi-threading, approach (a sketch contrasting the two follows this list)
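To make the comparison concrete, here is a minimal C++ sketch of the two styles. C++ has no standard signal-driven async, so `std::async`/`std::future` stands in for the async side here, and `heavy_work` is a made-up placeholder, not code from either test:

```cpp
#include <future>
#include <iostream>
#include <thread>

// Hypothetical stand-in for a long computation.
int heavy_work(int x) { return x * x; }

int main() {
    // Multi-threading approach: manage the thread and its result yourself.
    int thread_result = 0;
    std::thread worker([&] { thread_result = heavy_work(21); });
    worker.join();  // explicit join; forgetting it terminates the program

    // Async approach: request the result, keep working, block only on get().
    std::future<int> pending = std::async(std::launch::async, heavy_work, 21);
    // ... the caller stays free to do other work here ...
    int async_result = pending.get();

    std::cout << thread_result << ' ' << async_result << '\n';
}
```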

I will just say that I'm thinking about the hardware here and the worst-case scenario, and that I have tested these two paradigms myself. The async paradigm is such a clear winner that I don't get why people talk about multi-threading 90% of the time when they want to speed things up or make good use of their resources.

I have tested multi-threaded and async programs on an old machine with an Intel quad-core that doesn't have a memory controller inside the CPU; the memory is managed entirely by the motherboard. In this case performance is horrible with a multi-threaded application: even a relatively low number of threads, like 3 to 5, can be a problem. The application is unresponsive, slow, and unpleasant.

A good async approach, on the other hand, is probably not faster, but it's not worse either: my application just waits for the result without hanging, it stays responsive, and it scales much better.

I have also discovered that a context switch in the threading world is not that cheap in real-world scenarios. It is in fact quite expensive, especially when you have more than two threads that need to cycle and swap among each other to get computed.
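As a rough illustration of that cost, the following C++ sketch forces hand-offs between two threads with a condition variable. The iteration count is arbitrary, and forced hand-offs only approximate scheduler-driven context switching, but it gives a feel for how expensive bouncing between threads can be:

```cpp
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

int main() {
    constexpr int N = 100000;  // arbitrary; raise it for steadier numbers
    std::mutex m;
    std::condition_variable cv;
    int turn = 0;  // 0: main thread's turn, 1: worker's turn

    auto start = std::chrono::steady_clock::now();
    std::thread worker([&] {
        for (int i = 0; i < N; ++i) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [&] { return turn == 1; });  // sleep until our turn
            turn = 0;
            cv.notify_one();  // hand the token back, forcing a switch
        }
    });
    for (int i = 0; i < N; ++i) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return turn == 0; });
        turn = 1;
        cv.notify_one();
    }
    worker.join();

    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                  std::chrono::steady_clock::now() - start).count();
    std::cout << "ping-pong: " << ms << " ms for " << 2 * N << " hand-offs\n";
}
```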

On modern CPUs the situation is not really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine, and the memory controller works the same way as on the old machine with an external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated, or that newer CPUs have more than two cores, is no bargain for me.

From what I have experienced, the concurrent approach is good in theory but not that good in practice. With the memory model imposed by the hardware it's hard to make good use of this paradigm, and it also introduces a lot of issues, ranging from the design of my data structures to the joining of multiple threads.

Also, neither paradigm offers any guarantee about when a task or job will be done at a given point in time, which makes them really similar from a functional point of view.

Given the x86 memory model, why do the majority of people suggest using concurrency with C++, and not just an async approach? And why not consider the worst-case scenario of a computer where the context switch is probably more expensive than the computation itself?

Best Answer

You have multiple cores/processors; use them.

Async is best for heavy IO-bound processing, but what about heavy CPU-bound processing?

The problem arises when single-threaded code blocks (i.e., gets stuck) on a long-running operation. For instance, remember back when printing a word processor document would freeze the whole application until the job was sent? That freeze is a side effect of a single-threaded application blocking during a CPU-intensive task.

In a multi-threaded application, CPU-intensive tasks (e.g., a print job) can be sent to a background worker thread, thereby freeing up the UI thread.
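A minimal C++ sketch of that hand-off; the sleeps and the `long_job` name are just placeholders for real work and a real event loop:

```cpp
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

// Hypothetical long-running job (stand-in for a print spool, an encode, etc).
void long_job(std::atomic<bool>& done) {
    std::this_thread::sleep_for(std::chrono::seconds(2));  // simulate work
    done = true;
}

int main() {
    std::atomic<bool> done{false};
    std::thread worker(long_job, std::ref(done));  // offload to the background

    // The "UI" thread keeps servicing events instead of blocking.
    while (!done) {
        std::cout << "UI still responsive...\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    }
    worker.join();
    std::cout << "Job finished.\n";
}
```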

Likewise, in a multi-process application, the job can be sent via messaging (e.g., IPC, sockets, etc.) to a subprocess designed specifically to process jobs.
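Here's a rough sketch of that pattern using fork and pipes. This is POSIX-only, error handling is omitted, and the "job" format is a made-up illustration:

```cpp
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

// The parent hands a "job" to a child process over a pipe; the child
// does the work in its own address space and sends the result back.
int main() {
    int to_child[2], to_parent[2];
    pipe(to_child);
    pipe(to_parent);

    if (fork() == 0) {                       // child: the job processor
        char buf[32] = {};
        read(to_child[0], buf, sizeof buf - 1);  // receive the job
        int n = atoi(buf);
        char out[32];
        int len = snprintf(out, sizeof out, "%d", n * n);  // "process" it
        write(to_parent[1], out, len);           // reply to the parent
        _exit(0);
    }

    write(to_child[1], "12", 2);             // parent: submit a job
    char buf[32] = {};
    read(to_parent[0], buf, sizeof buf - 1);
    printf("result from subprocess: %s\n", buf);
    wait(nullptr);                           // reap the child
}
```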

In practice, async and multi-threaded/multi-process code each have their benefits and drawbacks.

You can see the trend in the major cloud platforms: they offer instances specialized for CPU-bound processing and instances specialized for IO-bound processing.

Examples:

  • Storage (e.g., Amazon S3, Google Cloud Drive) is CPU bound
  • Web servers are IO bound (Amazon EC2, Google App Engine)
  • Databases are both: CPU bound for writes/indexing and IO bound for reads

To put it into perspective...

A web server is a perfect example of a platform that is strongly IO bound. A multi-threaded web server that assigns one thread per connection doesn't scale well, because every thread incurs more overhead due to the increased amount of context switching and thread locking on shared resources. An async web server, by contrast, multiplexes all connections on a single thread with an event loop, avoiding most of that overhead.

Likewise, an application specialized for encoding video works much better in a multi-threaded environment, because the heavy processing involved would otherwise block the main thread until the work was done. There are ways to mitigate this, but it's much easier to have a single thread managing a queue, a second thread managing cleanup, and a pool of threads managing the heavy processing. Communication between threads happens only when tasks are assigned or completed, so thread-locking overhead is kept to a bare minimum.
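A simplified C++ sketch of that layout. For brevity the dedicated queue and cleanup threads are collapsed into one shared queue plus a worker pool, and locking happens only at task hand-off:

```cpp
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    explicit ThreadPool(unsigned n) {
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }
    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lk(m_);
            stopping_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join();  // drain the queue, then join
    }
    void submit(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lk(m_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();  // contention only at assignment time
    }
private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return stopping_ || !tasks_.empty(); });
                if (stopping_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();  // heavy processing runs without holding the lock
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> tasks_;
    std::vector<std::thread> workers_;
    bool stopping_ = false;
};

int main() {
    ThreadPool pool(4);
    for (int i = 0; i < 8; ++i)
        pool.submit([i] { std::cout << "encoding chunk " << i << "\n"; });
}   // destructor drains remaining tasks and joins the workers
```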

The best applications often use a combination of both. A web app, for instance, may use nginx (i.e., async and single-threaded) as a load balancer to manage the torrent of incoming requests, a similar async web server (e.g., Node.js) to handle the HTTP requests, and a set of multi-threaded servers to handle uploading/streaming/encoding content, and so on.

There have been a lot of religious wars over the years between the multi-threaded, multi-process, and async models. As with most things, the best answer really is, "it depends."

It follows the same line of thinking that justifies using GPU and CPU architectures in parallel: two specialized systems running in concert can deliver a much greater improvement than a single monolithic approach.

Neither is better, because both have their uses. Use the best tool for the job.

Update:

I removed the reference to Apache and made a minor correction. Apache uses a multi-process model, which forks a process for every request, increasing the amount of context switching at the kernel level. In addition, since memory can't be shared across processes, each request incurs an additional memory cost.

Multi-threading avoids that extra memory cost because threads share memory. Shared memory removes the per-request memory overhead but still incurs the penalty of increased context switching. In addition, to ensure that race conditions don't happen, thread locks (which grant exclusive access to one thread at a time) are required for any resource shared across threads.
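A minimal sketch of that locking requirement: two threads bump a shared counter, and every increment must hold the lock to stay exclusive (the iteration count is arbitrary):

```cpp
#include <iostream>
#include <mutex>
#include <thread>

int main() {
    long counter = 0;
    std::mutex m;

    auto bump = [&] {
        for (int i = 0; i < 1000000; ++i) {
            std::lock_guard<std::mutex> lk(m);  // exclusive access
            ++counter;  // without the lock this increment is a data race
        }
    };

    std::thread a(bump), b(bump);
    a.join();
    b.join();
    std::cout << counter << "\n";  // always 2000000 with the lock held
}
```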

It's funny that you say programmers "seem to love concurrency and multi-threaded programs in general." Multi-threaded programming is universally dreaded by anybody who has done a substantial amount of it. Deadlocks (where two threads each hold a resource the other needs, blocking both from ever finishing) and race conditions (where the program randomly produces the wrong result due to incorrect sequencing) are among the most difficult bugs to track down and fix.
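For illustration, here is the classic deadlock shape in C++, plus one standard way to avoid it: `std::scoped_lock` (C++17) acquires both locks atomically, so no circular wait can form:

```cpp
#include <mutex>
#include <thread>

std::mutex a, b;

// Deadlock-prone: each thread grabs one lock, then waits for the other.
// If bad1 and bad2 run concurrently, both can stall forever.
void bad1() { std::lock_guard<std::mutex> la(a); std::lock_guard<std::mutex> lb(b); }
void bad2() { std::lock_guard<std::mutex> lb(b); std::lock_guard<std::mutex> la(a); }

// Fix: acquire both locks in one deadlock-free operation.
void good() { std::scoped_lock lk(a, b); /* use both resources safely */ }

int main() {
    std::thread t1(good), t2(good);  // run only the safe version
    t1.join();
    t2.join();
}
```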

Update 2:

Contrary to the blanket statement that IPC is faster than network (i.e., socket) communication: that's not always the case. Keep in mind that these are generalizations, and implementation-specific details may have a huge impact on the result.
