The term "kernel threads" can be used to refer to actual threads that run entirely in kernel space or it can refer to user-space threads scheduled by the kernel. The term "kernel-supported" threads means the latter, threads that run in user-space but are facilitated by the kernel, which usually means the kernel schedules them.
"User-level threads" usually means threads visible to user space. That is, what you create when you call your threading standard's "create thread" function. Generally, the term "user-level thread" is used to mean a thread created by the application code regardless of how it's implemented by the system. It may be a pure user-space thread with little to no kernel support or it may be a thread scheduled by the kernel.
The pthreads standard can be implemented as pure user-space threads (where the kernel schedules the process and the process schedules the threads), kernel-supported threads (where the kernel schedules the threads directly), or a hybrid approach (where the kernel schedules a kernel-level thread which then, in user-space, schedules a user-level thread). The standard doesn't demand any one particular means of implementation. The most common implementation is a 1-to-1 mapping where each user-level thread has a corresponding thread that is scheduled by the kernel.
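For a concrete picture, here is a minimal pthreads sketch (compile with `-pthread`). On Linux with NPTL, each thread created this way maps 1-to-1 onto a kernel-scheduled thread, but the same code would run unchanged on an N:1 or hybrid implementation:

```c
#include <pthread.h>
#include <stdio.h>

/* Entry point for the new user-level thread. */
static void *worker(void *arg)
{
    printf("hello from %s\n", (const char *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;

    /* Ask the pthreads implementation for a new thread. How it is
     * scheduled (pure user-space, 1-to-1 kernel threads, or a hybrid)
     * is up to the implementation, not the standard. */
    if (pthread_create(&tid, NULL, worker, "worker thread") != 0)
        return 1;

    pthread_join(tid, NULL);  /* wait for the thread to finish */
    return 0;
}
```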
You have multiple cores/processors, use them
Async is best for heavy IO-bound processing, but what about heavy CPU-bound processing?
The problem arises when single-threaded code blocks (ie gets stuck) on a long-running task. For instance, remember back when printing a word processor document would make the whole application freeze until the job was sent? That freeze is the side-effect of a single-threaded application blocking during a CPU-intensive task.
In a multi-threaded application, CPU-intensive tasks (ex a print job) can be handed off to a background worker thread, thereby freeing up the UI thread.
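As a rough sketch of that pattern (render_print_job here is a hypothetical stand-in for the CPU-heavy work), the UI thread hands the job off to a detached pthread and returns immediately:

```c
#include <pthread.h>
#include <stdio.h>

/* Hypothetical CPU-heavy task, eg rasterizing a document for printing. */
static void *render_print_job(void *doc)
{
    printf("rendering %s in the background...\n", (const char *)doc);
    /* ... expensive work happens here, off the UI thread ... */
    return NULL;
}

/* Called from the UI thread; returns immediately so the UI stays responsive. */
static int start_print_job(char *doc)
{
    pthread_t tid;
    if (pthread_create(&tid, NULL, render_print_job, doc) != 0)
        return -1;
    pthread_detach(tid);   /* no join needed; the worker cleans up after itself */
    return 0;
}

int main(void)
{
    start_print_job("report.doc");
    pthread_exit(NULL);    /* end main thread; process lives until the worker ends */
}
```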
Likewise, in a multi-process application the job can be sent via messaging (ex IPC, sockets, etc) to a subprocess designed specifically to process jobs.
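A minimal sketch of the multi-process variant, using fork() and a pipe as the messaging channel (the job string is just a placeholder):

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1)                 /* fds[0] = read end, fds[1] = write end */
        return 1;

    pid_t pid = fork();
    if (pid == 0) {                      /* child: the dedicated job processor */
        char job[64];
        close(fds[1]);
        ssize_t n = read(fds[0], job, sizeof(job) - 1);
        if (n > 0) {
            job[n] = '\0';
            printf("subprocess handling job: %s\n", job);
        }
        _exit(0);
    }

    /* parent: send the job over the pipe and keep going */
    close(fds[0]);
    const char *job = "encode video.mp4";
    write(fds[1], job, strlen(job));
    close(fds[1]);
    wait(NULL);                          /* reap the child */
    return 0;
}
```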
In practice, async and multi-threaded/process code each have their benefits and drawbacks.
You can see the trend in the major cloud platforms, as they offer instances specialized for CPU-bound processing and others specialized for IO-bound processing.
Examples:
- Storage (ex Amazon S3, Google Cloud Drive) is IO bound
- Web Servers are IO bound (Amazon EC2, Google App Engine)
- Databases are both, CPU bound for writes/indexing and IO bound for reads
To put it into perspective...
A webserver is a perfect example of a platform that is strongly IO bound. A multi-threaded webserver that assigns one thread per connection doesn't scale well, because every additional thread adds overhead in the form of extra context switching and thread locking on shared resources. An async webserver, by contrast, handles all connections on a single thread in a single address space, multiplexing them with an event loop.
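As a sketch of that model, here is a single-threaded echo server that multiplexes every connection with select(); the port and buffer size are arbitrary choices, and a production server would use epoll/kqueue and real error handling:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);              /* arbitrary port */
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 16);

    fd_set fds;
    FD_ZERO(&fds);
    FD_SET(lfd, &fds);
    int maxfd = lfd;

    for (;;) {
        /* One thread, one address space: sleep until any socket is ready. */
        fd_set ready = fds;
        if (select(maxfd + 1, &ready, NULL, NULL, NULL) == -1)
            break;
        for (int fd = 0; fd <= maxfd; fd++) {
            if (!FD_ISSET(fd, &ready))
                continue;
            if (fd == lfd) {                  /* new connection */
                int cfd = accept(lfd, NULL, NULL);
                FD_SET(cfd, &fds);
                if (cfd > maxfd)
                    maxfd = cfd;
            } else {                          /* client data: echo it back */
                char buf[512];
                ssize_t n = read(fd, buf, sizeof(buf));
                if (n <= 0) {
                    close(fd);
                    FD_CLR(fd, &fds);
                } else {
                    write(fd, buf, (size_t)n);
                }
            }
        }
    }
    return 0;
}
```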
Likewise, an application specialized for encoding video would work much better in a multi-threaded environment because the heavy processing involved would block the main thread until the work was done. There are ways to mitigate this, but it's much easier to have a single thread managing a queue, a second thread managing cleanup, and a pool of threads managing the heavy processing. Communication between threads happens only when tasks are assigned/completed, so thread-locking overhead is kept to a bare minimum.
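Here's a hedged sketch of that queue-plus-pool structure in pthreads (integer job IDs stand in for chunks of video, printf stands in for the encoding work); note that the lock is only touched when a job changes hands:

```c
#include <pthread.h>
#include <stdio.h>

#define POOL_SIZE 4
#define QUEUE_CAP 8

/* Bounded job queue shared by the queue-manager thread and the pool. */
static int queue[QUEUE_CAP];
static int head, tail, count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_full = PTHREAD_COND_INITIALIZER;

static void enqueue(int job)
{
    pthread_mutex_lock(&lock);            /* locking happens only here... */
    while (count == QUEUE_CAP)
        pthread_cond_wait(&not_full, &lock);
    queue[tail] = job;
    tail = (tail + 1) % QUEUE_CAP;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

static int dequeue(void)
{
    pthread_mutex_lock(&lock);            /* ...and here, when work is taken */
    while (count == 0)
        pthread_cond_wait(&not_empty, &lock);
    int job = queue[head];
    head = (head + 1) % QUEUE_CAP;
    count--;
    pthread_cond_signal(&not_full);
    pthread_mutex_unlock(&lock);
    return job;
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        int job = dequeue();
        if (job < 0)                      /* sentinel: shut down */
            return NULL;
        printf("encoding chunk %d\n", job);  /* heavy work runs lock-free */
    }
}

int main(void)
{
    pthread_t pool[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_create(&pool[i], NULL, worker, NULL);

    for (int job = 0; job < 20; job++)    /* main acts as the queue manager */
        enqueue(job);
    for (int i = 0; i < POOL_SIZE; i++)
        enqueue(-1);                      /* one shutdown sentinel per worker */

    for (int i = 0; i < POOL_SIZE; i++)
        pthread_join(pool[i], NULL);
    return 0;
}
```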
The best applications often use a combination of both. A webapp, for instance, may use nginx (ie async single-threaded) as a load balancer to manage the torrent of incoming requests, a similar async webserver (ex Node.js) to handle the http requests, and a set of multi-threaded servers to handle uploading/streaming/encoding content, etc...
There have been a lot of religious wars over the years between the multi-threaded, multi-process, and async models. As with most things, the best answer really should be, "it depends."
It follows the same line of thinking that justifies using GPU and CPU architectures in parallel. Two specialized systems running in concert can yield a much greater improvement than a single monolithic approach.
Neither is better, because both have their uses. Use the best tool for the job.
Update:
I removed the reference to Apache and made a minor correction. Apache uses a multi-process model which forks a process for every request, increasing the amount of context switching at the kernel level. In addition, since memory can't be shared across processes, each request incurs an additional memory cost.
Multi-threading avoids that additional memory requirement because it relies on shared memory between threads. Shared memory removes the extra memory overhead but still incurs the penalty of increased context switching. In addition -- to ensure that race conditions don't happen -- thread locks (which grant exclusive access to one thread at a time) are required for any resources that are shared across threads.
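For illustration, this is what such a lock looks like with pthreads; without the mutex, the two threads below would race on the shared counter and the final value would be unpredictable:

```c
#include <pthread.h>
#include <stdio.h>

static long counter;                       /* resource shared across threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);         /* exclusive access, one thread at a time */
        counter++;                         /* without the lock, this is a race */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("%ld\n", counter);              /* always 2000000 with the lock in place */
    return 0;
}
```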
It's funny that you say, "programmers seem to love concurrency and multi-threaded programs in general." Multi-threaded programming is universally dreaded by anybody who has done any substantial amount of it. Deadlocks (a bug where two threads each hold a lock the other needs, blocking both from ever finishing) and race conditions (where the program intermittently produces the wrong result because operations run in an unintended order) are some of the most difficult bugs to track down and fix.
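A sketch of the lock-ordering discipline that prevents the deadlock described above; both threads take the locks in the same global order, and the comments point out the one change that would make them hang forever:

```c
#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Both threads acquire the locks in the same global order (a, then b).
 * If thread2 instead locked b first and then a, each thread could grab
 * one lock and wait forever for the other: a classic deadlock. */
static void *thread1(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    /* ... use both shared resources ... */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *thread2(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock_a);   /* same order as thread1: safe */
    pthread_mutex_lock(&lock_b);
    /* ... */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```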
Update 2:
Regarding the blanket statement that IPC is faster than network (ie socket) communication: that's not always the case. Keep in mind that these are generalizations, and implementation-specific details may have a huge impact on the result.
Best Answer
I ran a series of tests, building llvm (in Debug+Asserts mode) on a machine with two cores and 8 GB of RAM:
Oddly enough, the build time seems to climb until 10 jobs and then suddenly drops below the time it takes to build with two jobs (building with one job takes about double the time; it's not included in the graph).
The minimum seems to be `7*$cores` in this case.