You have multiple cores/processors; use them.
Async is best for heavy IO-bound processing, but what about heavy CPU-bound processing?
The problem arises when single-threaded code blocks (i.e., gets stuck) on a long-running operation. For instance, remember back when printing a document from a word processor would freeze the whole application until the job was sent? That freeze is a side effect of a single-threaded application blocking during a CPU-intensive task.
In a multi-threaded application, CPU-intensive tasks (e.g., a print job) can be sent to a background worker thread, freeing up the UI thread.
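As a rough illustration, here is a minimal sketch in Go, with a goroutine playing the role of the worker thread (the printJob function and its timings are made up for the example): the heavy job runs in the background while the main loop stays responsive.

```go
package main

import (
	"fmt"
	"time"
)

// printJob simulates a CPU-heavy task (hypothetical stand-in for real work).
func printJob(doc string, done chan<- string) {
	time.Sleep(2 * time.Second) // pretend to render and spool the document
	done <- doc
}

func main() {
	done := make(chan string)
	go printJob("report.docx", done) // offload to a background worker

	// The "UI" loop keeps responding while the job runs.
	for {
		select {
		case doc := <-done:
			fmt.Println("finished printing", doc)
			return
		default:
			fmt.Println("UI still responsive...")
			time.Sleep(500 * time.Millisecond)
		}
	}
}
```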
Likewise, in a multi-process application, the job can be sent via messaging (e.g., IPC, sockets, etc.) to a subprocess designed specifically to process jobs.
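And a minimal sketch of the multi-process variant, again in Go: the job is shipped to a subprocess over a pipe. This assumes a Unix-like system with wc available; any job-processing executable would serve the same role.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Send a "job" (here: text to word-count) to a subprocess over a pipe.
	cmd := exec.Command("wc", "-w")
	cmd.Stdin = strings.NewReader("a small job shipped over IPC")

	out, err := cmd.Output() // reads the subprocess's stdout
	if err != nil {
		panic(err)
	}
	fmt.Printf("word count from subprocess: %s", out)
}
```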
In practice, async and multi-threaded/multi-process code each have their benefits and drawbacks.
You can see the trend in the major cloud platforms: they offer instances specialized for CPU-bound processing and instances specialized for IO-bound processing.
Examples:
- Storage (e.g., Amazon S3, Google Cloud Storage) is IO bound
- Web servers (e.g., Amazon EC2, Google App Engine) are IO bound
- Databases are both: CPU bound for writes/indexing and IO bound for reads
To put it into perspective...
A webserver is a perfect example of a platform that is strongly IO bound. A multi-threaded webserver that assigns one thread per connection doesn't scale well, because every additional thread means more context switching and more lock contention on shared resources. An async webserver, by contrast, handles all of its connections on a single thread with an event loop, avoiding most of that overhead.
Likewise, an application specialized for encoding video works much better in a multi-threaded environment, because the heavy processing would otherwise block the main thread until the work was done. There are ways to mitigate this, but it's much easier to have one thread managing a queue, a second thread managing cleanup, and a pool of threads doing the heavy processing. Communication between threads happens only when tasks are assigned or completed, so locking overhead is kept to a bare minimum.
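Here is a sketch of that layout in Go, using channels as the queue (the encode function is a hypothetical stand-in for the real heavy work): one goroutine feeds the queue, a pool of workers does the processing, and one goroutine handles cleanup.

```go
package main

import (
	"fmt"
	"sync"
)

// encode is a stand-in for the heavy per-item work (e.g., video encoding).
func encode(job int) string {
	return fmt.Sprintf("encoded-%d", job)
}

func main() {
	jobs := make(chan int)       // the work queue
	results := make(chan string) // completed work

	// Pool of workers doing the heavy processing.
	var wg sync.WaitGroup
	for w := 0; w < 4; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for job := range jobs {
				results <- encode(job)
			}
		}()
	}

	// Producer: the "queue manager" goroutine.
	go func() {
		for i := 0; i < 10; i++ {
			jobs <- i
		}
		close(jobs)
	}()

	// Cleanup goroutine: close results once every worker has finished.
	go func() {
		wg.Wait()
		close(results)
	}()

	for r := range results {
		fmt.Println(r)
	}
}
```

Note that the goroutines coordinate only when handing work in and out of the channels, which is exactly the "communicate only on assign/complete" pattern described above.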
The best applications often use a combination of both. A webapp, for instance, may use nginx (i.e., async and single-threaded) as a load balancer to manage the torrent of incoming requests, a similar async webserver (e.g., Node.js) to handle the HTTP requests, and a set of multi-threaded servers to handle uploading, streaming, and encoding content.
There have been a lot of religious wars over the years between the multi-threaded, multi-process, and async models. As with most things, the best answer really is, "it depends."
It follows the same line of thinking that justifies using GPU and CPU architectures in parallel: two specialized systems running in concert can yield a much greater improvement than a single monolithic approach.
Neither is better, because both have their uses. Use the best tool for the job.
Update:
I removed the reference to Apache and made a minor correction. Apache uses a multi-process model that forks a process for every request, increasing the amount of context switching at the kernel level. In addition, since memory can't be shared across processes, each request incurs an additional memory cost.
Multi-threading avoids that additional memory cost because threads share the memory of their parent process. Shared memory removes the per-request memory overhead, but a multi-threaded server still pays the penalty of increased context switching. In addition, to ensure that race conditions don't happen, thread locks (which grant exclusive access to one thread at a time) are required for any resources shared across threads.
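For example, here is a minimal Go sketch of a lock guarding shared memory (the counter itself is arbitrary): the mutex guarantees that only one thread touches the shared value at a time.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu      sync.Mutex
		counter int // shared between goroutines: cheap to share, but needs locking
	)

	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				mu.Lock() // exclusive access: one goroutine at a time
				counter++
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println(counter) // always 8000; without the lock, usually less
}
```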
It's funny that you say, "programmers seem to love concurrency and multi-threaded programs in general." Multi-threaded programming is universally dreaded by anybody who has done a substantial amount of it. Deadlocks (where two threads each hold a lock the other needs, blocking both from ever finishing) and race conditions (where the program randomly produces the wrong result due to incorrect sequencing of operations) are some of the most difficult bugs to track down and fix.
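A deadlock is unfortunately easy to reproduce. In this Go sketch, two goroutines take the same two locks in opposite order; each ends up holding one lock while waiting forever for the other (the sleeps just widen the window so the bug triggers reliably):

```go
package main

import (
	"sync"
	"time"
)

func main() {
	var a, b sync.Mutex

	// Goroutine 1 locks a then b; goroutine 2 locks b then a.
	go func() {
		a.Lock()
		time.Sleep(time.Millisecond) // widen the race window
		b.Lock()                     // waits forever: the other goroutine holds b
		b.Unlock()
		a.Unlock()
	}()
	go func() {
		b.Lock()
		time.Sleep(time.Millisecond)
		a.Lock() // waits forever: the other goroutine holds a
		a.Unlock()
		b.Unlock()
	}()

	time.Sleep(time.Second)
	a.Lock() // main blocks too; the Go runtime then reports the deadlock and aborts
}
```

The fix for this particular bug is a convention: every thread must acquire shared locks in the same global order.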
Update 2:
Regarding the blanket statement that IPC is faster than network (i.e., socket) communication: that's not always the case. Keep in mind that these are generalizations, and implementation-specific details can have a huge impact on the result.
"Will my application magically see and use multiple cores when run on a multi-core processor (because everything is managed either by the operating system or by the standard thread library), or do I have to modify my code to be aware of the multiple cores?"
Simple answer: Yes, it will usually be managed by the operating system or threading library.
The threading subsystem in the operating system will assign threads to processors on a priority basis (your option 1). In other words, when a thread has finished executing for its time allocation or blocks, the scheduler looks for the next highest priority thread and assigns that to the CPU. The details vary from operating system to operating system.
That said, options 2 (managed by the programming language) and 3 (explicit) exist. For example, the Task Parallel Library and async/await in recent versions of .NET give the developer a much easier way to write code that can be parallelized (i.e., run concurrently with itself). Functional programming languages are innately parallelizable, and some runtimes will run different parts of a program in parallel when possible.
As for option 3 (explicit), Windows lets you set a thread's affinity (specifying which processors a thread may run on). However, this is usually unnecessary in all but the fastest, most response-time-critical systems. Effective thread-to-processor allocation is highly hardware-dependent and very sensitive to the other applications running concurrently.
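For illustration, here is a sketch of explicit affinity on Linux via the golang.org/x/sys/unix package (Windows would use SetThreadAffinityMask instead; treat the platform details here as assumptions):

```go
package main

import (
	"fmt"
	"runtime"

	"golang.org/x/sys/unix" // Linux-only affinity API
)

func main() {
	// Pin this goroutine to its OS thread, then restrict that thread to CPU 0.
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()

	var set unix.CPUSet
	set.Zero()
	set.Set(0) // allow CPU 0 only

	// pid 0 means "the calling thread".
	if err := unix.SchedSetaffinity(0, &set); err != nil {
		fmt.Println("setting affinity failed:", err)
		return
	}
	fmt.Println("this thread is now pinned to CPU 0")
}
```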
If you want to experiment, create a long-running, CPU-intensive task like generating a list of prime numbers or rendering a Mandelbrot set. Now create two threads in your favorite threading library and run both on a multi-processor machine (in other words, just about anything released in the last few years). Running both tasks together should take roughly the same wall-clock time as running one alone, because they run in parallel.
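Here is a version of that experiment sketched in Go, with goroutines standing in for threads (the limit and the trial-division primality test are arbitrary choices): on a multi-core machine the parallel run should finish in roughly half the time of the sequential run.

```go
package main

import (
	"fmt"
	"time"
)

// countPrimes is the long-running, CPU-intensive task
// (deliberately slow trial division).
func countPrimes(limit int) int {
	count := 0
	for n := 2; n < limit; n++ {
		isPrime := true
		for d := 2; d*d <= n; d++ {
			if n%d == 0 {
				isPrime = false
				break
			}
		}
		if isPrime {
			count++
		}
	}
	return count
}

func main() {
	const limit = 2_000_000

	start := time.Now()
	countPrimes(limit)
	countPrimes(limit)
	fmt.Println("sequential:", time.Since(start))

	start = time.Now()
	done := make(chan int)
	go func() { done <- countPrimes(limit) }()
	go func() { done <- countPrimes(limit) }()
	<-done
	<-done
	fmt.Println("parallel:  ", time.Since(start))
}
```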
Best Answer
A process will have one thread, unless you tell it otherwise. Two processes then have two threads total. Each core on a processor can run one thread at a time. So, if you have a processor with 2 cores (or 2 processors with 1 core each):
- If you have one process with one thread, it will only ever run on one of the cores at a time (note that it can switch between cores, but will never use both at the same time).
- If you have two processes with one thread each, they can (but might not) both run at the same time, on different cores and/or different processors.
- If you have one process with two threads, both threads can (but might not) run at the same time, on different cores and/or different processors.
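If you want to observe this from code, here is a small Go sketch (Go multiplexes goroutines onto OS threads, so GOMAXPROCS is the closest runtime knob to "how many threads may run at once"):

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	fmt.Println("cores available:", runtime.NumCPU())

	// Uncommenting the next line limits the process to one running OS
	// thread at a time, so CPU-bound goroutines can no longer execute
	// simultaneously on two cores:
	// runtime.GOMAXPROCS(1)

	fmt.Println("max parallel threads:", runtime.GOMAXPROCS(0)) // 0 = query only
}
```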