You have multiple cores/processors; use them.
Async is best for heavy IO-bound processing, but what about heavy CPU-bound processing?
The problem arises when single-threaded code blocks (i.e., gets stuck) on a long-running operation. For instance, remember back when printing a document from a word processor would make the whole application freeze until the job was sent? That freeze is the side effect of a single-threaded application blocking on a CPU-intensive task.
In a multi-threaded application, CPU-intensive tasks (e.g., a print job) can be handed off to a background worker thread, freeing up the UI thread.
Likewise, in a multi-process application, the job can be sent via messaging (e.g., IPC, sockets, etc.) to a subprocess designed specifically to process jobs.
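Here's a minimal sketch of the background-thread approach in Java (the class name and the busy loop are illustrative stand-ins, not a real print pipeline): the heavy job runs on a single-threaded executor while the main thread stays responsive.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BackgroundWorkDemo {
    // A single background worker, analogous to the "print job" thread in a GUI app.
    private static final ExecutorService worker = Executors.newSingleThreadExecutor();

    public static void main(String[] args) throws Exception {
        // Hand the CPU-intensive task to the worker instead of blocking this thread.
        Future<Long> job = worker.submit(BackgroundWorkDemo::expensiveJob);

        // The "UI" thread stays free to do other things while the job runs.
        while (!job.isDone()) {
            System.out.println("still responsive...");
            Thread.sleep(100);
        }
        System.out.println("job result: " + job.get());
        worker.shutdown();
    }

    // Stand-in for a CPU-bound task such as formatting a print job.
    private static long expensiveJob() {
        long sum = 0;
        for (long i = 0; i < 2_000_000_000L; i++) sum += i;
        return sum;
    }
}
```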
In practice, async and multi-threaded/process code each have their benefits and drawbacks.
You can see this trend in the major cloud platforms: they offer instances specialized for CPU-bound processing and instances specialized for IO-bound processing.
Examples:
- Storage (e.g., Amazon S3, Google Cloud Drive) is IO bound
- Web servers (e.g., Amazon EC2, Google App Engine) are IO bound
- Databases are both: CPU bound for writes/indexing, IO bound for reads
To put it into perspective...
A webserver is a perfect example of a platform that is strongly IO bound. A multi-threaded webserver that assigns one thread per connection doesn't scale well, because every thread adds overhead from the increased context switching and from thread locking on shared resources. An async webserver, by contrast, runs in a single address space and multiplexes all connections over one event loop, so it avoids that per-connection overhead.
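To make the async model concrete, here is a minimal single-threaded echo server sketch using Java NIO (the port and buffer size are arbitrary choices): one selector loop services every connection.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

// One thread, one address space: a selector multiplexes all connections,
// so there is no per-connection thread or context-switching cost.
public class AsyncEchoServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                      // block until something is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {           // new connection: register it
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {      // data ready: echo it back
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = client.read(buf);
                    if (n == -1) { client.close(); continue; }
                    buf.flip();
                    client.write(buf);
                }
            }
        }
    }
}
```

A production server would also handle partial writes and OP_WRITE interest, but the single-loop structure is the point here.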
Likewise, an application specialized for encoding video works much better in a multi-threaded environment, because otherwise the heavy processing would lock the main thread until the work was done. There are ways to mitigate this, but it's much easier to have one thread managing a queue, a second thread managing cleanup, and a pool of threads doing the heavy processing. Communication between threads happens only when tasks are assigned or completed, so thread-locking overhead is kept to a bare minimum.
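A rough sketch of that layout, with a plain fixed thread pool standing in for the encoder workers (the job names, counts, and "encoding" are illustrative assumptions, not a real pipeline):

```java
import java.util.concurrent.*;

// One queue feeding a fixed pool of workers; threads touch shared state
// only when taking a job or reporting completion, keeping locking minimal.
public class EncodingPool {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> jobs = new LinkedBlockingQueue<>();
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());

        // Queue-manager role: enqueue work (here, fake video segments).
        for (int i = 0; i < 8; i++) jobs.put("segment-" + i);

        // Worker-pool role: each worker pulls a job and does the heavy processing.
        for (int i = 0; i < 8; i++) {
            pool.submit(() -> {
                String job = jobs.poll();
                if (job != null) {
                    System.out.println(Thread.currentThread().getName()
                            + " encoded " + job);   // stand-in for real encoding
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```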
The best applications often use a combination of both. A webapp, for instance, may use nginx (i.e., async and single-threaded) as a load balancer to manage the torrent of incoming requests, a similar async webserver (e.g., Node.js) to handle the http requests, and a set of multi-threaded servers to handle uploading/streaming/encoding content, etc...
There have been a lot of religious wars over the years between the multi-threaded, multi-process, and async models. As with most things, the best answer really should be, "it depends."
It follows the same line of thinking that justifies using GPU and CPU architectures in parallel: two specialized systems running in concert can yield a much greater improvement than a single monolithic approach.
Neither is better, because both have their uses. Use the best tool for the job.
Update:
I removed the reference to Apache and made a minor correction. Apache uses a multi-process model that forks a process for every request, increasing the amount of context switching at the kernel level. In addition, since memory can't be shared across processes, each request incurs an additional memory cost.
Multi-threading avoids that additional memory cost because it relies on memory shared between threads. Shared memory removes the per-request memory overhead but still incurs the penalty of increased context switching. In addition, to ensure that race conditions don't happen, thread locks (which grant exclusive access to only one thread at a time) are required for any resources shared across threads.
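For example, here is a minimal sketch of such a lock in Java, guarding a shared counter; without the lock, the two threads' read-modify-write steps can interleave and lose updates:

```java
import java.util.concurrent.locks.ReentrantLock;

public class SharedCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private long count = 0;

    public void increment() {
        lock.lock();            // exclusive access: only one thread at a time
        try {
            count++;            // the protected read-modify-write
        } finally {
            lock.unlock();      // always release, even if the body throws
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SharedCounter c = new SharedCounter();
        Runnable work = () -> { for (int i = 0; i < 1_000_000; i++) c.increment(); };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(c.count);    // always 2000000 with the lock held
    }
}
```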
It's funny that you say, "programmers seem to love concurrency and multi-threaded programs in general." Multi-threaded programming is universally dreaded by anybody who has done a substantial amount of it. Deadlocks (bugs where two threads each hold a resource the other has locked, blocking both from ever finishing) and race conditions (where the program randomly outputs the wrong result due to incorrect sequencing) are some of the most difficult bugs to track down and fix.
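To illustrate the first kind of bug, here is a deliberately broken sketch (not code from any real system): two threads acquire the same two locks in opposite order, and each ends up waiting forever for the lock the other holds.

```java
public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lockA) {
                sleep(100);                 // give the other thread time to grab lockB
                synchronized (lockB) {      // blocks forever: thread 2 holds lockB
                    System.out.println("never reached");
                }
            }
        }).start();

        new Thread(() -> {
            synchronized (lockB) {
                sleep(100);
                synchronized (lockA) {      // blocks forever: thread 1 holds lockA
                    System.out.println("never reached");
                }
            }
        }).start();
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) {}
    }
}
```

Acquiring locks in a consistent global order prevents this particular failure, which is why lock-ordering discipline is a common design rule.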
Update 2:
A correction to the blanket statement that IPC is faster than network (i.e., socket) communication: that's not always the case. Keep in mind that these are generalizations, and implementation-specific details may have a huge impact on the result.
After having been in this crazy business since about 1978, having spent almost all of that time in embedded real-time computing, working on multitasking, multithreaded, multi-whatever systems, sometimes with multiple physical processors, and having chased more than my fair share of race conditions, my considered opinion is that the answer to your question is quite simple.
No.
There's no good general way to trigger a race condition in testing.
Your ONLY hope is to design them completely out of your system.
When and if you find that someone else has stuffed one in, you should stake him out on an anthill, and then redesign to eliminate it. After you have designed his faux pas (pronounced f***up) out of your system, you can go release him from the ants. (If the ants have already consumed him, leaving only bones, put up a sign saying "This is what happens to people who put race conditions into XYZ project!" and LEAVE HIM THERE.)
Best Answer
Java implements the Java Memory Model, which specifies what must happen in certain situations, and dumping core is not mentioned anywhere in that document. So any Java implementation must take care to implement what the memory model says, and thus dumping core is not permitted. It does sometimes happen, though rarely, but only as a result of bugs in the implementation.
How a particular implementation achieves adherence to the Memory Model rules is left to the implementor; it is only the final behavior that matters. Others have already mentioned that the garbage collector plays an important role here, and in particular that as long as any thread holds a reference to an object, that object cannot be freed from memory.
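As a small illustration of that last point (the allocation size and sleep duration are arbitrary): an object stays alive as long as any running thread can still reach it, even after the creating code drops its own reference.

```java
// The worker thread holds a strong reference to the array, so the GC
// cannot reclaim it even after main() nulls out its own reference.
public class LivenessDemo {
    public static void main(String[] args) throws InterruptedException {
        byte[] data = new byte[64 * 1024 * 1024];
        final byte[] workerRef = data;          // strong reference held by the worker

        Thread worker = new Thread(() -> {
            try { Thread.sleep(500); } catch (InterruptedException ignored) {}
            // Still reachable here, so the GC never reclaimed it.
            System.out.println("worker sees " + workerRef.length + " bytes");
        });
        worker.start();

        data = null;    // main's reference is gone...
        System.gc();    // ...and this is only a hint; reachable objects survive
        worker.join();
    }
}
```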