I've been developing concurrent systems for several years now, and I have a pretty good grasp on the subject despite my lack of formal training (i.e. no degree).
Many of the best programmers I know didn't finish university. As for me, I studied philosophy.
[…] C/C++, C#, Java, etc.). In particular, it can be nearly impossible to recreate conditions that happen readily on one system in your development environment.
Yes.
How do you figure out what can be made concurrent vs. what has to be sequential?
We usually start with a 1,000-mile-high metaphor to clarify our architecture, first to ourselves and then to others.
Whenever we faced that problem, we always found a way to limit the visibility of concurrent objects to non-concurrent ones.
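A minimal sketch of what I mean (the class and method names are invented for illustration): the mutable map is confined inside one class, so the rest of the code only ever sees plain, synchronized methods and never touches the shared state directly.

```scala
import scala.collection.mutable

// Confinement: the thread-unsafe map never escapes this class,
// so non-concurrent code can't touch it directly.
final class ScoreBoard {
  private val scores = mutable.Map.empty[String, Int]

  def add(player: String, points: Int): Unit = synchronized {
    scores(player) = scores.getOrElse(player, 0) + points
  }

  def get(player: String): Option[Int] = synchronized {
    scores.get(player)
  }
}
```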
Lately I discovered actors in Scala, and I saw that my old solutions were a kind of "mini-actor", much less powerful than Scala's. So my suggestion is to start from there.
Another suggestion is to sidestep as many problems as possible: for example, we use a centralized cache (Terracotta) instead of keeping maps in memory, inner-class callbacks instead of synchronized methods, message passing instead of writes to shared memory, and so on.
With Scala it's all much easier anyway.
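To make the "mini-actor" idea concrete, here's roughly the shape our hand-rolled solutions had, sketched from memory with invented names: one thread, one mailbox, and message sending as the only way in. Scala's real actors add selective receive, supervision, and much more on top of this.

```scala
import java.util.concurrent.LinkedBlockingQueue

// A hand-rolled "mini-actor": one worker thread draining a mailbox.
// All communication happens through messages, never shared memory.
final class MiniActor[A](handler: A => Unit) {
  private val mailbox = new LinkedBlockingQueue[A]()
  private val worker = new Thread(() => {
    while (true) handler(mailbox.take()) // blocks until a message arrives
  })
  worker.setDaemon(true)
  worker.start()

  def !(msg: A): Unit = mailbox.put(msg) // the only door into the actor
}

// Usage: a logger that serializes writes without any locks in user code.
object MiniActorDemo extends App {
  val logger = new MiniActor[String](line => println(s"log: $line"))
  logger ! "hello"
  logger ! "world"
  Thread.sleep(100) // crude: let the daemon thread drain the mailbox
}
```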
How do you reproduce error conditions and view what is happening as the application executes?
No real answer here. We have some unit tests for concurrency, and we have a load-test suite to stress the application as much as we can.
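For flavor, here's the sort of test I mean (a sketch with made-up numbers, not our actual suite): hammer a piece of state from many threads and check the result. It won't catch every race, but it reliably exposes the blatant ones.

```scala
import java.util.concurrent.{CountDownLatch, Executors, TimeUnit}

// Stress-test sketch: a deliberately unsynchronized counter loses
// increments under contention, so the final total comes up short.
object CounterStressTest extends App {
  var counter = 0 // the bug under test: no synchronization at all
  val threads = 8
  val perThread = 100000
  val pool = Executors.newFixedThreadPool(threads)
  val done = new CountDownLatch(threads)

  for (_ <- 1 to threads) pool.execute { () =>
    var i = 0
    while (i < perThread) { counter += 1; i += 1 }
    done.countDown()
  }

  done.await(30, TimeUnit.SECONDS)
  pool.shutdown()
  // Almost always prints less than 800000, exposing the race.
  println(s"expected ${threads * perThread}, got $counter")
}
```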
How do you visualize the interactions between the different concurrent parts of the application?
Again, no real answer: we design our metaphor on the whiteboard and try to make sure there are no conflicts on the architectural side.
By "architecture" here I mean Neal Ford's definition: software architecture is everything that will be very hard to change later.
[…] concurrent programming leads me to believe you need a different mindset than you do with sequential programming.
Maybe, but for me it's simply impossible to think in a parallel way, so it's better to design our software in a way that doesn't require parallel thinking, with clear guardrails to avoid collisions between concurrent lanes.
According to Wikipedia:
Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently ("in parallel").
That is, parallelism always implies concurrency.
Also, are multi-threaded programs running on multi-core CPUs, but where the different threads are doing totally different computations, considered to be using "parallelism"?
No. The essence of parallelism is that a large problem is divided into smaller ones so that the smaller pieces can be solved concurrently. The pieces are mutually independent (to some degree at least), but they're still part of the larger problem, which is now being solved in parallel.
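As a sketch of that essence in Scala (assuming only the standard library's futures, with arbitrary chunk sizes): one big sum is split into independent chunks, the chunks are computed simultaneously, and the partial results are recombined into the answer to the original problem.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Parallelism: one large problem (a big sum) divided into independent
// sub-problems, solved simultaneously, then recombined.
object ParallelSum extends App {
  val numbers = 1L to 800000L
  val chunks  = numbers.grouped(100000).toSeq // 8 independent pieces

  val partials = chunks.map(chunk => Future(chunk.sum)) // solved concurrently
  val total    = Await.result(Future.sequence(partials).map(_.sum), 1.minute)

  println(total) // identical to numbers.sum, just computed in parallel
}
```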
The essence of concurrency is that a number of threads (or processes, or computers) are doing something simultaneously, possibly (but not necessarily) interacting in some ways. Wikipedia again:
Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other.
Best Answer
Concurrency and parallelism are two related but distinct concepts.
Concurrency means, essentially, that task A and task B both need to happen independently of each other: A starts running, and then B starts before A is finished.
There are various ways of accomplishing concurrency. One of them is parallelism: having multiple CPUs work on different tasks at the same time. But that's not the only way. Another is task switching, which works like this: task A runs up to a certain point, then the CPU working on it stops and switches over to task B, works on it for a while, and then switches back to task A. If the time slices are small enough, it may appear to the user that both tasks are running in parallel, even though they're actually being processed serially by a multitasking CPU.
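A tiny sketch of that (an invented example): two tasks start, neither waits for the other to finish, and their output interleaves whether the machine has one core or many.

```scala
// Concurrency without caring about parallelism: B starts before A is
// finished, and the scheduler interleaves them.
object Interleave extends App {
  def task(name: String): Thread = new Thread(() => {
    for (i <- 1 to 5) {
      println(s"$name step $i")
      Thread.sleep(10) // give up the CPU so the other task gets a slice
    }
  })

  val a = task("A")
  val b = task("B")
  a.start(); b.start() // both running concurrently from here on
  a.join(); b.join()
}
```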