This is an oddly phrased question that is really, really broad if answered fully. I'm going to focus on clearing up some of the specifics that you're asking about.
Immutability is a design trade-off. It makes some operations harder (quickly modifying state in large objects, building objects piecemeal, keeping a running state, etc.) in favor of others (easier debugging, easier reasoning about program behavior, not having to worry about things changing underneath you when working concurrently, etc.). It's this last one we care about for this question, but I want to emphasize that immutability is a tool. A good tool that often solves more problems than it causes (in most modern programs), but not a silver bullet, and not something that changes the intrinsic behavior of programs.
Now, what does it get you? Immutability gets you one thing: you can read the immutable object freely, without worrying about its state changing underneath you (assuming it is truly, deeply immutable; an immutable object with mutable members is usually a deal breaker). That's it. It frees you from having to manage concurrency for those reads (via locks, snapshots, data partitioning or other mechanisms; the original question's focus on locks is incorrect given the scope of the question).
It turns out, though, that lots of things read objects. IO does, but IO itself tends not to handle concurrent use well. Almost all processing does, but other objects may be mutable, or the processing itself might use state that is not friendly to concurrency. Copying an object is a big hidden trouble point in some languages, since a full copy is (almost) never an atomic operation. This is where immutable objects help you.
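As a minimal sketch of the difference (the `Point` type and the thread syntax are illustrative, not from any particular language, and whether a bare reference assignment safely publishes an object varies with the language's memory model):

```
-- Mutable shared point: a reader can observe a half-finished update.
thread A:   p.x := 1; p.y := 2     -- two writes; not atomic without a lock
thread B:   print(p.x, p.y)        -- may see the new x paired with the old y

-- Immutable point: build a new object, then publish it in one step.
thread A:   p := Point.new(1, 2)   -- single reference update
thread B:   snapshot := p          -- read freely; snapshot never changes
```

The mutable version needs a lock (or similar) around both the writes and the reads; the immutable version only needs the single reference swap to be safe.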
As for performance, it depends on your app. Locks are (usually) heavy. Other concurrency management mechanisms are faster but have a big impact on your design. In general, a highly concurrent design that makes use of immutable objects (and avoids their weaknesses) will perform better than a highly concurrent design that locks mutable objects. If your program is only lightly concurrent, then it depends and/or doesn't matter much.
But performance should not be your highest concern. Writing concurrent programs is hard. Debugging concurrent programs is hard. Immutable objects improve your program's quality by eliminating the opportunities for error that come with implementing concurrency management by hand. They make debugging easier because you're not trying to track changing state in a concurrent program. And they make your design simpler, which removes bugs there too.
So to sum up: immutability helps, but it will not eliminate the challenges of handling concurrency properly. That help tends to be pervasive, but the biggest gains come from a quality perspective rather than performance. And no, immutability does not magically excuse you from managing concurrency in your app, sorry.
Best Answer
Although it's sometimes expressed that way, functional programming¹ doesn't prevent stateful computations. What it does is force the programmer to make state explicit.
For example, let's take the basic structure of some program using an imperative queue (in some pseudolanguage):
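The original listing appears to have been lost in extraction; a minimal reconstruction in the same spirit (the helpers `ready_to_produce`, `produce`, and `consume` are illustrative names, not from the original) might look like:

```
q := Queue.new()                  -- one shared, mutable queue
loop:
    if ready_to_produce():
        Queue.add(q, produce())   -- mutates q in place
    if not Queue.is_empty(q):
        consume(Queue.pop(q))     -- pop also mutates q in place
```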
The corresponding structure with a functional queue data structure (still in an imperative language², so as to tackle one difference at a time) would look like this:
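This listing is also missing; a sketch consistent with the assignments the answer goes on to discuss (same illustrative helper names as before) could be:

```
q := Queue.empty
loop:
    if ready_to_produce():
        q := Queue.add(q, produce())   -- add returns a new queue
    if not Queue.is_empty(q):
        (head, tail) := Queue.pop(q)   -- pop returns the element and a new queue
        consume(head)
        q := tail
```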
Since the queue is now immutable, the object itself doesn't change. In this pseudo-code, `q` itself is a variable; the assignments `q := Queue.add(…)` and `q := tail` make it point to a different object. The interface of the queue functions has changed: each must return the new queue object that results from the operation.

In a purely functional language, i.e. in a language with no side effects, you need to make all state explicit. Since the producer and consumer are presumably doing something, their state must be in their caller's interface here as well.
Note how now every piece of state is explicitly managed. The queue manipulation functions take a queue as input and produce a new queue as output. The producer and consumer pass their state through as well.
Concurrent programming doesn't fit so well inside functional programming, but it fits very well around functional programming. The idea is to run a bunch of separate computation nodes and let them exchange messages. Each node runs a functional program, and its state changes as it sends and receives messages.
Continuing the example, since there's a single queue, it's managed by one particular node. Consumers send that node a message to obtain an element. Producers send that node a message to add an element.
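A hedged, Erlang-flavored pseudocode sketch of that queue node (the message shapes are illustrative, and the empty-queue case is omitted for brevity):

```
queue_node(q):
    receive:
        {add, x} ->
            queue_node(Queue.add(q, x))   -- recurse with the new queue
        {pop, consumer} ->
            (head, tail) := Queue.pop(q)
            send(consumer, head)          -- reply with the element
            queue_node(tail)              -- recurse with the rest
```

Note that the node's "state change" is just a tail call with a different immutable queue; no object is ever mutated.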
The one “industrialized” language that gets concurrency right³ is Erlang. Learning Erlang is definitely the path to enlightenment⁴ about concurrent programming.
Everybody switch to side-effect-free languages now!
¹ This term has several meanings; here I think you're using it to mean programming without side effects, and that's the meaning I'm also using.
² Programming with implicit state is imperative programming; object orientation is a completely orthogonal concern.
³ Inflammatory, I know, but I mean it. Threads with shared memory is the assembly language of concurrent programming. Message passing is a lot easier to understand, and the lack of side effects really shines as soon as you introduce concurrency.
⁴ And this is coming from someone who's not a fan of Erlang, but for other reasons.