My main experience working in a multithreaded environment was with Scala, so whenever I had an object that was concurrently updated by multiple threads, I used Akka. Now, working in a Java environment, I don't want to bring in all the Akka dependencies and the complications of futures for a simple case where I have one concurrently updated object. So I ask: what is the accepted alternative to the Actor model for sharing updatable state between threads?
Java Concurrency – Alternatives to Actor Model
concepts, concurrency, java, multithreading
Related Solutions
Actors, in the sense of modeling actions with messages and so on, are a way of modeling software that provides a couple of useful properties...
Actors can live on a single thread, allowing non-thread-safe/non-concurrent operations to happen without a pile of locking magic. An actor responds to messages in its inbox: when you want it to process a command, you send it a message, and it takes care of messages in the order they are received, just like a normal queue. The thread safety is the killer feature here, and I use this in a number of open source projects I work on.
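The single-thread-plus-mailbox idea needs no framework at all. Here's a minimal sketch in plain Java (class and method names are my own, not from any library): one worker thread drains a queue of closures, so the mutable counter is only ever touched by that thread, even though many threads may send it messages.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal hand-rolled actor: one thread owns the state, everyone else
// communicates by queueing "messages" (closures) into its mailbox.
public class CounterActor {
    private final BlockingQueue<Runnable> mailbox = new LinkedBlockingQueue<>();
    private int count = 0; // only ever read or written by the worker thread

    public CounterActor() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    mailbox.take().run(); // process messages in arrival order
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Senders never touch 'count' directly; they enqueue work for the actor.
    public void increment() {
        mailbox.add(() -> count++);
    }

    // Reads are also messages; the result comes back via a callback.
    public void get(java.util.function.IntConsumer reply) {
        mailbox.add(() -> reply.accept(count));
    }
}
```

Because every mutation is serialized through the mailbox, `count` needs no `synchronized` or `volatile`; the queue's own happens-before guarantees do the work.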
In some languages, Scala for example, it's easy to turn actor-based code in a single process into a distributed system by breaking apart the actors and turning the channels they communicate over into remote channels. How easy this is varies between implementations, but it's an awesome feature.
It helps you focus on task-based events rather than CRUD events. CRUD is simple, but it's just like interacting with a filing cabinet; if we can provide more value than that in the software we produce, why aren't we? Tying multiple actions to a single "Update" command in a task-based system is more useful than just saving to the DB. This also leads into ideas like CQRS.
The short answer is no, it's not correct.
The description starts out reasonably correct (each Actor at least potentially executes as an independent thread), but then largely goes off the rails. There's nothing about the model that makes lots of threads work well -- that's up to the implementation. At most, the ease of creating lots of threads puts pressure on the implementation to provide efficient threading. As far as the model is concerned, any resemblance between actors and objects is mostly coincidental. "Object" carries fairly specific implications about how you combine code and data. An actor will generally involve both code and data, but implies little about how they're combined (other than the fact that the only data visible to the outside world is messages).
The usual way of describing the interaction is as message sending, yes. I don't have a citation handy, but somebody proved quite a long time ago that mechanisms like C++ virtual functions are isomorphic to message sending (as virtual functions are normally implemented, you're using an offset into a vtable -- but if you sent an offset into a table of messages instead, the effect would be the same).
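A toy illustration of that isomorphism (my own sketch, not from the paper the answer alludes to): a virtual call dispatches through an indexed table of code, and a message send is just a keyed lookup into the same kind of table, so either can simulate the other.

```java
import java.util.Map;

public class DispatchDemo {
    // Virtual dispatch: area() is resolved through the JVM's vtable machinery.
    interface Shape { double area(); }

    record Circle(double r) implements Shape {
        public double area() { return Math.PI * r * r; }
    }

    // "Message send": the same operation, looked up by name in an explicit
    // table of handlers instead of by vtable offset.
    static final Map<String, java.util.function.Function<Circle, Double>> MESSAGES =
        Map.of("area", c -> Math.PI * c.r() * c.r());

    public static double send(Circle c, String message) {
        return MESSAGES.get(message).apply(c);
    }
}
```

Both routes run the same code against the same data; only the lookup mechanism differs, which is the heart of the isomorphism argument.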
It's not quite that simple. If you can find a copy, Henry Baker (with somebody else whose name I don't remember right now) wrote a paper about the rules necessary for data consistency in the Actor model.
"Better" is highly subjective at best. Some problems are highly parallel in nature, and really do involve a large number of essentially autonomous entities, with minimal interaction that's primarily asynchronous. When that's the case, the actor model can work very well indeed. For other problems, that's really not the case. Some problems are almost entirely serial in nature. Others can be executed in parallel, but still require close synchronization between those actions (e.g., essentially a SIMD-like mode, where you execute one instruction at a time, but each instruction acts on on a large number of data items). It's certainly possible to solve both of these types of problems using the actor model -- but for such problems, it often involves a fair amount of extra work for little or no gain in return.
Best Answer
Three paradigms in common use are actors (e.g. Akka), Software Transactional Memory (e.g. Clojure) or traditional manual lock-wrangling. They all have their own challenges.
Traditional lock-based programming involves detecting the places that need protection from corruption by concurrent accesses, protecting them with locks, and hoping that your solution is correct, deadlock-free and efficient. (Venkat Subramaniam calls this "synchronize and suffer".) There are a lot of problems with this approach: many languages give you virtually no help in ensuring correctness, and development doesn't scale because thread safety isn't preserved under composition (every time you change anything about your code, you have to re-prove that the solution is correct). Although this model underlies all the other models, it is effectively the assembler programming of concurrency, and just as unpleasant to work with.
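For the questioner's "one concurrently updated object" case, the lock-wrangling baseline in Java looks like this sketch. It works, but note that nothing except programmer discipline forces every access to go through the lock; that's exactly the composition problem described above.

```java
import java.util.concurrent.locks.ReentrantLock;

// Classic manual locking: every access to the shared field must take the
// lock, and only convention enforces that.
public class LockedCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock(); // forgetting this in any code path leaks the lock
        }
    }

    public int get() {
        lock.lock(); // reads need the lock too, for visibility
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}
```

(For something this simple, `synchronized` methods or an `AtomicInteger` would be more idiomatic; `ReentrantLock` is shown because it makes the lock/unlock bookkeeping, and the ways it can go wrong, explicit.)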
Actors you already know. The fundamental concepts here are asynchronous messages and exclusive responsibility of an actor for its bit of mutable state. The watchword is isolated mutability. One of the problems with this is that it can get very inefficient if one of the resources hidden behind an actor is a bottleneck. And it's still possible to create deadlocks when using actors improperly.
Software Transactional Memory is best viewed as shared immutability. The usual explanation is that every thread just does what it wants to do and a monitor undoes the entire sequence of one thread if it detects that a conflict has actually occurred. A nice metaphor is updating a web service with no downtime: you simply set up a second, identical server, upgrade that, and if it worked you throw a switch to redirect all traffic to the new address and shut down the old one. If something unforeseen happened (e.g. requirements changed before you were done upgrading) you simply discard the new server and start over.
The disadvantage with STM is that all the copying needed to maintain immutability can get expensive, and programming it can be really hard to understand. STM is basically lock-free programming with a vengeance, and lock-free programming is hard on many people's minds as it is. It often requires finding new algorithms for problems that you thought were already solved long ago. And again, it's efficient only as long as conflicts and rollbacks are relatively rare.
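Java has no built-in STM, but the optimistic "do the work on a snapshot, detect conflicts, retry" loop at the heart of it can be sketched with an `AtomicReference` over an immutable value (this is a simplification of real STM, which tracks whole sets of reads and writes, and the class name here is my own):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.UnaryOperator;

// STM-flavored cell: state is an immutable snapshot; a "transaction"
// builds a new snapshot and commits it with compare-and-set, retrying
// from scratch if another thread committed first (the "rollback").
public class StmCell<T> {
    private final AtomicReference<T> state;

    public StmCell(T initial) {
        state = new AtomicReference<>(initial);
    }

    public T read() {
        return state.get();
    }

    public T update(UnaryOperator<T> transaction) {
        while (true) {
            T before = state.get();
            T after = transaction.apply(before); // must not mutate 'before'
            if (state.compareAndSet(before, after)) {
                return after; // no conflict: the commit took effect
            }
            // Conflict detected: someone else committed in between.
            // Discard our work and rerun the transaction, like the
            // second-server metaphor above.
        }
    }
}
```

This also shows the cost model directly: under heavy contention, `transaction` may run many times for one successful commit, which is why STM only pays off when conflicts are rare. (For a single value, `AtomicReference.updateAndGet` does the same thing in one call.)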
Which model to choose depends on many factors; my personal impression is that tool support is actually more important than the underlying paradigm. It's a lot easier to program with an unfamiliar paradigm with good language support than a familiar one with next to none.