I could have stored the index of the Polygon in the current Scene and the index of the dragged point in the Polygon, and replaced it every time. But this approach does not scale: when composition goes five levels deep and further, the boilerplate becomes unbearable.
You're absolutely right: this approach doesn't scale if you can't get around the boilerplate, specifically the boilerplate of creating a whole new Scene with one tiny subpart changed. However, many functional languages provide a construct for exactly this sort of nested-structure manipulation: lenses.
A lens is basically a getter and setter for immutable data. A lens focuses on some small part of a larger structure, and there are two things you can do with it: you can view the small part of a value of the larger structure, or you can set the small part of a value of the larger structure to a new value. For example, suppose you have a lens that focuses on the third item in a list:
thirdItemLens :: Lens [a] a
That type means the larger structure is a list of things, and the small subpart is one of those things. Given this lens, you can view and set the third item in the list:
> view thirdItemLens [1, 2, 3, 4, 5]
3
> set thirdItemLens 100 [1, 2, 3, 4, 5]
[1, 2, 100, 4, 5]
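One way this could be implemented is as a plain record pairing a getter with a setter. This is a minimal sketch of that idea, not how real lens libraries encode it (they use a more general representation), but view and set behave as in the transcript above:

```haskell
-- A lens as a getter paired with a setter (simplified sketch).
data Lens s a = Lens
  { view :: s -> a        -- extract the small part from the whole
  , set  :: a -> s -> s   -- build a new whole with the part replaced
  }

-- A lens focusing on the third item of a list (assumes the list has
-- at least three elements).
thirdItemLens :: Lens [a] a
thirdItemLens = Lens
  { view = \xs -> xs !! 2
  , set  = \x xs -> take 2 xs ++ [x] ++ drop 3 xs
  }

main :: IO ()
main = do
  print (view thirdItemLens [1, 2, 3, 4, 5 :: Int])   -- prints 3
  print (set thirdItemLens 100 [1, 2, 3, 4, 5])       -- prints [1,2,100,4,5]
```

Note that set builds a brand-new list; the original is untouched, which is the whole point when working with immutable data.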
The reason lenses are useful is that they are values representing getters and setters, and you can abstract over them the same way you can over other values. You can write functions that return lenses, for instance a listItemLens function which takes a number n and returns a lens viewing the nth item in a list. Additionally, lenses can be composed:
> firstLens = listItemLens 0
> thirdLens = listItemLens 2
> firstOfThirdLens = lensCompose firstLens thirdLens
> view firstOfThirdLens [[1, 2], [3, 4], [5, 6], [7, 8]]
5
> set firstOfThirdLens 100 [[1, 2], [3, 4], [5, 6], [7, 8]]
[[1, 2], [3, 4], [100, 6], [7, 8]]
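A sketch of how listItemLens and lensCompose could be implemented over a minimal getter/setter lens record (again a simplification of what real lens libraries do):

```haskell
-- A lens as a getter paired with a setter (simplified sketch).
data Lens s a = Lens { view :: s -> a, set :: a -> s -> s }

-- A lens for the nth item of a list (assumes the index is in range).
listItemLens :: Int -> Lens [a] a
listItemLens n = Lens
  { view = (!! n)
  , set  = \x xs -> take n xs ++ [x] ++ drop (n + 1) xs
  }

-- Compose two lenses: the outer lens picks a piece of s, and the
-- inner lens picks a piece of that piece.
lensCompose :: Lens a b -> Lens s a -> Lens s b
lensCompose inner outer = Lens
  { view = view inner . view outer
  , set  = \b s -> set outer (set inner b (view outer s)) s
  }

main :: IO ()
main = do
  let firstOfThirdLens = lensCompose (listItemLens 0) (listItemLens 2)
  print (view firstOfThirdLens [[1, 2], [3, 4], [5, 6], [7, 8 :: Int]])
  print (set firstOfThirdLens 100 [[1, 2], [3, 4], [5, 6], [7, 8]])
```

The composed setter reads the outer piece, updates it through the inner lens, then writes the updated piece back through the outer lens, so each level of copying stays hidden inside the lens.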
Each lens encapsulates the behavior for traversing one level of the data structure. By composing them, you can eliminate the boilerplate for traversing multiple levels of a complex structure. For instance, supposing you have a scenePolygonLens i that views the ith Polygon in a Scene, and a polygonPointLens n that views the nth Point in a Polygon, you can make a lens constructor focusing on just the specific point you care about in an entire scene like so:
scenePointLens i n = lensCompose (polygonPointLens n) (scenePolygonLens i)
Now suppose a user clicks point 3 of polygon 14 and moves it 10 pixels right. You can update your scene like so:
lens = scenePointLens 14 3
point = view lens currentScene
newPoint = movePoint 10 0 point
newScene = set lens newPoint currentScene
This nicely contains all the boilerplate for traversing and updating a Scene inside lens; all you have to care about is what you want to change the point to. You can abstract this further with a lensTransform function that takes a lens, a function for updating the view through the lens, and a target:
lensTransform lens transformFunc target =
  let current = view lens target
      new = transformFunc current
  in  set lens new target
This takes a function and turns it into an "updater" on a complicated data structure: it applies the function to just the view, then uses the result to build a new whole structure. So, going back to the scenario of moving the 3rd point of the 14th polygon 10 pixels to the right, that can be expressed in terms of lensTransform like so:
lens = scenePointLens 14 3
moveRightTen point = movePoint 10 0 point
newScene = lensTransform lens moveRightTen currentScene
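Putting the pieces together, this is a runnable sketch of the whole idea. The Scene, Polygon, and Point types and the lens names (scenePolygonLens, polygonPointLens, movePoint, etc.) are made up for illustration, not taken from any real library, and the lens representation is the simplified getter/setter record rather than a real lens encoding:

```haskell
-- A lens as a getter paired with a setter (simplified sketch).
data Lens s a = Lens { view :: s -> a, set :: a -> s -> s }

lensCompose :: Lens a b -> Lens s a -> Lens s b
lensCompose inner outer = Lens
  { view = view inner . view outer
  , set  = \b s -> set outer (set inner b (view outer s)) s
  }

listItemLens :: Int -> Lens [a] a
listItemLens n = Lens
  { view = (!! n)
  , set  = \x xs -> take n xs ++ [x] ++ drop (n + 1) xs
  }

-- Illustrative scene types: a scene is a list of polygons, a polygon
-- a list of points.
type Point   = (Int, Int)
type Polygon = [Point]
type Scene   = [Polygon]

scenePolygonLens :: Int -> Lens Scene Polygon
scenePolygonLens = listItemLens

polygonPointLens :: Int -> Lens Polygon Point
polygonPointLens = listItemLens

scenePointLens :: Int -> Int -> Lens Scene Point
scenePointLens i n = lensCompose (polygonPointLens n) (scenePolygonLens i)

movePoint :: Int -> Int -> Point -> Point
movePoint dx dy (x, y) = (x + dx, y + dy)

-- Update the focused part by applying a function to it.
lensTransform :: Lens s a -> (a -> a) -> s -> s
lensTransform lens transformFunc target =
  set lens (transformFunc (view lens target)) target

main :: IO ()
main = do
  let scene = [ [(0, 0), (4, 0), (4, 4)]
              , [(9, 9), (12, 9), (12, 12)] ]
      -- Move the 3rd point (index 2) of the 2nd polygon (index 1)
      -- 10 pixels to the right.
      newScene = lensTransform (scenePointLens 1 2) (movePoint 10 0) scene
  print newScene
```

Every intermediate copy (the new polygon, the new scene) is built inside the lenses; the call site only says which point to change and how.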
And that's all you need to update the whole scene. This is a very powerful idea and works very well when you have some nice functions for constructing lenses viewing the pieces of your data you care about.
However this is all pretty out-there stuff currently, even in the functional programming community. It's difficult to find good library support for working with lenses, and even more difficult to explain how they work and what the benefits are to your coworkers. Take this approach with a grain of salt.
This is an oddly phrased question that is really, really broad if answered fully. I'm going to focus on clearing up some of the specifics that you're asking about.
Immutability is a design trade-off. It makes some operations harder (modifying state in large objects quickly, building objects piecemeal, keeping a running state, etc.) in favor of others (easier debugging, easier reasoning about program behavior, not having to worry about things changing underneath you when working concurrently, etc.). It's this last one we care about for this question, but I want to emphasize that it is a tool: a good tool that often solves more problems than it causes (in most modern programs), but not a silver bullet, and not something that changes the intrinsic behavior of programs.
Now, what does it get you? Immutability gets you one thing: you can read the immutable object freely, without worrying about its state changing underneath you (assuming it is truly, deeply immutable; an immutable object with mutable members is usually a deal breaker). That's it. It frees you from having to manage concurrency for those reads (via locks, snapshots, data partitioning, or other mechanisms; the original question's focus on locks is too narrow given the scope of the question).
It turns out though that lots of things read objects. IO does, but IO itself tends to not handle concurrent use itself well. Almost all processing does, but other objects may be mutable, or the processing itself might use state that is not friendly to concurrency. Copying an object is a big hidden trouble point in some languages since a full copy is (almost) never an atomic operation. This is where immutable objects help you.
As for performance, it depends on your app. Locks are (usually) heavy. Other concurrency management mechanisms are faster but have a high impact on your design. In general, a highly concurrent design that makes use of immutable objects (and avoids their weaknesses) will perform better than a highly concurrent design that locks mutable objects. If your program is lightly concurrent then it depends and/or doesn't matter.
But performance should not be your highest concern. Writing concurrent programs is hard. Debugging concurrent programs is hard. Immutable objects help improve your program's quality by eliminating opportunities for error implementing concurrency management manually. They make debugging easier because you're not trying to track state in a concurrent program. They make your design simpler and thus remove bugs there.
So to sum up: immutability helps, but it will not eliminate the challenges of handling concurrency properly. That help tends to be pervasive, but the biggest gains are from a quality perspective rather than performance. And no, immutability does not magically excuse you from managing concurrency in your app, sorry.
Best Answer
No, immutable objects are quite useful in general.
The first and most basic reason is that concurrency in a system doesn't require a multi-threaded application. Making, say, a row in a database immutable provides a lot of benefit for change tracking, collision avoidance, syncing, and backups.
And while less valuable than in concurrent scenarios, immutable objects tend to be easier to use and debug because you know the state of the object across the lifetime of your app and you know that some function isn't misbehaving and mutating it on you. Also, any sort of exceptional state will show up immediately on object creation rather than during some mutation later on during processing. Those tend to be easier to identify, and happen in places where it is easier to recover or abort cleanly.
The most obvious example is configuration: you don't want it to change at runtime, but it's often needed by different parts of your code. Another is something like the current user: you don't want it to change, but you will want to share it with different modules.
So the biggest thing with immutable objects (in most languages) is that assigning a reference to the object is atomic. Uninterruptible.
Say you want to change a few fields on a mutable object. The thread changes one, then another, then another. Any other thread can read the object in-between each of those steps. It'll see a half-changed object.
But if you want to change a few fields on an immutable object, it's different. The thread makes a new object, changing the three fields it wants to change, then overwrites the shared reference in one uninterruptible step. Any other thread can grab a reference to the object and know that it won't change. If it grabs the reference before the other thread does its write, it might get the old object, but it can never get a half-changed object.
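That publish-a-whole-new-object pattern can be sketched in Haskell, where the record itself is immutable and a single IORef holds the shared reference. The Config type and its fields are made up for illustration; atomicWriteIORef is a standard function from Data.IORef:

```haskell
import Data.IORef

-- An immutable value with several fields (illustrative names).
data Config = Config { host :: String, port :: Int, retries :: Int }
  deriving Show

main :: IO ()
main = do
  shared <- newIORef (Config "localhost" 80 3)
  -- A reader takes a snapshot; whatever value it got can never be
  -- half-updated, because the only mutation is the atomic swap below.
  before <- readIORef shared
  -- "Change three fields" = build a whole new object first...
  let updated = Config "example.com" 443 5
  -- ...then publish it in one uninterruptible step.
  atomicWriteIORef shared updated
  after <- readIORef shared
  print (host before, host after)
```

A reader either sees the old Config or the new one, never a mixture, which is exactly the guarantee the paragraph above describes.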
For a counter it doesn't much matter. Note, though, that incrementing an int is a read-modify-write (load, add, store), so a plain increment isn't actually uninterruptible either; you need an atomic increment, and that gets harder if you need counters bigger than a native int, depending on your language, compiler, target CPU, etc. Locks, on the other hand, are very costly in most languages/platforms, so programmers will avoid them when it is safe to do so.
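Where a shared counter really is needed, an atomic read-modify-write is the usual lock-free middle ground. A small Haskell sketch using the standard atomicModifyIORef' from Data.IORef, with MVars used purely as done-signals for the worker threads:

```haskell
import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM_, replicateM, replicateM_)
import Data.IORef

main :: IO ()
main = do
  counter <- newIORef (0 :: Int)
  dones <- replicateM 4 newEmptyMVar
  -- Four writers each add 10000, using an atomic read-modify-write
  -- instead of a plain (non-atomic) increment or a lock.
  forM_ dones $ \done -> forkIO $ do
    replicateM_ 10000 (atomicModifyIORef' counter (\n -> (n + 1, ())))
    putMVar done ()
  mapM_ takeMVar dones
  readIORef counter >>= print   -- always 40000
```

With a plain readIORef/writeIORef increment, interleaved threads could lose updates; the atomic version never does.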