Functional relational programming seems to be, just as the name suggests, a blend of both functional programming and the relational model. I think this sentence pretty much sums it up (p. 42):
In FRP all essential state takes the form of relations, and the essential logic is expressed using relational algebra extended with (pure) user defined functions.
Functional programming removes state from the equation and deals only with pure functions (no side effects). This is supposed to make life easier for everyone: since data manipulation cannot be hidden, the program becomes easier to reason about. FP is a beautiful ideal, but in real-life applications state is necessary and often useful. It is of course possible to have state in FP; it's just a little more involved.
Based on a quick glance at the paper, it seems they're trying to simplify FP by allowing state in a tightly controlled manner. Relational data is well structured and easy to reason about, and functional programs are easy to reason about (maybe not in the human sense, mind you), so let's augment FP with the relational model and make everyone's state-loving lives easier.
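To make that combination concrete, here is a minimal sketch of the FRP idea: essential state as relations (sets of tuples) and essential logic as pure functions over them. The relation schema and the `select` helper are my own illustration, not notation from the paper.

```python
# Essential state as a relation: a set of (id, colour) tuples.
cars = {("car1", "blue"), ("car2", "black")}

def select(relation, predicate):
    """Relational selection as a pure function: nothing is mutated."""
    return {row for row in relation if predicate(row)}

# Essential logic: a pure query over the relation.
blue_cars = select(cars, lambda row: row[1] == "blue")
print(blue_cars)  # {('car1', 'blue')}
```

The point is not the trivial set comprehension but the discipline: all state lives in relations, and everything else is a pure function of them.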
You have state when you associate values (numbers, strings, complex data structures) with an identity and a point in time.
For example, the number 10 by itself does not represent any state: it is just a well-defined number and will always be itself, the natural number 10. As another example, the string "HELLO" is a sequence of five characters, completely described by the characters it contains and the order in which they appear. Five million years from now, the string "HELLO" will still be the string "HELLO": a pure value.
In order to have state, you have to consider a world in which these pure values are associated with some kind of entities that possess an identity.
Identity is a primitive idea: it means you can distinguish two things regardless of any other properties they may have. For example, two cars of the same model, same colour, ... are two different cars.
Given these things with identity, you can attach properties to them, described by pure values. E.g., my car has the property of being blue. You can describe this fact by associating the pair
("colour", "blue")
to my car.
The pair ("colour", "blue") is a pure value describing the state of that particular car.
State is associated not only with a particular entity, but also with a particular point in time. So, you can say that today, my car has the state
("colour", "blue")
Tomorrow I will have it repainted in black and the new state will be
("colour", "black")
Note that the state of an entity can change, but its identity, by definition, does not. Well, as long as the entity exists, of course: a car may be created and destroyed, but it keeps its identity throughout its lifetime. It does not make sense to speak about the identity of something that does not exist yet or no longer exists.
If the values of the properties attached to a given entity change over time, you say that the state of that entity is mutable. Otherwise, you say that the state is immutable.
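One way to make "state is a value attached to an identity at a point in time" concrete is a pure mapping from (entity, time) to a property snapshot. The entity name and time labels below are invented for illustration, following the car example above.

```python
# State as a pure mapping from (entity, time) to a property snapshot.
history = {
    ("my_car", "today"):    ("colour", "blue"),
    ("my_car", "tomorrow"): ("colour", "black"),
}

def state_of(entity, time):
    """Pure lookup: the same (entity, time) always yields the same value."""
    return history[(entity, time)]

print(state_of("my_car", "today"))  # ('colour', 'blue')
```

Under this view the car's state "changes" without anything being mutated: different points in time simply map to different pure values.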
The most common implementation is to store the state of an entity in some kind of variable (a global variable, an object's member variable), i.e. to store the current snapshot of the state. Mutable state is then implemented using assignment: each assignment operation replaces the previous snapshot with a new one. This solution normally uses memory locations to store the current snapshot, and overwriting a memory location is a destructive operation that replaces one snapshot with another. (Here you can find an interesting talk about this place-oriented programming approach.)
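In code, this place-oriented approach looks as trivial as it is destructive: one variable holds the current snapshot, and each assignment discards the previous one.

```python
# Approach 1 (place-oriented): one variable holds the current snapshot,
# and assignment destructively replaces it.
car_state = ("colour", "blue")   # current snapshot
car_state = ("colour", "black")  # repaint: the old snapshot is overwritten
print(car_state)  # ('colour', 'black') -- "blue" is gone for good
```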
An alternative is to view the subsequent states (history) of an entity as a stream (possibly infinite sequence) of values, see e.g. Chapter 3 of SICP.
In this case, each snapshot is stored at a different memory location, and the program can examine different snapshots at the same time. Unused snapshots can be garbage-collected when they are no longer needed.
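A minimal sketch of this second approach, with the history as a plain list of snapshots and a pure update function (both names are my own, not from SICP):

```python
# Approach 2: the history of an entity as a sequence of snapshots.
def repaint(history, colour):
    """Pure update: returns a new history, leaving the old one untouched."""
    return history + [("colour", colour)]

h0 = [("colour", "blue")]
h1 = repaint(h0, "black")
print(h0)  # [('colour', 'blue')] -- the old history is still intact
print(h1)  # [('colour', 'blue'), ('colour', 'black')]
```

Because `repaint` never mutates its argument, the program can hold on to `h0` and `h1` at the same time and compare snapshots from different points in time.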
Advantages / disadvantages of the two approaches
- Approach 1 consumes less memory and can construct a new snapshot more efficiently, since it involves no copying.
- Approach 1 implicitly pushes the new state to all parts of the program holding a reference to it; approach 2 would need some mechanism to push a snapshot to its observers, e.g. in the form of an event.
- Approach 2 can help prevent inconsistent-state errors (e.g. partial state updates): by defining an explicit function that produces a new state from an old one, it is easier to distinguish between snapshots produced at different points in time.
- Approach 2 is more modular in that it makes it easy to produce views on the state that are independent of the state itself, e.g. using higher-order functions such as map and filter.
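The last point can be illustrated directly: given a snapshot history like the one above, `map` and `filter` derive views without touching (or even knowing about) the storage of the state itself.

```python
# Deriving views on a state history with higher-order functions.
history = [("colour", "blue"), ("colour", "black"), ("colour", "red")]

colours = list(map(lambda snap: snap[1], history))       # project the values
non_blue = list(filter(lambda c: c != "blue", colours))  # filter a view

print(colours)   # ['blue', 'black', 'red']
print(non_blue)  # ['black', 'red']
```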
Best Answer
I'll try to hint at the answer. This is not a full answer, only an introductory illustration; @jk's answer points to the real thing: zippers.
Imagine you have an immutable tree structure. You want to alter one node by inserting a child. As a result, you get a whole new tree.
But most of the new tree is exactly the same as the old tree. A clever implementation would reuse most of the tree fragments, routing pointers around the altered node.
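Here is a small sketch of that idea as path copying on an immutable binary search tree: an insertion rebuilds only the nodes on the path to the change, and every untouched subtree is shared between the old and the new tree. The `Node` layout and `insert` function are my own illustration.

```python
# Path copying on an immutable tree: only nodes on the changed path are
# rebuilt; all other subtrees are shared between old and new versions.
from typing import NamedTuple, Optional

class Node(NamedTuple):
    value: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def insert(node, value):
    """Return a new tree containing value; the old tree is left unchanged."""
    if node is None:
        return Node(value)
    if value < node.value:
        return node._replace(left=insert(node.left, value))
    return node._replace(right=insert(node.right, value))

old = Node(5, Node(3), Node(8))
new = insert(old, 1)
print(new.right is old.right)  # True -- the right subtree is reused, not copied
```

Inserting 1 rebuilds only the nodes 5 and 3; the subtree rooted at 8 is the very same object in both trees.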
Okasaki's book is full of examples like this.
So I suppose you could reasonably alter small parts of your game world each move (pick up a coin), and only actually change small parts of your world data structure (the cell where the coin was picked up). Parts that only belong to past states will be garbage-collected in time.
This probably takes some care in designing the game world data structure appropriately. Unfortunately, I'm no expert in these matters. It would surely have to be something other than the NxM matrix one would use as a mutable data structure. Probably it should consist of smaller pieces (corridors? individual cells?) that point to each other, as tree nodes do.
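As a rough sketch of the "pick up a coin" move as a pure update: the world below is a mapping from cells to contents, and the update returns a new world while the old one stays intact. Note that Python's built-in dict copies rather than shares structure, so a real game would want a persistent map (e.g. a hash array mapped trie) underneath; this only shows the shape of the interface, and the world layout is invented.

```python
# A pure "pick up a coin" move: returns a new world, old world untouched.
world0 = {(0, 0): "coin", (0, 1): "empty", (1, 0): "wall"}

def pick_up_coin(world, cell):
    """Return a new world with the coin removed from cell."""
    new_world = dict(world)  # a persistent map would share structure here
    new_world[cell] = "empty"
    return new_world

world1 = pick_up_coin(world0, (0, 0))
print(world0[(0, 0)], world1[(0, 0)])  # coin empty
```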