It's been a while since I've worked in Lisp, but as I recall, the basic non-atomic structure is a list; everything else is built from that. So you could have a list of entries, where each entry is a node (an atom) followed by the list of nodes it has edges to. I'm sure there are other ways to do it too.
Maybe something like this:
(
 (a (b c))
 (b (a c))
 (c (a b d))
 (d (c))
)
could give a graph like this:
a<-->b<-->c<-->d
^         ^
|         |
+---------+
If you want to get fancy, you could add weights to it as well:
(
 (a (b 1.0 c 2.0))
 (b (a 1.0 c 1.0))
 (c (a 1.3 b 7.2 d 10.5))
 (d (c -10.5))
)
You might also be interested in this: CL-Graph (found by google-searching the phrase "lisp graph structure")
That's a good example of two different approaches to moving a task outside the calling object's area of concern.
While it's clear that the functional approach is the right one in this example, in general the choice depends on how complex the called object's behavior has to be. If it really is a matter of complex behavior, where you'll find yourself reapplying similar logic often and function generators cannot express it clearly, then you'll probably want class composition or inheritance, which give you a bit more freedom to reuse and extend existing behavior on an ad-hoc basis.
One pattern I have observed, though, is that developers usually start with the functional approach and only move to a class-based one once the demand for more granular behavior arises. Django, for instance, went from function-based views, template loaders, and test runners to class-based ones once the benefits and requirements became obvious, but not before.
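To make the trade-off concrete, here's a rough sketch. The discussion above is about OO code (Django is Python), so read this Haskell analogue loosely; every name in it is hypothetical. The functional approach passes behavior in as plain functions, while the class-like approach bundles behavior into a record whose fields can be overridden piecemeal, much like subclassing:

import Data.Char (toUpper)

-- Functional style: behavior is just a function, built up with combinators.
type Validator = String -> Bool

nonEmpty, noSpaces :: Validator
nonEmpty = not . null
noSpaces = notElem ' '

bothV :: Validator -> Validator -> Validator
bothV f g s = f s && g s

-- Class-like style: a record of named behaviors acts as an "instance";
-- record update gives cheap, ad-hoc overriding of a single "method".
data Handler = Handler
  { validate :: String -> Bool
  , render   :: String -> String
  }

baseHandler :: Handler
baseHandler = Handler { validate = bothV nonEmpty noSpaces, render = id }

shoutingHandler :: Handler    -- "inherits" validate, overrides render
shoutingHandler = baseHandler { render = map toUpper }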
Best Answer
There is a vast array of data structures exploiting laziness and other tricks to achieve amortized constant-time, or even (for some limited cases, such as queues) worst-case constant-time, updates for many kinds of problems. Chris Okasaki's PhD thesis "Purely Functional Data Structures" and his book of the same name are prime examples (perhaps the first major ones), but the field has advanced since. These data structures are typically not only purely functional in interface, but can also be implemented in pure Haskell and similar languages, and they are fully persistent.
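As a tiny taste of that style, here's a sketch of the classic two-list queue that the thesis builds on (the "batched" queue; the names below are mine, not Okasaki's). Its amortized O(1) bounds hold for single-threaded use; the thesis shows how laziness extends them to fully persistent use:

-- The front list holds elements ready to dequeue; the back list
-- collects newly enqueued elements in reverse order.
data Queue a = Queue [a] [a]

emptyQ :: Queue a
emptyQ = Queue [] []

-- Enqueue onto the back list: O(1).
push :: a -> Queue a -> Queue a
push x (Queue front back) = check (Queue front (x : back))

-- Dequeue from the front list; Nothing on an empty queue.
pop :: Queue a -> Maybe (a, Queue a)
pop (Queue []      _)    = Nothing
pop (Queue (x : f) back) = Just (x, check (Queue f back))

-- Invariant: the front is empty only if the whole queue is. Restoring it
-- reverses the back list, the O(n) step the amortized analysis pays for.
check :: Queue a -> Queue a
check (Queue [] back) = Queue (reverse back) []
check q               = q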
Even without any of these advanced tools, simple balanced binary search trees give logarithmic-time updates, so mutable memory can be simulated with at worst a logarithmic slowdown.
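For instance, Data.Map from the containers package is a size-balanced binary tree, so it can stand in for mutable memory directly. A minimal sketch (my own illustration, assuming addresses are Ints and unwritten cells read as 0):

import qualified Data.Map.Strict as Map

type Memory = Map.Map Int Int            -- address -> value

-- Each "write" is an O(log n) insert that leaves the old version intact.
write :: Int -> Int -> Memory -> Memory
write addr val = Map.insert addr val

-- Reads are O(log n) as well.
readMem :: Int -> Memory -> Int
readMem = Map.findWithDefault 0

main :: IO ()
main = do
  let mem1 = write 0 42 Map.empty
      mem2 = write 0 7 mem1              -- "overwrite" address 0
  print (readMem 0 mem2)                 -- 7
  print (readMem 0 mem1)                 -- 42: the old version persists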
There are other options, which may be considered cheating, but which are very effective in terms of implementation effort and real-world performance. For example, linear types or uniqueness types allow in-place updating as an implementation strategy for a conceptually pure language, by preventing the program from holding on to the previous value (the memory that would be mutated). This is less general than persistent data structures: for example, you can't easily build an undo log by storing all previous versions of the state. It's still a powerful tool, though AFAIK not yet available in the major functional languages.
Another option for safely introducing mutable state into a functional setting is the ST monad in Haskell. It can be implemented without mutation, and barring unsafe* functions, it behaves as if it were just a fancy wrapper around implicitly passing a persistent data structure (cf. State). But thanks to some type system trickery that enforces evaluation order and prevents references from escaping, it can safely be implemented with in-place mutation, with all the performance benefits.
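As a minimal illustration of my own: runST runs a computation that mutates a local STRef in place, yet the function as a whole is pure, and the phantom type variable in ST s is what keeps references from escaping:

import Control.Monad (forM_)
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, readSTRef, modifySTRef')

-- Pure from the outside, destructive updates on the inside.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0                        -- allocate a local mutable cell
  forM_ xs $ \x -> modifySTRef' acc (+ x)  -- update it in place
  readSTRef acc                            -- the final value becomes the pure result

main :: IO ()
main = print (sumST [1 .. 100])            -- 5050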