Because immutable collections absolutely require sharing to be usable. Otherwise, every single operation drops a whole new copy of the list onto the heap somewhere. Languages that are entirely immutable, like Haskell, generate astonishing amounts of garbage without aggressive optimizations and sharing. Having a collection that's only usable with <50 elements is not worth putting in the standard library.
Furthermore, immutable collections often have fundamentally different implementations than their mutable counterparts. Consider `ArrayList`, for example: an efficient immutable `ArrayList` wouldn't be backed by an array at all! It should be implemented as a balanced tree with a large branching factor; Clojure uses 32, IIRC. Making mutable collections "immutable" by just adding a functional update is a performance bug just as much as a memory leak is.
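To make the tree idea concrete, here is a toy sketch of path copying in a branching tree, using a branching factor of 4 instead of 32 and only two fixed levels; the structure and names are illustrative, not Clojure's actual implementation. A functional update copies only the nodes on the path to the changed element and shares everything else:

```cpp
#include <array>
#include <cassert>
#include <memory>

constexpr int B = 4;  // branching factor (Clojure uses 32)

struct Leaf { std::array<int, B> items{}; };
struct Root { std::array<std::shared_ptr<const Leaf>, B> children{}; };

// A fresh tree of B*B zeroed elements; all children share one empty leaf.
std::shared_ptr<const Root> make_empty() {
  auto empty = std::make_shared<const Leaf>();
  auto root = std::make_shared<Root>();
  for (int i = 0; i < B; ++i) root->children[i] = empty;
  return root;
}

// Functional update: copy the root and the one affected leaf;
// every other leaf is shared with the old version.
std::shared_ptr<const Root> set(const std::shared_ptr<const Root>& root,
                                int index, int value) {
  auto new_leaf = std::make_shared<Leaf>(*root->children[index / B]);
  new_leaf->items[index % B] = value;
  auto new_root = std::make_shared<Root>(*root);
  new_root->children[index / B] = new_leaf;
  return new_root;
}
```

An update touches O(log n) nodes instead of copying the whole structure, which is exactly why this beats a naive copy-the-array "functional update".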
Moreover, sharing isn't really viable in Java. Java provides too many unrestricted hooks into mutability and reference equality to make sharing "just an optimization". It would probably irk you a bit to modify an element in a list, and then realize you had just modified an element in the other 20 versions of that list you were holding.
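The pitfall can be sketched in C++ terms as well: two "versions" of a list that share their elements through pointers. Mutating an element through one version is visible through the other; sharing is only safe when the shared elements are truly immutable.

```cpp
#include <memory>
#include <string>
#include <vector>

using List = std::vector<std::shared_ptr<std::string>>;

List v1{std::make_shared<std::string>("a"),
        std::make_shared<std::string>("b")};
List v2 = v1;  // "copy": both versions share the same element objects

// Touch an element through v1 only...
void mutate_first() { *v1[0] = "mutated"; }
// ...and v2 now observes the change too, because the element is shared.
```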
This also rules out huge classes of optimizations that are vital for efficient immutability: sharing, stream fusion, you name it; mutability breaks it. (That would make a good slogan for FP evangelists.)
The approach I would recommend is to focus on the interface of your key-value store, making it as clean and as unrestrictive as possible: it should allow maximum freedom to the callers, but also maximum freedom in choosing how to implement it.
Then, I would recommend that you provide an implementation that is as bare and as clean as possible, without any performance concerns whatsoever. To me it seems like `std::unordered_map` should be your first choice, or perhaps `std::map` if some kind of ordering of the keys must be exposed by the interface.
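A minimal sketch of what that first iteration might look like, backed directly by `std::unordered_map` with no performance tricks; the names (`KeyValueStore`, `get`/`put`/`erase`) are illustrative, not from the original question:

```cpp
#include <optional>
#include <string>
#include <unordered_map>

// Bare-minimum key-value store: a thin, clean interface over
// std::unordered_map, with no performance concerns whatsoever.
class KeyValueStore {
 public:
  std::optional<std::string> get(const std::string& key) const {
    auto it = map_.find(key);
    if (it == map_.end()) return std::nullopt;
    return it->second;
  }
  void put(const std::string& key, const std::string& value) {
    map_[key] = value;
  }
  void erase(const std::string& key) { map_.erase(key); }

 private:
  std::unordered_map<std::string, std::string> map_;
};
```

The point of keeping it this small is that the interface, not the implementation, is the thing you are iterating on.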
So, first get it to work cleanly and minimally; then put it to use in a real application. In doing so, you will find what issues you need to address in the interface; then go ahead and address them. Chances are that as a result of changing the interface you will need to rewrite big parts of the implementation, so any time invested in the first iteration beyond the bare minimum needed to get it to just barely work is time wasted.
Then, profile it, and see what needs to be improved in the implementation, without altering the interface. Or you may have your own ideas about how to improve the implementation, before you even profile. That's fine, but it is still no reason to work on these ideas at any earlier point in time.
You say you hope to do better than `std::map`; there are two things that can be said about that:
a) you probably won't;
b) avoid premature optimization at all costs.
With respect to the implementation, your main issue appears to be memory allocation: you seem to be structuring your design around problems you foresee with it. The best way to address memory allocation concerns in C++ is to implement suitable memory management, not to twist and bend the design around them. Consider yourself lucky to be using C++, which lets you do your own memory management, as opposed to languages like Java and C#, where you are pretty much stuck with what the language runtime has to offer.
There are various ways of going about memory management in C++, and the ability to overload `operator new` may come in handy. A simplistic memory allocator for your project would preallocate a huge array of bytes and use it as a heap (`byte* heap`). You would have a `firstFreeByte` index, initialized to zero, which indicates the first free byte in the heap. When a request for `N` bytes comes in, you return the address `heap + firstFreeByte` and you add `N` to `firstFreeByte`. Memory allocation thus becomes so fast and efficient that it is virtually a non-issue.
Of course, preallocating all of your memory may not be a good idea, so you may have to break your heap into banks which are allocated on demand, and keep serving allocation requests from the at-any-given-moment-newest bank.
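A minimal sketch of the bump allocator just described, following the text's `heap`/`firstFreeByte` names; requests are rounded up to `max_align_t` alignment so the returned pointers are usable, and growing the heap in banks is left out:

```cpp
#include <cassert>
#include <cstddef>

constexpr std::size_t kHeapSize = 1 << 20;           // one preallocated bank
alignas(std::max_align_t) unsigned char heap[kHeapSize];
std::size_t firstFreeByte = 0;                       // index of first free byte

void* allocate(std::size_t n) {
  constexpr std::size_t a = alignof(std::max_align_t);
  n = (n + a - 1) / a * a;                 // keep every block aligned
  assert(firstFreeByte + n <= kHeapSize);  // no on-demand banks in this sketch
  void* p = heap + firstFreeByte;          // hand out the first free byte...
  firstFreeByte += n;                      // ...and bump the index
  return p;
}
```

Note that this sketch never frees anything; that is exactly why it fits an immutable, append-only store, and why reclamation (below) is a separate, harder problem.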
Since your data are immutable, this is a good solution. It allows you to abandon the idea of variable-length objects and to have each `Pair` contain a pointer to its data, as it should, since the extra memory allocation for the data costs virtually nothing.
If you want to be able to discard objects from the heap so as to reclaim their memory, things become more complicated: you will need to use not plain pointers but pointers to pointers, so that you can move objects around in the heaps to reclaim the space of deleted objects. Everything becomes a bit slower due to the extra indirection, but it is still lightning fast compared to the standard runtime library's memory allocation routines.
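The double-indirection scheme can be sketched as follows: callers hold handles (pointers into a pointer table), so a compactor can move objects and patch only the table entry while every outstanding handle stays valid. All names here are illustrative.

```cpp
#include <cassert>

struct Object { int value; };

Object slot_a{7};             // original location of the object
Object slot_b{0};             // destination used by the "compactor"
Object* table[] = {&slot_a};  // one table entry per live object
using Handle = Object**;      // what callers actually hold

void compact() {
  // Relocate the object, then patch the table entry; callers'
  // handles point into the table, so none of them break.
  slot_b = slot_a;
  table[0] = &slot_b;
}
```

Each access costs one extra dereference (`(*h)->value` instead of `h->value`), which is the slowdown mentioned above.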
But of course, none of this is worth worrying about until you first build a straightforward, bare-minimum, working version of your database and put it to use in a real application.
Best Answer
While the goal of using immutable types is usually laudable, your specific implementation is excessively clever, involves loads of UB, and will fail for any `ParentStruct` that is not trivially copyable. Furthermore, your field instances are bound to a specific instance. Since these fields must not be copied around, you should at least delete their copy constructors and assignment operators. Also note that your fields each carry a comparatively huge `int64_t`-sized overhead: they are not a zero-cost abstraction! (Also, you should probably have used `ptrdiff_t` rather than explicit integer types.)

Part of your conceptual problem is that C++ is a value-oriented language, not a reference-oriented language like Java. In C++, a variable is the object. A variable cannot be re-assigned to refer to a different object, but we can overwrite the contents of the object with a new value. This has a direct impact on how we think about immutability. In particular, having mutable values is not immediately problematic. Where we want to disallow changes to a value through a specific reference (such as a variable name), we can mark that reference `const`.

C++ already has an acceptable way to create a new object with only one field changed: make a copy, then change that field.
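In plain value semantics that looks like the following, using a hypothetical `ComplexNumber` with public members (the question's actual layout may differ):

```cpp
struct ComplexNumber { double real; double imag; };

ComplexNumber x{3.0, 4.0};

ComplexNumber updated() {
  ComplexNumber y = x;  // make a copy of the whole value...
  y.real = 12.0;        // ...then change that one field; x is untouched
  return y;
}
```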
If `y` is supposed to be `const`, then we can use an immediately-invoked lambda (which, thanks to copy elision, does not require a copy of the inner `y` to the outer variable). This pattern is also nicely generalizable, so that you could write a macro to define setter-methods.
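Since the original snippets are missing here, this is a hypothetical reconstruction of both patterns: the immediately-invoked lambda, and a macro that stamps out copy-returning setters (the names `with_real`/`with_imag` follow the usage below).

```cpp
struct ComplexNumber {
  double real = 0;
  double imag = 0;

// Each with_<field> returns a modified copy, leaving *this untouched.
#define DEFINE_WITH(field)                     \
  ComplexNumber with_##field(double v) const { \
    ComplexNumber copy = *this;                \
    copy.field = v;                            \
    return copy;                               \
  }
  DEFINE_WITH(real)
  DEFINE_WITH(imag)
#undef DEFINE_WITH
};

ComplexNumber x{3, 4};

// y stays const: the lambda builds the modified copy, and copy
// elision constructs its result directly into y.
const ComplexNumber y = [] {
  ComplexNumber tmp = x;
  tmp.real = 12;
  return tmp;
}();
```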
Then: `const ComplexNumber y = x.with_real(12).with_imag(5);` at the cost of an extra copy.