I've seen some posts on the StackExchange family of sites talking about std::vector implementations. They all seem to indicate that std::vector is implemented strictly as an array (in practice), and that C++ 2003 dictates contiguity of elements – pretty much closing non-array loopholes.
Now I thought that I had read once of a non-array implementation of std::vector – perhaps this predated the 2003 enforcement of contiguity? (Edit: Herb Sutter makes note of this here) I believe it was something like a series of linked arrays with decreasing or increasing sizes under the hood, but I can't remember the details. Does anyone know of std::vector implementations (or perhaps, more broadly, non-STL vector implementations) that use a non-array approach like this?
Edit: I'd like to clarify here that the emphasis is less on strict std::vector implementation for C++ and rather more on 1) historical STL implementations prior to C++ 2003 contiguous elements constraints or possibly even 2) "vector" implementations in other languages – that do not use the usual array-like structure. A VList implementation of a vector might be a potential example and I'm looking for others.
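To make that concrete, the kind of structure I have in mind looks roughly like this: fixed blocks whose sizes double, so nothing is ever copied on growth, yet indexing stays O(1). This is my own sketch of the idea, not any implementation I've actually seen:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of a "linked arrays of increasing sizes" vector: block j holds 2^j
// elements, so element i lives in block floor(log2(i+1)) at offset
// i + 1 - 2^block. Growth never copies old elements; access is O(1)
// (the loop below is O(log n) for clarity, but a count-leading-zeros
// instruction makes it constant time).
class BlockedVector {
public:
    void push_back(int value) {
        std::size_t b = blocks_.size();
        if (b == 0 || blocks_.back().size() == (std::size_t(1) << (b - 1))) {
            blocks_.emplace_back();                        // open the next block
            blocks_.back().reserve(std::size_t(1) << b);   // block b holds 2^b
        }
        blocks_.back().push_back(value);
        ++size_;
    }

    int& operator[](std::size_t i) {
        std::size_t k = i + 1;
        std::size_t b = 0;
        while ((std::size_t(2) << b) <= k) ++b;            // floor(log2(k))
        return blocks_[b][k - (std::size_t(1) << b)];
    }

    std::size_t size() const { return size_; }

private:
    std::vector<std::vector<int>> blocks_;   // blocks of sizes 1, 2, 4, 8, ...
    std::size_t size_ = 0;
};
```

Note that elements are not contiguous across blocks, which is exactly why the C++ 2003 contiguity guarantee rules this kind of layout out for std::vector.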
Best Answer
StackOverflow would be a somewhat better place for asking programming/algorithm-related questions. In any case, the implementation you must have read about would be based on "tables". Here is how such an implementation works:
Address 0xAAA0 to 0xAAB0: memory reserved for the first 16 elements.
On the 17th insert, the library allocates memory for 16 * 2 = 32 elements, copies the existing 16 over (actual time taken: 16 units), then inserts the 17th element.
On the 33rd insert, the library allocates memory for 32 * 2 = 64 elements, copies the existing 32 over (actual time taken: 32 units), then inserts the 33rd element.
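The doubling behaviour traced above can be sketched in a toy growable array. This is an illustration of the technique only, not how any real std::vector is written (a real one works through an allocator and uninitialized storage rather than new[]):

```cpp
#include <cassert>
#include <cstddef>

// Toy growable array using the doubling strategy: when the buffer is full,
// reserve twice as much, copy the old elements over, release the old buffer.
template <typename T>
class GrowableArray {
public:
    GrowableArray() = default;
    GrowableArray(const GrowableArray&) = delete;             // keep the sketch simple
    GrowableArray& operator=(const GrowableArray&) = delete;
    ~GrowableArray() { delete[] data_; }

    void push_back(const T& value) {
        if (size_ == capacity_) {
            // Full: double the capacity (16 to start, matching the trace above).
            std::size_t new_cap = capacity_ == 0 ? 16 : capacity_ * 2;
            T* new_data = new T[new_cap];
            for (std::size_t i = 0; i < size_; ++i)   // copy the old elements
                new_data[i] = data_[i];
            delete[] data_;
            data_ = new_data;
            capacity_ = new_cap;
        }
        data_[size_++] = value;
    }

    T& operator[](std::size_t i) { return data_[i]; }  // contiguous, so O(1)
    std::size_t size() const { return size_; }
    std::size_t capacity() const { return capacity_; }

private:
    T* data_ = nullptr;
    std::size_t size_ = 0;
    std::size_t capacity_ = 0;
};
```

Pushing a 17th element triggers the 16 → 32 reallocation described in the trace.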
This implementation is O(1) for access and O(1) amortized for insertion. How? For n = 2^k inserts, the total time is:

Time = n (one unit per insert) + 2^0 + 2^1 + 2^2 + ... + 2^(k-1) (copies at each doubling) = n + (n - 1) < 2n

Average time per insert ≈ 2 units in this model; the classic banker's-method accounting, where each insert prepays for the copies it will later trigger, budgets 3 units per insert. Either way, it is a constant.
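The sum above can be sanity-checked with a small simulation of that cost model (assumptions: 1 unit per inserted element, 1 unit per copied element, capacity starting at 1 and doubling):

```cpp
#include <cstddef>

// Simulated total cost of n push_backs under the doubling strategy:
// 1 unit per insert plus 1 unit per element copied at each reallocation.
long long doubling_cost(std::size_t n) {
    long long cost = 0;
    std::size_t cap = 1, size = 0;
    for (std::size_t i = 0; i < n; ++i) {
        if (size == cap) {   // full: copy every existing element to a new buffer
            cost += static_cast<long long>(size);
            cap *= 2;
        }
        cost += 1;           // the insert itself
        ++size;
    }
    return cost;
}
```

For n a power of two this returns n + (n - 1) = 2n - 1 units, i.e. just under 2 units per insert on average.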
It's not a linked list, but I'm pretty sure this is what you read about: 'increasing/decreasing' sizes. Shrinking on deletes works similarly: when the used size drops below 1/4 of the capacity, free half of the memory. If you're using your own memory allocator, it shouldn't be too hard to free only what you want; and even if you have to copy the elements into a smaller buffer instead, the same analysis shows that deletes remain amortized O(1).
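A sketch of that shrink policy, tracked as bookkeeping only (this is the common textbook scheme, not the behaviour of any particular std::vector, which typically never shrinks on its own):

```cpp
#include <cstddef>

// Capacity bookkeeping for grow-on-full / shrink-at-quarter. Halving only
// when usage falls below 1/4 (rather than 1/2) prevents a sequence that
// alternates around the boundary from reallocating on every operation.
struct ShrinkingBuffer {
    std::size_t capacity = 16;   // assumed starting reservation
    std::size_t size = 0;

    void push_back() {
        if (size == capacity) capacity *= 2;   // grow: would copy `size` elements
        ++size;
    }
    void pop_back() {
        if (size == 0) return;
        --size;
        if (capacity > 16 && size < capacity / 4)
            capacity /= 2;                     // shrink: would copy `size` elements
    }
};
```

For example, after 33 pushes the capacity has doubled twice to 64; popping back down below 16 elements halves it to 32, and below 8 halves it again to 16.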