As far as I can tell, the reason can be found in the part of the javadoc you didn't quote (emphasis below mine):
An iterator for lists that allows the programmer to traverse the list in either direction, *modify the list during iteration*...
You see, the intended purpose is to allow usage while the list is being modified. Possible modifications apparently include removal of elements.
Now think of what would happen if we removed the element that was current()
for the iterator - assuming the iterator had a notion of a current element. In this light, implementing it without a notion of a current element makes pretty good sense to me - because that way, the iterator does not have to worry about element removal at all.
It is important to note that the javadoc does not require interface implementations to be thread-safe. Because of that, one should not expect correct handling of modifications made from a different thread - for that, an implementation would have to provide additional means to synchronize access, guarantee visibility, etc., as specified by the Java Memory Model per JSR 133.
What ListIterator is capable of is handling modifications made from the same thread that is iterating. Not all iterators are like that; the ConcurrentModificationException javadoc specifically warns about this:
...Note that this exception does not always indicate that an object has been concurrently modified by a different thread. If a single thread issues a sequence of method invocations that violates the contract of an object, the object may throw this exception. For example, if a thread modifies a collection directly while it is iterating over the collection with a fail-fast iterator, the iterator will throw this exception...
... it's definitely useful to have the option of passing ranges. But at least in my experience, that's a rare special case. I'll usually want to operate on whole containers
It may be a rare special case in your experience, but in reality the whole container is the special case, and the arbitrary range is the general case.
You've already noticed that you can implement the whole container case using the current interface, but you can't do the converse.
So, the library-writer had a choice between implementing two interfaces up front, or only implementing one which still covers all cases.
It's easy to write a wrapper function which takes a container and calls begin() and end() on it, but such convenience functions are not included in the standard library
True, especially since the free functions std::begin and std::end are now included.
So, let's say the library provides the convenience overload:
```cpp
template <typename Container>
void sort(Container &c) {
    sort(begin(c), end(c));
}
```
Now it also needs to provide the equivalent overload taking a comparison functor, and we need to provide equivalents for every other algorithm.
But we at least covered every case where we want to operate on a full container, right? Well, not quite. Consider
```cpp
std::for_each(c.rbegin(), c.rend(), foo);
```
If we want to handle operating backwards on containers, we need another method (or pair of methods) per existing algorithm.
So, the range-based approach is more general in the simple sense that:
- it can do everything the whole-container version can
- the whole-container approach doubles or triples the number of overloads required, while still being less powerful
- the range-based algorithms are also composable (you can stack or chain iterator adaptors, although this is more commonly done in functional languages and Python)
There's another valid reason, of course, which is that it was already a lot of work to get the STL standardized, and inflating it with convenience wrappers before it had been widely used wouldn't be a great use of limited committee time. If you're interested, you can find Stepanov & Lee's technical report here
As mentioned in comments, Boost.Range provides a newer approach without requiring changes to the standard.
Yes - it forces an extra indirection and an extra allocation per element, and in multithreaded programs each increment or decrement of the reference count is an atomic operation, making it extra expensive even if a given container is only ever used from a single thread.
All of these might be fine, and even desirable, in some situations, but the general rule is not to impose unnecessary overheads which the user cannot avoid, even when they're useless.
Since none of these overheads are necessary, but are rather debugging niceties (and remember, incorrect iterator lifetime is a static logic bug, not some weird runtime behaviour), no-one would thank you for slowing down their correct code to catch your bugs.
So, to the original question:
the real question is, should the cost of tracking all live iterators into a container, and invalidating them when the container is destroyed, be foisted on people whose code is correct?
I think probably not, although if there is some case where it's genuinely hard to manage iterator lifetimes correctly and you're willing to take the hit, a dedicated container (or container adapter) that provides this service could be added as an option.
Alternatively, switching to a debug implementation based on a compiler flag might be reasonable, but it's a much bigger and more expensive change than most that are controlled by DEBUG/NDEBUG. It's certainly a bigger change than either removing assert statements or using a debugging allocator.
I forgot to mention, but your solution of using shared_ptr everywhere doesn't necessarily fix your bug anyway: it may merely exchange it for a different bug, namely a memory leak, since reference counting cannot reclaim cycles.