Don't expose your guts; guide your visitors:
#include <iostream>
#include <vector>

class A {
public:
    // we assume you want read-only versions; if not, you can add non-const versions
    template< class Func >
    void for_each_primary( Func f ) const { for_each_value( f, m_primary ); }

    template< class Func >
    void for_each_secondary( Func f ) const { for_each_value( f, m_secondary ); }

private:
    std::vector<int> m_primary;
    std::vector<char> m_secondary;

    // const, because it is called from const member functions
    template< class Func, class Container >
    void for_each_value( Func f, const Container& c ) const
    {
        for( auto i : c )
            f( i );
    }
};
int main()
{
    A a;
    a.for_each_primary( []( int value )
        { std::cout << "Primary Value : " << value << '\n'; } );
    a.for_each_secondary( []( char value )
        { std::cout << "Secondary Value : " << value << '\n'; } );
}
Note that you could use std::function instead of a template parameter if you want to put the implementation in a .cpp file, making implementation changes cheaper in terms of compilation time in big projects.
Also, I didn't try to compile it just now, but I have used this pattern a lot in my open-source projects.
This solution is a C++11 enhancement of B I guess.
HOWEVER
This solution has several issues:
- It requires C++11 to be effective, because it is convenient for the user of your class only if they can use lambdas.
- It relies on the class implementer knowing exactly which algorithms should be available to users. If the user needs to do complex manipulations of the numbers, for example jumping from index to index in an unpredictable way, then exposing iterators, a copy of the values, or the values themselves would be better.
In fact, this kind of choice totally depends on what you intend the user to do with this class.
By default I prefer the solution I gave you because it is the most "isolated" one: the class controls exactly how its values can be manipulated by external code. It works a bit like "extension points".
If the member is a map, providing a find function on your class is easy. That is why I think this is the saner way to expose data, and lambdas are what make it convenient.
As said, if you need to let the user manipulate the data however they wish, then providing a copy of the container is the next most "isolated" option (perhaps with a way to reset the container from the modified copy afterwards). If a copy would be expensive, then iterators are the better choice. If even that is not enough, a reference is the last resort, but it is almost certainly a bad idea.
Now, assuming you're using C++11 and don't want to provide algorithms, the most idiomatic way is to expose iterators like this (only the user-facing code changes):
#include <vector>

class B {
private:
    std::vector<int> m_primary;
    std::vector<char> m_secondary;
public:
    // This code is read-write enabled... make sure it's not const_iterator you want.
    // Also, I'm using decltype so the container type can change without having to
    // manually update the function signatures.
    // Note: these can't be const member functions, since they hand out mutable iterators.
    decltype(m_primary)::iterator primary_begin() { return m_primary.begin(); }
    decltype(m_primary)::iterator primary_end() { return m_primary.end(); }
    decltype(m_secondary)::iterator secondary_begin() { return m_secondary.begin(); }
    decltype(m_secondary)::iterator secondary_end() { return m_secondary.end(); }
};
int main()
{
    B b;
    std::for_each( b.primary_begin(), b.primary_end(), []( int& value ) {
        // ...
    });
    std::for_each( b.secondary_begin(), b.secondary_end(), []( char& value ) {
        // ...
    });
}
Is using shared_ptr to the backing data an unnecessary inefficiency?
Yes: it forces an extra indirection and an extra allocation per element, and in multithreaded programs each increment and decrement of the reference count is an atomic operation, which is especially expensive even if a given container is only ever used inside a single thread.
All of these might be fine, and even desirable, in some situations, but the general rule is not to impose overheads the user cannot avoid, even when those overheads are useless to them.
Since none of these overheads are necessary, but are rather debugging niceties (and remember, incorrect iterator lifetime is a static logic bug, not some weird runtime behaviour), no-one would thank you for slowing down their correct code to catch your bugs.
So, to the original question:
should an iterator into a custom container survive the container itself being destroyed?
the real question is, should the cost of tracking all live iterators into a container, and invalidating them when the container is destroyed, be foisted on people whose code is correct?
I think probably not, although if there is some case where it's genuinely hard to manage iterator lifetimes correctly and you're willing to take the hit, a dedicated container (or container adapter) that provides this service could be added as an option.
Alternatively, switching to a debug implementation based on a compiler flag might be reasonable, but it's a much bigger and more expensive change than most that are controlled by DEBUG/NDEBUG. It's certainly a bigger change than either removing assert statements or using a debugging allocator.
I forgot to mention, but your solution of using shared_ptr everywhere doesn't necessarily fix your bug anyway: it may merely exchange it for a different bug, namely a memory leak.
Best Answer
It depends on the nature of those custom operations, but probably yes, it's bad design. And the main reason for stating this is that you are coupling operation and storage.
If a certain custom operation is semantically independent, it can be better implemented as a standalone (functor) class or function taking a specific container parameter. Moreover, if it doesn't depend on the concrete container class, an extra template parameter specifying the container can be added, or even better, the operation can be refactored to work over iterator ranges, just like the STL algorithm functions.
Also, wrapping things the way you do involves forwarding a significant part of the wrapped container's public interface. That is a clear smell.