Your code has significant other issues apart from just that. Manually deleting a pointer? Calling a cleanup function? Ouch. Also, as accurately pointed out in the question comments, you don't use RAII for your lock, which is another fairly epic fail and guarantees that when DoSomethingImportant throws an exception, the mutex is never released.
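A minimal sketch of the difference (the function names here are invented for illustration): with manual lock/unlock calls, an exception skips the unlock and the mutex stays locked forever, while a `std::lock_guard` releases in its destructor no matter how the scope exits.

```cpp
#include <cassert>
#include <mutex>
#include <stdexcept>
#include <string>

std::mutex g_mutex;
std::string g_log;

void DoSomethingImportant(bool fail) {
    if (fail) throw std::runtime_error("boom");
    g_log += "ok;";
}

// Manual locking: if DoSomethingImportant throws, unlock() is
// skipped and the mutex stays locked forever.
void fragile(bool fail) {
    g_mutex.lock();
    DoSomethingImportant(fail);
    g_mutex.unlock();   // never reached when an exception is thrown
}

// RAII: lock_guard's destructor releases the mutex even when an
// exception propagates out of the critical section.
void robust(bool fail) {
    std::lock_guard<std::mutex> lock(g_mutex);
    DoSomethingImportant(fail);
}
```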
The fact that this multithreaded bug is occurring is just a symptom of the core problem: your code has extremely bad semantics in any threading situation, and you're relying on completely unreliable tools and outdated idioms. If I were you, I'd be amazed that it works with a single thread, let alone more.
Common Mistake #3 - Even though the objects are reference counted, the
shutdown sequence "releases" its pointer but forgets to wait for the
thread that is still running to release its instance. As a result,
components are shut down cleanly, then spurious or late callbacks are
invoked on an object in a state that is not expecting any more calls.
The whole point of reference counting is that the thread has already released its instance; if it hasn't, the object cannot be destroyed, because the thread still holds a reference.
Use std::shared_ptr. When all threads have released their references (and nobody, therefore, can be calling into the object, as they have no pointer to it), the destructor is called. This is guaranteed safe.
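A sketch of this pattern, with a hypothetical `Component` type invented for illustration: each worker thread copies the `std::shared_ptr`, so the "shutdown" code can drop its own reference immediately and the object still outlives every in-flight callback. The destructor runs exactly once, after the last copy is released.

```cpp
#include <atomic>
#include <cassert>
#include <memory>
#include <thread>
#include <vector>

std::atomic<bool> g_destroyed{false};

struct Component {
    ~Component() { g_destroyed = true; }  // runs once, after the last release
    void Work() const { /* safe: the shared_ptr keeps *this alive */ }
};

void run() {
    auto comp = std::make_shared<Component>();

    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back([comp] {          // each thread copies the shared_ptr,
            for (int j = 0; j < 1000; ++j)  // bumping the reference count
                comp->Work();
        });

    comp.reset();           // "shutdown" releases its reference...
    for (auto& t : pool)    // ...but the object survives until every
        t.join();           // worker has dropped its own copy
}
```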
Secondly, use a real threading library, like Intel's Threading Building Blocks or Microsoft's Parallel Patterns Library. Writing your own is time-consuming and unreliable, and your code is full of threading details it doesn't need. Rolling your own locks is just as bad as rolling your own memory management. These libraries have already implemented many general-purpose, well-tested threading idioms that work correctly for your use case.
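TBB and PPL may not be installed everywhere, so as a stand-in here is the same idea sketched with the standard `<future>` facilities: the library owns the thread's lifetime and transports exceptions back to the caller, so the calling code contains no explicit locks, joins, or thread management.

```cpp
#include <cassert>
#include <cstddef>
#include <future>
#include <numeric>
#include <vector>

// Sum a vector on two cores without writing a single lock or join.
// std::async manages the worker's lifetime, and get() rethrows any
// exception in the caller, much as a library like TBB would with
// its parallel algorithms.
long parallel_sum(const std::vector<int>& v) {
    const std::size_t mid = v.size() / 2;
    auto lower = std::async(std::launch::async, [&] {
        return std::accumulate(v.begin(), v.begin() + mid, 0L);
    });
    long upper = std::accumulate(v.begin() + mid, v.end(), 0L);
    return lower.get() + upper;  // blocks until the worker finishes
}
```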
Best Answer
Transactional Memory (TM) programming really has two elements that need to be discussed: productivity and performance.
Productivity
Compared to locks, Transactional Memory can be considered a higher-level access-control construct. The difference is akin to that between imperative and declarative programming.
From a programmer's standpoint, TM is supposed to allow you to declare what needs to be executed atomically, rather than providing instructions (in the forms of locking code) on how the atomic execution should be done.
This comes with a number of promised benefits compared to locking: easier reasoning about concurrent code, composability of atomic regions, and freedom from deadlock.
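To make the imperative/declarative contrast concrete, here is a hedged sketch; the `Account` type is invented for illustration. The runnable version spells out *how* atomicity is achieved (which mutexes, acquired how), while the commented version shows the declarative shape using GCC's `__transaction_atomic` extension (enabled with `-fgnu-tm`), where you state only *what* must be atomic.

```cpp
#include <cassert>
#include <mutex>

// Imperative, lock-based style: the programmer spells out which
// mutexes protect the data and how they are acquired.
struct Account {
    std::mutex m;
    long balance = 0;
};

void transfer_locked(Account& from, Account& to, long amount) {
    std::lock(from.m, to.m);  // acquires both without deadlock
    std::lock_guard<std::mutex> la(from.m, std::adopt_lock);
    std::lock_guard<std::mutex> lb(to.m, std::adopt_lock);
    from.balance -= amount;
    to.balance   += amount;
}

// Declarative, transactional style (GCC's -fgnu-tm extension;
// shown only as a sketch, not portable standard C++):
//
//   void transfer_tm(Account& from, Account& to, long amount) {
//       __transaction_atomic {
//           from.balance -= amount;
//           to.balance   += amount;
//       }
//   }
//
// The programmer declares the atomic region; the runtime decides how.
```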
Performance
Compared to locks, TM is, at least in principle (implementations vary here!), an optimistic access-control construct, as opposed to the pessimism of locks. The general idea of TM is that instead of requiring mutual exclusion for an atomic region, as you would with locks, the system allows multiple threads to enter the atomic region concurrently.
By tracking state changes and monitoring for data races, the system allows the threads to commit when no race occurs; otherwise, at least one transaction is aborted and restarted. When conflicts are rare, this can be a substantial performance gain.
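The abort-and-retry shape can be sketched in miniature with a single atomic word: read a snapshot, compute the new value speculatively, then try to commit with a compare-exchange, restarting on conflict. This is only an analogy for what a TM runtime does across many memory locations, but the optimistic structure (no lock is ever held) is the same.

```cpp
#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

std::atomic<long> g_counter{0};

// A miniature "optimistic transaction": if another thread committed
// first (a conflict), the compare-exchange fails and the transaction
// restarts -- the same abort-and-retry cycle a TM runtime performs,
// collapsed here to a single word of shared state.
void add_optimistic(long delta) {
    long expected = g_counter.load();
    while (!g_counter.compare_exchange_weak(expected, expected + delta)) {
        // On failure, expected is refreshed with the current value;
        // recompute the speculative result and retry.
    }
}
```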
Caveats
Like all real systems, TM comes with a list of caveats as long as my arm. Here's a partial set: