Your code has significant other issues apart from just that. Manually deleting a pointer? Calling a cleanup function? Ouch. Also, as accurately pointed out in the question comments, you don't use RAII for your lock, which is another fairly epic fail and guarantees that when DoSomethingImportant throws an exception, terrible things happen.
The fact that this multithreaded bug occurs is just a symptom of the core problem: your code has extremely bad semantics in any threading situation, and you're using completely unreliable tools and outdated idioms. If I were you, I'd be amazed that it functions with a single thread, let alone more.
Common Mistake #3 - Even though the objects are reference counted, the shutdown sequence "releases" its pointer but forgets to wait for the thread that is still running to release its instance. As a result, components are shut down cleanly, and then spurious or late callbacks are invoked on an object in a state that is not expecting any more calls.
The whole point of reference counting is that the thread has already released its instance; if it hasn't, the object cannot be destroyed, because the thread still holds a reference.
Use std::shared_ptr. When all threads have released their references (and therefore nobody can be calling into the object, since nobody holds a pointer to it anymore), the destructor is called. This is guaranteed safe.
Secondly, use a real threading library, like Intel's Threading Building Blocks or Microsoft's Parallel Patterns Library. Writing your own is time-consuming and unreliable, and your code is full of threading details it doesn't need. Rolling your own locks is just as bad as rolling your own memory management. These libraries have already implemented many general-purpose, very useful threading idioms that work correctly for your use case.
I created a singleton class that can be used to manage this proxy.
/// <summary>
/// Singleton pattern.
/// <para>ConnectionHelper provides the connection to the server.</para>
/// </summary>
public sealed class ConnectionHelper
{
    private static readonly ConnectionHelper instance = new ConnectionHelper();

    // Private constructor prevents external instantiation; without it the
    // compiler would generate a public one and break the singleton.
    private ConnectionHelper() { }

    /// <summary>
    /// Provides the read-only instance of the ConnectionHelper class.
    /// </summary>
    public static ConnectionHelper Instance
    {
        get { return instance; }
    }

    /// <summary>
    /// Provides access to IServer members.
    /// </summary>
    public ServerProxy ServerProxyInstance { get; set; }

    /// <summary>
    /// Gets or sets the peer vue communication URL.
    /// </summary>
    /// <value>The peer vue communication URL.</value>
    public string CommunicationURL { get; set; }

    .....
    ...
    ..
}
Best Answer
The Guava library has the concepts of a ListenableFuture and a SettableFuture.
A ListenableFuture allows you to register callbacks to be executed once the computation is complete, or, if the computation is already complete, immediately. This simple addition makes it possible to efficiently support many operations that the basic Future interface cannot.
Because the Runnable interface does not provide direct access to the Future result, users who want such access may prefer Futures.addCallback. A FutureCallback<V> implements two methods:
onSuccess(V), the action to perform if the future succeeds, based on its result.
onFailure(Throwable), the action to perform if the future fails, based on the failure.
The most important reason to use ListenableFuture is that it becomes possible to have complex chains of asynchronous operations. When several operations should begin as soon as another operation starts ("fan-out"), ListenableFuture just works: it triggers all of the requested callbacks. With slightly more work, we can "fan-in", that is, trigger a ListenableFuture to be computed as soon as several other futures have all finished; see the implementation of Futures.allAsList for an example.
In .NET, these concepts are implemented using Task, TaskCompletionSource, and ContinueWith.