I ran into a scenario where I had a delegate callback that could occur on either the main thread or another thread, and I wouldn't know which until runtime (using StoreKit.framework). I also had UI code that needed to be updated in that callback, and the update had to happen before the rest of the function executed, so my initial thought was to have a function like this:
```objc
- (void)someDelegateCallback:(id)sender
{
    dispatch_sync(dispatch_get_main_queue(), ^{
        // UI update code here
    });
    // code here that depends upon the UI getting updated
}
```
That works great when it is executed on a background thread. However, when executed on the main thread, the program deadlocks.
That alone seems interesting to me: if I read the docs for dispatch_sync right, I would expect it to just execute the block outright, not worry about scheduling it into the run loop, as said here:
As an optimization, this function invokes the block on the current thread when possible.
But that's not too big of a deal; it simply means a bit more typing, which led me to this approach:
```objc
- (void)someDelegateCallBack:(id)sender
{
    dispatch_block_t onMain = ^{
        // update UI code here
    };
    if (dispatch_get_current_queue() == dispatch_get_main_queue())
        onMain();
    else
        dispatch_sync(dispatch_get_main_queue(), onMain);
}
```
However, this seems a bit backwards. Was this a bug in the making of GCD, or is there something that I am missing in the docs?
Best Answer
dispatch_sync does two things: it queues a block, and it blocks the current thread until that block finishes running.

Given that the main queue is a serial queue (which means it uses only one thread), if you run the following statement on the main queue:
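(a representative statement; the block body is an assumption:)

```objc
dispatch_sync(dispatch_get_main_queue(), ^{
    // never executes: the main thread is blocked waiting for this block,
    // and the serial main queue cannot run it on any other thread
    NSLog(@"inside the block");
});
```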
the following events will happen:

1. dispatch_sync queues the block in the main queue.
2. dispatch_sync blocks the thread of the main queue until the block finishes executing.
3. dispatch_sync waits forever, because the thread where the block is supposed to run is blocked.

The key to understanding this issue is that dispatch_sync does not execute blocks, it only queues them. Execution will happen on a future iteration of the run loop.

The approach you tried (comparing the current queue against the main queue and running the block directly when they match) is perfectly fine, but be aware that it won't protect you from complex scenarios involving a hierarchy of queues. In such a case, the current queue may be different from a previously blocked queue where you are trying to send your block. Example:
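A sketch of such a hierarchy, using the workerQ and funnelQ names from the explanation below (queue labels are assumptions):

```objc
dispatch_queue_t workerQ = dispatch_queue_create("com.example.worker", NULL);
dispatch_queue_t funnelQ = dispatch_queue_create("com.example.funnel", NULL);

// workerQ now executes its blocks on funnelQ's (single) thread
dispatch_set_target_queue(workerQ, funnelQ);

dispatch_sync(workerQ, ^{
    // dispatch_get_current_queue() returns workerQ here, not funnelQ,
    // so a naive queue comparison would not catch the next call...
    dispatch_sync(funnelQ, ^{
        // ...which deadlocks: funnelQ's thread is blocked by the outer block
    });
});
```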
For complex cases, read/write key-value data on the dispatch queue:
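A sketch of that technique (queue names and the "funnel" tag match the explanation that follows; `block` stands for whatever work you need done):

```objc
static void *kFunnelKey = &kFunnelKey;  // key is compared by pointer, never dereferenced

dispatch_queue_t workerQ = dispatch_queue_create("com.example.worker", NULL);
dispatch_queue_t funnelQ = dispatch_queue_create("com.example.funnel", NULL);
dispatch_set_target_queue(workerQ, funnelQ);

// tag funnelQ with the word "funnel"
dispatch_queue_set_specific(funnelQ, kFunnelKey, (void *)"funnel", NULL);

dispatch_block_t block = ^{
    // work that must run on funnelQ
};

dispatch_sync(workerQ, ^{
    // dispatch_get_specific walks up the target hierarchy, so it finds the
    // tag even though it was set on funnelQ rather than workerQ
    if (dispatch_get_specific(kFunnelKey)) {
        block();  // already inside the funnelQ hierarchy; run directly
    } else {
        dispatch_sync(funnelQ, block);
    }
});
```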
Explanation:

- I create a workerQ queue that points to a funnelQ queue. In real code this is useful if you have several "worker" queues and you want to resume/suspend them all at once (which is achieved by resuming/updating their target funnelQ queue).
- I tag funnelQ with the word "funnel".
- Later down the road, I dispatch_sync something to workerQ, and for whatever reason I want to dispatch_sync to funnelQ while avoiding a dispatch_sync to the current queue, so I check for the tag and act accordingly. Because the get walks up the hierarchy, the value won't be found in workerQ but it will be found in funnelQ. This is a way of finding out whether any queue in the hierarchy is the one where we stored the value, and therefore a way to prevent a dispatch_sync to the current queue.

If you are wondering about the functions that read/write context data, there are three:
- dispatch_queue_set_specific: Write to a queue.
- dispatch_queue_get_specific: Read from a queue.
- dispatch_get_specific: Convenience function to read from the current queue.

The key is compared by pointer, and never dereferenced. The last parameter in the setter is a destructor to release the key.
If you are wondering about “pointing one queue to another”, it means exactly that. For example, I can point a queue A at the main queue, and it will cause all blocks in queue A to run on the main queue (usually this is done for UI updates).
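This can be sketched as follows (queue name is an assumption):

```objc
dispatch_queue_t queueA = dispatch_queue_create("com.example.queueA", NULL);

// all blocks submitted to queueA will now execute on the main queue
dispatch_set_target_queue(queueA, dispatch_get_main_queue());

dispatch_async(queueA, ^{
    // runs on the main thread; safe for UI updates
});
```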