TL;DR: The Reduce/Compose Functional Programming Idiom
# define/import: compose(fn1, fn2)(arg) => fn2(fn1(arg))
# define/import: reduce(fn, [l1, l2, l3, ...]) => fn(fn(l1, l2), l3)...
commands = [Math.sqrt, Math.floor, String]
floorSqrt = reduce(compose, commands)
# floorSqrt(2) == String(Math.floor(Math.sqrt(2)))
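The pseudocode above can be made runnable. This is a minimal sketch: `compose` and `reduce` here are hand-rolled stand-ins for whatever FP library you would actually import.

```javascript
// Hand-rolled stand-ins for the imported helpers described above.
const compose = (fn1, fn2) => (arg) => fn2(fn1(arg));
const reduce = (fn, list) => list.reduce(fn);

const commands = [Math.sqrt, Math.floor, String];
const floorSqrt = reduce(compose, commands);

console.log(floorSqrt(2)); // "1" — String(Math.floor(Math.sqrt(2)))
```

Note that `reduce` folds the array left to right, so the functions apply in the order they appear in `commands`.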
Notes:
Using reduce and compose together is a very common idiom when you want to set up a pipeline of transformations for data to pass through. It's both readable and elegant.
<rant>
There are too many programmers who conflate readability with familiarity and vice versa. They believe an unfamiliar idiom -- like reduce/compose -- is unreadable simply because it is unfamiliar. To me this behavior is more aligned with a dilettante than with someone who is serious about their craft. </rant>
PHP
Your cache read/write is a critical section. You'll need to protect it with your choice of mutual exclusion to prevent the false read you describe. For better or worse, locking in PHP isn't straightforward. The cross-platform solution uses a file (guaranteed to cause grief if your server gets busy). Beyond that it depends on both the OS and server configuration. You can read more here: https://stackoverflow.com/a/2921596/7625.
Node.js
Since Node is single-threaded, you don't need a lock unless you execute an async operation (I/O and related). This doesn't necessarily solve all your problems, however. Read more below.
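To make the Node caveat concrete, here is a hypothetical sketch of a read-modify-write cache with an async gap in the middle. Both simulated requests read the same stale value, so one update is lost -- exactly the false read described above. The cache shape and timings are made up for illustration.

```javascript
// Hypothetical in-memory cache; the async gap is what creates the race.
const cache = { count: 0 };
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function incrementWithGap() {
  const value = cache.count;  // read
  await sleep(10);            // async I/O — other requests run here
  cache.count = value + 1;    // write back a now-stale value
}

async function main() {
  // Two "requests" interleave across the await.
  await Promise.all([incrementWithGap(), incrementWithGap()]);
  console.log(cache.count); // 1, not 2 — the lost update
}
main();
```

Remove the `await sleep(10)` and the function runs to completion without yielding, so no lock is needed; that is the single-threaded guarantee doing its job.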
All
As described, you have a big problem looming. I sense a hunch when you say, "...that will block everything else node is doing, won't it?" That's not quite the problem -- you can create a non-blocking wait. But does waiting solve your problem? Not really. If your site is busy, each waiting request increases the chance the next request will have to wait. The requests pile up... With enough traffic, the waits get longer and longer. There will be timeouts. There will be hand-wringing. There will be tears.
This is an equal-opportunity problem. Neither PHP nor Node is immune. (In fact, everyone is vulnerable given a throttled resource and the approach you describe.) A message queue doesn't save you, either. All a message queue gets you is a bunch of queued requests that are waiting. We need a way to dequeue those requests!
Luckily, this can be pretty straight-forward if we push more responsibility to the browser. With a little re-jiggering, the response back to the browser can contain a status and an optional result. On the server, send a "success" status and result if you get an API token. Otherwise, send a "not yet" status. In the browser, if the request is successful proceed as normal. If not, proceed as you see fit. If you're sending requests asynchronously, you can retry in a half second, then a full second, then... There are great opportunities for giving the user feedback. Beyond great feedback, this approach also keeps server resources to a minimum. The server isn't punished for the third party API's bottleneck.
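The retry loop above can be sketched as client-side code. This is hypothetical: `doRequest` stands in for your real fetch/XHR call and is assumed to resolve to the `{ status, result }` shape described above; the delay schedule is an arbitrary example.

```javascript
// Poll the server with growing delays until it reports "success".
// `doRequest` is a stand-in for your real request function.
async function requestWithRetry(doRequest, delays = [500, 1000, 2000]) {
  for (const delay of [0, ...delays]) {
    if (delay) await new Promise((resolve) => setTimeout(resolve, delay));
    const body = await doRequest();
    if (body.status === "success") return body.result; // token acquired
    // body.status === "not yet": a good place to update a progress
    // indicator for the user, then fall through and retry.
  }
  throw new Error("no API token after retries"); // give the user the bad news
}
```

Because the browser carries the retry state, the server answers every request immediately and holds nothing open.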
The approach isn't perfect. One not-so-nice feature is that requests aren't guaranteed to resolve in the order received. Since it's a bunch of browsers trying and retrying, a really unlucky user could continually lose their turn. Which brings me to the final solution...
Open Your Wallet
I'm guessing that your third-party API is throttled because it's free (or inexpensive)! If you really want to wow your users, consider paying for better service. Instead of engineering your way out of the problem (which is sorta cool, sorta chintzy), fix the issue with cash. Remember: many, many operations that run on-the-cheap feel cheap. If you want to keep your users, you don't want that.
Best Answer
I think you are looking for the async module on npm. Find it on GitHub at https://github.com/caolan/async.
This module supports a lot of utilities for doing things asynchronously that are typically synchronous, like a linear search through an array.
More importantly for you, it allows for chaining asynchronous functions. It has two modes you can use: parallel or serial. In parallel mode, it runs the async functions "in parallel" (a white lie, since nothing runs in parallel in Node.js; it simply context-switches on I/O calls, as all async work in Node does). In serial mode, it runs each function after the previous one finishes by chaining the callbacks together.
From the quick examples on their README:
So if I understand your question correctly, what you want to do is this:
This is how under the hood async will handle this and you could do it yourself if you don't want to use the module:
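The original code here also appears to be missing; this sketch (with the same hypothetical function names) shows the hand-rolled equivalent -- each callback kicks off the next step, nesting one level deeper per function:

```javascript
// Stubs standing in for your real async steps.
function getToken(callback) { setTimeout(() => callback(null, 'token'), 10); }
function callApi(token, callback) { setTimeout(() => callback(null, 'data for ' + token), 10); }
function updateCache(data, callback) { setTimeout(() => callback(null, 'cached ' + data), 10); }

// The chain without the library: the "pyramid of doom".
getToken((err, token) => {
  if (err) return console.error(err);
  callApi(token, (err, data) => {
    if (err) return console.error(err);
    updateCache(data, (err, result) => {
      if (err) return console.error(err);
      console.log(result); // "cached data for token"
    });
  });
});
```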
As you can see, this gets messy as you add more and more functions to the chain. That is why the async.series call is more maintainable.
You also need to update your functions to use callbacks. No matter what tools or frameworks you use, if your async functions don't accept a callback, there is no way to know when they finished and therefore no way to move to the next function in the chain.
Function changes:
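The original before/after code is missing; here is a hypothetical version. The names are made up, but the shape is the point: the function must accept a Node-style `callback(err, result)` and invoke it when the work is done.

```javascript
// Before: no callback, so callers can't tell when (or if) it finished.
function fetchDataBefore() {
  setTimeout(() => { /* work happens, but the result is lost */ }, 10);
}

// After: Node-style — callback(err, result) fires when the work is done.
function fetchData(callback) {
  setTimeout(() => {
    callback(null, { ok: true }); // first argument is the error, if any
  }, 10);
}

fetchData((err, result) => {
  if (err) return console.error(err);
  console.log(result); // { ok: true }
});
```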