is there any good reason to avoid Node.js for traditional web apps
Yes, if you have N years of experience in web platform X, then clearly you can develop an application in platform X faster.
If you want to do Y and platform X has a pre-made solution for Y, then use it.
All the generic reasons for choosing one platform over another apply here.
the sort of CRUD apps you build for internal business applications?
Yes, there are other platforms that let you write a generic application faster; Ruby on Rails comes to mind.
That said, I have experience with Node and can't claim I would choose another platform over it unless that platform offers a massive amount of features out of the box.
Basically, it's a simple question of: does a tool exist that does all of this for me? If not, pick the most convenient platform to write the tool in.
There are no solid reasons why Node.js is an inconvenient platform (other than "I hate JavaScript").
PHP
Your cache read/write is a critical section. You'll need to protect it with your choice of mutual exclusion to prevent the false read you describe. For better or worse, locking in PHP isn't straightforward. The cross-platform solution uses a file (guaranteed to cause grief if your server gets busy). Beyond that it depends on both the OS and server configuration. You can read more here: https://stackoverflow.com/a/2921596/7625.
Node.js
Since Node is single-threaded, you don't need a lock unless you execute an async operation (I/O and related). This doesn't necessarily solve all your problems, however. Read more below.
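To make the caveat concrete, here is a minimal sketch of how a check-then-act cache can still race across an await point. The names (`cache`, `fetchValue`, `getCached`) are illustrative, not from the question:

```javascript
// Node runs JS on one thread, but other requests interleave at every await.
const cache = new Map();
let fetches = 0;

// Simulated async I/O (e.g., a call to the throttled third-party API).
function fetchValue(key) {
  return new Promise((resolve) => setTimeout(() => resolve(`value-for-${key}`), 10));
}

async function getCached(key) {
  if (cache.has(key)) return cache.get(key); // check...
  fetches += 1;
  const value = await fetchValue(key);       // ...other requests run during this await
  cache.set(key, value);                     // ...so two requests can both fetch and write
  return value;
}

async function demo() {
  // Two "simultaneous" requests for the same key both miss the cache.
  await Promise.all([getCached('token'), getCached('token')]);
  return fetches;
}
```

Here `demo()` resolves to 2: both requests see an empty cache before either write lands, so the upstream fetch happens twice. The single thread only protects purely synchronous sections.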
All
As described, you have a looming big problem. I see a hunch when you say, "...that will block everything else node is doing, won't it?" That's not the exact problem -- you can create a non-blocking wait. But does waiting solve your problems? Not really. If your site is really busy, each waiting request increases the chance the next request will have to wait. The requests are piling up... If there's enough traffic, the waits will get longer and longer. There will be timeouts. There will be hand-wringing. There will be tears.
This is an equal opportunity problem. Neither PHP nor Node is immune. (In fact, everyone's vulnerable given a throttled resource and the approach you describe.) A message queue doesn't save you. All a message queue gets you is a bunch of queued requests that are waiting. We need a way to dequeue those requests!
Luckily, this can be pretty straightforward if we push more responsibility to the browser. With a little re-jiggering, the response back to the browser can contain a status and an optional result. On the server, send a "success" status and result if you get an API token. Otherwise, send a "not yet" status. In the browser, if the request is successful, proceed as normal. If not, proceed as you see fit. If you're sending requests asynchronously, you can retry in a half second, then a full second, then... There are great opportunities for giving the user feedback. Beyond great feedback, this approach also keeps server resources to a minimum. The server isn't punished for the third party API's bottleneck.
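The browser-side retry loop described above can be sketched as a small helper. `requestWithRetry` and its backoff schedule are illustrative names and values, not part of any real API:

```javascript
// Keep polling until the server reports "success", backing off between
// attempts. "request" is any async function returning { status, result },
// e.g., a wrapper around fetch().
function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function requestWithRetry(request, delays = [500, 1000, 2000, 4000]) {
  for (let attempt = 0; ; attempt += 1) {
    const { status, result } = await request();
    if (status === 'success') return result;      // got a token: proceed as normal
    if (attempt >= delays.length) {
      throw new Error('gave up after retries');   // a good spot for user feedback
    }
    await sleep(delays[attempt]);                 // "not yet": wait, then try again
  }
}
```

Because each browser waits on its own timer, the server holds no state for waiting clients, which is exactly how this approach keeps server resources to a minimum.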
The approach isn't perfect. One not-so-nice feature is that requests aren't guaranteed to resolve in the order received. Since it's a bunch of browsers trying and retrying, a really unlucky user could continually lose their turn. Which brings me to the final solution...
Open Your Wallet
I'm guessing that your third party API is throttled because it's free (or inexpensive)! If you really want to wow your users, consider paying for better service. Instead of engineering your way out of the problem (which is sorta cool, sorta chintzy), fix the issue with cash. Remember, many, many operations that run on-the-cheap feel cheap. If you want to keep your users, you don't want that.
Best Answer
I work on a Node.js web application that is not yet in production, but we do run a development server set up similarly to how I would set up production.
It's a Linux server (CentOS specifically, but any distro would do) with a service account dedicated to running the web app. Deployment is as simple as scp'ing the tarball up to the server and running a short sequence of npm commands.
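A sketch of such a deploy cycle, assuming the app is packaged as an npm tarball; the package name "webapp" and the version are hypothetical:

```shell
# Hypothetical deploy cycle ("webapp" and the version are illustrative).
npm stop webapp                  # run the app's stop script
npm install webapp-1.2.3.tgz     # unpack the new version under $HOME/node_modules/
npm start webapp                 # run the app's start script (daemonized)
```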
npm allows life cycle scripts to be set up via package.json, which is how `npm stop` and `npm start` are implemented. By default, `npm start` calls `node server.js` if there is a server.js in the top-level directory; you can define your own start script easily via package.json.

We use the daemon module when starting up via `npm start` in order to run the app in the background, totally disconnected, with output redirected to a log file. Starting directly via server.js, however, skips the daemon step, so I have an interactive terminal in development.

Since we use npm for packaging and installation, the code ends up living under $HOME/node_modules/. And again, this runs under a dedicated user account, so I know exactly what modules are and are not available at runtime.
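For illustration, a custom start script in package.json might look like the following; the names and script contents are hypothetical, not the actual app's configuration:

```json
{
  "name": "webapp",
  "version": "1.2.3",
  "scripts": {
    "start": "node daemon-start.js",
    "stop": "node daemon-stop.js"
  }
}
```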
In order to leverage multiple CPUs, the server.js script forks a worker (via the cluster module) for each CPU and monitors them for death. If a worker dies, the master process restarts it. There are modules out there to kick off your app and auto-restart if it dies; however, given that our master does no work except babysitting the workers, I don't bother watching the master. The game of "who watches the watchers" could go on infinitely.
When going to production, there are only three more elements to build out, which we have not done yet: