Short answer:
No, synchronous processing in node.js WILL cause performance problems.
Long answer:
You want to avoid synchronous processing in node as much as possible. Obviously, sometimes you're going to need it, but that may be a point at which you move the relevant synchronous code out of node and into something more suitable.
Basically, as node follows an evented model of concurrency, where a single thread works through a queue of events one at a time, any long-running event handler is going to block the execution of everything else. So, what you'll be looking to do is to scythe off as much functionality as possible into small, asynchronous function calls.
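To make the difference concrete, here's a minimal sketch using the built-in fs module (the file name is just a placeholder):
var fs = require('fs');

// synchronous: the whole process is blocked until the read completes
var contents = fs.readFileSync('big-file.txt', 'utf8');
console.log('nothing else ran while that file was read');

// asynchronous: the read is handed off and the event loop keeps going
fs.readFile('big-file.txt', 'utf8', function (err, contents) {
    if (err) throw err;
    console.log('read finished, handled as an event');
});
console.log('this line runs immediately, before the read finishes');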
What you'll want to do is look at using a library, such as async or Step, which lets you avoid the "callback hell" approach to node programming. jQuery has a similar thing in the browser that you may have used: its promise() method and Deferred objects. This will make your node source much quicker to write and maintain.
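For instance, a rough sketch of the async library's style (assuming the async module is installed; the file names are placeholders):
var async = require('async');
var fs = require('fs');

async.series([
    function (done) { fs.readFile('a.txt', 'utf8', done); },
    function (done) { fs.readFile('b.txt', 'utf8', done); }
], function (err, results) {
    // err is the first error hit (if any); results is an array of both files' contents
});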
If you really do need to put a large synchronous procedure in your web app, then node is not the right tool for that job. Try looking at having another API that node can call out to when it needs to accomplish a long-running task, whether that's a command-line C++ app or a web API in a more standard-concurrency model language.
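A minimal sketch of handing work off to an external program via the built-in child_process module (the resize-images command is hypothetical):
var exec = require('child_process').exec;

// kick off the heavy work in a separate process...
exec('./resize-images --dir ./uploads', function (err, stdout, stderr) {
    if (err) return console.error('resize failed:', stderr);
    console.log('resize finished:', stdout);
});
// ...while node stays free to keep serving requests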
It's particularly strong at handling a ton of file I/O, and I would expect it to handle a ton of network communication well too. It seems particularly popular for socket-driven apps. The important thing to keep in mind is that if your needs aren't met by existing libraries (there are many), you may need to dive into some C, which can be bound to JS commands. You can also spawn additional Node processes, but I suspect doing a lot of that could get taxing (I'm assuming - might be wrong - that there's a V8 instance spawned for each one of those).
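Forking another Node process looks roughly like this (worker.js is a hypothetical script that would do the heavy lifting):
var fork = require('child_process').fork;

var worker = fork('./worker.js');          // a separate node (and V8) instance
worker.send({ task: 'crunch', file: 'big.csv' });
worker.on('message', function (result) {
    console.log('worker finished:', result);
});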
JS is single-threaded and blocking, meaning nothing else can execute until a function call has completed. This was a desired feature of JS, essentially taking all the threading and queuing concerns out of your hands. The JS doesn't stop the C/C++ stuff from running in a more multi-threaded fashion under the hood, so JS's role is really more architect/messenger. If you're image-processing, you're not going to want to handle that with synchronous JavaScript commands, because everything else in your app or on your server will be blocked until it's done. The idea is that you call for an image to be processed by bound C/C++ functionality, and then respond to the 'done' event when the image is finished being processed.
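Purely as an illustration of that shape - imageBinding below stands in for some native addon, it is not a real module:
var EventEmitter = require('events').EventEmitter;

function processImage(path) {
    var job = new EventEmitter();
    // imageBinding is a hypothetical stand-in for a native addon doing the real work off the JS thread
    imageBinding.resize(path, { width: 800 }, function (err, outPath) {
        job.emit(err ? 'error' : 'done', err || outPath);
    });
    return job;
}

processImage('photo.jpg')
    .on('done', function (outPath) { console.log('finished:', outPath); })
    .on('error', function (err) { console.error(err); });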
This requires that the JS in any Node.js app be heavily event- and callback-driven, or it will likely perform very poorly. So you won't see a lot of method calls in Node that don't get handed a function for later use. One thing that becomes very clear very fast in Node is that you're in for a world of ugly if you don't find a way to handle the callback pyramid, e.g.
// nested callbacks using Node's built-in fs module -- only two levels deep
// and the code is already drifting toward the right edge of the screen
var fs = require('fs');

fs.readFile('someFile.txt', 'utf8', function (err, contents) {
    if (err) return someErrorHandler(err);
    var snippetLocation = contents.indexOf('some snippet');
    fs.appendFile('locations.log', String(snippetLocation), function (err) {
        if (err) return someErrorHandler(err);
        someFinalCallBackHandler(snippetLocation);
    });
});
Fortunately there are plenty of tools and examples out there for handling this better. Most tend to revolve around promise mechanisms, or simply chaining a series of functions meant to respond to each other's callback states in an array, so the ugly pyramid stuff happens for you under the hood.
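For example, the pyramid above could be flattened into an array of steps with the async library's waterfall (again a sketch, assuming the async module is installed):
var fs = require('fs');
var async = require('async');

async.waterfall([
    function (next) { fs.readFile('someFile.txt', 'utf8', next); },
    function (contents, next) {
        var snippetLocation = contents.indexOf('some snippet');
        fs.appendFile('locations.log', String(snippetLocation), function (err) {
            next(err, snippetLocation);
        });
    }
], function (err, snippetLocation) {
    if (err) return someErrorHandler(err);
    someFinalCallBackHandler(snippetLocation);
});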
Personally, I freaking love that we get JS at the high level and C/C++ closer to the chrome. It's the ultimate combo, and it inspired me to start learning C. And don't let the potential lack of libraries freak you out until you've done some research. Node libraries are being produced at a very rapid pace and are maturing quickly. If you're not doing anything highly unusual, odds are good somebody has it covered.
The biggest difference from Rails is that JS is never likely to be "on rails", as it were. We tend to code for being able to have it however you want it, very rapidly, so there is a rope-to-hang-yourself-with factor, and architecture has been pretty DIY in JS until more recent years. I call that freedom, but I realize that's not seen as ideal by a lot of devs.
Also, you will never ever have a "gem" problem in Node.js because you tried to install on something other than a Mac. Client-side web devs despise dependency issues, and that's where a lot of Node's core community is coming from. If it doesn't work out of the box in 5 minutes or less on every popular platform, we generally crumple it up and toss it. I have yet to run into a popular module that required me to do anything special to get it working. The package system is excellent.
But to answer your core question more explicitly/succinctly: Is it good with background processes?
Yes, Node basically IS background processes with a means of driving an app via events and callbacks.
Best Answer
This is the workflow that I currently use for a project with monthly releases.
The objective is not to have automated updates at all. They might break your system in a way you don't expect, and very likely while you're making other changes, which will not help in figuring out what the problem is.
Updating dependencies should be a conscious process, and if any of them break your system, you should be aware of which one.
Minor updates that break something and go unnoticed will likely be picked up during your development or testing sprint, and this is why you update right after a release: you want as much time as possible to detect the issue and react to it before going live.
Note that I actually use this process on a .NET + NuGet project, but it pretty much applies to Node with npm / bower, Rails with Bundler, etc.
Finally, you can make use of commands that will help you freeze/unfreeze dependencies, and even add them to your repo. See npm shrinkwrap.
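For npm specifically, the freeze step is roughly this (a sketch; run it from the project root after you've verified that the currently installed versions work):
npm shrinkwrap    # writes npm-shrinkwrap.json, pinning the exact versions currently installed
# commit npm-shrinkwrap.json, and later `npm install` runs will reproduce those pinned versions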