You did a great job of summarizing what's awesome about Node.js. My feeling is that Node.js is especially suited for applications where you'd like to maintain a persistent connection from the browser back to the server. Using a technique known as "long-polling", you can write an application that sends updates to the user in real time. Doing long-polling with many of the web's mainstream frameworks, like Ruby on Rails or Django, would create immense load on the server, because each active client ties up one server process; the situation amounts to a tarpit attack. When you use something like Node.js, the server doesn't need to maintain a separate thread or process for each open connection.
This means you can create a browser-based chat application in Node.js that takes almost no system resources to serve a great many clients. Any time you want to do this sort of long-polling, Node.js is a great option.
It's worth mentioning that Ruby and Python both have tools to do this sort of thing (EventMachine and Twisted, respectively), but Node.js does it exceptionally well, and from the ground up. JavaScript is exceptionally well suited to a callback-based concurrency model, and it excels here. Also, being able to serialize and deserialize with JSON, which is native to both the client and the server, is pretty nifty.
I look forward to reading other answers here, this is a fantastic question.
It's worth pointing out that Node.js is also great for situations in which you'll be reusing a lot of code across the client/server gap. The Meteor framework makes this really easy, and a lot of folks are suggesting it might be the future of web development. I can say from experience that it's a whole lot of fun to write code in Meteor, and a big part of that is spending less time thinking about how to restructure your data so that the code running in the browser can easily manipulate it and pass it back.
Here's an article on Pyramid and long-polling, which turns out to be very easy to set up with a little help from gevent: TicTacToe and Long Polling with Pyramid.
There are edge cases where you might use multiple stores (e.g. if you have performance problems updating lists of thousands of items that are on screen at the same time, many times per second). That said, it's an exception, and in most apps you never need more than a single store.

Why do we stress this in the docs? Because most people coming from a Flux background will assume that multiple stores are the solution to making update code modular. However, Redux has a different solution for this: reducer composition.

Having multiple reducers that are further split into a reducer tree is how you keep updates modular in Redux. If you don't recognize this and go for multiple stores without fully understanding reducer composition first, you will miss many benefits of Redux's single-store architecture:
Using reducer composition makes it easy to implement "dependent updates", à la waitFor in Flux, by writing a reducer that manually calls other reducers with additional information and in a specific order.
With a single store, it's very easy to persist, hydrate, and read the state. Server rendering and data prefetching are trivial because there is just one data store that needs to be filled and rehydrated on the client, and JSON can describe its contents without worrying about a store's ID or name.

A single store makes the Redux DevTools time-travel features possible. It also makes community extensions like redux-undo or redux-optimist easy to implement, because they operate at the reducer level. Such "reducer enhancers" can't be written for multiple stores.
A single store guarantees that the subscriptions are called only after the dispatch has been processed. That is, by the time listeners are notified, the state has been fully updated. With many stores there are no such guarantees, which is one of the reasons Flux needs the waitFor crutch. With a single store, this is not a problem you see in the first place.
Above all, multiple stores are unnecessary in Redux (except for the performance edge cases, which you are supposed to profile first anyway). We make this an important point in the docs to encourage you to learn reducer composition and other Redux patterns instead of using Redux as if it were Flux, losing its benefits.
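To make reducer composition concrete, here's a small sketch (the reducer names, action shape, and extra argument are my own illustration, not from the Redux docs) of a root reducer that composes two slice reducers by hand, ordering a "dependent update" explicitly — the same job waitFor does in Flux:

```javascript
// Slice reducer: manages the list of todos.
const todos = (state = [], action) =>
  action.type === 'ADD_TODO' ? [...state, action.text] : state;

// Slice reducer that depends on the *updated* todos slice, which the
// root reducer passes in as additional information.
const stats = (state = { count: 0 }, action, updatedTodos) =>
  action.type === 'ADD_TODO' ? { count: updatedTodos.length } : state;

// Hand-written composition: todos runs first, stats sees its result.
const rootReducer = (state = {}, action) => {
  const nextTodos = todos(state.todos, action);
  return {
    todos: nextTodos,
    stats: stats(state.stats, action, nextTodos),
  };
};
```

Because the ordering lives in plain function calls, the "dependent update" is explicit and testable — no coordination API between stores is needed.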
First, let's consider a case where you have a single reducer. Say you don't use `combineReducers()`. Then your reducer might look like this:
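The original code block didn't survive here; based on the description that follows (a `counter` reducer with a default `state` of `0`), it was presumably along these lines:

```javascript
// A counter reducer with an ES6 default argument for its initial state.
const counter = (state = 0, action) => {
  switch (action.type) {
    case 'INCREMENT':
      return state + 1;
    case 'DECREMENT':
      return state - 1;
    default:
      return state;
  }
};
```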
Now let's say you create a store with it.
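This snippet is also missing from the post; with the real library it would be `createStore(counter)` from the redux package. To keep the sketch self-contained, it uses a toy `createStore` of my own that mimics just the behavior under discussion — holding state and dispatching an initial dummy action:

```javascript
const counter = (state = 0, action) => {
  switch (action.type) {
    case 'INCREMENT': return state + 1;
    case 'DECREMENT': return state - 1;
    default: return state;
  }
};

// Toy stand-in for redux's createStore, reduced to the parts we need.
function createStore(reducer, preloadedState) {
  let state = preloadedState;
  const dispatch = (action) => { state = reducer(state, action); };
  dispatch({ type: '@@redux/INIT' }); // the "dummy" initialization action
  return { getState: () => state, dispatch };
}

const store = createStore(counter);
console.log(store.getState()); // 0
```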
The initial state is zero. Why? Because the second argument to `createStore` was `undefined`. This is the `state` passed to your reducer the first time. When Redux initializes, it dispatches a “dummy” action to fill the state. So your `counter` reducer was called with `state` equal to `undefined`. This is exactly the case that “activates” the default argument. Therefore, `state` is now `0`, as per the default `state` value (`state = 0`). This state (`0`) will be returned.

Let's consider a different scenario:
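The lost snippet here presumably passed `42` as the second argument to `createStore`. Again sketched with a toy `createStore` (my simplification, not the real redux internals) so it runs on its own:

```javascript
const counter = (state = 0, action) => {
  switch (action.type) {
    case 'INCREMENT': return state + 1;
    case 'DECREMENT': return state - 1;
    default: return state;
  }
};

// Toy stand-in for redux's createStore, reduced to the parts we need.
function createStore(reducer, preloadedState) {
  let state = preloadedState;
  const dispatch = (action) => { state = reducer(state, action); };
  dispatch({ type: '@@redux/INIT' }); // the "dummy" initialization action
  return { getState: () => state, dispatch };
}

const store = createStore(counter, 42);
console.log(store.getState()); // 42
```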
Why is it `42`, and not `0`, this time? Because `createStore` was called with `42` as the second argument. This argument becomes the `state` passed to your reducer along with the dummy action. This time, `state` is not undefined (it's `42`!), so the ES6 default argument syntax has no effect. The `state` is `42`, and `42` is returned from the reducer.

Now let's consider a case where you use `combineReducers()`. You have two reducers:
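The two reducers are no longer shown in the post; from the defaults mentioned below (`'lol'` and `'wat'`), they were presumably trivial ones along these lines:

```javascript
// Two slice reducers whose only job here is to supply a default state.
const a = (state = 'lol', action) => state;
const b = (state = 'wat', action) => state;
```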
The reducer generated by `combineReducers({ a, b })` calls each reducer with its own slice of the state and reassembles the results into a single state object. If we call `createStore` without the `initialState`, it's going to initialize the `state` to `{}`. Therefore, `state.a` and `state.b` will be `undefined` by the time it calls the `a` and `b` reducers. Both the `a` and `b` reducers will receive `undefined` as their `state` arguments, and if they specify default `state` values, those will be returned. This is how the combined reducer returns a `{ a: 'lol', b: 'wat' }` state object on the first invocation.

Let's consider a different scenario:
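The scenario's code is missing; judging from the text below, it passed an `initialState` containing only `a: 'horse'`. A self-contained sketch, with a hand-written stand-in for the generated reducer and a toy `createStore` (both my reconstructions, not redux's actual internals):

```javascript
const a = (state = 'lol', action) => state;
const b = (state = 'wat', action) => state;

// Roughly what combineReducers({ a, b }) generates.
const combined = (state = {}, action) => ({
  a: a(state.a, action),
  b: b(state.b, action),
});

// Toy stand-in for redux's createStore.
function createStore(reducer, preloadedState) {
  let state = preloadedState;
  const dispatch = (action) => { state = reducer(state, action); };
  dispatch({ type: '@@redux/INIT' }); // the "dummy" initialization action
  return { getState: () => state, dispatch };
}

const store = createStore(combined, { a: 'horse' });
console.log(store.getState()); // { a: 'horse', b: 'wat' }
```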
Now I specified the `initialState` as the argument to `createStore()`. The state returned from the combined reducer combines the initial state I specified for the `a` reducer with the `'wat'` default argument that the `b` reducer chose itself.

Let's recall what the combined reducer does:
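The recap code block was also lost; roughly, the generated reducer does this (a sketch, not redux's actual implementation):

```javascript
const a = (state = 'lol', action) => state;
const b = (state = 'wat', action) => state;

// Each slice reducer receives its own slice of the state;
// the results are reassembled into one object.
const combined = (state = {}, action) => ({
  a: a(state.a, action),
  b: b(state.b, action),
});
```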
In this case, `state` was specified, so it didn't fall back to `{}`. It was an object with an `a` field equal to `'horse'`, but without the `b` field. This is why the `a` reducer received `'horse'` as its `state` and gladly returned it, but the `b` reducer received `undefined` as its `state` and thus returned its idea of the default `state` (in our example, `'wat'`). This is how we get `{ a: 'horse', b: 'wat' }` in return.

To sum this up, if you stick to Redux conventions and return the initial state from reducers when they're called with `undefined` as the `state` argument (the easiest way to implement this is to specify the `state` ES6 default argument value), you're going to have a nice, useful behavior for combined reducers. They will prefer the corresponding value in the `initialState` object you pass to the `createStore()` function, but if you didn't pass any, or if the corresponding field is not set, the default `state` argument specified by the reducer is chosen instead. This approach works well because it provides both initialization and hydration of existing data, but lets individual reducers reset their state if their data was not preserved. Of course you can apply this pattern recursively, as you can use `combineReducers()` on many levels, or even compose reducers manually by calling reducers and giving them the relevant part of the state tree.