(Ignoring the productivity side of things, because that is developer-specific.)
First off, you shouldn't do that. Experience shows that having successfully built a similar product on the platform before is the number one predictor of success. Pick your team first, on their merits. Unless your team is suggesting some crazy, outlandish platform, go with whatever platform they're familiar with and recommend.
Now for argument's sake let's say a single server can handle 1K users.
I think that's your central mistake. How many users a server can handle depends entirely on the application. But for most web apps, more than 98% of the users are not logged in at any given time. A basic web application server can often handle 200-800 requests per second. Do the math: at 200 requests per second, with each active user making a request every 5 seconds and only 2% of registered users active at once, a single server carries 200 × 5 / 0.02 = 50,000 users. So many applications can serve several tens of thousands of users per server.
To give you an example you're familiar with, the Stack Overflow family of sites (of which this site is one) handles more than 6 million unique visitors each month, and serves many more pageviews than that. And here is the full server farm for it.
Commercial open-source costs money too. Take a proper look at MySQL, Ubuntu, Red Hat Linux, JBoss and all the others -- you will typically want to have an expert on staff at all times, or purchase a support subscription from the backing company.
Last but not least: in the end, people costs tend to completely dominate for installations of fewer than 200 servers. If you build out your own infrastructure -- as opposed to using cloud computing or a full-service provider -- the salaries and hours spent on sysadmins will generally exceed your software costs. For Google, with their 250,000+ servers, it's a completely different matter, but for smaller installations this generally holds true.
One major caveat: Some types of sites, such as social networks and video transcoding sites, have very different hardware needs than what I describe above. Do a proper analysis of your specific application.
Mass GRANT/REVOKE (GRANT ... ON ALL TABLES IN SCHEMA) is only available in PostgreSQL 9.0 and later.
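For completeness, if you are on 9.0 or later, the built-in form looks like this (role, privileges, and schema here are just placeholders for your own):

```sql
-- PostgreSQL 9.0+ only: grant on every existing table in a schema.
GRANT SELECT, INSERT, UPDATE, DELETE
    ON ALL TABLES IN SCHEMA public
    TO web_user;

-- The matching mass revoke:
REVOKE INSERT, UPDATE, DELETE
    ON ALL TABLES IN SCHEMA public
    FROM web_user;
```

Note that this affects only tables that already exist; for tables created later, 9.0+ also offers ALTER DEFAULT PRIVILEGES.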
On older versions, you can instead create the functions pg_grant() and pg_revoke() found at http://blog.bhushangahire.net/2009/06/17/grant-privileges-to-all-tables-in-a-database-for-postgresql/
The parameters to both functions are:
- User Role
- Permissions
- Object name, may be a pattern. "%" affects all objects
- Schema name
You will need to run the command on each database and each schema. The syntax looks like this:
SELECT pg_grant('web_user','SELECT,INSERT,UPDATE,DELETE','%','public');
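Assuming pg_revoke() takes the same four parameters in the same order (the answer above says both functions share the parameter list), revoking works the same way:

```sql
-- Mirror of the pg_grant() call above; parameter order assumed to be
-- (role, privileges, object pattern, schema).
SELECT pg_revoke('web_user', 'INSERT,UPDATE,DELETE', '%', 'public');
```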
If you like, you may be able to change this line in the function definition:
nspname = $4 AND
to this:
nspname like $4 AND
in order to be able to specify schema as a wildcard, but I haven't tested it.
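With that change in place, a call like the following would in principle hit every schema at once. As noted above, this modification is untested, so try it on a throwaway database first:

```sql
-- Untested: relies on the "nspname like $4" modification described above.
SELECT pg_grant('web_user', 'SELECT', '%', '%');
```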
If the points I lay out in this answer are taken care of, I would say PostgreSQL still performs reasonably well on Windows. Compared to 2011, though, I don't see any major difference.
In my experience it's mostly about making sure that no Windows process gets in the way, so this quote from the book PostgreSQL 9.0 High Performance (2010) still largely holds:
You might think that favoring the Server Core edition of Windows Server 2012 might remedy this a bit. However, the MSDN article Why Is Server Core Useful? (about Windows Server 2008) says:
Other than that, memory handling is also different on Windows and at times counterintuitive. This, too, is the same as in 2011. Take shared_buffers, for example (from PostgreSQL Documentation - 18.4. Resource Consumption):