Optimizing performance across thousands of SQL Server databases

Tags: architecture, database-performance, sql-server

We are building an application where each client will have their own database. None of the databases are particularly large (20 MB to 400 MB each), but there will be ~5,000 to start, and at any one time 100 or so will be active.

Our team has been debating how best to set up the system. Clients only access their database once every two weeks (401(k)/finance processing), and only for 10-30 minutes at a time. Operations are split roughly evenly between reads and writes.

Half of our team feels that, given this usage pattern, we should spread the databases across multiple cheap servers running SQL Express. They argue that memory/caching wouldn't buy us much when each database is used so briefly (and we don't have the budget for full SQL Standard on more than one server).

Is this the case? A higher memory limit is really the only advantage I see SQL Server Standard bringing us (we already have scripts for backups/restores, schema upgrades, migrating data, etc.).
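For context, the memory ceiling is the concrete difference: Express caps the buffer pool (1 GB in this era), while Standard will use whatever you give it. A minimal sketch of raising the cap on a Standard instance, assuming a hypothetical box with 16 GB of RAM:

    -- Raise the buffer pool ceiling on a Standard instance.
    -- The 14 GB figure is an assumption for illustration; size it to
    -- your hardware, leaving headroom for the OS.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 14336;
    RECONFIGURE;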

Update

I'm particularly interested in the performance characteristics of many small databases vs. one large database. Wouldn't the end-user experience be better hitting a single 200 MB database than a 1 TB one (even if both were well indexed)? It also means we could back up and restore individual client databases very quickly, right? And would we need to tune SQL Server to better handle the 'thousands of databases' scenario?
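At these sizes, per-client backup and restore really is fast and simple. A minimal sketch, assuming a hypothetical client database named Client_0421 and a local backup path:

    -- Back up a single client's database (name and path are hypothetical).
    BACKUP DATABASE Client_0421
        TO DISK = N'D:\Backups\Client_0421.bak'
        WITH INIT, CHECKSUM;

    -- Restoring it is just as quick, e.g. when moving a client between servers.
    RESTORE DATABASE Client_0421
        FROM DISK = N'D:\Backups\Client_0421.bak'
        WITH RECOVERY;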

Best Answer

I'd throw it all on one server; maintaining several cheap SQL Express servers will be a pain. You can spread the data files, log files, and tempdb across different RAID arrays. In particular, consider giving tempdb its own array, since it may be in use by all of the databases at once.
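A minimal sketch of that layout, with hypothetical drive letters (D: for data, L: for logs, T: for tempdb):

    -- Put each client database's data and log files on separate arrays
    -- (names and drive letters are assumptions for illustration).
    CREATE DATABASE Client_0421
        ON (NAME = Client_0421_data, FILENAME = N'D:\Data\Client_0421.mdf')
        LOG ON (NAME = Client_0421_log, FILENAME = N'L:\Logs\Client_0421.ldf');

    -- Move tempdb to its own array; the change takes effect when the
    -- service next restarts.
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'T:\TempDB\tempdb.mdf');
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = N'T:\TempDB\templog.ldf');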

Check out the SQL Server 2008 Resource Governor to make sure no single user can bring the server to a crawl. It's only available in Enterprise edition, though.
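A minimal sketch of the setup, assuming hypothetical pool/group names and a deliberately trivial classifier (a real one would route sessions by login, application, or host):

    USE master;
    GO
    -- Cap what client workloads can consume (the percentages are assumptions).
    CREATE RESOURCE POOL ClientPool
        WITH (MAX_CPU_PERCENT = 50, MAX_MEMORY_PERCENT = 50);
    CREATE WORKLOAD GROUP ClientGroup USING ClientPool;
    GO
    -- The classifier runs at login time; this one routes every session to
    -- ClientGroup purely for illustration. A real one would key off
    -- SUSER_SNAME(), APP_NAME(), or HOST_NAME().
    CREATE FUNCTION dbo.rgClassifier() RETURNS sysname
    WITH SCHEMABINDING
    AS
    BEGIN
        RETURN N'ClientGroup';
    END;
    GO
    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rgClassifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;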