If SQL Server is started in single-user mode, any user who is a member of the BUILTIN\Administrators group can connect to SQL Server as a SQL Server administrator. The user can connect regardless of whether the BUILTIN\Administrators group has been granted a server login that is provisioned in the sysadmin fixed server role. This behavior is by design and is intended for data recovery scenarios.
So if you have administrator access to the machine, you can still get into SQL Server, but it will have to be started in single-user mode.
Hope this helps.
Raj
As mentioned in other answers:
Old-style database mirroring and new-style AlwaysOn need threads, and you will definitely run out of threads with 2000 databases. I vaguely recall that the practical limit is well below 200 databases. (There is a white paper on this somewhere, but I'm too lazy to look for it right now and this answer is already super-long.) Of course, that is 200 databases per instance. Theoretically, you could start up 20 instances and run 100 databases on each instance. Managing all of that would be a hassle, and I suspect that managing the memory across all of those instances would be a headache.
SQL Server replication (replicating tables (or subsets of tables), rather than files) isn't really intended for DR. It's hard to set up and administer, even for a few databases. You may need to alter your data model to get it to work, which might mean changes to your app. You'd need an automated way to apply the same replication configuration to each of your 2000 (presumably identical or nearly identical) databases. The stored procedures that you need to use to configure replication are messy. Administering 2000 databases that are configured for replication via the GUI would be a nightmare. When/if you fail over, you might need to make changes to get everything working again. Failover time isn't when you want to be doing any finicky changes or work that you can avoid. You want to get everything back up and running ASAP. It just seems like a pack of problems.
Replication between SAN storage units can be expensive, especially if you are talking about hardware from an outfit like EMC. Once you start with a vendor, you are pretty much married to them for upgrades, maintenance, additional space, etc.
Suggestion #1:
Have you looked at something like Steeleye's DataKeeper? It's a software-based replication product that runs on your servers that leverages Windows Failover Clustering. I've never actually used it and I have no connection to the company other than sitting through a few dog-and-pony shows. It looks ideal for your situation.
Suggestion #2:
If it were me and I had absolutely no budget, I would look at some home-grown log shipping system. I doubt that the built-in log shipping will deal with 2000 databases very well. It's not that hard to write a log shipping system, and it can address all of the issues specific to your environment. (For example, maybe you need to send the files via sftp to your DR site.)
Basically, there are three parts to the system. Each part needs to run on a regular schedule:
One part takes the transaction log backups, putting the tlog backup files for each database into a different folder (for file system scaling). I would not use the maintenance wizard for this; I've seen it go wonky too many times, skipping databases and generally misbehaving. If you want to provide a 30-minute guarantee, maybe this runs every 15 minutes.
One part copies the backup files from the staging area to your DR site. This could be something as simple as a robocopy CMD file if you have a VPN to your DR site. You could write a package or a PowerShell script if you need something fancier (sftp or ssh/scp, or maybe zip/unzip if you don't have built-in backup compression). This can run more frequently, maybe every 5 minutes, to make sure it gets everything. Once something is copied offsite, it is "safe".
One part restores the tlog backups that it finds at the DR site onto your secondary server. You need to be sure to identify the tlogs that have been restored and move them off or delete them on some schedule, or you will eventually run out of space. This doesn't need to run as frequently, but you need to make sure that it has run and that all available tlog backups have been applied before declaring the DR secondary "live" when you have a problem.
You want tables that audit all three steps, some reports/scripts that show you what has happened (Is a particular database running on your primary or secondary site? Has any database on the secondary not seen a tlog restore in, say, two hours?), and an alerting scheme.
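That staleness check is trivial once the audit tables exist. A minimal sketch, assuming you can pull the last successful restore time per database out of those tables (the two-hour threshold is just the example figure from above):

```python
from datetime import datetime, timedelta

def stale_secondaries(last_restore, now, max_lag=timedelta(hours=2)):
    """Given {database: time of the last successful tlog restore on the
    secondary} -- read from the audit tables -- return the databases whose
    secondary has fallen behind the allowed lag. Feed the result to alerting."""
    return sorted(db for db, ts in last_restore.items() if now - ts > max_lag)

# Hypothetical watermarks pulled from the audit tables.
watermarks = {"Customer0001": datetime(2023, 1, 1, 11, 45),
              "Customer0002": datetime(2023, 1, 1, 9, 0)}
print(stale_secondaries(watermarks, datetime(2023, 1, 1, 12, 0)))  # ['Customer0002']
```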
On top of that, I'd also want to be able to choose a particular database to fail over as well as being able to fail over everything. Being able to pick a db to fail over allows for easy testing (you fail over a test database, not a customer's database) and might give you a rudimentary load-balancing scheme if you get into scaling trouble. You will also want an automated way to "resync" between primary and secondary (take a full backup from the primary and apply it to the secondary, start the tlogs flowing, etc.). These features might be better left for a release 2.0.
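Per-database failover is just the restore step with the training wheels taken off. A sketch of the statements involved (names illustrative; note that `RESTORE DATABASE ... WITH RECOVERY` with no file clause is a recovery-only restore that brings a NORECOVERY database online without touching the data):

```python
def failover_sql(db, pending_trn=()):
    """Bring one database online at the DR site: apply any tlog backups
    still waiting, then recover just that database, leaving the other
    1999 secondaries still restoring."""
    stmts = [f"RESTORE LOG [{db}] FROM DISK = N'{f}' WITH NORECOVERY;"
             for f in pending_trn]
    stmts.append(f"RESTORE DATABASE [{db}] WITH RECOVERY;")
    return stmts

# Failing over a throwaway test database, not a customer's.
for s in failover_sql("TestFailoverDb", ["TestFailoverDb_20230101_000000.trn"]):
    print(s)
```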
(Everyone has forgotten that the earliest tlog shipping that MS supported was implemented via a few scripts that you could download and run on SQL 7.0. There was no GUI; the UI was some SQL reports and a few stored procedures.)
Other than writing a little tsql code, the challenges here are:
Changing to the full recovery model (it sounds to me like you might be running in the simple recovery model) and the likely increases in storage usage for log backups, increased database sizes, what-have-you.
Making sure that your storage system can handle the load of frequent tlog backups and copying them to a DR site in a timely way. IOW, if you have 2000 databases and want to guarantee data up to the last hour, you need to be able to take one transaction log backup of each of those 2000 databases and get it onto networked storage (somewhere that isn't your primary server) within that hour.
Making sure that everything keeps up generally.
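A quick back-of-envelope helps size that challenge. All the inputs below are assumptions for you to replace with your own numbers; the function just converts "N databases, average tlog backup size, backup window" into the sustained network throughput you'd need:

```python
def required_mbps(db_count, avg_tlog_mb, window_minutes):
    """Sustained throughput (megabits/sec) needed to move one tlog backup
    per database offsite within the backup window. Inputs are placeholders."""
    total_megabits = db_count * avg_tlog_mb * 8
    return total_megabits / (window_minutes * 60)

# e.g. 2000 databases averaging 10 MB of tlog backup per hour:
print(round(required_mbps(2000, 10, 60), 1))  # -> 44.4
```

Compression (zip or built-in backup compression) can cut that substantially, which is part of why the copy step mentioned it.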
After I got all of that working, I'd start looking at automating failover, how to tell my web sites where the live version of a particular customer's database was running, etc. If you aren't running clustered systems, making sure that you keep all of the logins/passwords, jobs, linked servers, etc. etc. in sync is a PITA.
Best Answer
Database files, transaction log files and tempdb data files are usually placed on different volumes to reduce I/O contention between them. This can be extended further by creating multiple data files for each database, spreading them across even more volumes, and telling SQL Server where specific tables and indexes should be stored. So, yes, this is a common practice for DBAs, and it can indeed lead to significant performance gains.
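As a sketch of that layout (the drive letters, paths, and filegroup name are purely illustrative), here's how the generated T-SQL might look, with the data file, an index filegroup, and the log each on their own volume:

```python
def create_db_sql(db, data_drive="E:", index_drive="F:", log_drive="L:"):
    """Build a CREATE DATABASE statement with the data file, a separate
    index filegroup, and the log file on three different volumes. Tables
    or indexes can later be placed on the filegroup with ON [FG_INDEX]."""
    return (
        f"CREATE DATABASE [{db}]\n"
        f"ON PRIMARY ( NAME = {db}_data, FILENAME = N'{data_drive}\\SQLData\\{db}.mdf' ),\n"
        f"FILEGROUP FG_INDEX ( NAME = {db}_ix, FILENAME = N'{index_drive}\\SQLIndex\\{db}_ix.ndf' )\n"
        f"LOG ON ( NAME = {db}_log, FILENAME = N'{log_drive}\\SQLLog\\{db}.ldf' );"
    )

print(create_db_sql("SalesDb"))
```

Whether those drive letters actually map to separate spindles is, as noted below, a question for the storage admin.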
It is also true that it can become completely meaningless when using a SAN; depending on how that SAN has actually been configured, different LUNs (volumes) can be mapped to any set of physical disks, RAID arrays and storage controllers, and thus the I/O contention between two LUNs can range anywhere from "completely isolated" to "effectively sharing the same disks". So, unless the DBA is working closely with the storage admin, asking for different LUNs in order to spread database files between them can indeed be a completely wasted effort.
However, separating database files and transaction log files is considered a best practice not only for performance, but also for reliability: due to the transactional storage engine used by SQL Server (and similar engines used by almost any existing DBMS), log files can be used to replay transactions in a disaster recovery scenario, enabling the system to recover transactions committed after the last backup was taken. But for this to work, you must not lose the data files and log files at the same time, so it's better to at least place them on different volumes, even if there is no performance gain from doing so.