Yes, database mirroring is a reliable technology and is in use by many Enterprise customers today. I've helped people set up mirroring both inside and outside Microsoft - I used to own database mirroring when I was responsible for the SQL Server Storage Engine.
As with any HA technology, you need to understand your requirements and limitations before choosing technologies to help you implement an HA strategy. If you choose technologies before doing this and try to retrofit them to your requirements, you won't get a successful strategy.
I've just finished writing a 35-page whitepaper on HA with SQL Server for the SQL team - it'll be published before SQL PASS. In the meantime, to get an idea of the kind of questions to ask yourself, see this blog post of mine: HA: Where do you start when choosing a high-availability solution?. For database mirroring, you need to consider:
- How many databases are in your application ecosystem? That is, how many resources need to fail over together as a unit? More than 2-3 and you might look at failover clustering with SAN replication instead.
- Outside a single application ecosystem, how many databases do you want to mirror? Beyond about 10 and you'll start to see issues with worker threads, memory, and waits on the shared mirroring send buffer.
- What is the transaction log generation rate?
- What are the network bandwidth and latency? These two things determine whether the mirror can stay in sync with the principal - and hence how much data loss you may be looking at in asynchronous mode (from a large send queue), or how much workload throughput degradation you'll see in synchronous mode from having to wait for transactions to be hardened on the mirror.
And so on.
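On the log generation rate question above, one rough way to sample it is through the cumulative "Log Bytes Flushed/sec" counter exposed in `sys.dm_os_performance_counters` (a sketch - sample it twice and diff the values, since counters of this type are running totals, not instantaneous rates):

```sql
-- Sketch: snapshot the cumulative log-flush counter; take two snapshots
-- some seconds apart and diff them to estimate log generation rate.
-- Object name assumes a default instance; named instances use the
-- MSSQL$<InstanceName>: prefix instead.
SELECT instance_name AS database_name,
       cntr_value   AS log_bytes_flushed_total
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Databases%'
  AND counter_name = 'Log Bytes Flushed/sec';
```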
Once you set up mirroring, you'll need to monitor it to make sure it's working within your desired parameters. The most important things to monitor are the SEND and REDO queues, which give you a measure of how far behind the mirror is and how much time a failover will take, respectively. It doesn't need continuous tuning in common scenarios, but YMMV as with any technology.
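To put numbers on those two queues, you can read the mirroring performance counters directly (a sketch; the counter names come from the SQLServer:Database Mirroring performance object, and the msdb-based mirroring monitor, `sp_dbmmonitorresults`, exposes similar data):

```sql
-- Current send and redo queue sizes per mirrored database.
-- 'Log Send Queue KB' approximates potential data loss;
-- 'Redo Queue KB' approximates failover/redo time on the mirror.
SELECT instance_name AS database_name,
       counter_name,
       cntr_value   AS kb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Database Mirroring%'
  AND counter_name IN (N'Log Send Queue KB', N'Redo Queue KB');
```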
Automatic failover - it depends. There are a variety of scenarios to consider, in terms of which network links go down, which mirroring partners go down, and what the actual failure is. Check out this blog post for a list of failures: Search Engine Q&A #3: Database mirroring failover types and partner timeouts.
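Related to failover behavior: the partner timeout - how long the partners wait before declaring each other unreachable - is tunable. For example (hypothetical database name):

```sql
-- Raise the mirroring partner timeout from the 10-second default to
-- 30 seconds, to avoid spurious automatic failovers on a flaky network.
-- Run on the principal.
ALTER DATABASE MyMirroredDB SET PARTNER TIMEOUT 30;
```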
This has all been a bit of a brain dump, but to summarize, yes, mirroring is reliable and shouldn't need excessive tuning and monitoring. SQL Server 2008 added some cool new stuff to help - log stream compression and automatic page repair - both of which I've blogged about in my Database Mirroring blog post category.
Finally - there are a bunch of good whitepapers you should look at (see the database mirroring links on our whitepapers uber-links page - I wrote the one on combining database mirroring with transactional replication in SS2008) and a new book Pro SQL Server Mirroring.
Hope this helps!
Brent covered log shipping and database mirroring well so I won't go into that. Required reading on this topic is Allan Hirt's book Pro SQL Server 2005 High Availability. I know this is for 2005, but it's 95% relevant for SQL Server 2008 as well. You must read this to have a good understanding of the options available. Here are my additions to Brent's response:
Failover Clustering
If financial, power, and server room resources are not a constraint, then this is my preferred choice for high availability for SQL Server. You need shared disk storage, usually a SAN, for this to work, and I prefer to place the C drives on the SAN too for easy DR. The way I set it up is to have a quorum LUN (Q), an MSDTC LUN (M), and a mount point for each instance of SQL Server in the cluster. Within each mount point, set up a LUN for SQLData, SQLLogs, SQLBackups, and optionally SQLtempdb. For one instance you will end up with D:\SQLData, D:\SQLLogs, D:\SQLBackups, and D:\SQLtempdb (for example); for the next instance you might have E:\SQLData, E:\SQLLogs, E:\SQLBackups, and E:\SQLtempdb. All of the shared disks need to be presented to all nodes in the cluster. Failover is automatic and takes around 20 seconds in my production environment. It is robust, but can be tricky to set up if you are inexperienced.
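Once the cluster is built, you can sanity-check what SQL Server itself sees from T-SQL (a quick sketch using standard system views):

```sql
-- 1 if this instance is running on a failover cluster, 0 otherwise
SELECT SERVERPROPERTY('IsClustered') AS is_clustered;

-- The cluster nodes this instance can fail over to
SELECT NodeName
FROM sys.dm_os_cluster_nodes;
```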
Virtualised SQL Servers
An option you haven't explored is the use of VMware ESX Server to host your database servers. I really like this option, but I don't yet have the confidence to deploy it in production environments. I have deployed it very successfully in non-production environments, and the technology is outstanding. I think it is only suitable for moderately to lightly loaded SQL Servers, and it should not be used if performance is critical or you have heavy workloads. A one-to-one mapping of SQL Server VMs to ESX hosts is a very desirable configuration. VMware vMotion is great technology, with much shorter downtimes than failover clustering. I once saw a demonstration in which a video was playing on a server while the server was failed over, and the video kept running with no glitches. Now, that's impressive!
SQL Server replication
This may not work well for third-party applications because it may require changes to the schema. SQL Server replication was not designed for high availability; rather, it was designed to make copies of data available in other locations. I would not recommend using this for high availability due to its complexity. However, it can be useful in certain scenarios because of the fine granularity it offers - you can do horizontal and vertical partitioning of data, for example.
Third party disk replication
A solution such as NSI's Double-Take could also be considered for high availability, though I prefer to use it for disaster recovery on non-SAN-based systems. It basically replicates data at the block level to a target server, and the target server watches the source server for availability. If the source becomes unavailable, a failover condition is triggered, and you can set it up to either fail over automatically or alert for a manual failover. Failover times are similar to SQL Server clustering. The advantage is that you don't need any special hardware, but the software licenses can be expensive.
Backup and Restore
Not really a high availability solution, but for some people with looser requirements, this very simple solution may offer everything you need. Simply backup the databases on a schedule to a backup server, and make sure the backup files are available on the target machine. Set up a job to restore the files as they are backed up and you have a crude high availability solution on the cheap.
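A minimal sketch of the two sides of this scheme (hypothetical database and share names - this is essentially hand-rolled log shipping):

```sql
-- On the primary: take a periodic log backup to a share the standby can reach
BACKUP LOG SalesDB
  TO DISK = N'\\BackupServer\SQLBackups\SalesDB_log.trn';

-- On the standby: apply it WITH NORECOVERY so later log backups can still be
-- restored; use WITH STANDBY instead if you want read-only access in between
RESTORE LOG SalesDB
  FROM DISK = N'\\BackupServer\SQLBackups\SalesDB_log.trn'
  WITH NORECOVERY;
```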
You can use AlwaysOn.
AlwaysOn is the 'new' way to do this, and the reason database mirroring is deprecated in SQL Server.
It basically works like this:
You need to create a Windows cluster to handle the networking aspect. Shared storage is not needed, but you will need a file share for the initial backup/restore synchronization of the databases (only databases in the FULL recovery model can participate) and for the quorum file-share witness (if you have only two nodes).
When you set the replicas up in synchronous-commit mode, you can have a near-instant failover. You can write your application to handle the brief interruption (failed once? => try again).
With SQL Server 2014 Standard edition you can set up a two-node AlwaysOn failover cluster instance; full Availability Groups require Enterprise edition (two-replica Basic Availability Groups only arrived in SQL Server 2016 Standard). In Windows Server 2012, Failover Clustering is also included in the Standard edition.
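A minimal sketch of creating an availability group once the Windows cluster exists and the AlwaysOn feature is enabled on both instances (all names here are hypothetical; it assumes database mirroring endpoints already listen on port 5022 on both nodes, and that SalesDB is in FULL recovery with at least one full backup taken):

```sql
-- Run on the intended primary replica.
CREATE AVAILABILITY GROUP AG1
FOR DATABASE SalesDB
REPLICA ON
    N'SQLNODE1' WITH (
        ENDPOINT_URL      = N'TCP://SQLNODE1.contoso.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC),
    N'SQLNODE2' WITH (
        ENDPOINT_URL      = N'TCP://SQLNODE2.contoso.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC);

-- Then, on the secondary, after restoring SalesDB WITH NORECOVERY:
-- ALTER AVAILABILITY GROUP AG1 JOIN;
-- ALTER DATABASE SalesDB SET HADR AVAILABILITY GROUP = AG1;
```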