Brent covered log shipping and database mirroring well, so I won't go into those. Required reading on this topic is Allan Hirt's book Pro SQL Server 2005 High Availability. I know it's for 2005, but it's 95% relevant to SQL Server 2008 as well. You must read it to get a good understanding of the options available. Here are my additions to Brent's response:
Failover Clustering
If financial, power, and server-room resources are not a constraint, then this is my preferred choice for SQL Server high availability. You need shared disk storage, usually a SAN, for this to work, and I prefer to place the C: drives on the SAN too for easy DR. The way I set it up is to have a quorum LUN (Q:), an MSDTC LUN (M:), and a mount point for each instance of SQL Server in the cluster. Within the mount point, set up a LUN each for SQLData, SQLLogs, SQLBackups, and optionally SQLtempdb. For one instance you will end up with D:\SQLData, D:\SQLLogs, D:\SQLBackups, and D:\SQLtempdb (for example); for the next instance you might have E:\SQLData, E:\SQLLogs, E:\SQLBackups, and E:\SQLtempdb. All of the shared disks need to be presented to all nodes in the cluster. Failover is automatic and takes around 20 seconds in my production environment. It is robust, but can be tricky to set up if you are inexperienced.
Virtualised SQL Servers
An option you haven't explored is using VMware ESX Server to host your database servers. I really like this option but don't yet have the confidence to deploy it in production environments. I have deployed it very successfully in non-production environments, and the technology is outstanding. I think it is only suitable for moderately to lightly loaded SQL Servers and should not be used if performance is critical or you have high workloads. A one-to-one mapping of SQL Server to ESX hosts is a very desirable configuration. VMware VMotion is great technology with much shorter downtimes than failover clustering. I once saw a demonstration in which a server playing a video was failed over with no glitches in playback. Now, that's impressive!
SQL Server replication
This may not work well for third-party applications because it may require changes to the schema. SQL Server replication was not designed for high availability; it was designed to make copies of data available in other locations. I would not recommend it for high availability due to its complexity. However, it can be useful in certain scenarios because of the fine granularity it offers - you can do horizontal and vertical partitioning of the data, for example.
Third party disk replication
A solution such as NSI's Double-Take could also be considered for high availability, although I prefer to use it for disaster recovery on non-SAN-based systems. It replicates data at the block level to a target server, and the target server watches the source server for availability. If the source becomes unavailable, a failover condition is triggered, and you can configure it to fail over automatically or alert for manual failover. Failover times are similar to SQL Server clustering. The advantage is that you don't need any special hardware, but the software licenses can be expensive.
Backup and Restore
Not really a high availability solution, but for some people with looser requirements, this very simple approach may offer everything you need. Simply back up the databases on a schedule to a backup server, and make sure the backup files are available on the target machine. Set up a job to restore the files as they are backed up, and you have a crude high availability solution on the cheap.
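As a rough sketch, the scheduled jobs could run something like the following T-SQL (the database and share names here - MyDB, \\BackupServer\SQLBackups - are placeholders for your own):

```sql
-- On the source server, run on a schedule (e.g. via a SQL Agent job):
BACKUP DATABASE MyDB
TO DISK = N'\\BackupServer\SQLBackups\MyDB.bak'
WITH INIT;

-- On the standby server, restore each new backup as it arrives.
-- WITH REPLACE overwrites the previous copy of the database
-- (the restore needs exclusive access to the standby database):
RESTORE DATABASE MyDB
FROM DISK = N'\\BackupServer\SQLBackups\MyDB.bak'
WITH REPLACE;
```

How much data you can lose is simply the backup interval, so schedule accordingly.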
I believe this would work fine. Keep in mind that unless you are using 2008, you'd have to look into some kind of third-party tool for backup compression anyway.
The only real downside I see is that it's a bit more work for you to maintain, as you'll be relying on more than just SQL Server to do the work.
Best Answer
It depends. ha ha.
You also need to think about what data-loss requirements the customer wants for the publisher, and whether you're already taking log backups (I'm guessing you are).
Database mirroring can be set up for zero data loss (as long as the mirror stays synchronized), but depending on the transaction log generation rate and the network bandwidth available, waiting for the log records to be hardened on the mirror before transactions can commit on the principal may slow the workload down. Whether this has a noticeable effect on overall response time depends on the kind of transactions you're running (long or short).
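For reference, the zero-data-loss behavior is what you get with synchronous (high-safety) mode; a minimal sketch, run on the principal (MyDB is a placeholder):

```sql
-- Synchronous (high-safety) mirroring: transactions wait for the log
-- records to be hardened on the mirror before committing on the principal:
ALTER DATABASE MyDB SET PARTNER SAFETY FULL;

-- Asynchronous (high-performance) mirroring trades potential data loss
-- for throughput (Enterprise Edition only):
-- ALTER DATABASE MyDB SET PARTNER SAFETY OFF;
```

Note that automatic failover additionally requires a witness server.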
With log shipping, it's just backup-copy-restore, repeat. So if you're already taking log backups, you're not going to be impacting performance at all. If you're not used to taking log backups, you may run into issues with transaction log size management.
Be aware that mirroring requires the FULL recovery model, so it could impact your database maintenance, especially if you're used to using the BULK_LOGGED recovery model. Depending on the network bandwidth available, this could also lead to log size management issues.
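Checking and switching the recovery model is straightforward; a sketch assuming a database called MyDB:

```sql
-- Mirroring requires the FULL recovery model:
ALTER DATABASE MyDB SET RECOVERY FULL;

-- Verify the current recovery model:
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'MyDB';
```

Remember that after switching to FULL, the log won't truncate until you start taking log backups.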
Both require network bandwidth, but in different ways: log shipping causes a burst every time a log backup is copied, while database mirroring is more sustained, again depending on the log generation rate. I'd need to know a lot more to tell whether the extra bandwidth required for either would impact the movement of data in the replication stream and so affect latency there.
With log shipping, you'd have to manually fail over to the log shipping secondary in the event of a failure, and there's the potential for data loss (any data since the last log backup that was copied from the primary). And then you'd need to kick-start replication again.
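The manual failover amounts to something like this (a sketch; MyDB and the paths are placeholders, and the tail-log backup is only possible if the primary is still accessible):

```sql
-- On the primary, if still accessible, back up the tail of the log to
-- capture transactions since the last scheduled log backup:
BACKUP LOG MyDB
TO DISK = N'\\BackupServer\SQLBackups\MyDB_tail.trn'
WITH NORECOVERY;

-- On the secondary, restore the tail (and any remaining log backups),
-- then bring the database online:
RESTORE LOG MyDB
FROM DISK = N'\\BackupServer\SQLBackups\MyDB_tail.trn'
WITH NORECOVERY;

RESTORE DATABASE MyDB WITH RECOVERY;
```

If the primary is gone and you can't take the tail-log backup, that's where the data loss comes from.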
With database mirroring, you can set it to fail over automatically, and you can specifically set the failover partner in the replication agent jobs so they start up automatically against the new principal (which is also the new publisher). The trickiness is making sure that the database mirroring failover doesn't occur before the local cluster failover gets a chance to happen. You can do this by changing the mirroring partner timeout value. I blogged about this at http://www.sqlskills.com/BLOGS/PAUL/post/Search-Engine-QA-3-Database-mirroring-failover-types-and-partner-timeouts.aspx.
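The timeout change itself is a one-liner, run on the principal (MyDB is a placeholder, and the 90-second value is just an illustration - pick something longer than your cluster's failover time; the default is 10 seconds):

```sql
-- Give the local cluster failover time to complete before the mirror
-- declares the principal dead and initiates a mirroring failover:
ALTER DATABASE MyDB SET PARTNER TIMEOUT 90;
```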
I wrote a whitepaper for Microsoft that describes how to use mirroring and transactional replication together: see http://www.sqlskills.com/BLOGS/PAUL/post/SQL-Server2008-New-whitepaper-on-combining-transactional-replication-and-database-mirroring.aspx.
All other things being equal, I'd recommend database mirroring because of the ease of management and the potential for less data loss. You may have some other requirements I'm not aware of that would prevent that, though.
Hope this helps.