Solution 1 is close to sharding, but to do it well the entire architecture needs to be considered and designed around it. Sharding typically comes up in large-scale installations that are pushing the limits of the technology platform.
Solution 2, dual-master replication, would be applicable, but since your sites are connected over physically separate links, it would be risky to have the application dynamically point to either database. You would want to pick one database and, if it failed, manually repoint the application to the other. Automatic failover at the application level introduces the risk of split-brain. You could take nightly snapshots of the secondary database for backups.
As described in solution 3, replication is often used to distribute the read-only load across different database servers. It also lets you use different engines and configurations for the read queries; for example, MyISAM can be quicker for read-only workloads.
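As a minimal sketch of that read/write split, the routing decision can live in the application: read-only statements go to a replica, everything else to the primary. The hostnames here are assumptions, and `pick_server` is a hypothetical helper you would wire into whatever MySQL driver you use.

```python
# Route read-only statements to a replica and writes to the primary.
# Hostnames are placeholders -- substitute your own servers.
READ_ONLY_VERBS = ("select", "show", "describe", "explain")

def pick_server(sql,
                primary="db-master.internal",
                replica="db-replica.internal"):
    """Return the host a statement should be sent to, based on its first verb."""
    verb = sql.lstrip().split(None, 1)[0].lower()
    return replica if verb in READ_ONLY_VERBS else primary
```

In practice you would open one connection per host and use `pick_server` to choose between them; the routing logic stays the same regardless of driver.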
Replication is typically limited only by the physical hardware, be it network or system resources. Unless you are storing binary data in the database at a large scale, I would not worry about replication delay under normal load.
Since your main requirement is speed, I would first focus on the local system's configuration and resources; chances are substantial optimizations can be made there.
Automatic high-availability solutions are typically best kept local to a single physical environment; in case of extreme failure, manual procedures can be used to bring up the physically separate site.
I'm generalizing based on a LAMP stack and focusing on web applications. Different applications, protocols, and technologies change things a bit, but with regard to web servers and databases, what I describe is generally applicable.
The traditional setup would be to separate your web/application server and your DB server. Having them on the same box is going to be pretty restrictive. If your web application consists of a mix of static and dynamic content then further separation (a separate web server, application server and content server) will improve performance.
As far as MySQL goes, you might want to try these:
- Definitely have MySQL on its own dedicated server.
- Put in as much memory as you can afford and the machine can take, MySQL loves memory.
- Put your OS, bin logs and data on three separate physical disks.
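That last point is just a matter of where you mount things and what you put in the server config. A sketch of the relevant `my.cnf` entries, with mount points that are purely illustrative:

```ini
# /etc/my.cnf -- illustrative layout; the mount points are assumptions
[mysqld]
datadir = /mnt/disk-data/mysql           # data files on their own spindle
log-bin = /mnt/disk-binlog/mysql-bin     # binary logs on a second disk
# the OS itself lives on the third disk (e.g. the root filesystem)
```

Separating these keeps sequential binlog writes, random data I/O, and OS activity from contending for the same disk heads.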
Best Answer
As other people mentioned, what you are talking about is multi-master replication. There are several open-source tools out there, like Maatkit or MMM, to help with multi-master replication, but even with these you are still asking for a world of hurt.
I would make sure that you truly need your writes to happen synchronously (i.e., instantly). If you can tolerate a couple of seconds to a minute of lag between when a user performs an action on the site and when the data is inserted or updated in the databases, I would consider setting up some sort of offline logging process. For example, wherever you have an insert/update statement, instead write it to a file (or to an in-memory store if you can't take the hit of writing to disk), and have a process/daemon sit on your web head(s), read the log file, and write the appropriate data to a single master, which then replicates it to the other load-balanced machine.
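The shape of that offline logging process can be sketched in a few lines. Everything here is hypothetical: the JSON-lines log format, the `log_write`/`drain_log` names, and the `apply_fn` callback, which stands in for whatever actually executes the statement against the master (e.g. a cursor on a connection to it).

```python
import json

def log_write(path, statement, params):
    """Web side: instead of hitting the DB inline, append the intended
    write to a local log file, one JSON record per line."""
    with open(path, "a") as f:
        f.write(json.dumps({"sql": statement, "params": params}) + "\n")

def drain_log(path, apply_fn):
    """Daemon side: replay each logged write against the single master.
    apply_fn(sql, params) is a placeholder for the real execution step.
    Returns the number of records applied."""
    applied = 0
    with open(path) as f:
        for line in f:
            entry = json.loads(line)
            apply_fn(entry["sql"], entry["params"])
            applied += 1
    return applied
```

A real daemon would also need to truncate or rotate the log after a successful drain, and decide what to do when applying a record fails partway through; those details are where most of the work lives.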
Take this for what it is worth, though, because I have never had to actively maintain multi-master replication.