MySQL replication happens as close to real-time as possible, limited only by disk and network I/O. The slaves open a socket to the master, which is kept open. When a transaction occurs on the master, it gets recorded in the binlog and is simply replayed on the slave(s). If the socket between master and slave is interrupted, the slave reconnects and replays the binlog from its last recorded position.
Multi-master replication does the same thing, but in both directions.
Some basic calculations will assist you in making a better determination of your bandwidth needs.
Average transaction size * number of slaves * updates/minute = bandwidth needed
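That formula is easy to turn into a back-of-the-envelope estimate. A minimal sketch, with purely illustrative numbers (2 KiB average transactions, 3 slaves, 600 updates/minute):

```python
# Back-of-the-envelope replication bandwidth estimate from the formula above.
# All input figures are illustrative assumptions, not measurements.

def replication_bandwidth_bytes_per_sec(avg_txn_bytes, num_slaves, updates_per_minute):
    """Estimate outbound bandwidth the master needs to stream its binlog."""
    bytes_per_minute = avg_txn_bytes * num_slaves * updates_per_minute
    return bytes_per_minute / 60.0

# Example: 2 KiB transactions, 3 slaves, 600 updates/minute
bw = replication_bandwidth_bytes_per_sec(2048, 3, 600)
print(f"{bw:.0f} bytes/sec")  # 2048 * 3 * 600 / 60 = 61440
```

In practice you would measure the average binlog bytes per transaction rather than guess it, and leave headroom for catch-up traffic after an outage.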
Hope this helps.
Master-master replication is asynchronous, so it will definitely break if you write to both servers at once.
Even if the auto-increment offsets are configured correctly, any other unique index - and many other situations - can still break it; it's too brittle to rely on.
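For context, the usual auto-increment workaround is to give each master a different auto_increment_offset with auto_increment_increment=2, so their generated IDs interleave instead of colliding. A quick sketch of the ID sequences each server would produce (values illustrative):

```python
# Sketch of MySQL's auto_increment_increment / auto_increment_offset scheme
# in a two-master setup. The servers' ID spaces interleave and never collide.

def generated_ids(offset, increment, count):
    """IDs a server would assign with the given offset/increment settings."""
    return [offset + i * increment for i in range(count)]

server1 = generated_ids(offset=1, increment=2, count=5)  # [1, 3, 5, 7, 9]
server2 = generated_ids(offset=2, increment=2, count=5)  # [2, 4, 6, 8, 10]

# Disjoint ID spaces - but this protects only auto-increment keys,
# not any other unique index on the tables.
assert not set(server1) & set(server2)
```

This is exactly why the offsets alone are not enough: two concurrent writes that collide on any other unique column still break replication.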
But it is possible to use master-master as PART of an HA solution. You just need to ensure that applications only ever write to one of the pair, and that a "clean" failover (e.g. an admin failing over deliberately) waits for the slave to catch up before switching.
This is not extremely difficult in practice, but it is a bit inconvenient.
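The "wait for the slave to catch up" step usually comes down to polling the slave's lag before moving writes. A minimal sketch of that decision logic, assuming the lag value has already been read from SHOW SLAVE STATUS (Seconds_Behind_Master, which is NULL when replication is broken):

```python
# Sketch of a "safe to fail over" check based on Seconds_Behind_Master.
# In a real setup this value comes from SHOW SLAVE STATUS on the slave;
# here it is passed in directly so the logic is self-contained.

def safe_to_failover(seconds_behind_master, max_lag=0):
    """Only switch writes once the slave has fully caught up.

    seconds_behind_master is None when the SQL thread is not running
    (i.e. replication is broken) - never fail over in that state.
    """
    if seconds_behind_master is None:
        return False
    return seconds_behind_master <= max_lag

print(safe_to_failover(0))     # True  - caught up, switch is safe
print(safe_to_failover(12))    # False - still replaying binlog
print(safe_to_failover(None))  # False - replication broken
```

A real failover script would also block new writes on the old master first, then poll this check in a loop until it passes.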
Your main other option is DRBD, which is also not massively difficult to set up - but in this case the second machine is not even usable as a read replica; it just sits there as a hot spare. DRBD synchronously replicates the underlying storage, so everything is written safely to both machines.
There are some applications which are specifically designed to tolerate the multi-master problems - these need to be designed VERY carefully with exactly that situation in mind - in which case it's fine. You can't use applications that weren't designed for it, though.
Auto-increment is neither the only problem nor the main one.
The question is quite old, but as it's still unanswered, here is a link to some information about Derby replication:
http://db.apache.org/derby/docs/10.4/adminguide/cadminreplication.html
Note that there is no automatic failover, and replication does not restart automatically after one of the instances has failed.