It seems like a problem with communication between the two MySQL servers: Last_IO_Error: error connecting to master 'abc@192.xxx.x.xxx:3306'.
Did you verify that
1) MySQL on 192.xxx.x.xxx is listening on either 192.xx.x.xxx or 0.0.0.0?
2) the other server, let's call it 192.xx.x.yyy, can connect to it? Determine this with mysql, netcat, or a similar tool.
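To check point 2 from the slave, something along these lines should work (the IP and user here are just the placeholders from your error message, and the password prompt assumes the replication account has one):

telnet 192.xxx.x.xxx 3306
mysql -h 192.xxx.x.xxx -P 3306 -u abc -p -e "SELECT 1;"

If telnet cannot connect at all, look at the master's bind-address setting and any firewall rules before touching the replication config.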
Also, it seems that no master log file or position was entered. I'm sure there are other problems with the configuration, but it is very hard to read your configuration and pastes.
In all honesty, I have never trusted the --execute parameter for mk-table-sync. I always use --print instead.
Replace this
mk-table-sync -v --execute --databases=forum --sync-to-master h=localhost,D=forum,t=user
with this if you have binary logging enabled
echo "SET SQL_LOG_BIN=0;" > DBChanges.sql
mk-table-sync -v --print --databases=forum --sync-to-master h=localhost,D=forum,t=user >> DBChanges.sql
or this if the slave does not have binary logging
mk-table-sync -v --print --databases=forum --sync-to-master h=localhost,D=forum,t=user > DBChanges.sql
This way, you can see the actual SQL to run safely on the slave.
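Once you have reviewed DBChanges.sql and are happy with it, you can apply it on the slave with the standard mysql client (hostname and credentials here are assumptions; adjust to your setup):

mysql -h localhost -u root -p forum < DBChanges.sql

Because the file starts with SET SQL_LOG_BIN=0; in the binary-logging case, the fixes will not be written to the slave's own binary log.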
UPDATE 2011-05-31 12:57
"Interesting. Correct me if I am wrong, but shouldn't the queries run on the master be propagated to the slave through replication? I don't quite understand why that doesn't happen"
That's a fair question. Yet, think of the way MySQL Replication works. When an SQL statement is completed on a master, it is recorded in the master's binary logs. The I/O thread of the slave reads any new entries in the master's binary logs and appends them to the end of the slave's relay logs. The slave's SQL thread reads the relay log entries as a FIFO queue and processes the SQL statements in the order of their recording. If the slave has log-slave-updates and log-bin in its configuration, the SQL statement upon completion will be recorded in the slave's binary logs.
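You can watch both of those threads in action on the slave with:

SHOW SLAVE STATUS\G

Check that Slave_IO_Running and Slave_SQL_Running both say Yes, that Seconds_Behind_Master is not NULL, and that Last_IO_Error and Last_SQL_Error are empty.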
Enough small talk on MySQL Replication.
Now, why would a master not replicate to a slave?
Here are some possibilities for you to explore:
POSSIBILITY #1 : Network latency causing binlog entries from the master not to propagate over to the relay logs of the slave in a timely manner or not at all.
POSSIBILITY #2 : MySQL packets are too large for the slave and the resulting errors are being ignored. This could happen only in the following scenario: the max_allowed_packet in the master is bigger than the max_allowed_packet in the slave. This would normally stop replication cold in its tracks. If you are skipping all slave errors (if you have slave-skip-errors=all in /etc/my.cnf) then perfectly normal data can be silently dropped in this unique scenario.
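To rule this one out, compare the setting on both servers:

SHOW VARIABLES LIKE 'max_allowed_packet';

and make sure the slave's value is at least as large as the master's.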
POSSIBILITY #3 : Config to skip any duplicate key error in the slave's SQL thread
[mysqld]
slave-skip-errors=1062 (skips duplicate key errors)
POSSIBILITY #4 : Config to skip any SQL error in the slave's SQL thread
[mysqld]
slave-skip-errors=all (skips every SQL error under the sun)
POSSIBILITY #5 : Having the slave's I/O thread simply die without telling mysqld. This can happen. Simple correction? Do the following to reestablish the slave I/O thread:
STOP SLAVE IO_THREAD;
START SLAVE IO_THREAD;
POSSIBILITY #6 : Having a wrong combination of replicate-do-db and replicate-ignore-db in /etc/my.cnf (Disclaimer : This is strictly my opinion)
Some people mix both options in /etc/my.cnf and think nothing of it. IMHO, these options are supposed to be mutually exclusive: you either filter data into a slave or filter it out. Using them together can produce spurious results from replication. Either the data should be there or it should not, NOT both at once.
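In other words, pick one style of filtering in /etc/my.cnf (the database names below are just examples), for instance:

[mysqld]
replicate-do-db=forum

or

[mysqld]
replicate-ignore-db=test

but not both at the same time.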
Make sure that you have started Y with the --log-slave-updates option so that updates received from X are logged by Y to its binary log.
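In /etc/my.cnf on Y, that would look something like this (the server-id and log-bin basename are example values; use your own):

[mysqld]
server-id=2
log-bin=mysql-bin
log-slave-updates

A restart of mysqld is required for these options to take effect.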