Understanding Consistent Dataset in MySQL Backups

backup, dump, MySQL

I do daily backups of my MySQL databases with mysqldump and scheduled tasks,

but, as I read in this answer, mysqldump does not guarantee a consistent dataset. What does that mean?
Does it mean that if someone accesses the website while I am backing up, and their actions edit two tables, one of which mysqldump has already dumped, the result is an "inconsistent" backup that can lead to problems?
If mysqldump is not good enough, what is the best way to take backups?

Best Answer

Your scenario of what might make an inconsistent backup is exactly right.

You can use mysqldump to take a consistent backup by adding the flag --lock-all-tables. The caveat, as the name suggests, is that all tables are locked against writes until the backup completes, which, depending on the size of your database, could be a long time and cause significant disruption.
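
As a concrete illustration, a locked full dump might look like the sketch below; the output path and the credential handling (an option file such as ~/.my.cnf) are assumptions, not part of your existing setup.

    # Dump all databases in one run; --lock-all-tables locks every table
    # against writes for the duration, so the whole dump is captured at
    # a single point in time.  Credentials are assumed to come from an
    # option file (e.g. ~/.my.cnf) rather than the command line.
    mysqldump --lock-all-tables --all-databases \
        > /backups/all-databases-$(date +%F).sql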

There are some alternatives, some of which are commercial. I'm going to assume you are using MyISAM, which is generally trickier to back up consistently than InnoDB (InnoDB's transactional engine lets mysqldump take a consistent snapshot via --single-transaction; MyISAM has no equivalent).

One solution is to place MySQL's data on storage that facilitates faster backups by another method, such as a SAN or an LVM2 block device, both of which support snapshots. You will still have to place MySQL into a locked state, but because a snapshot takes very little time to complete, the disruptive effect is negligible. You can then start another MySQL daemon against the snapshot of the data, should you wish to take a consistent mysqldump and export it elsewhere at leisure. A rough sketch of that workflow follows below.
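
A minimal sketch of the LVM2 variant is below, assuming the MySQL datadir lives on its own logical volume. The volume group and volume names (vg0, mysql-data), snapshot size, mount point, and timings are all hypothetical placeholders; real deployments usually wrap this in a proper script or a dedicated tool.

    # Hold a global read lock in a background session; the SLEEP keeps the
    # connection (and therefore the lock) open while the snapshot is made.
    mysql -e "FLUSH TABLES WITH READ LOCK; SELECT SLEEP(60);" &
    LOCK_PID=$!
    sleep 5            # crude wait to let the lock be acquired

    # Snapshot the logical volume that holds the MySQL datadir.
    lvcreate --snapshot --size 5G --name mysql-snap /dev/vg0/mysql-data

    # Ending the background session releases the lock; writes can resume.
    kill $LOCK_PID

    # Mount the snapshot and dump from it at leisure (or point a second
    # mysqld at the mounted copy), then discard the snapshot.
    mount /dev/vg0/mysql-snap /mnt/mysql-snap
    # ... mysqldump or file copy from /mnt/mysql-snap ...
    umount /mnt/mysql-snap
    lvremove --force /dev/vg0/mysql-snap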

I use a similar method myself against iSCSI-backed storage.