I typically dump the schema using mysqldump. However, I believe the blackhole engine could be used as you describe.
You would ALTER all tables to BLACKHOLE, and INSERT, UPDATE, and DELETE statements would then not affect the data, since the BLACKHOLE engine does not store anything.
You can set skip-innodb and default-storage-engine=BLACKHOLE, and all tables created will then default to BLACKHOLE, except those explicitly created as MyISAM, since MyISAM cannot be disabled. You need to be aware of this as you create tables on the master, as you may need to later ALTER them back to BLACKHOLE on the slave.
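A minimal my.cnf sketch for such a slave might look like this (note that skip-innodb only works on older MySQL versions; from 5.7 onward InnoDB can no longer be disabled):

```ini
[mysqld]
skip-innodb                          # disable InnoDB (pre-5.7 only)
default-storage-engine = BLACKHOLE   # new tables default to BLACKHOLE
```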
Nevertheless, what you describe is an administrative issue and not a technical issue. I recommend configuring at least two development databases, which would be maintained by the infrastructure team.
The first would be a development database. Developers have broader access to this database, and it is regularly refreshed with scrubbed data.
The second database would be a QA or testing environment, which developers would not have RW access to. This allows proper staging.
Both of these databases would be verified by the IT person making production changes, and they would be included as part of the staging process to production, which ensures consistent schemas across environments.
If developers have databases all over the place, those copies simply cannot be maintained centrally, and it becomes the developers' responsibility to ensure their dataset is current enough for their needs. You could programmatically provide a schema dump on a shared drive, or perhaps a blackhole replication slave with RO access.
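For the shared-drive option, a schema-only dump is enough; a sketch (the output path is a placeholder):

```shell
# Nightly schema-only dump of all databases to a shared drive
mysqldump --no-data --routines --triggers -A > /mnt/shared/schema.sql
```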
I assume you use InnoDB as the storage engine. If so, you need binary logging turned on; if it is not on now, you will need to restart MySQL after modifying my.cnf. That is the only downtime, after which you can take a dump of the database, including the binlog position, without blocking it:
mysqldump --skip-lock-tables --single-transaction --flush-logs --hex-blob --master-data=2 -A
Based on this backup, restore the data on the slave. After that, you can follow any MySQL replication tutorial and let the slave catch up and then run alongside the master.
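With --master-data=2, the dump records the binlog coordinates in a comment near the top of the file. Pointing the slave at the master is then a sketch like the following, where the host, user, password, and coordinates are all placeholders you substitute from your environment and from that comment:

```sql
CHANGE MASTER TO
  MASTER_HOST     = 'master.example.com',  -- placeholder host
  MASTER_USER     = 'repl',                -- replication account (placeholder)
  MASTER_PASSWORD = 'secret',              -- placeholder
  MASTER_LOG_FILE = 'mysql-bin.000001',    -- from the dump's CHANGE MASTER comment
  MASTER_LOG_POS  = 107;                   -- from the dump's CHANGE MASTER comment
START SLAVE;
```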
Best Answer
I'm not totally sure what you mean by a large database, but I use a simple cron job on a LAMP server to create an SQL file nightly. These are overwritten weekly, but I copy them off to another backup server anyway...
The job creates SQL files containing the structure, inserts, etc. for all databases on the local server.
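As a sketch of such a job (the path, schedule, and the day-of-week rotation are assumptions, not the poster's actual setup; the % must be escaped inside a crontab entry):

```shell
# crontab entry: dump all databases nightly at 02:30,
# overwriting the file for that weekday (7-day rotation)
30 2 * * * mysqldump -A > /var/backups/all-dbs-$(date +\%u).sql
```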
Hope that helps.