MySQL replicate structure only

Tags: mysql, mysql-replication, synchronization

Can I set up a MySQL slave that replicates only structure changes (CREATE / ALTER TABLE, etc.)? I've got two reasons for this:

  1. Multiple developers with development branches in code, which should always work with a 'fresh' data structure fetched from a 'main' development database.
  2. Multiple projects/clients that share (part of) the same code/scripts (think of a symlinked 'core' directory) but each have their own database (one per client); the structure of those databases should be updated simultaneously with code changes in the shared portion of the code, with a minimum of downtime.

If this isn't possible, what are other good ways to sync MySQL data structures, without the actual data, across sites / databases / servers? And no, structural changes don't happen all that often, but with enough different databases hanging around it's quite some time spent syncing all instances, and I'd like to both limit downtime and spend my time on more interesting jobs ;).

Best Answer

I typically dump the schema using mysqldump. However, I believe the blackhole engine could be used as you describe.
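
For a schema-only dump, something along these lines works; the user and database names below are placeholders:

```
# Dump structure only (no rows); --routines also captures stored procedures
mysqldump --no-data --routines -u deploy -p main_dev > schema.sql

# Apply the structure to a client or developer database
mysql -u deploy -p client_db < schema.sql
```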

You would ALTER all tables on the slave to the BLACKHOLE engine; replicated INSERT, UPDATE, and DELETE statements would then not affect any data, as BLACKHOLE does not store rows.
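
As a sketch, for a hypothetical table named orders on the slave (repeat per table whose row data should be discarded):

```
-- On the slave only: switch the table to BLACKHOLE so replicated
-- INSERT/UPDATE/DELETE statements are discarded, while DDL still applies
ALTER TABLE orders ENGINE = BLACKHOLE;
```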

You can add skip-innodb and set default-storage-engine=BLACKHOLE, so every newly created table defaults to BLACKHOLE except those explicitly created as MyISAM, since MyISAM cannot be disabled. Be aware of this as you create tables on the master, as you may later need to ALTER them back to BLACKHOLE on the slave replicant.
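
In the slave's my.cnf, that would look roughly like this (note that skip-innodb only exists on older MySQL versions, where InnoDB can still be disabled):

```
[mysqld]
# Disable InnoDB (older MySQL versions only) and make BLACKHOLE the default
skip-innodb
default-storage-engine = BLACKHOLE
```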

Nevertheless, what you describe is an administrative issue and not a technical issue. I recommend configuring at least two development databases, which would be maintained by the infrastructure team.

The first would be a development database. Developers would have more access to it, and it would be regularly refreshed with scrubbed data.

The second would be a QA or testing environment, to which developers would not have RW access. This allows proper staging.

Both of these databases would be verified by the IT person making production changes to the database and would be included as part of the staging process to production, which ensures consistent schemas across environments.

If you want developers to have databases all over the place, those databases simply cannot be maintained centrally, and it will have to be their responsibility to ensure that their dataset is current enough for their needs. You could programmatically provide a schema dump on a shared drive, or perhaps a BLACKHOLE replication slave with RO access, as sketched below.
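
A minimal sketch of the shared-drive option, assuming a read-only dump account and a mounted share; the account name, password handling, and path are all placeholders:

```
#!/bin/sh
# Nightly schema-only dump to a shared location (e.g. run from cron)
mysqldump --no-data --routines -u schema_ro -p'secret' main_dev \
    > /mnt/shared/main_dev-schema-$(date +%F).sql
```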