You can minimize almost all downtime by preventing the table from being locked.
Use an INSERT ... SELECT
to migrate the data you want to keep into a temporary table.
Rename the tables.
You'll need to account for any INSERTs
that occur while the INSERT ... SELECT
is running.
Finally, drop the table that contains the data you don't want.
Be aware that if you do not have innodb_file_per_table
enabled in your configuration, the disk space allocated to the dropped table will not be freed.
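A quick way to check whether that option is enabled (the variable name is standard MySQL):

```sql
-- Shows ON or OFF for the running server
SHOW VARIABLES LIKE 'innodb_file_per_table';
```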
Additionally, special considerations will be necessary for your application and specific situation.
Here's a procedure I wrote using this method:
# Temp table, recreating working_table_
#
CREATE TABLE working_table_temp_ LIKE working_table_;
# Increment if there's an auto_increment field.
#
ALTER TABLE working_table_temp_ AUTO_INCREMENT = 15000000;
# Testing INSERTs and SELECTs while this query was running succeeded.
# Verified no active lock.
#
INSERT INTO working_table_temp_ SELECT * FROM working_table_ WHERE id > $NUM;
RENAME TABLE working_table_ TO working_table_old_;
RENAME TABLE working_table_temp_ TO working_table_;
# Verify that all rows were caught
#
# The last row < 15000000 in both tables should be identical.
#
SELECT * FROM working_table_ where id < 15000000 order by id desc limit 5;
SELECT * FROM working_table_old_ where id < 15000000 order by id desc limit 5;
# If not, we need to move them !
#
# This query will need to be modified.
#
INSERT INTO working_table_ SELECT * FROM working_table_old_ WHERE id > 138376577;
# Verify application functionality, if necessary.
#
# LAST CHANCE FOR BACKOUT !!!
#
# Once verified, get rid of the old data to free up some space.
#
DROP TABLE working_table_old_;
^D
df -h
## BACKOUT ##
RENAME TABLE working_table_ TO working_table_new_;
RENAME TABLE working_table_old_ TO working_table_;
I have the complete answer for this one.
Once innodb_file_per_table is put in place, new InnoDB tables can be shrunk using ALTER TABLE <innodb-table-name> ENGINE=InnoDB;
This will shrink new .ibd
files GUARANTEED.
If you run ALTER TABLE <innodb-table-name> ENGINE=InnoDB;
on an InnoDB table created before you enabled innodb_file_per_table, it will pull the data and indexes for that table out of the ibdata1 file and store them in a .ibd
file. However, this leaves a permanent pigeonhole in ibdata1 that can never be reused.
The ibdata1
file normally houses four types of information: table data, table indexes, table metadata (the data dictionary), and MVCC (rollback/undo) data.
Here is the guaranteed way to shrink the ibdata1 file pretty much forever...
STEP 01) mysqldump all databases into an SQL text file (call it SQLData.sql)
STEP 02) Drop all databases (except mysql, information_schema and performance_schema schemas)
STEP 03) Shutdown mysql
STEP 04) Add the following lines to /etc/my.cnf
[mysqld]
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=4G
innodb_data_file_path=ibdata1:10M:autoextend
Sidenote: Whatever you set for innodb_buffer_pool_size, make sure innodb_log_file_size is 25% of innodb_buffer_pool_size.
STEP 05) Delete ibdata1, ib_logfile0 and ib_logfile1 (see update below before deleting!)
At this point, there should only be the mysql schema in /var/lib/mysql
STEP 06) Restart mysql. This will recreate ibdata1 at 10MB (no extra configuration needed), and ib_logfile0 and ib_logfile1 at 1G each
STEP 07) Reload SQLData.sql into mysql
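The steps above can be sketched roughly as follows (a hedged sketch: paths, credentials, and the `service` command names are placeholders for your environment, and STEP 02 and STEP 04 are still manual):

```shell
# STEP 01: dump everything, including routines and events
mysqldump -u root -p --all-databases --routines --events > SQLData.sql

# STEP 02: manually DROP each application database
#          (leave mysql, information_schema and performance_schema alone)

# STEP 03: shut down mysql
service mysql stop

# STEP 04: edit /etc/my.cnf as shown above

# STEP 05: remove the old system tablespace and redo logs
rm -f /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1

# STEP 06: restart, which recreates ibdata1 and the redo logs
service mysql start

# STEP 07: reload the dump
mysql -u root -p < SQLData.sql
```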
ibdata1 will grow, but it will only contain table metadata and intermittent MVCC data.
Each InnoDB table will exist outside of ibdata1.
Suppose you have an InnoDB table named mydb.mytable. If you go into /var/lib/mysql/mydb, you will see two files representing the table:
mytable.frm (storage engine header)
mytable.ibd (home of the table data and table indexes for mydb.mytable)
ibdata1 will never contain InnoDB data and indexes anymore.
With the innodb_file_per_table option in /etc/my.cnf, you can run OPTIMIZE TABLE mydb.mytable
or ALTER TABLE mydb.mytable ENGINE=InnoDB;
and the file /var/lib/mysql/mydb/mytable.ibd will actually shrink.
I have done this numerous times in my career as a MySQL DBA without so much as a single problem thereafter. In fact, the first time I did this, I collapsed a 50GB ibdata1 file into 50MB.
Give it a try. If you have further questions on this, email me. Trust me. This will work in the short term and over the long haul.
UPDATE 2013-07-02 15:08 EDT
There is a caveat here that I have updated in other posts of mine but missed in this one: I am expanding my answer with innodb_fast_shutdown, because I used to just restart or stop mysql to do this. This one step is vital, because every uncommitted transaction may have other moving parts inside and outside of the InnoDB transaction logs (see InnoDB Infrastructure).
Please note that setting innodb_fast_shutdown to 2 would clean out the logs as well, but more moving parts still exist and get picked up by crash recovery during mysqld's startup. A setting of 0 is best.
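In practice, that means running the following before the shutdown in STEP 03 (the variable is standard MySQL; a setting of 0 forces a full purge and change-buffer merge on shutdown):

```sql
-- Force a slow, complete shutdown the next time mysqld stops
SET GLOBAL innodb_fast_shutdown = 0;
```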
Best Answer
If you are considering migrating to another DB server with the exact same version of MySQL, you may want to
rsync the datadir from the old server to the new server. This will work regardless of InnoDB file layout or even the presence of MyISAM tables.
- Run RESET MASTER; to erase all binary logs before the rsync process. If binary logging is not enabled, you can skip this step.
- Run SET GLOBAL innodb_max_dirty_pages_pct = 0; from mysql and wait about 10 minutes. (This purges dirty pages from the InnoDB buffer pool; it also helps mysql shut down faster.) If your database is all MyISAM, you can skip this step.
- service mysql stop on ServerA
- rsync the datadir from ServerA to ServerB
- service mysql start on ServerB
- service mysql start on ServerA (optional)
Essentially, here is what such a script would look like:
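A hedged sketch of that script, assuming the default datadir /var/lib/mysql and SSH access from ServerA to ServerB (hostnames, credentials, and paths are placeholders):

```shell
#!/bin/sh
# Migration sketch: ServerA -> ServerB, exact same MySQL version.
# Run on ServerA. Adjust credentials, hostname, and datadir for your setup.

# Erase binary logs (skip if binary logging is disabled)
mysql -u root -p -e "RESET MASTER;"

# Purge dirty pages from the InnoDB buffer pool, then wait ~10 minutes
mysql -u root -p -e "SET GLOBAL innodb_max_dirty_pages_pct = 0;"
sleep 600

# Stop mysql on ServerA and copy the datadir to ServerB
service mysql stop
rsync -av /var/lib/mysql/ ServerB:/var/lib/mysql/

# Start mysql on ServerB
ssh ServerB 'service mysql start'

# Optionally bring ServerA back up
service mysql start
```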
A fellow member of the DBA StackExchange said I should stay away from
FLUSH TABLES WITH READ LOCK;
based on something in mysqlperformanceblog.com. I read through it and learned that SELECTs against InnoDB tables in the middle of a
FLUSH TABLES WITH READ LOCK;
can still allow writes to occur in some way. As pointed out in the comment by Arlukin, LVM would work with FLUSH TABLES WITH READ LOCK
on InnoDB just fine (+1 for his comment). For all non-LVM users, an all-MyISAM database is OK for use with
FLUSH TABLES WITH READ LOCK;
. For InnoDB, stick to --single-transaction
usage in your mysqldumps please.
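For example (a hedged sketch; the database name is a placeholder), a consistent InnoDB dump that avoids the global read lock looks like:

```shell
# Consistent snapshot of InnoDB tables without FLUSH TABLES WITH READ LOCK
mysqldump -u root -p --single-transaction --routines --events mydb > mydb.sql
```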