The following excerpt came from the book "High Performance MySQL, Second Edition".
This is an excellent book and I would recommend it to anyone.
The short answer is:
With your table size and conditions, no matter what method you choose, I think you're potentially in for a long wait.
Table Conversions
There are several ways to convert a table from one storage engine to another, each
with advantages and disadvantages.
ALTER TABLE
mysql> ALTER TABLE mytable ENGINE = Falcon;
This syntax works for all storage engines, but there’s a catch: it can take a lot of time.
MySQL will perform a row-by-row copy of your old table into a new table. During
that time, you’ll probably be using all of the server’s disk I/O capacity, and the original
table will be read-locked while the conversion runs.
Dump and import
To gain more control over the conversion process, you might choose to first dump
the table to a text file using the mysqldump utility. Once you’ve dumped the table,
you can simply edit the dump file to adjust the CREATE TABLE statement it contains. Be
sure to change the table name as well as its type, because you can’t have two tables
with the same name in the same database even if they are of different types—and
mysqldump defaults to writing a DROP TABLE command before the CREATE TABLE, so you
might lose your data if you are not careful!
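As a rough sketch of the dump-and-import cycle (the database and table names here are placeholders, and the edit step is done by hand in any text editor):

$ mysqldump mydb myisam_table > myisam_table.sql
(edit myisam_table.sql: rename the table in the CREATE TABLE statement and change ENGINE=MyISAM to ENGINE=InnoDB)
$ mysql mydb < myisam_table.sql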
CREATE and SELECT
The third conversion technique is a compromise between the first mechanism’s
speed and the safety of the second. Rather than dumping the entire table or converting
it all at once, create the new table and use MySQL’s INSERT ... SELECT syntax to
populate it, as follows:
mysql> CREATE TABLE innodb_table LIKE myisam_table;
mysql> ALTER TABLE innodb_table ENGINE=InnoDB;
mysql> INSERT INTO innodb_table SELECT * FROM myisam_table;
That works well if you don’t have much data, but if you do, it’s often more efficient
to populate the table incrementally, committing the transaction between each chunk
so the undo logs don’t grow huge. Assuming that id is the primary key, run this
query repeatedly (using larger values of x and y each time) until you’ve copied all the
data to the new table:
mysql> START TRANSACTION;
mysql> INSERT INTO innodb_table SELECT * FROM myisam_table
-> WHERE id BETWEEN x AND y;
mysql> COMMIT;
After doing so, you’ll be left with the original table, which you can drop when you’re
done with it, and the new table, which is now fully populated. Be careful to lock the
original table if needed to prevent getting an inconsistent copy of the data!
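If the id values are reasonably contiguous integers, the chunked copy above can be automated with a small shell loop. This is only a sketch; CHUNK, MAX_ID, and the database name mydb are assumptions you would adjust. Because each mysql -e invocation runs with autocommit enabled, every chunk commits on its own, which keeps the undo logs small:

#!/bin/sh
# Copy CHUNK rows at a time from myisam_table into innodb_table.
CHUNK=10000
MAX_ID=1000000        # assumed highest id in myisam_table
x=1
while [ "$x" -le "$MAX_ID" ]; do
    y=$((x + CHUNK - 1))
    mysql mydb -e "INSERT INTO innodb_table
                   SELECT * FROM myisam_table
                   WHERE id BETWEEN $x AND $y"
    x=$((y + 1))
done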
I had the same issue. On Ubuntu 10.04, AppArmor does not allow MySQL to read the InnoDB plugins. Add the following lines to /etc/apparmor.d/usr.sbin.mysqld:
/usr/lib/mysql/plugin/ r,
/usr/lib/mysql/plugin/* mr,
Then reload AppArmor and restart the MySQL service.
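On Ubuntu 10.04 that amounts to something like the following (apparmor_parser -r replaces the loaded profile in place):

$ sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld
$ sudo service mysql restart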
You need to do a mysqldump of everything!
With regard to the error message, you have what I call a pigeonhole. It is essentially a table's metadata that got corrupted in ibdata1, and there is no way to erase it. You cannot drop the table the metadata is looking for, because the corresponding data outside ibdata1 can no longer be referenced via its inode. Sometimes even mysqldump won't work when it hits the table entry via the .frm file.
From another perspective, the metadata contained in ibdata1 is Linux-ish and inode-centric, concepts foreign to FAT-based Windows. I would not trust InnoDB metadata built this way. Doing a mysqldump gives you a logical representation of the data as SQL that is both OS- and hardware-agnostic.
If the data dump is too big, you need to do parallel dumps of the databases or tables and load those mysqldumps into MySQL on Windows.
If you are unsure or wary of scripting this, get Maatkit and use mk-parallel-dump (a deprecated tool, but good for ad hoc dumps) to spit out the data as CSV files. Then use mysqldump --no-data --routines --triggers to generate a table-structures file, and run that file in MySQL on Windows. Finally, load the CSVs into MySQL on Windows using LOAD DATA INFILE.
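For example (the paths and table names are placeholders, and the field and line terminators are assumptions that must match how the CSV files were actually written):

$ mysqldump --no-data --routines --triggers mydb > schema.sql
$ mysql mydb < schema.sql
mysql> LOAD DATA INFILE '/path/to/mytable.csv' INTO TABLE mytable
    ->     FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    ->     LINES TERMINATED BY '\n';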