The following excerpt is from the book "High Performance MySQL, Second Edition".
This is an excellent book and I would recommend it to anyone.
The short answer is:
With your table size and conditions, no matter what method you choose, I think you're potentially in for a long wait.
Table Conversions
There are several ways to convert a table from one storage engine to another, each
with advantages and disadvantages.
ALTER TABLE
mysql> ALTER TABLE mytable ENGINE = Falcon;
This syntax works for all storage engines, but there’s a catch: it can take a lot of time.
MySQL will perform a row-by-row copy of your old table into a new table. During
that time, you’ll probably be using all of the server’s disk I/O capacity, and the original
table will be read-locked while the conversion runs.
Dump and import
To gain more control over the conversion process, you might choose to first dump
the table to a text file using the mysqldump utility. Once you’ve dumped the table,
you can simply edit the dump file to adjust the CREATE TABLE statement it contains. Be
sure to change the table name as well as its type, because you can’t have two tables
with the same name in the same database even if they are of different types—and
mysqldump defaults to writing a DROP TABLE command before the CREATE TABLE, so you
might lose your data if you are not careful!
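As a sketch of those edits, a small Python helper could rewrite the dump file for you: rename the table, swap the ENGINE clause, and strip the DROP TABLE statement that mysqldump emits by default. The function and its names here are illustrative, not part of mysqldump itself:

```python
import re

def rewrite_dump(dump_sql, old_name, new_name, engine="InnoDB"):
    """Rewrite a mysqldump fragment so it creates a new table with a
    different name and storage engine, and remove the DROP TABLE
    statement mysqldump writes by default (which could destroy data)."""
    lines = []
    for line in dump_sql.splitlines():
        # mysqldump emits DROP TABLE IF EXISTS `old_name`; skip it
        if line.strip().startswith("DROP TABLE"):
            continue
        # rename every reference to the old table
        line = line.replace(f"`{old_name}`", f"`{new_name}`")
        # swap the ENGINE= clause on the CREATE TABLE's closing line
        line = re.sub(r"ENGINE=\w+", f"ENGINE={engine}", line)
        lines.append(line)
    return "\n".join(lines)
```

You would run the rewritten SQL through the mysql client as usual to import it.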
CREATE and SELECT
The third conversion technique is a compromise between the first mechanism’s
speed and the safety of the second. Rather than dumping the entire table or converting
it all at once, create the new table and use MySQL’s INSERT ... SELECT syntax to
populate it, as follows:
mysql> CREATE TABLE innodb_table LIKE myisam_table;
mysql> ALTER TABLE innodb_table ENGINE=InnoDB;
mysql> INSERT INTO innodb_table SELECT * FROM myisam_table;
That works well if you don’t have much data, but if you do, it’s often more efficient
to populate the table incrementally, committing the transaction between each chunk
so the undo logs don’t grow huge. Assuming that id is the primary key, run this
query repeatedly (using larger values of x and y each time) until you’ve copied all the
data to the new table:
mysql> START TRANSACTION;
mysql> INSERT INTO innodb_table SELECT * FROM myisam_table
-> WHERE id BETWEEN x AND y;
mysql> COMMIT;
After doing so, you’ll be left with the original table, which you can drop when you’re
done with it, and the new table, which is now fully populated. Be careful to lock the
original table if needed to prevent getting an inconsistent copy of the data!
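The repeated query above is easy to script. Here is a minimal Python sketch, assuming a DB-API-style connection (for example from pymysql) and that id is a reasonably dense integer primary key; the table names and chunk size are illustrative:

```python
def chunk_ranges(min_id, max_id, chunk_size):
    """Yield inclusive (x, y) id ranges covering min_id..max_id."""
    x = min_id
    while x <= max_id:
        y = min(x + chunk_size - 1, max_id)
        yield (x, y)
        x = y + 1

def copy_in_chunks(conn, min_id, max_id, chunk_size=10000):
    """Copy rows chunk by chunk, committing after each chunk so the
    undo logs stay small."""
    with conn.cursor() as cur:
        for x, y in chunk_ranges(min_id, max_id, chunk_size):
            cur.execute(
                "INSERT INTO innodb_table "
                "SELECT * FROM myisam_table WHERE id BETWEEN %s AND %s",
                (x, y),
            )
            conn.commit()
```

Smaller chunks mean more round trips but smaller transactions; tune chunk_size to your row size and I/O capacity.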
I have two thoughts on this. The first is that the load is simply taking a long time and you're being disconnected by a timeout; there are ways to reconnect automatically in Python. The relevant timeout variables in MySQL are wait_timeout and interactive_timeout.
The second idea, which from the discussion in the comments looks like the right one, is that you're hitting MySQL's connection limit by opening a new connection for every file.
Try opening a single connection and reusing it for the whole run (when you do this, you might hit the timeout instead, depending on how long the run takes).
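A sketch of that pattern in Python: one connection is opened once and reused for every file. The connection object, table name, and file list are placeholders; any DB-API-style connection (such as one from pymysql) works the same way:

```python
def load_files(conn, paths, table="mytable"):
    """Load every file over a single, reused connection instead of
    opening one connection per file."""
    loaded = []
    with conn.cursor() as cur:
        for path in paths:
            # LOAD DATA does not accept a parameter marker for the
            # file name, so it is interpolated here; only do this
            # with trusted, locally generated paths.
            cur.execute(
                "LOAD DATA LOCAL INFILE '%s' INTO TABLE %s" % (path, table)
            )
            loaded.append(path)
    conn.commit()
    return loaded
```

Because only one connection is ever open, the per-connection limit no longer matters, however many files you load.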
Best Answer
If you are using LOAD DATA INFILE, make sure you increase bulk_insert_buffer_size to something significant, such as 256M.
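As a sketch, you could raise the variable for just the loading session right before the LOAD DATA statement; the connection object and names below are placeholders (bulk_insert_buffer_size itself mainly benefits bulk inserts into MyISAM tables):

```python
def bulk_load(conn, path, table):
    """Raise bulk_insert_buffer_size for this session, then load."""
    with conn.cursor() as cur:
        # 268435456 bytes = 256M, per the suggestion above
        cur.execute("SET SESSION bulk_insert_buffer_size = 268435456")
        # file name is interpolated; only use trusted paths
        cur.execute("LOAD DATA INFILE '%s' INTO TABLE %s" % (path, table))
    conn.commit()
```

Using SET SESSION (rather than SET GLOBAL) keeps the larger buffer confined to the loading connection.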