MySQL – mysqldump --where with = operator doesn't get all rows

database, export, MySQL

I have a situation with a particular table that now thinks it contains 4 petabytes of data. I know that sounds cool, but I assure you, it is only on a 60GB partition.

This table has 9 fields. One of them is a domain_id field, and it is the best field to identify rows by, as there are only approximately 6,300 distinct values. The only other field I could match on has over 2 million values, which makes it far more difficult to work with.

I cannot do a straight mysqldump, because it would try to write out all 4PB and fill the drive long before getting close, so I need to surgically extract the good data, destroy the database, and recreate it.

I believe if I can do a dump for each domain_id record, then I will get most of the usable data out of it. This is what I am trying to use:

mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table \
 --max_allowed_packet=1000000000 database table --where="domain_id=10" \
 > domains10.sql
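
The plan is to wrap that command in a loop over every domain_id. A rough sketch of what I have in mind (assuming SELECT DISTINCT is still readable under the current forced-recovery mode):

# Sketch: dump each domain_id into its own file, so one bad range
# cannot ruin the entire export.
for id in $(mysql -u root -N -B -e "SELECT DISTINCT domain_id FROM database.table"); do
  mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table \
    --max_allowed_packet=1000000000 database table \
    --where="domain_id=${id}" > "domains${id}.sql"
done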

Using this, I expect every row with domain_id 10 to be exported.

However, when I check the export, I am only getting one row, even though when I look at the database there are many, many rows with that ID. It is as though the operator finds one match and then gives up.

I have tried other operators. Using < or > I am able to get more of the data, but the export stops short at certain rows where the data has been compromised. With over 6,000 domain IDs to go through, I cannot easily narrow down which rows are breaking the export.

So, what I need is an operator that does what I thought = would do: simply give me an export of all records that match a specific value in that field.

Also note, the only way I got this database accessible at all was by setting innodb_force_recovery = 3. So I need to get this right, because once this is done I have to drop the database in order to make MySQL functional again.
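
For reference, the server is currently running with something like this (sketch of the assumed my.cnf setting; the file's location varies by distribution):

# my.cnf
[mysqld]
# Level 3 skips transaction rollback after crash recovery;
# higher levels risk doing further damage to the data.
innodb_force_recovery = 3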

Looking forward to any helpful answers.

Best Answer

From what you write, it appears the database has been corrupted (a table that thinks it holds 4PB on a 60GB partition is something of a giveaway).

I doubt you can get any assurance that the retrieved data is reliable unless you repair the database first. Have you tried that?

Otherwise, what happens if you pass the -f (--force) flag, so that mysqldump continues even when it encounters errors?
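
Something along these lines (a sketch reusing your command):

# --force / -f tells mysqldump to keep going past SQL errors,
# which may let it skip the corrupted rows instead of stopping.
mysqldump -u root -f --skip-opt -q --no-create-info --skip-add-drop-table \
  --max_allowed_packet=1000000000 database table \
  --where="domain_id=10" > domains10.sql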