Linux – How to reduce resource usage when copying a large file

Tags: copy, filesystems, linux, rsync

I need to move a large file (a corrupt MySQL table, ~40GB) onto a separate server in order to repair it. (An attempted repair on my production server quickly brought the server down.)

In order to do this, I want to rsync the .frm, .MYI and .MYD files from my production server to a cloud server.

I am copying the files from /var/lib/mysql/{database}/ to /home/{myuser} so that I don't need to enable root access for the rsync command, and so that I can be 100% sure the database file isn't in use (it shouldn't be written to or read from, but obviously I don't want to have to shut down my production database to make sure).

The first file I tried to copy was around 10GB. I am transferring from one part of my production server to another, i.e. within the same array of disks.

Unfortunately the copy command `cp filename newfilename` consumed so many resources that it brought the server to a standstill.

How can I use less resources when copying the file to a different directory? (It doesn't really matter how long it takes).

Assuming I manage to do this, what resource usage can I then expect when rsyncing the file to the cloud?

Can anyone suggest a better way to do this? I am quickly running out of disk space so need to get this table repaired and archived ASAP.

Best Answer

Have you tried prefixing the command with `nice -n10`?

10 is the adjustment `nice` applies by default when no value is given. The niceness range goes from -20 (highest priority) to 19 (lowest).
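Note that `nice` only lowers CPU scheduling priority; a large `cp` on the same disk array is mostly I/O-bound, so combining it with `ionice` (where the I/O scheduler supports it) tends to help more. A minimal sketch, using a throwaway temp file rather than the real table files (the MySQL paths in the question are the actual targets; these paths are just placeholders for demonstration):

```shell
#!/bin/sh
# Demonstration of a low-priority copy. In practice you would replace
# $src/$dst with e.g. /var/lib/mysql/{database}/{table}.MYD and
# /home/{myuser}/{table}.MYD respectively.
src=$(mktemp)
dst=$(mktemp -u)
head -c 1048576 /dev/urandom > "$src"   # 1 MiB of sample data

# nice -n 19  : lowest CPU priority
# ionice -c3  : "idle" I/O class -- only gets disk time when nothing
#               else wants it (requires a scheduler that honours it)
if command -v ionice >/dev/null 2>&1; then
    nice -n 19 ionice -c3 cp "$src" "$dst"
else
    nice -n 19 cp "$src" "$dst"         # fall back if ionice is absent
fi

cmp -s "$src" "$dst" && echo "copy OK"
```

For the later rsync to the cloud server, `rsync --bwlimit=<KBps>` caps the transfer rate in the same spirit, so the network leg doesn't saturate the link.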