Linux – How to Transfer 10 TB of Data Efficiently

copy, file-transfer, linux, networking, rsync

I've looked at all the previous similar questions, but the answers seemed to be all over the place and no one was moving a lot of data (100 GB is different from 10 TB).

I've got about 10 TB that I need to move from one RAID array to another over gigabit Ethernet, with XFS file systems on both ends. My biggest concern is having the transfer die midway and not being able to resume easily. Speed would be nice, but making sure the transfer completes is much more important.

Normally I'd just tar & netcat, but the RAID array I'm moving from has been super flaky as of late, and I need to be able to recover and resume if it drops mid-process. Should I be looking at rsync?
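For reference, the tar & netcat pipeline I'd normally use looks roughly like the following. The host name, port, and paths are just placeholders, and the listen syntax differs a bit between netcat variants.

On the receiving box:

    # listen on TCP port 7000 and unpack the stream into the destination array
    # (some netcat variants want "nc -l 7000" without -p)
    nc -l -p 7000 | tar -xpf - -C /mnt/dest-raid

On the sending box:

    # stream the source tree to the receiver
    tar -cpf - -C /mnt/src-raid . | nc dest-host 7000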

Looking into this a bit more, I think rsync might be too slow, and I'd like to avoid this taking 30 days or more. So now I'm looking for suggestions on how to monitor / resume the transfer with netcat.
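For the monitoring part, one option I'm considering is dropping pv into the sending pipeline so I can watch throughput (same placeholder hosts and paths as above; the -s value just tells pv the expected total so it can show percent done and an ETA):

    # pv reports bytes transferred and current rate; with -s it adds progress/ETA
    # (size suffixes like "10T" depend on the pv version)
    tar -cpf - -C /mnt/src-raid . | pv -s 10T | nc dest-host 7000

The catch is that a plain netcat stream can't really be resumed mid-file; if it dies, the usual fallback is to re-run something like rsync afterwards to fill in whatever is missing.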

Best Answer

Yep, rsync.
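Something along these lines, assuming you can SSH between the two boxes (paths and host are placeholders):

    # -a  archive mode (permissions, ownership, times, symlinks)
    # -P  --partial --progress: keep partially transferred files and show progress
    # -W  whole-file: skip the delta algorithm, usually faster on a local gigabit link
    rsync -aPW /mnt/src-raid/ user@dest-host:/mnt/dest-raid/

If the source array drops out, just run the same command again; files that already made it across are skipped and it carries on from there.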

An outside, oddball option: the asynchronous replication features DRBD came out with recently.
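If you go that route, DRBD's asynchronous mode is protocol A. A resource definition is sketched below, purely as an illustration: the hosts, backing devices, and addresses are made up, and the exact syntax depends on your DRBD version.

    # /etc/drbd.d/r0.res -- illustrative only
    resource r0 {
      protocol A;                    # asynchronous replication
      on src-host {
        device    /dev/drbd0;
        disk      /dev/sdb1;         # backing device on the source box
        address   192.0.2.1:7789;
        meta-disk internal;
      }
      on dst-host {
        device    /dev/drbd0;
        disk      /dev/sdc1;         # backing device on the destination box
        address   192.0.2.2:7789;
        meta-disk internal;
      }
    }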