Linux – Transfer 15TB of tiny files

Tags: archive, file-transfer, linux, linux-networking

I'm archiving data from one server to another. Initially I started an rsync job. It took two weeks just to build the file list for 5 TB of data, and another week to transfer 1 TB of it.

Then I had to kill the job, as we needed some downtime on the new server.

It's been agreed that we will tar it up, since we probably won't need to access it again. I was thinking of breaking it into 500 GB chunks. After tarring it, I was going to copy it across over ssh. I was using tar and pigz, but it is still too slow.
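Roughly what I had in mind (hostname and paths below are placeholders, not my real setup):

# stream the tree, compress in parallel, push over ssh, and cut into 500 GB pieces on the new server
tar -cf - -C /old/data . | pigz | ssh newserver 'split -b 500G - /archive/data.tar.gz.part-'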

Is there a better way to do it? I think both servers are on Red Hat. The old server uses ext4 and the new one XFS.

File sizes range from a few KB to a few MB, and there are 24 million JPEGs in 5 TB, so I'm guessing around 60-80 million files for 15 TB.

Edit: After playing with rsync, nc, tar, mbuffer and pigz for a couple of days, it turns out the bottleneck is going to be disk I/O, as the data is striped across 500 SAS disks and amounts to around 250 million JPEGs. Still, I've now learned about all these nice tools that I can use in the future.
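For anyone wondering how mbuffer slots into such a pipeline, here is a rough sketch (the port, buffer size and paths are made up); mbuffer buffers the stream in RAM so the disks and the network don't stall each other:

# destination, started first: listen on a TCP port, buffer 1 GB, decompress and unpack
mbuffer -I 9876 -m 1G | pigz -d | tar -xf - -C /put/stuff/here

# source: stream the tree, compress on all cores, buffer, and push to the destination
tar -cf - -C /path/of/small/files . | pigz | mbuffer -m 1G -O dest_host:9876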

Best Answer

I have had very good results using tar, pigz (parallel gzip) and nc.

Source machine:

tar -cf - -C /path/of/small/files . | pigz | nc -l 9876

Destination machine:

To extract:

nc source_machine_ip 9876 | pigz -d | tar -xf - -C /put/stuff/here

To keep archive:

nc source_machine_ip 9876 > smallstuff.tar.gz

If you want to see the transfer rate, just pipe through pv after pigz -d!
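For example, the extraction pipeline on the destination would become:

nc source_machine_ip 9876 | pigz -d | pv | tar -xf - -C /put/stuff/here

Note that placed there pv shows the decompressed rate; put it before pigz -d instead if you want the on-the-wire rate.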