Rsync Performance – Why Is It So Slow?

Tags: linux, performance, rsync

My laptop and my workstation are both connected to a Gigabit switch, and both run Linux. But when I copy files with rsync, performance is poor.

I get about 22 MB/s. Shouldn't I theoretically get about 125 MB/s (1000 Mbit/s ÷ 8 bits per byte)? What is the limiting factor here?

EDIT: I conducted some experiments.

Write performance on the laptop

The laptop has an XFS filesystem with full-disk encryption, using the aes-cbc-essiv:sha256 cipher mode with a 256-bit key. Disk write performance is 58.8 MB/s.

iblue@nerdpol:~$ LANG=C dd if=/dev/zero of=test.img bs=1M count=1024
1073741824 bytes (1.1 GB) copied, 18.2735 s, 58.8 MB/s

Read performance on the workstation

The files I copied are on a software RAID-5 across 5 HDDs, with LVM on top of the RAID. The logical volume is encrypted with the same cipher. The workstation has an FX-8150 CPU, which supports the AES-NI instruction set and therefore speeds up encryption. Disk read performance is 256 MB/s (with a cold cache).

iblue@raven:/mnt/bytemachine/imgs$ dd if=backup-1333796266.tar.bz2 of=/dev/null bs=1M
10213172008 bytes (10 GB) copied, 39.8882 s, 256 MB/s

Network performance

I ran iperf between the two machines. Network throughput is 939 Mbit/s.

iblue@raven $ iperf -c 94.135.XXX
------------------------------------------------------------
Client connecting to 94.135.XXX, TCP port 5001
TCP window size: 23.2 KByte (default)
------------------------------------------------------------
[  3] local 94.135.XXX port 59385 connected with 94.135.YYY port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.09 GBytes   939 Mbits/sec

Best Answer

Another way to mitigate high CPU usage while keeping the functionality of rsync is to move from rsync-over-SSH to rsync-over-NFS: export the paths you want to copy from via NFS, then run rsync locally, from the NFS mount to the destination.

In one test with a WD MyBook Live network disk, one or more rsyncs from the NAS over a Gigabit network to two local USB disks would not copy more than 10 MB/s (CPU: 80% usr, 20% sys). After exporting over NFS and rsyncing locally from the NFS share to both disks, I got a total of 45 MB/s (maxing out both USB2 disks) with little CPU usage. Disk utilization on the NAS was about 6% with rsync/SSH and closer to 24% with rsync/NFS, while both USB2 disks were close to 100%.

So we effectively moved the bottleneck from the NAS CPU to both USB2 disks.