Networking – What Limits SCP Performance?

bandwidth, networking, scp

I have two Debian Linux machines connected via a 1 Gbit LAN. I can verify this with a raw HTTP file transfer using wget, which gets around 100 MB/s in either direction.

When I use scp instead, the maximum I get without compression is around 15 MB/s. Enabling compression with the -C flag gives me, depending on the contents, up to 50 MB/s.

Still, it seems a lot of bandwidth is wasted here. I didn't think about it for a long time, until I had to plan some very large logfile transfers and realized how oddly slow scp is. It's natural for me to use scp, even in a company environment, because all the infrastructure is set up for it.

What limits the performance of scp that much? Is it CPU bound because of the encryption? When I look at htop, it seems scp doesn't make use of multiple cores; just one of the four CPUs is maxed out.
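One way to check whether single-threaded encryption could be the bottleneck, without involving the network at all, is to push data through a cipher locally and see what throughput one core manages. A rough sketch (the cipher, sizes, and the throwaway password are arbitrary choices; real scp throughput also includes protocol and disk overhead):

```shell
#!/bin/sh
# Push 256 MiB of zeros through AES-128-CTR on one core and report the rate.
# If this lands well below ~100 MB/s, encryption alone can't keep up with
# a saturated 1 Gbit link on this machine.
dd if=/dev/zero bs=1M count=256 2>/dev/null \
  | openssl enc -aes-128-ctr -pass pass:benchmark -pbkdf2 \
  | dd of=/dev/null bs=1M 2>&1 | tail -n 1
```

The final dd prints a line like "268435472 bytes … copied, 1.2 s, 220 MB/s"; the exact number depends heavily on whether the CPU has AES-NI.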

Is there a way to increase the throughput? I have HTTP servers and Samba available, but for moving files between Linux machines I usually just use SSH; that's how I grew up with it. This now makes me reconsider: it seems I need to look at other ways of transferring large amounts of data.
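For bulk transfers on a trusted LAN where encryption isn't needed, one common alternative is streaming a tar archive through a pipe. The sketch below runs entirely locally with throwaway /tmp paths to show the mechanics; over the network you would splice nc (netcat) or ssh between the two tar processes:

```shell
#!/bin/sh
# Local demonstration of a tar pipe: pack a source directory on one side,
# unpack it on the other. Paths are throwaway examples.
mkdir -p /tmp/tarpipe-src /tmp/tarpipe-dst
echo "hello" > /tmp/tarpipe-src/file.log
tar -C /tmp/tarpipe-src -cf - . | tar -C /tmp/tarpipe-dst -xf -
cat /tmp/tarpipe-dst/file.log
```

Over the wire the same idea looks roughly like `tar -cf - . | ssh otherhost 'tar -C /dest -xf -'` (hypothetical host), which avoids scp's per-file protocol overhead for many small files.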

HTTP is only used for specific applications in PHP/Java/whatever, and Samba is used because in some special cases we need access from Windows machines.

Best Answer

It's probably the encryption. You can try scp with different ciphers, for example:

scp -c arcfour src dest

Check the ssh_config manual page for the available Ciphers. RC4 (arcfour) is a fast cipher, but it is considerably weaker than modern alternatives, and it has been disabled and ultimately removed in recent OpenSSH releases.
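On a current system you can list the ciphers your OpenSSH client actually supports (OpenSSH 6.3 and later) and pick a fast one from that list instead of arcfour. A sketch, with a hypothetical host name:

```shell
#!/bin/sh
# Show every cipher this ssh/scp build supports.
ssh -Q cipher
# Then pick a fast supported one; aes128-ctr and the aes-gcm variants are
# typically quick on CPUs with AES-NI (hypothetical host and path):
# scp -c aes128-ctr largefile.log user@otherhost:/var/tmp/
```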