FTP transfers slow on a long fat pipe

ftp · network-speed · performance · vps

We recently got a new internet connection – 100 Mb/s fibre – and have been complaining bitterly to our new ISP about transfer speeds to some FTP servers in the USA (300 ms away).
To one server in particular, we were getting only 1 Mb/s, even after they assured us that they were in no way, shape, or form throttling the transfers.

Then a techie visited, said he'd seen the same problem at another client, and showed that pretty much any international FTP site he tried to download from exhibited similar speed issues. He said that's just how FTP is: the higher the latency, the lower the speed.
I'd never heard of this kind of limitation before, so I did some reading.

I learned that "long fat pipes" need nice big buffers to ensure that data can flow smoothly, and that there's no hard-and-fast recipe for the buffer sizes.
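As a sanity check on "how big is big", the buffer a long fat pipe needs can be estimated from the bandwidth-delay product. A minimal sketch, using the 100 Mb/s and 300 ms figures from above:

```python
# Bandwidth-delay product: the amount of data "in flight" on the path,
# and roughly the TCP window/buffer needed to keep the pipe full.
bandwidth_bps = 100_000_000   # 100 Mb/s line
rtt_s = 0.300                 # 300 ms round trip to the US server

bdp_bytes = bandwidth_bps / 8 * rtt_s
print(f"BDP: {bdp_bytes:,.0f} bytes (~{bdp_bytes / 2**20:.2f} MiB)")
# → BDP: 3,750,000 bytes (~3.58 MiB)
```

So to fill this particular pipe at this latency, the window would need to be on the order of 3–4 MiB – far larger than the typical default.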

The FTP server is on a Windows VPS running FileZilla. The client on our side is a special third-party app that monitors the server for new orders; when they are complete, it downloads them and deletes the files from the server.

I can't adjust the buffer size of the client (I'm asking the developers, but I haven't seen a way) – but I can adjust the buffer size in FileZilla.

So I did some transfers with different buffer sizes and seemed to find a nice sweet spot where I can get up to almost 7 Mb/s. But that's still only a fraction of what I should be able to get.
https://www.dropbox.com/s/0dlwwuteq2o6txq/Screenshot%202016-03-08%2016.45.03.png?dl=0
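For what it's worth, TCP throughput on a clean path is roughly window size divided by round-trip time, and the ~7 Mb/s sweet spot is about what a 256 KiB socket buffer (the value from question 3b below) predicts at 300 ms. A sketch of that arithmetic:

```python
# Max TCP throughput is approximately window_size / RTT.
window_bytes = 262144   # 256 KiB socket buffer
rtt_s = 0.300           # 300 ms round trip

throughput_mbps = window_bytes * 8 / rtt_s / 1_000_000
print(f"~{throughput_mbps:.2f} Mb/s")
# → ~6.99 Mb/s
```

That match suggests the buffer, not the ISP, is the current ceiling.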

I looked at a lot of questions like this:
Filezilla FTP slow upload (350KBps) on 1 Gbits fiber? and
https://stackoverflow.com/questions/30847433/very-slow-ftp-download which mostly all say "buffer size, buffer size, buffer size". But surely I should be able to get better than 7 Mb/s.

So here are my questions:

  1. If I don't trust the ISP, and believe there's shaping on my line, how can I prove that?

  2. FileZilla has an "Internal buffer" that's capped at 6 digits, and a "Socket buffer" that can go higher. How do these two play together? I found that setting the internal buffer to half the socket buffer seemed best, but are there other configurations I should try?

  3. Do the buffer sizes need to be powers of two (32768, 65536, 131072, etc.), or can I go with other numbers in between?

  3b. The best case I found was 262144 (socket) and 131072 (internal) – should I start testing with smaller increments around there?

  4. If anyone else is 300ms away from ftp.rapidstudio.co.za, can you see what speeds you get? (user: test, password: test)

Thanks
Steven

Best Answer

So from what I gather from Michael Hampton's answer about the "bandwidth-delay product", this "long fat pipe" isn't going to give the performance we need. We can either bring the FTP server closer, or use a better file transfer method that isn't as "back and forth".
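The original 1 Mb/s also fits that model: a classic 64 KiB TCP window at 300 ms round trip caps out well under 2 Mb/s. A rough sketch, not a measurement:

```python
# Throughput ceiling imposed by a 64 KiB TCP window at 300 ms RTT.
window_bytes = 65536    # 64 KiB, a common un-scaled window size
rtt_s = 0.300           # 300 ms round trip

throughput_mbps = window_bytes * 8 / rtt_s / 1_000_000
print(f"~{throughput_mbps:.2f} Mb/s")
# → ~1.75 Mb/s
```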

Because there were other client software problems with moving away from FTP, we opted to bring the server closer. I'm getting a local VPS – double the price for half the spec, but only 2 ms away – and the transfer speeds are beautiful.

An alternative would have been to install Dropbox on the server and sync it with a local folder. That would have sped things up significantly, but due to other software and workflow requirements/restrictions, it wasn't a suitable solution.
