Linux – Very poor NFS/CIFS performance

cifs, linux, network-attached-storage, nfs, scp

I mount a NAS share on Ubuntu Linux 10.04.

Unfortunately I get very poor read/write performance, although I have played around with various options (I have to admit that I do not really know what I am doing there; I just altered the buffer sizes and such).

I found some hints that the Linux CIFS client is known to be problematic, but using nfs-common instead of cifs gives similar performance.

The strange thing is that a secure copy (scp) to the NAS works fine. Unfortunately, SSH login is only allowed for the admin of the NAS, so this is not an option for daily use :(.
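For reference, the scp comparison can be reproduced with something like this; the admin account name, the xxx.xx.xx.xx address, and the target path on the NAS are placeholders:

# Create a 20 MB test file locally, then time its transfer over scp.
dd if=/dev/zero of=/tmp/bigfile bs=1M count=20
time scp /tmp/bigfile admin@xxx.xx.xx.xx:/Share/bigfile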

Edit:
I tried mounting with and without the async option and tested the throughput for different block sizes. Here are some benchmark results:

With this line in /etc/fstab:

xxx.xx.xx.xx:Share  /media/Share       nfs    rw,nodev,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountvers=3,mountproto=tcp   0       0

dd tells me:

dd if=/dev/zero of=/media/Share/bigfile bs=1M count=20
20+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 33.4046 s, 628 kB/s

dd if=/dev/zero of=/media/Share/bigfile bs=1k count=2000
2000+0 records in
2000+0 records out
2048000 bytes (2.0 MB) copied, 3.60063 s, 569 kB/s

With this line in /etc/fstab:

xxx.xx.xx.xx:Share  /media/Share       nfs    rw,nodev,relatime,vers=3,rsize=8192,wsize=8192,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountvers=3,mountproto=tcp,async   0       0

dd tells me:

dd if=/dev/zero of=/media/Share/bigfile bs=1M count=20
20+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 34.2046 s, 613 kB/s

dd if=/dev/zero of=/media/Share/bigfile bs=1k count=2000
2000+0 records in
2000+0 records out
2048000 bytes (2.0 MB) copied, 3.79684 s, 539 kB/s

Edit: I tried to access another NAS on the network with very similar results, so the problem really seems to be on my client system.

I would be grateful for any hints on how to solve this issue.

Best Answer

The usual suspect would be synchronous writes. Try mounting the NFS share with the async option.
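For a quick test, the share could be remounted with async and the dd benchmark re-run; the address, share name, and mount point below are taken from the question:

# Unmount and remount with async to see whether synchronous
# writes are the bottleneck, then repeat the write test.
sudo umount /media/Share
sudo mount -t nfs -o rw,hard,async xxx.xx.xx.xx:Share /media/Share
dd if=/dev/zero of=/media/Share/bigfile bs=1M count=20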

I've never used rsize and wsize values that large. Try something around 8k and see if that helps.

Edit:

Can you verify on the NAS that it is exporting the filesystem with the async option?
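If the NAS runs a standard Linux NFS server (an assumption; many appliances do), the export options can be inspected like this:

# Show the configured exports and the options actually in effect.
cat /etc/exports
exportfs -v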

I would also try different options, starting by reducing their number, and re-run the benchmark for each set (see the sketch after this list):

rw,hard,async
rw,hard,async,rsize=8192,wsize=8192
rw,hard,async,rsize=8192,wsize=8192,vers=3
rw,hard,async,rsize=8192,wsize=8192,vers=3,relatime
...

etc
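A minimal sketch of how those option sets could be tested in one pass, reusing the address, share, and mount point from the question:

# For each option set: remount the share, then measure write
# throughput with dd (dd prints its stats to stderr).
for opts in "rw,hard,async" \
            "rw,hard,async,rsize=8192,wsize=8192" \
            "rw,hard,async,rsize=8192,wsize=8192,vers=3" \
            "rw,hard,async,rsize=8192,wsize=8192,vers=3,relatime"; do
    sudo umount /media/Share
    sudo mount -t nfs -o "$opts" xxx.xx.xx.xx:Share /media/Share
    echo "options: $opts"
    dd if=/dev/zero of=/media/Share/bigfile bs=1M count=20 2>&1 | tail -n 1
done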

Can you log in to the NAS and monitor its performance as well? One case I encountered was a NAS spawning multiple NFS daemons and dying under the load as soon as a client connected.
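Assuming shell access to the NAS and that these tools are installed there, basic monitoring during a client write could look like this:

# Watch overall load, I/O wait, and the nfsd threads.
top
# Print server-side NFS and RPC call statistics.
nfsstat -s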