The usual suspect would be synchronous writes. Try mounting the NFS share with the async option.
I've never used rsize and wsize values that large. Try something around 8k and see whether it helps.
Edit:
Can you verify on the NAS, that it's exporting the filesystem with async option?
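Assuming the NAS is Linux-based with shell access and standard nfs-utils, checking the export options might look like this (the share path and network are placeholders):

```shell
# Show the effective export options; look for "async" among the flags
exportfs -v

# Or inspect the export configuration directly
cat /etc/exports
# An async export line would look something like:
# /volume1/share  10.0.1.0/24(rw,async,no_subtree_check)
```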
I would also try different options, starting with a reduced set and adding them back one at a time:
rw,hard,async
rw,hard,async,rsize=8192,wsize=8192
rw,hard,async,rsize=8192,wsize=8192,vers=3
rw,hard,async,rsize=8192,wsize=8192,vers=3,relatime
...and so on.
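To cycle through those option sets without editing /etc/fstab each time, you can unmount and remount by hand (the server address and mountpoint below are placeholders, not from the question):

```shell
# Unmount, then remount with the next candidate option set
umount /mnt/nas
mount -t nfs -o rw,hard,async 10.0.1.38:/export /mnt/nas

# Run your benchmark, then repeat with the next set:
umount /mnt/nas
mount -t nfs -o rw,hard,async,rsize=8192,wsize=8192 10.0.1.38:/export /mnt/nas

# Confirm the options actually in effect (the kernel may negotiate
# different rsize/wsize values than you requested):
nfsstat -m
```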
Can you log in to the NAS and monitor its performance too? One case I encountered was a NAS spawning multiple NFS daemons and dying under the load when a client connected.
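If you can get a shell on the NAS (assuming a Linux-based box), a quick way to watch for that failure mode is something like:

```shell
# While the client is writing, watch CPU and I/O wait on the NAS
top

# Count the nfsd threads actually running
ps ax | grep [n]fsd

# Configured nfsd thread count, if this proc file exists on the NAS
cat /proc/fs/nfsd/threads
```

If the nfsd count balloons or I/O wait pegs as soon as the client connects, the NAS itself is the bottleneck rather than your mount options.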
I had this same problem on CentOS 5.4 and found a workaround in a Red Hat bug report, which recommended using -i, which tells umount not to invoke the umount.<filesystem> helper.
In your case, just run
umount -i /home/s3backup/S3Backup/mnt/FOO
This worked, although it will write the following to stderr:
umount: //10.0.1.38/FOO: not found
umount: /home/s3backup/S3Backup/mnt/FOO: not mounted
Best Answer
The hard mount option (similar to the one in NFS) avoids returning errors to the client while the server is unresponsive. (The default appears to be the soft option.) Add hard to the options in your /etc/fstab entry and remount (or reboot). See the man page: http://linux.die.net/man/8/mount.cifs
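A sketch of what the fstab change might look like, using the share and mountpoint from the question; the credentials file path is a placeholder you'd replace with your own:

```shell
# /etc/fstab entry with the hard option added
# (credentials file path is a placeholder)
//10.0.1.38/FOO  /home/s3backup/S3Backup/mnt/FOO  cifs  rw,hard,credentials=/root/.smbcred  0  0
```

To apply it without a reboot, unmount and mount again:

```shell
umount /home/s3backup/S3Backup/mnt/FOO
mount /home/s3backup/S3Backup/mnt/FOO
```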