Unreliable NFS with a large number of files in a directory


I have an NFS directory mounted on a host. The directory currently holds 0.6 million log files and will eventually hold 1.6 million. The files are small; most are under 1 MB.

The problem is that I cannot reliably find all of a given day's files in that directory.

If I run the command below, I should get 4320 files for a day, but instead I get anywhere from 1 to 4320, for example:

$ find /mnt/log -type f -name "some-prefix-rolling.log.2015-07-05*" | wc -l
2548
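
To quantify how unstable the listing is, the same command can be run in a loop (a minimal sketch, reusing the path and prefix from above); on a healthy mount every iteration should print the same number:

$ for i in $(seq 1 10); do find /mnt/log -type f -name "some-prefix-rolling.log.2015-07-05*" | wc -l; done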

I have to read the directory as it is; I cannot change the layout (e.g., put each day's log files in its own folder) because other applications depend on it.

The mount options are: ro,noatime,bg,hard,rsize=32768,wsize=32768,vers=3
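
(The options the client actually negotiated can differ from what was passed to mount; they can be double-checked with, for example:)

$ nfsstat -m
$ grep /mnt/log /proc/mounts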

Does anybody know how to fix this issue?

Best Answer

While storing the files in subdirectories would be ideal, what you are seeing is not the correct (or expected) behavior. Some hints to track down the problem:

  • Check your source filesystem: if you run the same find command directly on the data source (the NFS server), does it complete correctly? (See the sketch after this list.)
  • With this many files, your source filesystem should be XFS or ZFS; avoid EXT4 and BTRFS.
  • Try toggling client-side caching (the FS-Cache module).
  • Does a simple ls -al | wc -l return consistent results?
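
A minimal test sequence along those lines, assuming a hypothetical export server:/srv/log and that you can briefly remount on the client. lookupcache=none and actimeo=0 are standard NFS client options that disable lookup and attribute caching; they make the mount slow, so use them for testing only:

# On the NFS server, count directly on the exported filesystem;
# this should reliably return 4320.
$ find /srv/log -type f -name "some-prefix-rolling.log.2015-07-05*" | wc -l

# On the client, remount with client-side caching disabled (test only):
$ umount /mnt/log
$ mount -t nfs -o ro,noatime,bg,hard,rsize=32768,wsize=32768,vers=3,lookupcache=none,actimeo=0 server:/srv/log /mnt/log

# Repeat the count several times: stable, correct numbers now point at
# client-side caching; still-varying numbers point at the server side.
$ for i in 1 2 3; do find /mnt/log -type f -name "some-prefix-rolling.log.2015-07-05*" | wc -l; done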