Linux – How to force the kernel to cache more files on NFS

kernel, linux, memory

I have a dedicated web server serving static files via NFS. The server has 32 GB of RAM, but the cached memory never grows beyond 16 GB.

I'm pretty sure the OS can allocate more than this, because throughout the day the cached memory stays fixed at 16 GB, but when logrotate runs, cached memory grows to 30 GB.

I've been playing with several values from /proc/slabinfo (nfs_direct_cache, nfs_read_data, nfs_inode_cache) without success.
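For reference, this is a minimal way to watch those slab caches; reading /proc/slabinfo usually requires root, so the snippet falls back to a note when it is not readable:

```shell
# Inspect the NFS-related slab caches named above. /proc/slabinfo is
# typically root-only, so print a hint instead of failing silently.
grep -E '^nfs_(inode_cache|read_data|direct_cache)' /proc/slabinfo 2>/dev/null \
    || echo "NFS slab caches not readable (try running as root, or use slabtop)"
```

`slabtop` gives the same information interactively, sorted by cache size.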

Any tip or link about this would be gratefully received.

Thanks in advance.

Best Answer

If you're talking about the page cache, it grows in relation to the files you're accessing: the amount of data inside the files you have accessed and how active those files are.

On modern kernels, when a file is cached its pages are initially marked inactive until they are used again. This does not mean they are evicted from the cache, but on a system with higher memory pressure (active anonymously mapped pages) they could be discarded pretty quickly.
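You can see this split between the two LRU lists directly in /proc/meminfo; this one-liner (assuming a Linux kernel with the split file/anon LRU, i.e. 2.6.28 or later) shows how much file-backed memory is currently active versus inactive:

```shell
# Show file-backed page cache on the active vs. inactive LRU lists.
# Freshly cached pages land on Inactive(file) and are promoted to
# Active(file) only when they are referenced again.
grep -E '^(Active|Inactive)\(file\)' /proc/meminfo
```

If Inactive(file) dominates, most of your cache is single-use data that the kernel considers cheap to reclaim.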

The cache size is also dictated by memory pressure. This is basically a ratio: the number of times the cache was scanned, and the number of pages promoted from 'inactive' to 'active' (and vice versa), weighed against how many disk seeks would be required to access that data from disk. In effect, if it would take longer to scan and maintain the cache than to read the data from disk, the kernel doesn't increase the page cache size.

It could be that you have not produced enough memory pressure through file accesses for the kernel to decide that using more page cache would be of any benefit. What this translates to is that approximately 16 GB of pages are accessed on a daily basis, and this working set does not change often.
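If you want to test that theory, one rough approach is to warm the cache by reading the exported files once and watching the Cached figure. This is only a sketch; /srv/nfs is an assumed mount point, substitute your own:

```shell
# Hypothetical cache pre-warm: read every file under an assumed NFS
# mount (/srv/nfs) once so its pages enter the page cache, then compare
# the kernel's "Cached" counter before and after (values in kB).
before=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
find /srv/nfs -type f -print0 2>/dev/null | xargs -0 -r cat > /dev/null 2>&1
after=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
echo "Cached before: ${before} kB, after: ${after} kB"
```

If "after" settles back near 16 GB over the day, the kernel is reclaiming the extra pages because nothing re-references them, which matches the behaviour you describe.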

I guess that during logrotate, the file you just read sequentially (pushing the cache to 30 GB) was then unlinked, so the pages it used no longer need to be there. This might explain why the page cache size shrinks again afterwards.
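You can reproduce that unlink behaviour in miniature with a throwaway file (the 64 MB size here is an arbitrary choice for illustration):

```shell
# Sketch: populate the page cache with a temporary file, then unlink it.
# Once the last link and open handle are gone, the kernel can reclaim
# those cached pages immediately, shrinking "Cached" in /proc/meminfo.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 status=none
cat "$f" > /dev/null   # the file's pages are now in the page cache
rm -f "$f"             # pages become immediately reclaimable
echo "populated and unlinked: $f"
```

This mirrors what logrotate does at a much larger scale when it deletes old logs.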

AFAIK this is only relevant for kernels after 2.6.28, where the memory management algorithm was changed.