I don't think you can disable all disk caching in Linux.
As a hack, you could keep running "sync; echo 3 > /proc/sys/vm/drop_caches" (as root) to flush almost everything that is cached in memory. From the console,
watch -n 1 "sync; echo 3 > /proc/sys/vm/drop_caches"
would do the trick. Note the double quotes: with backticks the shell would run the command once, via command substitution, before watch ever saw it.
would do the trick. In the above example nothing will remain cached by the kernel for more than a second, though it will have no effect on data held in memory by Apache or other processes. It may also not flush stuff from any memory-mapped files that are still open with portions locked.
If you only want nothing cached at the start of a test run, and don't care if it caches stuff during the tests, then you could just add a single call to "sync" and "echo 3 > /proc/sys/vm/drop_caches" at the start of your test run.
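That start-of-run flush can be wrapped in a small helper, sketched below. The overridable target path is an assumption of this sketch, there only so the function can be dry-run without root; in real use it defaults to /proc/sys/vm/drop_caches and must be run as root.

```shell
# Cold-cache prelude (sketch). drop_caches() flushes dirty pages to disk,
# then asks the kernel to discard the page cache, dentries and inodes.
drop_caches() {
    target=${1:-/proc/sys/vm/drop_caches}
    sync                  # dirty pages must be written out before they can be dropped
    echo 3 > "$target"    # 3 = drop page cache plus dentries and inodes
}

# Typical use at the start of a test run (as root):
# drop_caches
# ...then run the tests from a cold cache.
```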
If your test involves scripts that access a database, you will also need to tell your database back-end not to cache data in RAM between tests.
I've been banging my head against this problem all day, with no solution found.
You can turn up the debugging of the NFS server, but that doesn't provide much detail (if that example is accurate), and on a busy NFS server it will probably swamp the disk with useless logged baggage in addition to the bare filenames.
Another solution is adding rules to auditd/auditctl to log all reads or writes to the NFS directories, but that doesn't work for our CentOS 6.x machines, for reasons I can't quite figure out yet. In /etc/audit/audit.rules on a client machine:
# First rule - delete all
-D
# Increase the buffers to survive stress events.
# Make this bigger for busy systems
-b 8192
# Feel free to add below this line. See auditctl man page
-w /auto/ -p r -k read -k home
-w /auto/ -p w -k write -k home
-w /auto/ -p xa -k other -k home
...where I've given separate keys to reading, writing, and executing/changing attributes. My clients are autofs'd to mount a few different NFS directories, including their home directories, under /auto/, with soft links pointing the client machine's /home/users/ back to /auto/. I get logging of lots of stuff, but none of the files the users themselves seem to be modifying.
Troll the audit logs with ausearch -k read | aureport -f, for instance. Grepping for .ODT or .PDF comes up with nothing; the only results are for metacity's configs, Chrome's crap, etc., etc.
Naturally, enabling audit on the server pointing at the real /home/users/XYZ
only shows accesses from things interfacing with the server directly (mail clients) or users SSH'd into the server.
If you can figure out the right recipe for audit, or a dedicated solution altogether, please, please, please share it! You'd think this would have been solved in 1993.
Best Answer
Answer A8 of the Linux NFS FAQ has an explanation.
A summary: it's up to the client to poll the server to ask for changes (by checking file attributes to see if they've changed since last time the client checked). Clients traditionally do that at regular intervals, but also any time they open a file. They also flush back any writes on close. This means that you get the results you'd expect as long as you ensure that no other client opens a file while one client holds it open for write.
This behavior is usually configurable using mount options, for example if you prefer stronger cache consistency at the expense of performance. See for example "man nfs" on a Linux client.
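For instance, attribute caching can be shortened or disabled outright with mount options, at a real performance cost. A sketch of the relevant fstab entries follows; the server name and export path are made up for illustration, and "man nfs" documents the actual options (noac, actimeo) on a Linux client.

```
# /etc/fstab entries (hypothetical server "fileserver" and export path):

# Default behaviour: attributes are cached and revalidated periodically.
fileserver:/export/home  /home/users  nfs  defaults  0 0

# Stronger consistency: disable attribute caching entirely (slower).
fileserver:/export/home  /home/users  nfs  noac      0 0
```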