First use-case: Small File Server
A small file server places so few demands on the storage system that optimizing the read-ahead settings won't get you much. Such workloads are mostly random I/O, but infrequent. Read-ahead will yield some advantage, but not enough for users to notice.
Second use-case: Backup-to-Disk target
This type of system is primarily write-oriented. Read-ahead sees little use here since the array is writing most of the time; the cache is used instead to reorder writes. That works well, because backup streams are primarily sequential writes (unless it's a deduplication system, at which point the I/O becomes highly random), which makes things go a lot faster.
The caveat here is if backups are later staged to tape. The staging process is primarily read, and if you're doing that kind of thing read-ahead settings will absolutely net you gains, so set them as high as you can: sequential writes mean sequential reads here (unless it's a dedupe system, at which point read-ahead no longer matters).
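A minimal sketch of what "set read-ahead as high as you can" looks like on a Linux host. The device name is an example; substitute your staging volume. Via sysfs the value is in KiB, via blockdev it's in 512-byte sectors.

```shell
# Show the current read-ahead for every visible block device (KiB).
for q in /sys/block/*/queue/read_ahead_kb; do
  if [ -e "$q" ]; then echo "$q: $(cat "$q")"; fi
done
# As root, raise it for the sequential staging reads (sda is an example):
#   echo 8192 > /sys/block/sda/queue/read_ahead_kb
# or with blockdev, which counts in 512-byte sectors:
#   blockdev --setra 16384 /dev/sda
```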
Third use-case: VM Host
This type of system is the most demanding on storage, since it's a highly random mix of I/O types. Of the three types presented, it needs the most tuning. Some read-ahead will be valuable, but not much, given the highly random nature of the I/O demands.
These systems are designed to just plug in and go. Here's how each tier handles I/O.
OS
Writes are cached briefly (as dirty pages) in RAM while the I/O subsystem actually commits them. Once a write is committed, the page is kept cached in case it is read again soon. The OS cache does not maintain a pool of uncommitted writes; it maintains a pool of already-committed writes that may need to be read again. It is, in effect, a 100% read cache.
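On Linux you can see where that "brief" dirty-page window is controlled: how old a dirty page may get, and how much of RAM may be dirty, before writeback kicks in. These knobs are standard kernel sysctls, shown here read-only.

```shell
cat /proc/sys/vm/dirty_expire_centisecs   # max age of a dirty page before flush
cat /proc/sys/vm/dirty_ratio              # max % of RAM holding dirty pages
sync                                      # push every dirty page to storage now
```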
RAID Controller
The BBWC (battery-backed write cache) of the RAID controller receives the write from the OS. Depending on the cache policy of the volume being written to (write-through vs. write-back), the RAID controller may report the write as committed at this point. It then queues the write for committing to actual disk.
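Illustrative only: the invocations below assume an HP Smart Array controller in slot 0 and the ssacli tool, both of which are assumptions; adjust for your hardware and vendor CLI.

```shell
# Show the controller's cache settings, including the read/write split and policy:
#   ssacli ctrl slot=0 show detail
# Shift the cache split toward writes on a write-heavy target:
#   ssacli ctrl slot=0 modify cacheratio=25/75
# The policy distinction in one line:
echo "write-through: ack after disk commit; write-back: ack from BBWC, commit later"
```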
Disk
Some RAID cards actually do disable the HD cache; others don't. I don't remember how HP does theirs, but I would not be surprised if the HD cache is disabled and the write-optimization logic is pushed up into the RAID controller itself; there is a reason HP uses custom firmware on its drives.
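If you want to check for yourself: modern Linux kernels expose what the block layer believes about each device's own write cache, and on many RAID controllers only the logical volume (not the physical drives) will show up here.

```shell
# "write back" or "write through" per visible block device.
for w in /sys/block/*/queue/write_cache; do
  if [ -e "$w" ]; then echo "$w: $(cat "$w")"; fi
done
# On a directly attached SATA disk you could also query or disable the
# drive cache itself (as root; sda is an example):
#   hdparm -W  /dev/sda    # report write-cache state
#   hdparm -W0 /dev/sda    # turn it off
```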
Operating systems, and the filesystems they support, know very well that sudden power loss is a failure mode that can kill writes between the time the OS decides a write needs to happen and the time the storage system reports it done. We've been doing this a while now, and we're pretty good at defending against it.
The XFS filesystem has a bad reputation for survivability after sudden power loss due to how it handles metadata writes. But then, its intended environment is one where power is presumed to be adequately redundant. Other filesystems (the ext series, btrfs, and of course ZFS) survive it just fine.
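For completeness, here is what that defense looks like from the application side of the contract: write, flush, then fsync so the data (and the rename that publishes it) are on stable storage before anyone reports success. This is a generic sketch with hypothetical names (`durable_write` is not from any library mentioned above).

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data so it survives sudden power loss: fsync the file,
    rename into place atomically, then fsync the directory entry."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()                 # push from Python buffers to the kernel
        os.fsync(f.fileno())      # block until the storage stack reports commit
    os.rename(tmp, path)          # atomic replace on POSIX filesystems
    dirfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dirfd)           # persist the rename itself
    finally:
        os.close(dirfd)
```

Until that final fsync returns, a power cut can legitimately lose the write; after it returns, the OS, RAID BBWC, and disk layers discussed above are what keep the promise.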
If you're operating in an environment with known-bad power, here is what it takes to avoid data loss during outages:
- Use a filesystem known to be robust for sudden power loss (basically, anything but XFS)
And that's it. The BBWC on the RAID card ensures the cached writes are preserved until power is restored. The disk caches are likely disabled anyway. No need to tune the RAID card cache to be all-read. No need to disable the OS block caches.
Really.
Best Answer
You have answered the question yourself: if you have a UPS you can leave them on; if you don't, they should be off, or you risk data loss.
Most servers in the datacenter ship with OEM drive firmware that uses the on-disk cache in read-only mode (the equivalent of off for writes). Writes are instead cached by the RAID card's battery-backed memory.