Does hard drive space affect performance?

hardware, performance

I saw a presentation years ago that said hard drives performed best when they were < 50% full, and that for busy servers you want to keep your drives < 80% used. The reasoning was that drives fill from the fast outer tracks inward: the outer tracks pass more sectors under the head per revolution, and keeping data confined to them also shortens seeks, so access, especially random access, was quicker. Rotational latency itself depends only on spindle speed and is the same on every track.
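
For concreteness, a quick back-of-the-envelope check (a plain Python sketch; the 7200 RPM figure and the 2x sector ratio are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope: rotational latency depends only on spindle speed,
# while sequential throughput scales with how many sectors pass under the
# head per revolution -- more on the longer outer tracks.

rpm = 7200                            # illustrative spindle speed
rev_time_ms = 60_000 / rpm            # one full revolution: ~8.33 ms
avg_rot_latency_ms = rev_time_ms / 2  # on average half a turn: ~4.17 ms

# With zoned bit recording an outer track holds more sectors than an
# inner one; the 2x ratio below is an assumption for illustration.
sectors_inner, sectors_outer = 500, 1000
throughput_ratio = sectors_outer / sectors_inner

print(f"avg rotational latency: {avg_rot_latency_ms:.2f} ms (same on every track)")
print(f"outer/inner sequential throughput: ~{throughput_ratio:.1f}x")
```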

On the other hand, with today's caching, and with read-ahead in products like SQL Server, a long outer-track read with no track-to-track movement might negate those factors.

Is this true? Is there a reason to keep space free on a modern hard disk system? Is it different for Windows than for *nix?

Best Answer

In my experience, worrying about outer track versus inner track is no longer worth the effort. The performance difference is just too small when weighed against other performance-impacting factors (RAID, caching, filesystem fragmentation, etc.).

However, to answer your question directly: there is definitely still a reason to keep a decent amount of free space on a modern hard disk, especially a rotational (non-SSD) one, and that reason is file fragmentation and seek time. When there is plenty of free space, files can be written contiguously and read back without extra seeks. That lets a file be retrieved much faster than if the disk head has to seek all over the platter to pick up little chunks of it.
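
As a rough way to see this on Linux, you can count a file's extents with filefrag (part of e2fsprogs); a file stored in a single extent is fully contiguous, while a high count means many head seeks per read. A minimal sketch, assuming filefrag is installed and the files are readable:

```python
import subprocess
import sys

def extent_count(path: str) -> int:
    """Return the number of extents filefrag reports for path."""
    # filefrag ships with e2fsprogs; some systems need root to run it.
    out = subprocess.run(
        ["filefrag", path], capture_output=True, text=True, check=True
    ).stdout
    # Output looks like: "/var/db/data.ibd: 3 extents found"
    return int(out.rsplit(":", 1)[1].split()[0])

if __name__ == "__main__":
    for p in sys.argv[1:]:
        print(f"{p}: {extent_count(p)} extent(s)")
```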

This article/blog post is aimed more at file fragmentation than at raw disk performance, but it offers one of the better explanations I've found of file fragmentation and why available free space affects it: Why doesn't Linux need defragmenting?

The more a disk fills up, the more files (especially large ones) become fragmented and slow to read. This is also why the ext family of Linux filesystems reserves a percentage of space (5% by default, adjustable with tune2fs -m) that only root can use. That reserve is useful in emergencies (a regular user can't completely fill the disk and cause problems), but it is primarily intended to reduce fragmentation as the disk fills up. When dealing with very large files, as is common with databases, the problem can be reduced by pre-allocating your data files, assuming the database (or other application) supports it; see the sketch below.
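
As an illustration of that pre-allocation, here is a minimal sketch using Python's os.posix_fallocate (Linux, Python 3.3+); the path and size are made-up examples. Filesystems such as ext4 and XFS honor the call with a real block reservation, which gives them the best chance to pick one contiguous run:

```python
import os

# Hypothetical data file; in practice this would be your database's file.
path = "/var/lib/myapp/data.bin"
size = 1 * 1024**3  # pre-size to 1 GiB up front

fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o600)
try:
    # Ask the filesystem to allocate all the blocks now, rather than
    # letting the file grow (and fragment) one write at a time.
    os.posix_fallocate(fd, 0, size)
finally:
    os.close(fd)
```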

In these days of very large and relatively inexpensive disks, there is rarely a valid justification for letting a filesystem reach capacity. This is even more true in situations where performance matters.
