Linux – How long does it take to fsck a volume

capacity-planning, fsck, hard-drive, kernel, linux

We are running a file-sharing website that is currently serving 3-5 million page views. It contains 250,000 files and a few thousand symbolic links.

The hard disk is a 1500 GB SATA disk.

Using hdparm we found that the disk's read speed has dropped to 15-20 MB/s, down from 80 MB/s.
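(For reference, the figure came from hdparm's timed read tests; /dev/sda below is a placeholder for our actual disk device.)

    # Unbuffered sequential read test - this is the number that dropped to 15-20 MB/s
    hdparm -t /dev/sda

    # Cached read test, for comparison
    hdparm -T /dev/sda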

So now we want to run fsck to fix the disk problem.

  1. Will fsck solve this issue?
  2. How long will fsck take to complete? (We want to estimate the downtime we are going to have.)

Best Answer

The speed degradation is to be expected as the number of files being accessed simultaneously increases. Hard disk drives do not handle parallel access well: every time the read/write head has to move to another cylinder you lose several milliseconds to the seek, and even if two files are on the same cylinder, or the same track, you may still have to wait for a rotation to move from one to the other. If you measure drive throughput in megabytes per second, expect it to drop sharply as parallel access increases.
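One way to confirm that the drive is seek-bound rather than failing is to watch it under load with iostat (from the sysstat package): high await and %util values at low MB/s point to seek contention rather than a damaged filesystem. A minimal sketch:

    # Extended per-device statistics, refreshed every 5 seconds.
    # Watch await (average I/O wait in ms) and %util while the site is busy.
    iostat -x 5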

fsck will not help with this: it only repairs damage to the filesystem structure; it does not perform any optimization.
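If you still want to run a check, you can get a rough estimate of the downtime by timing a read-only pass first. A minimal sketch, assuming an ext2/3/4 filesystem on /dev/sdb1 (a placeholder device) that is unmounted, or at least mounted read-only, while you test:

    # Read-only check: reports problems but makes no repairs, so the elapsed
    # time is a reasonable estimate for how long a real fsck would take.
    time fsck -n /dev/sdb1

    # For ext2/3/4, tune2fs reports when the filesystem was last checked:
    tune2fs -l /dev/sdb1 | grep -i 'last checked'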

The ideal solution would be switching to solid-state storage, since it has none of the seek and rotational latencies of spinning platters. But that is probably cost-prohibitive.

The next best option would be a RAID array optimized for parallel access. Keep in mind that RAID can be configured for many different performance profiles, so you will need to spend some time learning the settings of whatever RAID hardware and drivers you end up with.
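As one possible direction, a striped-and-mirrored array (RAID 10) spreads reads across several spindles. A minimal sketch using Linux software RAID via mdadm, with /dev/sdb through /dev/sde as placeholder member disks and /srv/files as a hypothetical mount point:

    # Create a four-disk RAID 10 array, put a filesystem on it and mount it.
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.ext4 /dev/md0
    mount /dev/md0 /srv/files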

You may be able to reduce the problem with aggressive filesystem caching. If your system has sufficient RAM, Linux should already be doing this fairly well. Run a program like top to see how much memory is free or being used for cache. But if the most commonly used files do not fit in RAM (or in any RAM you are likely to acquire), this won't really help.
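A quick way to see how much memory is free and how much is already being used as page cache (top shows the same information, free just summarises it):

    # The "buffers" and "cached" (or "buff/cache") figures show memory
    # already being used to cache filesystem data.
    free -m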

A poor man's workaround would be to split your files across several different physical hard drives (not just different partitions on the same drive). That is not really a long-term scalable solution and would end up costing you more than a decent RAID, but it might be a quick fix if you have drives lying around, as in the sketch below.
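A minimal sketch of that workaround, assuming a second disk at /dev/sdb1 and a hypothetical directory layout under /srv/files:

    # Move part of the file tree to a second physical disk and leave a symlink behind.
    mkfs.ext4 /dev/sdb1
    mkdir -p /srv/files2
    mount /dev/sdb1 /srv/files2
    rsync -a /srv/files/archive/ /srv/files2/archive/
    rm -rf /srv/files/archive
    ln -s /srv/files2/archive /srv/files/archive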

For any solution involving hard disk drives, make sure they have a fast rotation speed and low seek latency.

I have written an article with some general background on hard-drive performance here:

UNIX Tips - Filesystems