How to maintain file server integrity without going offline with chkdsk

filesystems  windows-server-2008  windows-server-2008-r2

I'm just wondering how folks handle ongoing file system stability when using a Windows Server as a file server without taking the system offline to perform chkdsk /f or chkdsk /r? Obviously, one doesn't really want a file server to be unavailable…and file servers now have so much storage that it could take days to run a chkdsk…so how are you protecting data from corruption?

Best Answer

Microsoft has published prescriptive guidance for improving performance and minimizing downtime when running chkdsk:

NTFS Chkdsk Best Practices and Performance
https://www.microsoft.com/downloads/en/details.aspx?FamilyID=35a658cb-5dc7-4c46-b54c-8f3089ac097a

Of particular note:

  • Volume size has no effect on performance.

  • For volumes with large numbers of files (hundreds of millions/billions), the performance increase of utilizing more memory for chkdsk is dramatic.

  • Windows Server 2008 R2 chkdsk performs between two and five times faster than Windows Server 2008. Windows Server 2003 was apparently so bad that they were too embarrassed to publish its numbers.

  • You should proactively check whether your volumes are marked dirty before a scheduled restart. This helps you avoid an unexpected multi-hour delay at startup.
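
That dirty-bit check is easy to script. A minimal sketch using the standard `fsutil` and `chkntfs` utilities (the drive letter `C:` is just an example; run from an elevated prompt):

```shell
:: Query the NTFS dirty bit on a volume. If it reports dirty, autochk
:: will run a full chkdsk at the next boot unless you intervene.
fsutil dirty query C:

:: chkntfs also reports whether the volume is dirty and whether a
:: check is scheduled for the next restart.
chkntfs C:
```

Running this as part of your pre-reboot checklist lets you move a dirty volume's reboot into a maintenance window instead of discovering the problem mid-restart.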

Not in the document, but highly recommended: using a multi-purpose server to serve hundreds of millions of files increases the probability that a crash will occur and a volume will be marked dirty, so take measures to reduce the chance of a crash. For example, don't use the file server as a print server (printer drivers have a long, notorious history of blue screens), and be wary of "file archiving software". A backup power source with extended runtime is also highly recommended.