Honestly, I'd hold off on ext4 right now for production use.
There are other options if you're running into real performance problems with the filesystem (and I can understand that situation; at my last job we had performance limitations in an application caused by ext3). Depending on your distribution, you might be able to use jfs, xfs, or reiserfs. All three generally outperform ext3 in different ways, and all three are far more tested and stable than ext4 right now.
So, my recommendation comes in several parts. First, investigate thoroughly to make sure you're optimizing in the right place. Test your application on different filesystems and confirm the performance improvement is large enough to justify a filesystem change.
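As a starting point, here's a minimal sketch of the kind of synchronous-write timing you could run on each candidate filesystem. The directory, file count, and file size below are placeholders I made up, not from any real benchmark; tune them to match your application's actual I/O pattern.

```python
import os
import time

# Minimal I/O timing sketch -- run once per candidate filesystem
# (mount each one at a different path). TARGET_DIR and the sizes
# here are placeholders; match them to your real workload.
TARGET_DIR = "/mnt/fs-under-test"
FILE_COUNT = 1000
FILE_SIZE = 64 * 1024  # 64 KiB per file

def timed_write_test(directory, count, size):
    payload = os.urandom(size)
    start = time.time()
    for i in range(count):
        path = os.path.join(directory, f"bench_{i}.dat")
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            # Force the write to disk so we measure the filesystem,
            # not the page cache.
            os.fsync(f.fileno())
    return time.time() - start

elapsed = timed_write_test(TARGET_DIR, FILE_COUNT, FILE_SIZE)
print(f"{FILE_COUNT} x {FILE_SIZE} B synchronous writes: {elapsed:.2f}s")
```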
Also, depending on your application, adding more RAM might improve performance. By default, Linux uses any RAM not committed to applications as disk cache, so a few GB of "unused" RAM can make a significant performance difference on boxes with heavy disk activity.
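You can see how much of that "unused" RAM the kernel is actually using as cache by reading /proc/meminfo; a quick sketch:

```python
# How much RAM is Linux using as disk cache? Read it straight
# from /proc/meminfo (values there are reported in kB).
def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # kB
    return info

m = meminfo()
print(f"Total RAM:  {m['MemTotal'] / 1024:.0f} MiB")
print(f"Page cache: {m['Cached'] / 1024:.0f} MiB")
print(f"Buffers:    {m['Buffers'] / 1024:.0f} MiB")
```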
Finally, what's your timeline requirement here? If ext3 wasn't cutting it and I had to build a machine with a different filesystem today, I'd probably use xfs or jfs. If I could push it off for 6-8 months, I'd probably wait and see how ext4 has shaped up.
Server 2008 can use all of the available RAM in a system as file cache to increase performance. Under 2003 the cache had a default size limit, with the maximum governed by the SystemCacheDirtyPageThreshold registry value. 2008 has a completely different management scheme, and because of that there is an optional service (Microsoft Windows Dynamic Cache Service) you can use to manage the cache size. The service will also run on 2003, but frankly cache size is rarely a problem there.
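If you want to check whether that threshold has been set on a 2003 box, something like this works. The key path below is the usual Memory Management key as I remember it; verify it against Microsoft's KB documentation before relying on it.

```python
import winreg

# Check whether SystemCacheDirtyPageThreshold has been set.
# Key path is from memory -- confirm it against the KB article.
KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as k:
        value, _ = winreg.QueryValueEx(k, "SystemCacheDirtyPageThreshold")
        print(f"SystemCacheDirtyPageThreshold = {value}")
except FileNotFoundError:
    print("SystemCacheDirtyPageThreshold is not set (default behavior).")
```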
More RAM will almost always help, but the performance counter to watch is hard page faults; a larger cache is what drives that number toward zero.
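One quick way to sample it is with the built-in typeperf tool. Using \Memory\Pages Input/sec as the proxy for hard page faults (pages that had to be read from disk) is my assumption here, not something from the original answer; on a well-cached box it should sit near zero.

```python
import subprocess

# Sample the hard-fault rate: 10 one-second samples of
# \Memory\Pages Input/sec via the built-in typeperf tool.
subprocess.run(
    ["typeperf", r"\Memory\Pages Input/sec", "-sc", "10"],
    check=True,
)
```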
Microsoft has published prescriptive guidance for improving performance and minimizing downtime when running chkdsk:
NTFS Chkdsk Best Practices and Performance
https://www.microsoft.com/downloads/en/details.aspx?FamilyID=35a658cb-5dc7-4c46-b54c-8f3089ac097a
Of particular note:
Volume size has no effect on performance.
For volumes with large numbers of files (hundreds of millions/billions), the performance increase of utilizing more memory for chkdsk is dramatic.
Windows 2008 R2 chkdsk runs between two and five times as fast as the Windows 2008 version. Windows 2003 was so bad they were probably too embarrassed to publish statistics.
You should proactively check whether the volume(s) are dirty before a scheduled restart. This helps mitigate the risk of an unexpected multi-hour startup delay.
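A sketch of how you might script that check using the standard fsutil command; the drive letters are placeholders for your own volumes, and it needs to run elevated:

```python
import subprocess

# Query each volume's dirty bit before a scheduled restart.
# "fsutil dirty query X:" prints whether the volume is dirty.
VOLUMES = ["C:", "D:", "E:"]  # placeholders -- list your own volumes

for vol in VOLUMES:
    result = subprocess.run(
        ["fsutil", "dirty", "query", vol],
        capture_output=True, text=True,
    )
    print(result.stdout.strip())
```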
Not in the document, but highly recommended: using a multi-purpose server to serve hundreds of millions of files increases the probability that a crash will occur and a volume will be marked dirty, so take measures to reduce the chance of a crash. For example, don't use the file server as a print server (printer drivers have a long and notorious history in blue screen land), and be wary of "file archiving software". A backup power source with extended runtime is also highly recommended.