- The problem is not so much the file system (NTFS is actually fine with a few hundred thousand files) - it is all the tools around it. Don't even DARE to open such a directory in Windows Explorer, and shell scripts will take ages if a single directory returns 2 million entries.
You are better off with a folder hierarchy:
- Give every file a 16-byte hex code.
- Build folder and file names from 4-character segments of that code, so a file might live at affc/2548/2224/... etc.
This keeps individual directories short, AND you may be able to place mount points at a segment boundary (though a 4-character level already means up to 65,536 subdirectories, which is too wide for that).
Do not forget, too, that you will possibly need to back up and restore all of this.
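The sharding scheme above can be sketched in a few lines (a minimal Python sketch; the 4-character segment width follows the suggestion above, while the hash function, segment count, and function name are my own illustrative choices):

```python
import hashlib
from pathlib import Path

def shard_path(root: str, name: str, segments: int = 3, width: int = 4) -> Path:
    """Map an arbitrary file name to a sharded path like root/affc/2548/2224/<id>.

    A hex digest plays the role of the file's hex code: the first few
    4-character slices become directory names, and the full digest is
    used as the stored file name.
    """
    digest = hashlib.sha256(name.encode("utf-8")).hexdigest()
    parts = [digest[i * width:(i + 1) * width] for i in range(segments)]
    return Path(root).joinpath(*parts, digest)

# Each directory level holds at most 16**4 = 65536 entries, so even
# millions of files never crowd a single directory.
print(shard_path("/data", "invoice-2012-04.pdf"))
```

The mapping is deterministic, so you never need a lookup table: given the original name, you can always recompute where the file lives.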
ZFS on Linux is unfortunately still not a viable solution, even if you dismiss the issue of it being a FUSE module (which can seriously cramp performance on certain workloads). It simply isn't complete enough. Also, I don't think there's a debugfs equivalent for ZFS on Linux, which is a serious negative.
debugfs is the traditional name for a low-level filesystem repair tool on Unices. e2fsprogs includes one for ext2/3/4, the XFS tools have xfs_db, and so on. Other filesystems, especially longer-established ones like FFS and JFS, have such tools too. It's basically a tool that allows you to read and manipulate the data on a volume at a much lower level, which is especially useful in recovery.
As for ext4, I'd suspect it's fairly usable in production, but I'd recommend actually simulating your workload on it first. Be wary of unsafe code paths in various applications that can corrupt data depending on ext4's settings (mind you, AFAIK those issues can happen on XFS and JFS as well).
XFS is still a good, stable solution, though I'll admit I moved from XFS to ext4 because of XFS's lackluster create/unlink performance. It remains a very good choice if you don't have many small files being constantly created and deleted; hard numbers can be found in most benchmarks on the net. The slowdown is related to particular XFS optimizations that make certain journal operations (create/unlink) quite slow. It's very fast in metadata access and plain read/write, though. A good choice for big files, IMHO (multimedia editing?).
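If you want your own hard numbers for the create/unlink pattern mentioned above, a crude microbenchmark takes minutes to write (a minimal Python sketch; point it at a directory on the filesystem under test - the temp directory and file count here are just placeholders, and it measures pure metadata churn, not a realistic mixed workload):

```python
import os
import tempfile
import time

def bench_create_unlink(directory: str, count: int = 1000) -> float:
    """Create and immediately unlink `count` empty files; return seconds taken."""
    start = time.perf_counter()
    for i in range(count):
        path = os.path.join(directory, f"bench-{i}.tmp")
        with open(path, "wb"):
            pass          # create an empty file
        os.unlink(path)   # and delete it again
    return time.perf_counter() - start

# Point the directory at the filesystem you actually want to compare.
with tempfile.TemporaryDirectory() as d:
    elapsed = bench_create_unlink(d, count=1000)
    print(f"1000 create/unlink pairs took {elapsed:.3f}s")
```

Run it once per candidate filesystem (same hardware, same mount options) and compare; journal behaviour differences between XFS and ext4 tend to show up clearly on exactly this kind of loop.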
Haven't really tested JFS, though I've heard rather good opinions about it - just check first whether it has a debugfs-style tool that you feel you can rely on.
Best Answer
If you're at all unsure (and it sounds like you are), stick with the older stuff you know.
That doesn't just apply to filesystems, either. Production equals solid: if you have to ask whether something is ready for production, you're not ready to use it in production, and that's what matters.
Build a lab and test the new filesystem there while you deploy ext3 in your production environment.