How much do you value the data?
Seriously, each filesystem has its own tradeoffs. Before I go much further, I am a big fan of XFS and Reiser both, although I often run Ext3. So there isn't a real filesystem bias at work here, just letting you know...
If the filesystem is little more than a container for you, then go with whatever provides you with the best access times.
If the data is of any significant value, you will want to avoid XFS. Why? Because if it cannot recover a journaled portion of a file, it zeroes out the blocks, making the data unrecoverable. This issue is fixed in Linux kernel 2.6.22.
ReiserFS is a great filesystem, provided that it never crashes hard. The journal recovery works fine, but if for some reason you lose your partition info, or the core blocks of the filesystem are blown away, you may have a quandary if there are multiple ReiserFS partitions on a disk - because the recovery mechanism basically scans the entire disk, sector by sector, looking for what it "thinks" is the start of the filesystem. If you have three partitions with ReiserFS but only one is blown, you can imagine the chaos this will cause as the recovery process stitches together a Frankenstein mess from the other two...
Ext3 is "slow", in a "I have 32,000 files in one directory and it takes ages to list them all with ls" kinda way. If you're going to have thousands of small temporary tables everywhere, you will have a wee bit of grief. Newer versions now include an index option that dramatically cuts down the directory traversal, but it can still be painful.
I've never used JFS. I can only comment that every review of it I've ever read has been something along the lines of "solid, but not the fastest kid on the block". It may merit investigation.
Enough of the Cons, let's look at the Pros:
XFS:
- screams with enormous files, fast recovery time
- very fast directory search
- Primitives for freezing and unfreezing the filesystem for dumping
ReiserFS:
- Highly optimal small-file access
- Packs several small files into same blocks, conserving filesystem space
- fast recovery, rivals XFS recovery times
Ext3:
- Tried and true, based on well-tested Ext2 code
- Lots of tools around to work with it
- Can be re-mounted as Ext2 in a pinch for recovery
- Can be both shrunk and expanded (other filesystems can only be expanded)
- Newest versions can be expanded "live" (if you're that daring)
So you see, each has its own quirks. The question is, which is the least quirky for you?
That limit is per-directory, not for the whole filesystem, so you could work around it by further sub-dividing things. For instance instead of having all the user subdirectories in the same directory split them per the first two characters of the name so you have something like:
top_level_dir
|---aa
|   |---aardvark1
|   |---aardvark2
|---da
|   |---dan
|   |---david
|---do
|   |---don
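As a sketch, the bucketing above is just "take the first two characters of the name as the sub-directory". A minimal Python illustration (function names here are my own, not from any particular tool):

```python
import os

def bucket_for(username):
    # Use the first two characters of the name as the sub-directory.
    return username[:2].lower()

def user_path(top_level_dir, username):
    # e.g. "aardvark1" lands in top_level_dir/aa/aardvark1
    return os.path.join(top_level_dir, bucket_for(username), username)

print(user_path("top_level_dir", "aardvark1"))
```

Each bucket then holds only the names sharing that prefix, so no single directory has to hold every user.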
Even better would be to create some form of hash of the names and use that for the division. This way you'll get a better spread amongst the directories instead of, with the initial-letters example, "da" being very full and "zz" completely empty. For instance, if you take the CRC or MD5 of the name and use the first 8 bits, you'll get something like:
top_level_dir
|---00
|   |---some_username
|   |---some_username
|---01
|   |---some_username
...
|---FF
|   |---some_username
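The hashed variant can be sketched the same way: take the first byte (8 bits) of an MD5 digest and render it as two hex characters, giving 256 evenly-spread buckets (again, the function names are just for illustration):

```python
import hashlib
import os

def hash_bucket(username):
    # First byte (8 bits) of the MD5 digest, as two hex characters: 00..ff.
    digest = hashlib.md5(username.encode("utf-8")).digest()
    return format(digest[0], "02x")

def user_path(top_level_dir, username):
    return os.path.join(top_level_dir, hash_bucket(username), username)
```

Since MD5 output is effectively uniform, the users scatter across all 256 buckets regardless of how their names are distributed alphabetically.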
This can be extended to further depths as needed, for instance like so if splitting on the username rather than a hash value:
top_level_dir
|---a
|   |---a
|   |   |---aardvark1
|   |   |---aardvark2
|---d
|   |---a
|   |   |---dan
|   |   |---david
|   |---o
|   |   |---don
This method is used in many places, like squid's cache (to copy Ludwig's example) and the local caches of web browsers.
One important thing to note is that with ext2/3 you will start to hit performance issues well before you get close to the 32,000 limit anyway, as directories are searched linearly. Moving to another filesystem (ext4 or reiser, for instance) will remove this inefficiency (reiser searches directories with a binary-split algorithm, so long directories are handled much more efficiently; ext4 may too) as well as the fixed limit per directory.
EXT4 has either a 64k limit or no limit, depending on which wiki you read (I assume earlier versions had the 64k limit and newer ones have no hard limit). It is still limited by the maximum number of links the directory index can contain, and that depends on the particular filesystem's attributes (block size, for example).
XFS, AFAIK, has no limit, and neither does Reiser4. Off the top of my head I can't recall the situation for other filesystems; VxFS is definitely very robust, and if it has a limit, it's very high (not sure how helpful this information is :-) ).