How much do you value the data?
Seriously, each filesystem has its own tradeoffs. Before I go much further, I am a big fan of XFS and Reiser both, although I often run Ext3. So there isn't a real filesystem bias at work here, just letting you know...
If the filesystem is little more than a container for you, then go with whatever provides you with the best access times.
If the data is of any significant value, you will want to avoid XFS. Why? Because if it can't recover a portion of a journaled file, it will zero out the affected blocks and make that data unrecoverable. This issue was fixed in Linux kernel 2.6.22.
ReiserFS is a great filesystem, provided that it never crashes hard. The journal recovery works fine, but if for some reason you lose your partition info, or the core blocks of the filesystem are blown away, you may have a quandary if there are multiple ReiserFS partitions on a disk, because the recovery mechanism basically scans the entire disk, sector by sector, looking for what it "thinks" is the start of the filesystem. If you have three ReiserFS partitions but only one is blown, you can imagine the chaos this will cause as the recovery process stitches together a Frankenstein mess from the other two...
Ext3 is "slow", in an "I have 32,000 files in a directory and it takes ages for ls to list them all" kind of way. If you're going to have thousands of small temporary tables everywhere, you will have a wee bit of grief. Newer versions include an index option (dir_index) that dramatically cuts down directory traversal time, but it can still be painful.
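The index option mentioned above is ext3's dir_index feature (hashed b-tree directories). As a sketch, using a throwaway image file in place of a real partition (so no root access is needed), it can be switched on with tune2fs; directories that already exist need an e2fsck -D pass to get re-indexed:

```shell
# Build a small ext3 image as a stand-in for a real (unmounted) partition
dd if=/dev/zero of=test.img bs=1M count=16 2>/dev/null
mkfs.ext3 -q -F test.img

# Enable hashed directory indexes (a no-op if mkfs already enabled it)
tune2fs -O dir_index test.img

# Rebuild existing directories with the new index; e2fsck exits 1 when it
# modified the filesystem, which is expected here
e2fsck -fD test.img || [ $? -le 1 ]
```

On a real system you'd point these at the device node instead of an image file, with the filesystem unmounted.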
I've never used JFS. I can only comment that every review of it I've ever read has been something along the lines of "solid, but not the fastest kid on the block". It may merit investigation.
Enough of the Cons, let's look at the Pros:
XFS:
- Screams with enormous files; fast recovery time
- Very fast directory search
- Primitives for freezing and unfreezing the filesystem for dumps
ReiserFS:
- Highly optimized small-file access
- Packs several small files into the same blocks, conserving filesystem space
- Fast recovery; rivals XFS recovery times
Ext3:
- Tried and true, based on well-tested Ext2 code
- Lots of tools around to work with it
- Can be re-mounted as Ext2 in a pinch for recovery
- Can be both shrunk and expanded (XFS and JFS, for example, can only be expanded)
- Newest versions can be expanded "live" (if you're that daring)
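To illustrate the shrink/expand point, here's a sketch using an ext3 image file (the file name and sizes are arbitrary). Note that resize2fs insists on a clean e2fsck -f pass before each resize, and shrinking has to happen offline:

```shell
# Create a 16 MiB ext3 image as a stand-in for a real (unmounted) partition
dd if=/dev/zero of=fs.img bs=1M count=16 2>/dev/null
mkfs.ext3 -q -F fs.img

# resize2fs requires a forced filesystem check before it will resize
e2fsck -pf fs.img

# Shrink to 8 MiB, check again, then grow back to 16 MiB -- all offline here
resize2fs fs.img 8M
e2fsck -pf fs.img
resize2fs fs.img 16M
```

Growing "live" (while mounted) is the newer trick mentioned above; shrinking always needs the filesystem unmounted.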
So you see, each has its own quirks. The question is, which is the least quirky for you?
First and foremost, modern SSDs, especially the kind I'd use for 'enterprise' workloads, have sufficient wear-leveling built in that even a poorly behaved filesystem won't seriously degrade the lifespan of the drive itself. Even filesystems that hammer the same blocks over and over for metadata operations or the journal won't wear the drive out, since the controller is smart enough to remap that logical hot block onto different physical blocks as the drive ages.
A filesystem that is good for maximum SSD lifespan is one that causes a minimum of write-I/O overhead when writing storage blocks. That overhead generally comes from metadata and journal operations. This is not unique to SSDs, though; the same write-amplification effects hurt rotational media as well.
Where true solid-state-oriented filesystems, such as LogFS, come into their own is when they're managing storage that doesn't have wear leveling built in. If you're building storage based on CompactFlash or SD cards, these filesystems will indeed perform in software the wear leveling that modern SSDs do internally. Embedded devices will probably use these filesystems far more often than end users or server admins will.
If you have a real SSD on your hands, it still pays dividends to ensure your legacy, rotational-media-oriented filesystem aligns its block boundaries with the drive's logical block boundaries. This prevents write amplification from misaligned writes, which improves both the performance and the lifespan of the device.
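A quick sanity check for alignment can be done with plain arithmetic. This is a sketch: on a real system the start sector would come from sysfs (something like /sys/block/sda/sda1/start), and the 4 KiB page size is an assumption about the drive:

```shell
# Partition start, in 512-byte sectors. Hypothetical value; on a real system:
#   start=$(cat /sys/block/sda/sda1/start)
# Modern partitioners default to 2048 (1 MiB); old DOS-style tools used 63,
# which is the classic misaligned case.
start=2048

# 4 KiB pages = 8 sectors; a start sector divisible by 8 is page-aligned
if [ $((start % 8)) -eq 0 ]; then
    echo "partition start is 4 KiB aligned"
else
    echo "partition start is misaligned -- expect write amplification"
fi
```

The same check generalizes to any erase-block size you care about: just swap the divisor.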
Even on SSDs I still like XFS for my filesystem. But EXT4 looks promising for other workloads. I'm far more confident that fiddling XFS to do block-aligned writes will give me both lifespan and performance than I am confident that experimental file-systems like LogFS will survive the test of time.
Best Answer
Having separate partitions can help if you use different filesystem/mount configurations for different partitions based on usage. Some filesystems, like NILFS, are optimised for SSDs. So, yes, they do matter. If you have sufficient RAM, you can mount /var/log on tmpfs to reduce pressure on the SSD. Options like noatime can certainly be used to reduce disk updates. If you do not have sufficient RAM, you can still mount it and use the SSD as a swap device as well. It has also been strongly suggested to use the noop I/O scheduler instead of the default CFQ on Linux, which should reduce CPU usage.
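As a sketch of those suggestions in one place (device names and the tmpfs size here are hypothetical; adjust for your system):

```
# /etc/fstab -- keep /var/log in RAM and skip atime updates on the SSD
tmpfs      /var/log   tmpfs  defaults,noatime,size=64m  0 0
/dev/sda2  /          ext4   noatime                    0 1

# Select the noop elevator for the SSD (runtime setting, needs root,
# and does not persist across reboots)
echo noop > /sys/block/sda/queue/scheduler
```

To make the scheduler choice permanent you'd typically set it via a boot parameter (elevator=noop) or your distribution's equivalent.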
Also, make sure to use Linux 2.6.33 if possible, since it supports ATA TRIM, which should improve the life of the device significantly. If 2.6.33 is not available, you can look at backporting the TRIM patches.
EDIT: Since you have mentioned embedded, I presume memory will be scarce for tmpfs. In that situation I suggest using a swap partition on the SSD, or, if that is also not possible, a swap file (as backing store) will do.