But assuming you have more than enough RAM, I think the page file should be disabled on an SSD to extend its lifetime. I know you would lose the core dump on a crash, but not many people need that information.
This sounds rather like premature optimisation. You haven't said which SSDs you plan on using, and without actually looking at your server workload and your planned SSD's datasheet, you can't know what effect a page file will have on the lifespan of your SSD.
There is also a large volume of misinformation, both on the greater Internet and here on Server Fault, about SSDs suffering from poor lifespans. Early-model SSDs may well have had issues, and USB flash drives definitely degrade, but enterprise-class SSDs have much better wear-levelling algorithms, and some use spare flash to improve both performance and wear.
Intel's X25-E drives, for example, claim a write endurance of 1 petabyte of random writes for the 32 GB model. If you saturated the write interface (200 MB/sec) nonstop with overwrites, my estimate is the drive would last about 58 days. But that means writing something like 17 TB of data per day to that one drive.
Typical server workload on the OS drive is going to be far, far less than that, even with a page file. Call it 50 GB per day. If the 1 PB figure is accurate (and I know it may be an average figure; more on that later), that's still somewhere north of 50 years.
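Those figures are easy to sanity-check. Here's a quick sketch of the arithmetic, using the 1 PB and 200 MB/sec numbers above (decimal units throughout):

```shell
# Claimed endurance and sustained write rate, from the spec discussion above:
endurance_bytes=1000000000000000      # 1 PB
write_rate=200000000                  # 200 MB/s

# Worst case: saturating the write interface nonstop, with overwrite.
seconds_to_wear=$((endurance_bytes / write_rate))
days_to_wear=$((seconds_to_wear / 86400))         # 57, i.e. about 58 days
echo "Days at full write saturation: $days_to_wear"

# Realistic OS-drive workload: call it 50 GB/day.
years_at_50gb=$((endurance_bytes / 50000000000 / 365))
echo "Years at 50 GB/day: $years_at_50gb"         # 54
```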
Those figures seem preposterously high, of course, so let's look at the actual figures Intel cites for expected drive longevity. Intel was happy to qualify its MLC (non-enterprise) drives to write 100 GB of data, every day, for five years. The standard understanding of SLC vs. MLC flash is that SLC lasts about 10x longer than MLC (the link above shows this on a graph as well).
The truth will be borne out by time, of course - we'll either start seeing drives fail early or we won't. But the numbers behind the drives add up to drive longevity not being a problem with decent quality SSDs at all.
If you're using an MLC SSD, then you're perhaps right to be worried. But bear in mind that if Intel is happy to rate the drive at 100 GB/day for five years, that's still fundamentally the same total write volume as 50 GB/day for ten years. And, back to my original point, you still need to know what kind of actual workload you're going to put on the drive.
Personally, I'd strongly advise against using an MLC SSD in a production server environment. If a decent SLC SSD is too expensive, stick with spinning disks for now.
(As an aside, if you do the numbers on, say, 100 GB per day for 50 years, which is the "SLC lasts 10x longer than MLC" rating, it looks like Intel is saying its 32 GB drive actually has a total write lifetime closer to 2 PB, not the 1 PB cited on the product specification. Even so, I only need to trust the smaller of those two values to be happy that my X25-E drives should last well north of 10 years.)
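For the aside above, the implied lifetime works out as follows (a rough sketch, again in decimal units):

```shell
# Intel's MLC qualification: 100 GB/day for 5 years.
# The "SLC lasts 10x longer" rule stretches that to 50 years.
gb_per_day=100
days=$((365 * 50))
lifetime_gb=$((gb_per_day * days))    # 1825000 GB, i.e. about 1.8 PB
echo "Implied SLC lifetime writes: $lifetime_gb GB"
```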
Having separate partitions can help if you use different filesystem/mount configurations for different partitions based on usage. Some filesystems, like NILFS, are optimised for SSDs. So, yes, partitions do matter.
If you have sufficient RAM, you can mount /var/log on tmpfs to reduce write pressure on the SSD. Mount options like noatime can certainly be used to reduce disk updates.
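As a sketch, the /etc/fstab entries for this might look like the following (the tmpfs size and device name are illustrative assumptions, and note that logs on tmpfs are lost on reboot):

```shell
# /etc/fstab -- illustrative entries
# Keep log churn in RAM (size is an example; logs vanish on reboot):
tmpfs      /var/log   tmpfs   defaults,noatime,size=64m   0 0
# Mount the SSD root with noatime to avoid a metadata write on every read:
/dev/sda1  /          ext4    defaults,noatime            0 1
```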
If you do not have sufficient RAM, you can still mount /var/log on tmpfs and use the SSD as a swap device as well, since tmpfs pages can be swapped out.
It has also been strongly suggested to use the noop I/O scheduler instead of the default CFQ on Linux, since an SSD has no seek penalty for CFQ to optimise around. This should also reduce CPU usage.
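Switching the scheduler can be done at runtime via sysfs; a sketch (sda is an assumed device name, and on newer multiqueue kernels the choices differ, e.g. none/mq-deadline):

```shell
# Show the available schedulers; the active one is shown in brackets:
cat /sys/block/sda/queue/scheduler
# Select noop for this boot (needs root):
echo noop > /sys/block/sda/queue/scheduler
# To make it persistent, add "elevator=noop" to the kernel boot parameters.
```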
Also make sure to use Linux 2.6.33 if possible, since it supports ATA TRIM, which should improve the life of the device significantly. If 2.6.33 is not available, you can look at backporting the TRIM patches.
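You can check whether the drive advertises TRIM support with hdparm (the device name is an assumption), and on a TRIM-capable kernel enable online discard via a mount option:

```shell
# Does the device report TRIM support? (needs root)
hdparm -I /dev/sda | grep -i "TRIM supported"
# Check the running kernel is at least 2.6.33:
uname -r
# If both check out, add the discard mount option in /etc/fstab, e.g.:
#   /dev/sda1  /  ext4  defaults,noatime,discard  0 1
```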
EDIT: Since you have mentioned this is embedded, I presume memory will be too scarce for tmpfs. In that situation I suggest using a swap partition on the SSD, or, if that is not possible either, a swap file (as backing store) will also do.
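A minimal sketch of setting up a swap file on the SSD (the path and size are illustrative, and the commands need root):

```shell
# Create a 512 MB swap file:
dd if=/dev/zero of=/swapfile bs=1M count=512
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# To persist across reboots, add to /etc/fstab:
#   /swapfile  none  swap  sw  0 0
```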
Best Answer
Your reasoning is correct, though you're missing the scale of the problem.
Enterprise SSDs are now being made with higher-endurance MLC cells and can tolerate very high write rates. SLC still blows high-endurance MLC out of the water, but in most cases the lifetime write endurance of HE-MLC exceeds what will be written during the expected operational lifetime of the SSD.
These days, endurance is being listed as "Lifetime Writes" on spec-sheets.
As an example of this, the Seagate 600 Pro SSD line lists lifetime-write endurance figures on its spec sheet. Given a 5-year operational life, to reach the listed endurance for the 100 GB model you would need to write 123 GB to that drive per day. That may be too little for you, which is why there are even higher-endurance drives on the market. Stec, an OEM provider for certain top-tier vendors, has drives listed for "10x full-drive writes for 5 years". These are all eMLC devices.
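Working backwards from those numbers, the implied spec-sheet endurance for the 100 GB model is roughly as follows (a sketch; decimal units):

```shell
gb_per_day=123                        # daily writes needed to hit the rating
days=$((365 * 5))                     # 5-year operational life
lifetime_gb=$((gb_per_day * days))    # 224475 GB, roughly 224 TB
echo "Implied lifetime writes for the 100 GB model: $lifetime_gb GB"
```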
Yes, R5 does incur write amplification. However, it doesn't matter under most use cases.
There is another issue here as well. SSDs can take writes (and reads) so fast that the I/O bottleneck moves to the RAID controller. This was already the case with spinning-metal drives, but it is thrown into stark relief when SSDs are involved. Parity computation is expensive, and you'll be hard-pressed to get full I/O performance out of an R5 LUN created with SSDs.
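The R5 write amplification mentioned above is the classic small-write penalty: each small host write turns into four disk I/Os, two of which actually write flash. A sketch of what that does to daily write volume, using the example 50 GB/day workload from earlier in the thread:

```shell
# Classic RAID-5 read-modify-write for a small random write:
#   read old data + read old parity + write new data + write new parity
ios_per_host_write=4
flash_writes_per_host_write=2         # new data + new parity hit flash
host_gb_per_day=50                    # example workload figure
flash_gb_per_day=$((host_gb_per_day * flash_writes_per_host_write))
echo "Disk I/Os per small host write: $ios_per_host_write"
echo "Effective flash writes: $flash_gb_per_day GB/day"   # 100
```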