People generally recommend CentOS or Debian for servers because they are conservative distributions, but in practice this doesn't mean a lot - especially for a humble web server, which is really just a network file server. Pretty much any Linux distro will be fine, so don't sweat too much over that.
What is more important is that the server is monitored (is the web server up? how's the disk space? etc.) and updated now and again with security patches.
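To make "monitored" concrete, here is a minimal sketch of the two checks mentioned above. This is my own illustration, not a recommendation of any particular tool; the threshold and the example.com URL are made-up placeholders, and a real setup would use something like Nagios or a cron job:

```python
import shutil
import urllib.request

def check_disk(path="/", min_free_fraction=0.10):
    """Return True if at least min_free_fraction of the filesystem is free."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total >= min_free_fraction

def check_http(url, timeout=5):
    """Return True if the web server answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    print("disk ok:", check_disk("/"))
    # check_http("http://example.com/") would probe a (hypothetical) server
```

Wire either check into cron or your monitoring system of choice; the point is only that *something* alerts you before the disk fills or the daemon dies.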
BTW: Linux has a very sophisticated memory caching mechanism on top of disk access. So, unless your web site is much bigger than available memory (unlikely), then the SSD won't make any difference at all.
So after many days of working on this, I was able to demonstrate that BtrFS does issue TRIM. I was unable to get TRIM working on the server that we will be deploying these SSDs to. However, when testing with the same drive plugged into a laptop, the tests succeed.
Hardware used for all of this testing:
- Crucial m4 SSD 512GB
- HP DL160se G6
- LSI LSISAS9200-8e HBA
- generic SAS enclosure
- Dell XPS m1210 laptop
After many failed attempts at verifying BtrFS on the server, I decided to try this same test using an old laptop (removing the RAID card layer from the picture). The initial attempts at this test using both Ext4 and BtrFS on the laptop failed (data was not TRIM'd).
I then upgraded the SSD drive firmware from version 0001 (as shipped out of the box) to version 0009. The tests were repeated with Ext4 and BtrFS and both filesystems successfully TRIM'd the data.
To ensure the TRIM command had time to run, I did a rm /mnt/testfile && sync && sleep 120 before performing validation.
One thing to note if you're attempting this same test: SSDs have erase blocks that they operate on (I don't know the size of the Crucial m4 erase blocks). When the file system sends the TRIM command to the drive, the drive will only erase a complete block; if the TRIM command is specified for a portion of a block, that block will not be TRIM'd due to the remaining valid data within the erase block.
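The partial-block behavior can be sketched with a little arithmetic: only erase blocks that a TRIM range covers *completely* can actually be erased. The 512 KiB erase-block size below is a made-up example value (as noted, I don't know the m4's real figure):

```python
def fully_covered_blocks(trim_start, trim_len, erase_block=512 * 1024):
    """Return the (first, last_exclusive) range of erase blocks that the
    TRIM range [trim_start, trim_start + trim_len) covers completely.
    Only these blocks can be erased; partially covered blocks still hold
    valid data and are left alone. Returns None if no block is covered."""
    # first erase block whose start lies at or after trim_start (ceil division)
    first = -(-trim_start // erase_block)
    # first erase block whose end lies past the end of the TRIM range
    last = (trim_start + trim_len) // erase_block
    return (first, last) if first < last else None

# A TRIM that starts mid-block leaves that first block untouched:
print(fully_covered_blocks(100 * 1024, 1024 * 1024))  # → (1, 2)
```

This is exactly why the head and tail of the test file below survive deletion: they share erase blocks with data outside the TRIM'd range.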
To demonstrate what I'm talking about, here is output from the sectors.pl script mentioned above, run against the test file on the SSD. Periods are sectors that contain only zeros; pluses are sectors with one or more non-zero bytes.
Test file on drive:
24600 .......................................+++++++++++
24650 ++++++++++++++++++++++++++++++++++++++++++++++++++
24700 ++++++++++++++++++++++++++++++++++++++++++++++++++
-- cut --
34750 ++++++++++++++++++++++++++++++++++++++++++++++++++
34800 ++++++++++++++++++++++++++++++++++++++++++++++++++
34850 +++++++++++++++++++++++++++++.....................
Test file deleted from drive (after a sync && sleep 120):
24600 .......................................+..........
24650 ..................................................
24700 ..................................................
-- cut --
34750 ..................................................
34800 ..................................................
34850 ......................+++++++.....................
It appears that the first and last sectors of the file fall within different erase blocks from the rest of the file, so some sectors were left untouched.
A takeaway from this: some Ext4 TRIM testing instructions ask the user to only verify that the first sector of the file was TRIM'd. The tester should view a larger portion of the test file to really see whether the TRIM was successful or not.
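For anyone repeating the test, the kind of scan sectors.pl performs can be approximated like this. This is my own sketch, not the original script: it reads 512-byte sectors and prints a '.' for all-zero sectors and a '+' otherwise, in rows of 50 like the output above:

```python
import sys

SECTOR = 512
ZERO = b"\x00" * SECTOR

def scan(path, start_sector, count, per_line=50):
    """Print one mark per sector: '.' if all zeros, '+' otherwise."""
    with open(path, "rb") as f:
        f.seek(start_sector * SECTOR)
        for base in range(start_sector, start_sector + count, per_line):
            marks = []
            for _ in range(min(per_line, start_sector + count - base)):
                data = f.read(SECTOR)
                if len(data) < SECTOR:  # ran past end of file/device
                    break
                marks.append("." if data == ZERO else "+")
            print(base, "".join(marks))

if __name__ == "__main__" and len(sys.argv) == 4:
    # e.g.: python3 scan.py /dev/sdb 24600 10300  (needs read access)
    scan(sys.argv[1], int(sys.argv[2]), int(sys.argv[3]))
```

Run it over the whole sector range the file occupied, not just the first sector, and it will show partial erase blocks like the ones at 24600 and 34850 above.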
Now to figure out why manually issued TRIM commands sent to the SSD through the RAID card work, but automatic TRIM commands do not...
Best Answer
The LSI/Broadcom 9207-8i supports TRIM only with IT (initiator-target) firmware, so you can flash the IT firmware (see the LSI/Broadcom downloads for this card) to gain TRIM support. However, IT firmware does not support RAID, so you will lose the RAID functionality.