I don't know of any RAID controller that supports TRIM commands.
As your Wikipedia link explains, the TRIM command provides a way for the file system to tell an SSD when a block of data is no longer needed, for example after a file has been deleted.
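On Linux, for instance, TRIM can be issued manually with `fstrim` or continuously via a mount option. The commands below are illustrative (the device name is a placeholder, and availability depends on your kernel and filesystem):

```shell
# Issue TRIM for all unused blocks on a mounted filesystem (run as root):
fstrim -v /mountpoint

# Or mount with continuous discard, so TRIM is sent as files are deleted
# (ext4 example; /dev/sdX1 is a placeholder for your device):
mount -o discard /dev/sdX1 /mountpoint
```

The `discard` option trades a little per-delete latency for not needing periodic `fstrim` runs.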
Life gets more complicated if you have a RAID layer between the file system and the SSDs. First, the RAID software (or firmware) needs to be updated to accept TRIM commands from the file system. Then the RAID layer has to figure out what to do with them. For RAID 1 (mirroring) this would be pretty straightforward: the RAID layer would just pass the TRIM commands through to the underlying SSDs.
For parity-based RAID, however, there's not much you can easily do with TRIM commands. Even when the file system is done with a block, you can't TRIM it, because RAID still needs the block's contents for parity calculations. The RAID layer could subtract the block from the corresponding parity block and then TRIM it, but that adds three extra I/O operations (read the data block, read the parity block, write the updated parity block) just to get an uncertain gain from issuing the TRIM command. I can't see how that would be worth it.
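To make the cost concrete, here is a minimal sketch (not any real RAID implementation) of why TRIMming a block under parity RAID forces a parity update first:

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks, as RAID 5 does for parity."""
    return bytes(x ^ y for x, y in zip(a, b))

def trim_block(data_block: bytes, parity_block: bytes):
    """
    To release data_block, the RAID layer must first "subtract" it
    from the parity (XOR is its own inverse), costing:
      1. read the data block
      2. read the parity block
      3. write the updated parity block
    ...and only then can the TRIM be passed down to the SSD.
    """
    new_parity = xor_blocks(parity_block, data_block)  # parity without this block
    trimmed = bytes(len(data_block))                   # block now reads as zeros
    return trimmed, new_parity

# Two data blocks and their parity:
d0, d1 = b"\xAA" * 4, b"\x0F" * 4
parity = xor_blocks(d0, d1)

# TRIM d0: the parity is updated so d1 can still be rebuilt from it.
t0, parity = trim_block(d0, parity)
assert xor_blocks(t0, parity) == d1  # reconstruction still works
```

The XOR here just stands in for the parity math; the point is the extra read-modify-write cycle every TRIM would trigger on a parity array.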
All in all, the SSD TRIM command is still quite new. Many SSDs don't support it, and I'm not even sure how many file systems have support for it. So it is likely to be a while before RAID systems start supporting it.
RAID 10 is better for databases. Even if your workload is read-heavy, caching and writes to the database itself will still impact the performance of your server.
I would go with the 6x SAN disks. Running SSDs in RAID 5 is just asking for trouble: you only have single-drive fault tolerance, and SSDs tend to fail suddenly, whereas spinning drives tend to give warning signs first.
Most likely either ext4 or XFS. Format it each way and test your workload.
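"Test your workload" ideally means replaying your actual workload, or using a benchmarking tool like fio. As a crude stand-in, a sketch like this times small synchronous writes on whichever filesystem a directory lives on, so you can compare an ext4 mount against an XFS one (the mount paths in the comment are hypothetical):

```python
import os
import tempfile
import time

def fsync_write_bench(directory: str, block: bytes = b"x" * 4096, count: int = 200) -> float:
    """Return average seconds per fsync'd 4 KiB write in `directory`."""
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        start = time.perf_counter()
        for _ in range(count):
            os.write(fd, block)
            os.fsync(fd)  # force each write to stable storage
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)
    return elapsed / count

# Example: compare two test mounts (paths are placeholders):
# print(fsync_write_bench("/mnt/ext4test"))
# print(fsync_write_bench("/mnt/xfstest"))
```

Synchronous small writes resemble a database commit pattern, but nothing beats measuring with the real application.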
If you don't give a single shit about availability, then fine. If you do, I'd reconsider this approach.
If the only server process running on this is mysql, there's not a ton of benefit to running it on a separate disk. If it is a server that runs apache and other processes as well, this makes a bit more sense. There will be slight performance gains by putting it on a separate physical disk, but I honestly would run the disks in RAID 1 ten times out of ten.
Seriously, though. If you care one bit about the users of the server, it's negligent to not run RAID. Think about it like this:
How frequently do you take backups? If it's daily, imagine that a disk dies right before the next backup window. How would your users react to losing a day's worth of work?
Now imagine that it takes you 4-6 hours to restore from backup, test, and bring everything back up. Now your users have lost a day's worth of work and haven't been able to use the server for the better part of the day.
Is it really worth the slight bit of extra performance? Probably not.
If you really want to separate your DB, get two more SSDs and run two RAID 1s or a single RAID 10.