My preferred layout would be:
2 disks for the OS in RAID1 (protected against an HDD failure)
2n disks for the RAID10 (protected against an HDD failure)
rest of the disks for RAID0 (no redundancy, maximum speed)
If you really insist on having a single system disk, it doesn't change the layout.
On (each of) the system disks I'd have a separate partition for the /boot filesystem. The rest of the disk(s) I'd make into an LVM physical volume. I like to have the OS filesystems that can fill up on separate logical volumes (/tmp, /var, /opt if something writes logs in there). That results in /boot (mirrored, no LVM) and /, /tmp, /var and possibly /opt, each on a separate logical volume on a mirrored disk.
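A minimal sketch of that layout, assuming the two system disks appear as /dev/sda and /dev/sdb; the sizes, array numbers and the volume group name (vg_os) are illustrative, not from the original answer:

    # Two partitions per system disk: a small one for /boot, the rest for LVM
    # (repeat the partitioning for /dev/sdb)
    parted -s /dev/sda mklabel gpt
    parted -s /dev/sda mkpart boot 1MiB 1GiB
    parted -s /dev/sda mkpart lvm 1GiB 100%

    # RAID1 mirrors: md0 carries /boot directly (no LVM), md1 becomes the LVM physical volume
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mkfs.ext4 /dev/md0

    # One logical volume per OS filesystem that can fill up
    pvcreate /dev/md1
    vgcreate vg_os /dev/md1
    lvcreate -L 20G -n root vg_os
    lvcreate -L 5G  -n tmp  vg_os
    lvcreate -L 20G -n var  vg_os
    lvcreate -L 10G -n opt  vg_os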
On each of the other disks I'd create a single partition of type Linux RAID and build the appropriate arrays (one RAID 10, one RAID 0). On each array I'd create a single partition of type Linux LVM and make two separate volume groups, one for redundant and one for non-redundant data. Then for each filesystem you plan to create, make a logical volume.
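A hedged sketch of the data-disk side, assuming four disks (/dev/sdc through /dev/sdf) go into the RAID 10 and two more (/dev/sdg, /dev/sdh) into the RAID 0; device names, array numbers and volume group names (vg_safe, vg_fast) are examples, and for brevity the physical volumes are created on the md devices directly rather than on a partition inside each array:

    # One Linux RAID partition per data disk (repeat for each disk)
    parted -s /dev/sdc mklabel gpt
    parted -s /dev/sdc mkpart raid 1MiB 100%

    # One RAID 10 for redundant data, one RAID 0 for scratch space
    mdadm --create /dev/md2 --level=10 --raid-devices=4 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
    mdadm --create /dev/md3 --level=0  --raid-devices=2 /dev/sdg1 /dev/sdh1

    # Separate volume groups for redundant and non-redundant data,
    # one logical volume per planned filesystem
    pvcreate /dev/md2 /dev/md3
    vgcreate vg_safe /dev/md2
    vgcreate vg_fast /dev/md3
    lvcreate -L 500G -n data    vg_safe
    lvcreate -L 200G -n scratch vg_fast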
In each volume group I recommend leaving some space unused, so that you can take an LVM snapshot and fsck a filesystem without bringing the server down. I would also disable the automatic fsck of all filesystems (tune2fs -i 0 -c 0 /device/name).
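The online check could look like this, assuming an ext4 filesystem on a hypothetical vg_safe/data volume:

    # Turn off the interval- and mount-count-based automatic fsck
    tune2fs -i 0 -c 0 /dev/vg_safe/data

    # Check the live filesystem via a snapshot, then drop the snapshot
    lvcreate --snapshot --size 10G --name data_snap /dev/vg_safe/data
    fsck.ext4 -f -n /dev/vg_safe/data_snap   # read-only check of the frozen copy
    lvremove -f /dev/vg_safe/data_snap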
Rationale
1) Mirroring of the OS disks.
Failure of the system HDD brings down the whole machine. Your data is protected, but production stops until you can get a replacement disk and reinstall / restore the OS. In a production environment it is usually cheaper to have one more disk installed.
2) Partitioning disks for RAID arrays.
All the servers I use have partition tables. You could use whole disks as RAID / LVM volumes, but then you end up with some machines that have partition tables (stuff on /dev/sdX1) and some that don't (stuff on /dev/sdX). When a failure forces a recovery under stress, I like to have one variable less in the environment.
3) LVM on the RAID arrays
LVM gives you two advantages: easy resizing of filesystems and the ability to fsck a filesystem without bringing the whole server down. Silent data corruption is possible and does happen; checking for it may save you a lot of excitement.
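For example, growing a filesystem online is a one-liner (the volume name is an example; it needs free extents in the volume group):

    # Extend the logical volume and resize the filesystem on it in one step
    lvextend --resizefs -L +50G /dev/vg_safe/data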
4) tune2fs -i 0 -c 0
Having a surprise fsck of a large filesystem after a reboot is a time-consuming and nerve-wracking affair. Disable it and do regular fscks of LVM snapshots instead, as sketched above.
A question:
/opt/backup is where you plan to keep backups of your production environment?
Don't.
Have the backups somewhere else, away from the machine. A malicious program, a mis-typed command (e.g. rm -rf / tmp/unimportant/file, note the stray space after the first /) or some water flooding the wrong place will leave you without your system and with no backups. If all else fails, use two external USB disks for backups; that is still better than a partition inside the same box.
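If you do end up with the external USB disks, even a plain rsync of the production data is enough; the device and paths below are examples only:

    # Mount the external disk, mirror the data onto it, unmount it again
    mount /dev/sdx1 /mnt/usb-backup
    rsync -a --delete /srv/production/ /mnt/usb-backup/production/
    umount /mnt/usb-backup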
Best Answer
Definitely XFS. XFS initialisation is much faster, its performance is excellent, and XFS has been used for multi-terabyte volumes for ages. I currently support 230 machines with 8 to 76 TB XFS volumes. Tens of them are built from two or more RAID volumes aggregated through LVM without problems, so this is safe enough.
xfs_check speed depends mostly on the number of files. For typical large volumes (30 TB), xfs_repair takes less than 15 minutes, provided the system has enough memory (older versions of xfs_repair tend to gobble tons of RAM), say 8 GB or more.
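For illustration, creating and checking such a volume is straightforward (the device name is an example):

    # XFS creation is quick even on multi-terabyte devices
    mkfs.xfs /dev/vg_safe/data

    # Repair runs on an unmounted filesystem; -n first only reports problems without touching the disk
    umount /dev/vg_safe/data
    xfs_repair -n /dev/vg_safe/data
    xfs_repair /dev/vg_safe/data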