/boot needs to not be encrypted, otherwise the boot loader (unless I'm behind the times and one of them supports encrypted volumes) will not be able to read the kernel and initrd. It does not need to be encrypted anyway, as it should never contain anything other than the kernel, the initrd, and perhaps a few other support files.
If the device that is your LVM PV is encrypted, then /boot will need to be elsewhere: probably a separate RAID volume. If the device used as the PV is not encrypted (instead you encrypted the LV that is to be /) then /boot could be in the LVM, except for the GRUB-can't-boot-off-all-RAID-types issue (see below).
Historically /boot had to be near the start of the disk, but modern boot loaders generally remove this requirement. A few hundred MB should be perfectly sufficient, but with such large drives being standard these days there is no harm in making it bigger just in case, unless you are constrained by trying to fit into a very small device (say, a small SD card in a Pi or similar) as might be the case for an embedded system.
Most boot loaders do not support booting off RAID, or if they do they only support booting off RAID1 (where every drive has a copy of all the data) "by accident", so create the small partition on all the drives and use a RAID1 array over them. This way /boot is readable as long as at least one drive is in a working state. Make sure the boot loader installs into the MBR of all four drives on install, otherwise if your BIOS boots off another drive (due to the first being offline, for instance) you will have to mess around getting the loader's MBR onto the other drive(s) at that point rather than it already being there.
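As a rough sketch of the above (the device names /dev/sd[a-d] and partition numbers are assumptions; adapt them to your layout):

```shell
# Hypothetical 4-drive layout: sda1..sdd1 are small partitions for /boot.
# Mirror them with RAID1 so any single surviving drive can still boot.
mdadm --create /dev/md0 --level=1 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

mkfs.ext4 /dev/md0
mount /dev/md0 /boot

# Install the boot loader into the MBR of every drive, not just the first,
# so the BIOS can fall back to another disk if one is offline.
for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    grub-install "$disk"
done
```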
Update: As per Nick's comment below, modern boot loaders can deal directly with some forms of encrypted volumes, so depending on your target setup there are now fewer things to worry about.
There are up to 3 levels of alignment you need to keep in mind: 1) the volume manager, 2) volume partitioning, 3) the file system. If you are not using LVM then 1 is irrelevant. If you are not partitioning your volumes with fdisk, then 2 is irrelevant as well. The most important alignment for performance is 3. With proper alignment you may see up to a 15% boost in performance.
For cases 1 and 2 a good general rule is to align to megabyte boundaries.
1). LVM usually does a good job by a) placing its metadata at the end of the volume and b) giving you the option of specifying the metadata size (for example "pvcreate -M2 --metadatasize 2048K --metadatacopies 2").
2). If you need to partition any of these volumes with fdisk then, again, try to stick to MB boundaries. Modern Linux fdisk versions have this option, as do recent versions of gparted.
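A minimal sketch of MB-boundary partitioning with parted (device name /dev/md1 is a placeholder; specifying offsets in MiB makes parted align the partition start for you):

```shell
# Create one partition starting at the 1 MiB boundary and using the rest
# of the device; MiB units give parted a properly aligned start offset.
parted /dev/md1 --script \
    mklabel gpt \
    mkpart primary 1MiB 100%

# Verify the result: prints "1 aligned" when partition 1 starts on the
# device's optimal I/O boundary.
parted /dev/md1 align-check optimal 1
```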
3). Aligning the file system is the most important of all. I have experience aligning xfs and ext3 (ext4 should be similar to ext3); you will need to do some math here and then specify the right parameters when creating the file system. Look at the documentation for the specific parameters, namely something called "stripe width". Be careful with the interpretation though: depending on the fs type it is expressed either in 512B blocks or in bytes, so you will need to do your calculations accordingly. The interpretation also depends on the number of drives in the RAID array and the RAID level. You may also find some useful info in this thread.
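To make the math concrete, here is a hedged example for ext4, where the values are expressed in file system blocks (the chunk size, block size, and drive count are assumptions; substitute your own):

```shell
# Hypothetical array: RAID5 over 4 drives (3 carry data, 1 is parity),
# 512 KiB mdadm chunk size, ext4 with 4 KiB blocks.
chunk_kb=512    # RAID chunk size in KiB
block_kb=4      # ext4 block size in KiB
data_disks=3    # data-bearing drives: 4 drives - 1 parity for RAID5

stride=$((chunk_kb / block_kb))         # fs blocks per RAID chunk
stripe_width=$((stride * data_disks))   # fs blocks per full stripe

echo "stride=$stride stripe_width=$stripe_width"
# prints: stride=128 stripe_width=384
# These values feed straight into mkfs.ext4:
#   mkfs.ext4 -E stride=$stride,stripe-width=$stripe_width /dev/md2
```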
Also, you can specify parameters when mounting a file system that may improve performance even further. Here are the parameters I use with my 18TB xfs file system: "noatime,attr2,nobarrier,logbufs=8,logbsize=256k". But be careful: these are not universal rules, and if used incorrectly they may compromise the reliability of your system (especially "nobarrier").
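In /etc/fstab those options would look something like this (the device and mount point are placeholders):

```shell
# /etc/fstab entry (sketch) applying the xfs mount options above.
# "nobarrier" trades crash safety for speed: only consider it with a
# battery- or flash-backed write cache.
/dev/md2  /data  xfs  noatime,attr2,nobarrier,logbufs=8,logbsize=256k  0 0
```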
Another thing to keep in mind is that if you are planning future expansion of any of these RAID arrays, you should take it into account when you create the file systems, since growing the array will inevitably affect your perfect alignment ;-)
I hope this points you in the right direction. Have fun :-)
Best Answer
If the kernel you're currently using to run the source server contains all of the drivers required to run the hardware on the destination server, then the process isn't too painful. Start by using tar to create archives of your partitions and then store them in an accessible location (removable media or an NFS server).
Boot the destination server with a live distro. Create the partitions you want and untar the archives to those partitions. Install grub to the MBR of the boot device. That's it.
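The steps above might look like this in practice (the archive path /mnt/nfs/rootfs.tar.gz and devices are assumptions for illustration):

```shell
# On the source server: archive the root file system, staying on one
# file system so /proc, /sys, and other mounts are not swept in.
tar --one-file-system -czpf /mnt/nfs/rootfs.tar.gz -C / .

# On the destination, booted from a live distro: format the new root
# partition and restore the archive onto it.
mkfs.ext4 /dev/sda2
mount /dev/sda2 /mnt/target
tar -xzpf /mnt/nfs/rootfs.tar.gz -C /mnt/target

# Install GRUB into the MBR of the boot device from a chroot.
mount --bind /dev  /mnt/target/dev
mount --bind /proc /mnt/target/proc
mount --bind /sys  /mnt/target/sys
chroot /mnt/target grub-install /dev/sda
chroot /mnt/target update-grub
```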
If the destination server has different hardware or requires drivers for a different boot device, you'll need to compile a kernel for this to work; but you'd have to do that anyway if you used imaging software and attempted to restore to mismatched hardware.
Nutshell: Restore partitions > make /boot bootable (install grub) > compile a new kernel and make it available to GRUB (only if necessary) > done.