Linux – Is GPT needed on a 16 TB data disk

Tags: gpt, hp, linux, raid, xfs

I have created /dev/sdb, a 16 TB disk backed by hardware RAID, and I am tempted to put XFS directly on /dev/sdb without making partitions. In the future I will need to expand this to double the size.

The hardware is an HP ProLiant DL380 Gen 9 with 12 SAS disk trays in the front.

One advantage of not making partitions is that a reboot isn't needed, but are things different on >2 TB disks?

Do I need to have a GPT, or can I run into trouble when expanding the RAID array and XFS without one?

Best Answer

You can do this without any problems...

I'm assuming /dev/sdb is a separate HP Smart Array Logical Drive.

Don't use any partitioning for this setup... Just create the filesystem on the block device:

mkfs.xfs -f -l size=256m,version=2 -s size=4096 /dev/sdb
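Once the filesystem exists, mounting it is the only remaining step. A minimal sketch, assuming a hypothetical mount point of /data:

mkdir -p /data
mount /dev/sdb /data
# Persist the mount across reboots (adjust the mount point and options to your needs)
echo '/dev/sdb  /data  xfs  defaults  0 0' >> /etc/fstab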

When you want to expand at a later date, add disks and expand the HP logical drive using the hpssacli or Smart Storage Administrator tools.
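As a rough sketch, the hpssacli sequence looks like the following. The controller slot, array letter, and logical drive number are placeholders for illustration; check your actual layout with show config first (on newer firmware the same tool ships as ssacli):

hpssacli ctrl slot=0 show config                       # identify the array and logical drive
hpssacli ctrl slot=0 array B add drives=allunassigned  # add the newly inserted disks to the array
hpssacli ctrl slot=0 ld 2 modify size=max              # grow the logical drive into the new space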

You can rescan the device to get the new size with:

echo 1 > /sys/block/sdb/device/rescan

Confirm the device size change with dmesg | tail.
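You can also read the new size back directly, for example:

blockdev --getsize64 /dev/sdb   # reported size in bytes
lsblk /dev/sdb                  # human-readable view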

At that point, you can run xfs_growfs /mountpoint (not device name) and the filesystem will grow online!
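Putting it together, a short sketch of the grow step, assuming the hypothetical /data mount point from above:

df -h /data        # size before
xfs_growfs /data   # grow XFS to fill the enlarged device, online
df -h /data        # size after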