RAID1 – Adding Btrfs Devices to RAID1 Array to Increase Size

Tags: btrfs, raid1, raid10

I have a btrfs array comprising two 2TB disks in raid 1. I am running out of space, and I want to add two 3TB disks that I have lying around. The final setup will be 2x2TB+2x3TB drives. One drive failure redundancy is sufficient for me (the data is backed up somewhere else too).

I am unsure how to proceed: the btrfs wiki clearly states how to add new devices, but I am unsure how this will end up looking in terms of available disk space. Will I end up with:

  1. 2TB of space with 4x redundancy or
  2. 4TB of space with 2x redundancy or
  3. something else?

Based on some research (e.g. this answer) I think the outcome should be 2, which is what I would want. Is this correct?

Another possibility is that I switch to a raid 10 setup, if this is possible with disks of different size. If this is advisable, how should I do so without temporarily hosting the data on external storage?

Best Answer

2TB of space with 4x redundancy or

A quote from the official wiki explains it in detail:

"… btrfs combines all the devices into a storage pool first, and then duplicates the chunks as file data is created.

RAID-1 is defined currently as "2 copies of all the data on different devices".

This differs from MD-RAID and dmraid …"

— 2 copies only.
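The capacity consequence of "pool first, then duplicate" can be sketched with a toy allocator (my own simplification, not btrfs code): each chunk simply places its 2 copies on the 2 devices with the most free space. For the asker's 2x2TB+2x3TB mix this converges on 5 TB usable, which matches what the btrfs space calculator reports for that combination:

```python
def usable_raid1(free_gb):
    """Toy model of btrfs RAID1 allocation: every 1 GB chunk puts
    2 copies on the 2 devices with the most free space."""
    free = list(free_gb)
    usable = 0
    while True:
        free.sort(reverse=True)
        if free[1] < 1:      # fewer than 2 devices still have room
            break
        free[0] -= 1         # first copy
        free[1] -= 1         # second copy
        usable += 1
    return usable

# 2x2TB + 2x3TB (in GB): the pool yields 5 TB usable, 2 copies of everything
print(usable_raid1([2000, 2000, 3000, 3000]))  # 5000
```

Note the model also shows the pathological case: with one 1TB and one 3TB disk, only 1 TB is usable, because the second copy always needs a second device.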

Another possibility is that I switch to a raid 10 setup, if this is possible with disks of different size.

Again, reading the same source: "…

RAID-10 is built on top of these definitions.

Every stripe is split across to exactly 2 RAID-1 sets and those RAID-1 sets are written to exactly 2 devices (hence 4 devices minimum). A btrfs RAID-10 volume with 6 × 1 TB devices will yield 3 TB usable space with 2 copies of all data. …"
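The same toy-allocator approach reproduces the wiki's RAID-10 figure. This sketch assumes a fixed stripe width of 4 devices (real btrfs can stripe wider, but the usable capacity works out the same here): each allocation consumes 1 GB on each of the 4 devices with the most free space and yields 2 GB usable.

```python
def usable_raid10(free_gb):
    """Toy model of btrfs RAID-10: each chunk stripes across the 4
    devices with the most free space (2 mirrored stripes), consuming
    1 GB on each and yielding 2 GB of usable space."""
    free = list(free_gb)
    usable = 0
    while True:
        free.sort(reverse=True)
        if len(free) < 4 or free[3] < 1:   # RAID-10 needs 4 devices with room
            break
        for i in range(4):
            free[i] -= 1
        usable += 2
    return usable

print(usable_raid10([1000] * 6))                # 3000 -- the wiki's 6 x 1TB example
print(usable_raid10([2000, 2000, 3000, 3000]))  # 4000 -- the asker's mix: 4 TB usable
```

So for the 2x2TB+2x3TB mix, RAID-10 would give 4 TB usable: once the 2TB disks fill up, the leftover terabyte on each 3TB disk can no longer form a 4-device stripe.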

how should I do so without temporarily hosting the data on external storage?

It's all explained in the "Conversion" paragraph as well: start a btrfs balance on the mount point, specifying the desired profile for data and metadata. Trivially adjusting their example, it becomes:

btrfs balance start -dconvert=raid10 -mconvert=raid10 /…mntPoint…

I suppose there's no need for an intermediate balance after you add the two new disks to the pool — the addition itself won't trigger a rebalance; only newly written data would use the additional devices. So it's pretty simple: add the devices, then start a balance to whichever layout you like.
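Put together, the whole procedure is just a couple of commands. The device names and mount point below are placeholders (I'm assuming /dev/sdc, /dev/sdd and /mnt/pool — substitute your own):

```shell
# Add the two 3TB disks to the mounted filesystem; the pool grows
# immediately, but existing chunks stay where they are
btrfs device add /dev/sdc /dev/sdd /mnt/pool

# Rebalance, converting data and metadata to the desired profile;
# use raid1 to keep the current layout, or raid10 to switch
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/pool

# Inspect the resulting allocation per profile and per device
btrfs filesystem usage /mnt/pool
```

The balance runs with the filesystem mounted and in use, so no external storage is needed — though it can take a long time on terabytes of data.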


Conclusion

Theoretically you would have the best possible outcome, because "btrfs combines all the devices into a storage pool first".

Practice

It would be very simple to test all of this using a setup on, say, loopback devices of much smaller sizes.
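For instance, a throwaway rehearsal of the whole operation can be built from sparse files (a sketch; it needs root, and the paths and scaled-down sizes are arbitrary choices of mine):

```shell
# Four sparse files standing in for 2x2TB + 2x3TB, scaled down 1000x
truncate -s 2G /tmp/d1.img
truncate -s 2G /tmp/d2.img
truncate -s 3G /tmp/d3.img
truncate -s 3G /tmp/d4.img

# Attach them as loop devices; --show prints the device losetup picked
L1=$(losetup --show -f /tmp/d1.img)
L2=$(losetup --show -f /tmp/d2.img)
L3=$(losetup --show -f /tmp/d3.img)
L4=$(losetup --show -f /tmp/d4.img)

# Recreate the starting point: RAID1 over the two "2TB" disks
mkfs.btrfs -d raid1 -m raid1 "$L1" "$L2"
mkdir -p /mnt/test && mount "$L1" /mnt/test

# Rehearse the real operation: add the "3TB" disks and rebalance
btrfs device add "$L3" "$L4" /mnt/test
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/test
btrfs filesystem usage /mnt/test   # see what the layout actually yields

# Tear down afterwards:
#   umount /mnt/test; losetup -d "$L1" "$L2" "$L3" "$L4"
```

This lets you read the real numbers off `btrfs filesystem usage` before touching the actual disks.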

A word of caution

Even though Btrfs has become pretty mature, I've heard that its multiple-device support still isn't as robust as the plain non-RAID mode. I'd recommend trying the conversion mentioned above on loopback devices first, and even if it succeeds, backing up at least the most important data somewhere else.

Also, to express my own opinion: I'd prefer converting such a setup to traditional LVM2 on top of Linux Software RAID (aka MD), which would give you the ability to create logical sub-volumes for later use with either Btrfs or any other desired FS.