Hi all!
I have a Debian stable server with two HDDs in an md RAID 1 which contains an encrypted ext4 filesystem:
sda 8:0 0 2.7T 0 disk
├─sda1 8:1 0 1G 0 part
│ └─md0 9:0 0 1023M 0 raid1 /boot
├─sda2 8:2 0 2.7T 0 part
│ └─md2 9:2 0 2.7T 0 raid1
│ └─mdcrypt
│ 253:0 0 2.7T 0 crypt /
└─sda3 8:3 0 1M 0 part
sdb 8:16 0 2.7T 0 disk
├─sdb1 8:17 0 1G 0 part
│ └─md0 9:0 0 1023M 0 raid1 /boot
├─sdb2 8:18 0 2.7T 0 part
│ └─md2 9:2 0 2.7T 0 raid1
│ └─mdcrypt
│ 253:0 0 2.7T 0 crypt /
└─sdb3 8:19 0 1M 0 part
I’d like to migrate that over to BTRFS to make use of deduplication and snapshots.
But I have no idea how to set it up, since btrfs has its own RAID 1 implementation. Should I keep using the existing md array? Or should I take the drives out of the array, set up encryption on each drive, and then build the btrfs RAID on top of that?
Or should I do something else entirely?
Personally I’d do zfs and luks
Why not ZFS’s own encryption?
Though I would rather go with BTRFS since I don’t have any experience with ZFS.
LUKS-encrypt both drives, then btrfs RAID 1 on top of the two mappers?
But then you need to decrypt both drives. I think there's some script to decrypt two drives with the same key, but I can't find it.
Edit: btrfs RAID is superior because of bitrot detection + healing; most conventional RAID can detect corruption but not heal it.
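On Debian, the "script" is most likely the decrypt_keyctl keyscript that ships with cryptsetup-initramfs: it caches the passphrase in the kernel keyring, so volumes sharing the same key identifier are unlocked with one prompt. A sketch of the crypttab (the UUIDs and mapper names are placeholders; get the real UUIDs with blkid):

```shell
# /etc/crypttab — sketch, not a drop-in config.
# The third field is not a keyfile here: with decrypt_keyctl it is a shared
# key ID, so all entries using the same ID get the same cached passphrase.
#
# crypt1  UUID=xxxxxxxx-...  cryptkey  luks,initramfs,keyscript=decrypt_keyctl
# crypt2  UUID=yyyyyyyy-...  cryptkey  luks,initramfs,keyscript=decrypt_keyctl

# Rebuild the initramfs after editing crypttab:
update-initramfs -u -k all
</imports>
```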
Decryption isn’t a problem if you use the systemd hooks when creating your initramfs. They try to decrypt every listed LUKS volume with the first key provided and only ask for additional keys if that fails.
I have 3 disks in a btrfs raid setup, 4 partitions (1 for the raid setup on each, plus a swap partition on the biggest disk), all encrypted with the same password.
No script needed, just add
rd.luks.name=<UUID1>=cryptroot1 rd.luks.name=<UUID2>=cryptroot2 rd.luks.name=<UUID3>=cryptroot3 rd.luks.name=<UUID4>=cryptswap
to your kernel parameters and unlock all 4 with one password at boot.
I will be decrypting from a small busybox inside the initrd. I suspect that it will decrypt both drives if the passphrase is the same. At least that’s how it works on the desktop.
If I had to do encrypted btrfs RAID from scratch, I would probably:
- Set up LUKS on both discs
- Unlock both
- Create a btrfs filesystem on one mapper
- Add the other with
btrfs device add /path/to/mapper /path/to/btrfs/part
- Balance with
btrfs balance start -mconvert=raid1 -dconvert=raid1 /path/to/btrfs/part
- Add the LUKS volumes to crypttab, the btrfs filesystem to fstab, and rebuild/configure the bootloader as necessary
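The steps above can be sketched as shell commands. The partition and mapper names are placeholders for your actual devices, and luksFormat destroys everything on the named partitions:

```shell
# Sketch only — adjust device names before running anything.
cryptsetup luksFormat /dev/sda2
cryptsetup luksFormat /dev/sdb2
cryptsetup open /dev/sda2 crypt1
cryptsetup open /dev/sdb2 crypt2

# Create btrfs on the first mapper and mount it
mkfs.btrfs /dev/mapper/crypt1
mount /dev/mapper/crypt1 /mnt

# Add the second mapper, then convert metadata and data to RAID 1
btrfs device add /dev/mapper/crypt2 /mnt
btrfs balance start -mconvert=raid1 -dconvert=raid1 /mnt
```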
In that scenario, you would probably want to use a keyfile to unlock the second disc without re-entering the password.
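A keyfile setup might look like this (paths and device names are placeholders; the keyfile has to live somewhere that is already unlocked at that point, e.g. on the encrypted root):

```shell
# Sketch: create a random keyfile and register it as an extra LUKS key
# for the second disk, so only the first disk asks for a passphrase.
mkdir -p /etc/luks
dd if=/dev/urandom of=/etc/luks/disk2.key bs=512 count=4
chmod 0400 /etc/luks/disk2.key
cryptsetup luksAddKey /dev/sdb2 /etc/luks/disk2.key

# Corresponding /etc/crypttab entry (UUID is a placeholder):
# crypt2  UUID=yyyyyyyy-...  /etc/luks/disk2.key  luks
```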
Now, that’s off the top of my head and seems kind of stupidly complicated to me. IIRC btrfs has a stable feature to convert ext4 to btrfs in place. It shouldn’t matter what happens outside the filesystem, so you could take your chances and just try that on your ext4 volume
(Edit: But to be absolutely clear: I would perform a backup first :D)
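The in-place conversion mentioned above is btrfs-convert from btrfs-progs. Roughly (run against the opened LUKS mapper while the filesystem is unmounted, and only after that backup):

```shell
# Sketch — convert the ext4 inside the existing mapper in place.
umount /dev/mapper/mdcrypt           # the filesystem must not be mounted
fsck.ext4 -f /dev/mapper/mdcrypt     # convert refuses a dirty filesystem
btrfs-convert /dev/mapper/mdcrypt

# The old ext4 metadata is kept in a subvolume named ext2_saved,
# which allows rolling back; delete it once you're happy:
#   btrfs subvolume delete /mnt/ext2_saved
```

Note this converts a single filesystem; you’d still need the device-add and balance steps afterwards to turn it into a btrfs RAID 1 across both mappers.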