7 November 2023

Day2 is in a dark and hidden corner of a library

by mpagot


This day is mostly dedicated to reading documentation and collecting user experiences

DONE

On the shoulders of others

Previous ZFS hackweeks

Read some existing documentation linked from previous Hackweeks

BTRFS reading

Read through the official BTRFS wiki

Are there openQA tests about BTRFS? Yes, and most of them use XFSTESTS

Archlinux wiki

Warning: The RAID 5 and RAID 6 modes of Btrfs are fatally flawed

Multiple devices can be used to create a RAID. Supported RAID levels include RAID 0, RAID 1, RAID 10, RAID 5 and RAID 6. Starting from kernel 5.5, RAID1c3 and RAID1c4 provide 3 and 4 copies of the RAID 1 level.

The RAID levels can be configured separately for data and metadata using the -d and -m options respectively.

By default, the data has one copy (single) and the metadata is mirrored (raid1).

The Arch wiki has more information about how to create a Btrfs RAID volume, for example:

mkfs.btrfs -d single -m raid1 /dev/part1 /dev/part2 ...

In order to use multiple Btrfs devices in a pool, the user needs the appropriate mkinitcpio hook; see Mkinitcpio#Common hooks for more information.

It is possible to add devices to a multiple-device file system later on.
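
A minimal sketch of how that looks, assuming the file system is mounted at /mnt and /dev/sdX is the new device (both placeholders):

btrfs device add /dev/sdX /mnt                             # add the new device to the mounted file system
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt   # optionally rebalance and convert data/metadata profiles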

Devices can be of different sizes. However, if one drive in a RAID configuration is bigger than the others, this extra space will not be used.

Btrfs does not automatically read from the fastest device: mixing different kinds of disks results in inconsistent performance.

The #RAID section of the Arch wiki covers maintenance of multi-device Btrfs file systems.
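
As a hedged illustration of what that maintenance typically involves (the mount point /mnt is a placeholder):

btrfs filesystem show          # list multi-device file systems and their member devices
btrfs filesystem usage /mnt    # how data and metadata are spread across devices
btrfs device stats /mnt        # per-device error counters
btrfs scrub start /mnt         # verify checksums across all devices
btrfs scrub status /mnt        # check scrub progress and result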

Gentoo wiki

btrfs is licensed under the GPL and open for contribution from anyone.

ZFS reading

I only read a 2020 article about the licensing issue: Linus-Says-No-To-ZFS-Linux

Talk with humans using ZFS

Talked with friends and colleagues that have a DIY home NAS.

Mr Brrr

He uses ZFS on his home server and he really likes it.

He is running ZFS on Ubuntu (but would probably use Debian now) and Docker on an old HP MicroServer Gen8.

➜  ~ zpool status space-backup
  pool: space-backup
 state: ONLINE
  scan: scrub repaired 0B in 11:42:42 with 0 errors on Sun Oct  8 11:42:43 2023
config:

	NAME                                                STATE     READ WRITE CKSUM
	space-backup                                        ONLINE       0     0     0
	  raidz1-0                                          ONLINE       0     0     0
	    ata-WD_CaviarGreen1TB-######-part1   ONLINE       0     0     0
	    ata-WD_CaviarGreen1TB-######-part1   ONLINE       0     0     0
	    ata-MediaMax1TB_######-part1        ONLINE       0     0     0
	    ata-SAMSUNG_Ecogreen1.5TB_######-part1        ONLINE       0     0     0
	    ata-Segate10TB-######-part1         ONLINE       0     0     0
	logs
	  ata-CrucialSSD128GB_#######-part3     ONLINE       0     0     0
	cache
	  ata-INTEL_SSDS250GB_######-part6  ONLINE       0     0     0

errors: No known data errors

Are you using raidz on it?

yes

Have you ever tried to expand a pool by adding new disks? Or, generally speaking, to expand the space by adding new disks?

yes
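
He did not go into detail; the classic way to grow a ZFS pool is to add a whole new vdev. A minimal sketch, reusing his pool name from the zpool status output above and placeholder device names:

zpool add space-backup raidz1 /dev/disk/by-id/ata-NEW1 /dev/disk/by-id/ata-NEW2 /dev/disk/by-id/ata-NEW3   # adds a second raidz1 vdev; the pool stripes across both vdevs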

Have you ever experienced how ZFS behaves if a disk breaks?

Broken disk? Somehow: I’ve intentionally broken the RAID to replace a disk, not because it was broken… no, I wanted to put in a bigger disk and had no more SATA ports, therefore I had to replace one.

I even boot from zfs, not the same pool tho
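
A minimal sketch of the replacement he describes, i.e. swapping a drive for a bigger one without a spare SATA port (device names are placeholders):

zpool offline space-backup ata-OLD_DISK-part1                             # take the old disk out of service
zpool replace space-backup ata-OLD_DISK-part1 ata-NEW_BIGGER_DISK-part1   # after physically swapping the drive
zpool status space-backup                                                 # watch the resilver complete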

Mr Lu

He defines himself as not a heavy user. He is using ZFS on TrueNAS

Are you using raidz on it?

Yep. I have one 4-disk pool in RaidZ-2
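
For reference, a hypothetical sketch of how a 4-disk RAIDZ-2 pool like his is created (pool and device names are placeholders):

zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd   # any two of the four disks can fail without data loss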

Have you ever tried to expand a pool by adding new disks? Or, generally speaking, to expand the space by adding new disks?

I remember that was exactly a problem with ZFS: it is not as flexible as BTRFS with changing RAID levels and disk numbers, so you can be quite stuck with the layout you are starting with. Replacing disks with bigger ones one by one should not be an issue. Lately I was adding 2x4TB disks, but they ended up in a new pool since they would not fit into the existing 4x1TB pool.
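
A hedged sketch of the replace-disks-one-by-one path he mentions (pool and device names are placeholders):

zpool set autoexpand=on tank             # allow the pool to grow once all members are bigger
zpool replace tank /dev/old1 /dev/new1   # repeat for each disk, waiting for the resilver to finish
zpool status tank                        # check resilver progress
zpool online -e tank /dev/new1           # trigger expansion manually if autoexpand was off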

Have you ever experienced how ZFS behaves if a disk breaks?

This one is a weird thing: the 4x1TB disks are probably 10+ years old and still doing fine. I didn’t have a failure since they were bought. But on the other hand, that NAS has been running 24/7 only for maybe the last 3 years. Previously I used it mostly as NAS storage for cold data and had a WOL setup where the server would be woken up by the router or a Kodi instance upon access. Hmm… now I really want to check how old those disks are
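
One way to check the age of the disks is the SMART power-on-hours counter via smartmontools (device name is a placeholder):

smartctl -A /dev/sda | grep -i power_on_hours   # attribute 9 reports the accumulated power-on hours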

Mr Doh

He is an openSUSE expert user running a NAS on mixed BTRFS/ZFS

Which OS distro are you using?

leap 15.5, with the filesystems repo added and the zfs package installed. I’m using btrfs for root fs with 2 redundant disks and zfs for data.
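
He did not give the exact commands; a hedged sketch of that setup, assuming the OBS filesystems project publishes a Leap 15.5 repository at the usual path (repository URL and package names are assumptions and may differ):

zypper addrepo https://download.opensuse.org/repositories/filesystems/15.5/filesystems.repo   # assumption: standard OBS publish path for the filesystems project
zypper refresh
zypper install zfs zfs-kmp-default   # zfs-kmp-default is an assumption for the kernel module package name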

Are you using raidz on it?

zfs is using the zfs equivalent of raid1 (mirror)
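
The ZFS equivalent of RAID 1 is a mirror vdev; a hypothetical sketch (pool and device names are placeholders):

zpool create datapool mirror /dev/sdb /dev/sdc   # two-way mirror: every block is stored on both disks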

Have you ever tried to expand a pool by adding new disks? Or, generally speaking, to expand the space by adding new disks?

Expanding is not possible with ZFS - the feature is planned, IIRC. You could degrade the RAID and then create a new one with a fake virtual disk, the old disk from the now-degraded old ZFS, and a new disk; remove the virtual disk (degrading the new one too), then copy the data, and then move the 2nd old disk to the new ZFS.
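
A hedged sketch of the workaround he describes, assuming the old pool is a 2-disk mirror being migrated to a 3-disk raidz1 (his exact target layout is not stated; all pool, dataset and device names are placeholders):

zpool detach oldpool /dev/diskA                                   # degrade the old mirror, freeing diskA
truncate -s 4T /tmp/fake.img                                      # sparse file acting as the fake virtual disk, at least as large as the real disks
zpool create newpool raidz1 /dev/diskA /dev/diskC /tmp/fake.img   # new pool: freed disk + brand-new disk + fake disk
zpool offline newpool /tmp/fake.img                               # degrade the new pool so the fake disk is never really written
zfs snapshot -r oldpool@migrate                                   # copy the data over with send/receive
zfs send -R oldpool@migrate | zfs receive -F newpool/data
zpool destroy oldpool                                             # retire the old pool...
zpool replace newpool /tmp/fake.img /dev/diskB                    # ...and hand its remaining disk to the new pool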
