
truenas - grow/shrink a zfs RAIDZ - Server Fault
As for expanding, there are a number of options for growing your ZFS 5x3TB raidz pool:
1) Add a mirror VDEV (a pair of disks): the pool spans the two VDEVs (12TB raidz & 3TB mirror).
2) Add a raidz VDEV (3-8 disks): the pool spans the two VDEVs (12TB raidz & 12TB raidz).
3) Upgrade each disk (5x3TB to 5x4TB, one at a time): the pool stays on a single VDEV (16TB raidz).
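The usable-capacity arithmetic behind those three options can be sanity-checked with plain shell arithmetic. This is only a rough sketch: raidz1 usable space is approximately (disks − 1) × disk size, a 2-way mirror stores one copy's worth, and real pools lose a little more to metadata and slop space.

```shell
# Rough usable capacity in TB, ignoring metadata/slop overhead.
raidz1_usable() {   # args: disk_count disk_size_tb
  echo $(( ($1 - 1) * $2 ))
}
mirror_usable() {   # arg: disk_size_tb (2-way mirror)
  echo "$1"
}

base=$(raidz1_usable 5 3)                               # existing 5x3TB raidz1 -> 12
echo "option 1: $(( base + $(mirror_usable 3) )) TB"    # add 3TB mirror   -> 15
echo "option 2: $(( base + $(raidz1_usable 5 3) )) TB"  # add second raidz -> 24
echo "option 3: $(raidz1_usable 5 4) TB"                # 5x4TB raidz1     -> 16
```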
opensolaris - ZFS: Mirror vs. RAID-Z - Server Fault
I'm planning on building a file server using OpenSolaris and ZFS that will provide two primary services - be an iSCSI target for XenServer virtual machines & be a general home file server. The hardware I'm looking at includes 2x 4-port SATA controllers, 2x small boot drives (one on each controller), and 4x big drives for storage.
solaris - How can I add one disk to an existing raidz zpool ...
1) Add a raidz of the same configuration to the pool (think 3-disk raidz + 3-disk raidz, or 5 + 5, for example).
2) Replace each (and every) disk in your raidz pool one by one, letting it resilver after inserting each upgraded disk.
3) Back up your data, destroy your pool, and create a new raidz pool with a larger number of disks.
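Option 2 above (in-place disk upgrades) might look like the following sketch. The pool and device names here are placeholders, not from the original answer, and `autoexpand` must be on for the pool to actually grow once the last disk has been replaced.

```shell
# Hypothetical pool/device names -- adapt to your system.
zpool set autoexpand=on tank

# For each old disk in turn: swap in the bigger disk and let it resilver.
zpool replace tank old-disk-1 new-disk-1
zpool status tank   # wait until the resilver completes before the next disk

# Repeat for every remaining disk; capacity grows after the last replace.
```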
zfs - Expanding a FreeNAS RAIDZ Pool - Server Fault
Sep 6, 2013 · Datasets are not constrained to one vdev; their data is stored wherever in the pool ZFS finds space to put it. This is the main concept of 'pooled' storage. The zpool merges X number of disks into a pool, which just becomes one big storage area for ZFS file systems to …
zfs - Is a large RAID-Z array just as bad as a large RAID-5 array ...
Mar 13, 2012 · +1 Also, the per-block checksums allow ZFS, should it find corruption in an array, to single out the affected files. Most RAID-5 HBAs will simply mark the whole volume as corrupted, or report back to the OS that a sector is corrupted; either way, the HBA has no way of knowing which disk is wrong in a corruption scenario.
software raid - What are potential dangers of spanning ZFS RAIDZ …
Aug 14, 2024 · # note the -f (force): it instructs ZFS to ignore size/layout mismatches and safety checks (use with care!)
[root@localhost test]# zpool create zzz raidz loop0 loop1 loop2 loop3 loop4 loop5 loop6 loop7 loop8 -f
[root@localhost test]# zpool status zzz
  pool: zzz
 state: ONLINE
config:
        NAME        STATE  READ WRITE CKSUM
        zzz         ONLINE    0     0     0
          raidz1-0  ONLINE    0     0     0
...
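To reproduce an experiment like the one above without nine physical disks, file-backed devices work well. This sketch only creates the sparse backing files (the paths are arbitrary); the subsequent `losetup` and `zpool create` steps need root and ZFS installed, so they are left as comments.

```shell
# Create nine 1 GiB sparse files to serve as fake disks (sketch; paths arbitrary).
mkdir -p /tmp/zfs-test
for i in 0 1 2 3 4 5 6 7 8; do
  truncate -s 1G "/tmp/zfs-test/disk$i"
done
ls -s /tmp/zfs-test   # sparse files: allocated size stays near zero

# Then, as root, attach them as loop devices and build the throwaway pool:
#   losetup -f --show /tmp/zfs-test/disk0   (repeat for each file)
#   zpool create zzz raidz loop0 loop1 ... loop8
```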
zfs - Does it make sense to create a zpool with lots of mirror vdevs ...
Oct 1, 2021 · Using multiple mirror vdevs is not an issue for ZFS at all. They provide much higher performance than raidz vdevs and faster resilvering. An all-mirrors pool is easier to extend and, with recent ZFS versions, even to shrink (i.e., top-level vdev removal is supported). raidz has better space efficiency and, in its raidz2 and raidz3 variants, better resiliency.
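The extend/shrink point can be sketched with a couple of zpool commands. The pool and device names here are hypothetical, and removing a top-level vdev requires a reasonably recent OpenZFS release.

```shell
# Grow an all-mirrors pool by adding another mirror vdev.
zpool add tank mirror sdc sdd

# Shrink: remove a top-level mirror vdev (its data is evacuated to the others).
zpool remove tank mirror-1
zpool status tank   # shows the removal/evacuation progress
```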
ZFS: RAIDZ versus stripe with ditto blocks - Server Fault
Dec 29, 2009 · One alternative to expanding your raidz vdev is to use zfs send to store all your data somewhere temporarily while you add a disk and rebuild your raidz vdev, and then zfs receive to get it back. It will be hard once you get past a few …
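The send/receive round trip described above might look like this sketch. The pool, dataset, and backup-file names are placeholders; in practice the stream is often piped over ssh to another machine instead of written to a local file.

```shell
# 1. Snapshot and stream the data somewhere temporary.
zfs snapshot -r tank/data@migrate
zfs send -R tank/data@migrate > /backup/tank-data.zfs

# 2. Destroy and recreate the pool with the extra disk.
zpool destroy tank
zpool create tank raidz sda sdb sdc sdd

# 3. Restore the data into the rebuilt pool.
zfs receive -F tank/data < /backup/tank-data.zfs
```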
migration zfs raidz zfsonlinux - Server Fault
Dec 22, 2014 · The solution was to a) degrade the RAID5 and b) build the initial RAIDZ with a sparse file as a "virtual third drive", which was immediately taken offline after pool creation:
Create the sparse file: dd if=/dev/zero of=/zfs1 bs=1 count=1 seek=4100G
Create the raidz pool: zpool create zfspool raidz /dev/disk1 /dev/disk2 /zfs1
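The remaining steps of that migration are worth spelling out (a sketch; `/dev/disk3` is a hypothetical name for the disk freed from the degraded RAID5). The offline must happen before any real data is written, so nothing ever actually lands on the sparse file.

```shell
# Immediately take the sparse-file vdev offline; the pool runs degraded.
zpool offline zfspool /zfs1

# After copying the data off the old RAID5, reuse its freed disk to heal the raidz:
zpool replace zfspool /zfs1 /dev/disk3
zpool status zfspool   # resilvers onto the real disk; pool becomes fully redundant
```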
Is it possible to change zfs raid-z1 to raid-z2 or raid-z3 on freenas?
Aug 6, 2024 · A really good explanation of why can be found in the blog post ZFS: You should use mirror vdevs, not RAIDZ. Simplified, there are several reasons:
- Higher availability
- ZFS RAID does not degrade on single failed drives and stays accessible
- Makes it faster to recover and resilver
- Avoids recovery problems