> That isn't my understanding as far as raidz reshaping goes. You can
> create raidz's and add them to a zpool. You can add individual
> drives/partitions to zpools. You can remove any of these from a zpool
> at any time and have it move data into other storage areas. However,
> you can't reshape a raidz.
ZFS is organized into pools, which are transactional object stores.
Various things can go into these transactional object stores, such as
ZFS data sets and zvols. A ZFS data set is what you would consider to
be a filesystem. A zvol is a block device on which other filesystems
can be installed. Data in a pool is stored in vdevs, which can be
files masquerading as block devices, single disks, mirrored disks, or
raidz groups.
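To make the layering concrete, here is a rough sketch using the
standard ZFS userland tools; the pool name and device names are only
placeholders, not anything from this discussion:

```shell
# Create a pool whose single vdev is a single-parity raidz
# (device names are examples)
zpool create tank raidz /dev/sda /dev/sdb /dev/sdc

# A data set behaves like a filesystem and is mounted automatically
zfs create tank/home

# A zvol is a block device; another filesystem can be put on top of it
zfs create -V 10G tank/vol
mkfs.ext2 /dev/zvol/tank/vol
```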

ZFS is designed to put data integrity first. I question how many other
volume managers are capable of recovering from a crash during a
reshape without some sort of catastrophic data loss. With that said, I
do not see the point of dwelling on this. There are things you
can use your extra disk to do, but as far as storage requirements go,
a single disk does not go very far. You are better off replacing
hardware if your storage requirements grow beyond the ability of your
current disks to handle.

> Suppose I have a system with 5x1TB hard drives. They're merged into a
> single raidz with single-parity, so I have 4TB of space. I want to
> add one 1TB drive to the array and have 5TB of single-parity storage.
> As far as I'm aware you can't do that with raidz. What you could do
> is set up some other 4TB storage area (raidz or otherwise), remove the
> original raidz, recycle those drives into the new raidz, and then move
> the data back onto it. However, doing this requires 4TB of storage
> space. With mdadm you could do this online without the need for
> additional space as a holding area.
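For comparison, the in-place mdadm reshape described above looks
roughly like this; the array and device names are placeholders:

```shell
# Add a sixth disk to an existing 5-disk RAID5 array
mdadm --add /dev/md0 /dev/sdf

# Reshape the array online to spread data across all six disks
mdadm --grow /dev/md0 --raid-devices=6

# After the reshape completes, grow the filesystem into the new space
resize2fs /dev/md0
```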
If you have proper backups, you should be able to destroy the pool,
make a new one and restore the backup. If you do not have backups,
then I think there are more important things to consider than your
ability to do this without them.

> ZFS is obviously a capable filesystem, but unless Oracle re-licenses
> it we'll never see it take off on Linux. For good or bad everybody
> seems to like the monolithic kernel. Btrfs obviously has a ways to go
> before it is a viable replacement, but I doubt Oracle would be sinking
> so much money into it if they intended to ever re-license ZFS.
I heard it said in IRC that Oracle owns all of the next-generation
filesystems, which enables it to position btrfs for the low end and
ZFS for the high end. I have no way of substantiating this, but it
does appear to be the case.
With that said, ebuilds are in the portage tree and support has been
integrated into genkernel. I have a physical system booting off ZFS
(no ext4 et al) and genkernel makes kernel upgrades incredibly easy,
even when configuring my own kernel through --menuconfig. Gentoo users
in IRC are quite interested in this and they do not seem to care that
the modules are out-of-tree or that the licensing is different. As far
as I can tell, there is no need for them to care.
You might want to look at Gentoo/FreeBSD, which also supports ZFS with
a monolithic kernel design, but has no licensing issues. There is
nothing forcing any of us to use Linux and if the licensing is a
problem for you, then perhaps it would be a good idea to switch.
Also, to avoid any confusion, a proper bootloader for ZFS does not
exist in portage at this time. I hacked the boot process to enable the
system to boot off ZFS using GRUB and it will require some more work
before this is ready for inclusion into portage. I made an
announcement to the ZFSOnLinux mailing list not that long ago
explaining what I did. I was waiting until ZFS support in Gentoo
reached a few milestones before I made an announcement about it here,
although most of the stuff you need is already in-tree: http://groups.google.com/a/zfsonlinux.org/group/zfs-discuss/browse_thread/thread/d94f597f8f4e3c88