How to Use the ZFS Filesystem on Ubuntu Linux


There are a myriad of filesystems available for Linux. So why try a new one? They all work, right? They’re not all the same, and some have some very distinct advantages, like ZFS.

ZFS is awesome. It’s a truly modern filesystem with built-in capabilities that make sense for handling loads of data.

Now, if you’re considering ZFS for your ultra-fast NVMe SSD, it might not be the best option. It’s slower than others. That’s okay, though. It was designed to store huge amounts of data and keep it safe.

ZFS eliminates the need to set up traditional RAID arrays. Instead, you can create ZFS pools, and even add drives to those pools at any time. ZFS pools behave almost exactly like RAID, but the functionality is built right into the filesystem.

ZFS also acts like a replacement for LVM, allowing you to partition and manage partitions on the fly without the need to handle things at a lower level and worry about the associated risks.

It’s also a copy-on-write (CoW) filesystem. Without getting too technical, that means that ZFS protects your data from gradual corruption over time. ZFS checksums your data as it is written, so it can detect silent corruption, and snapshots let you roll files back to a previous working version.

Install ZFS on Ubuntu

Installing ZFS on Ubuntu is very easy, though the process is slightly different for Ubuntu LTS and the latest releases.

Ubuntu 16.04 LTS
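On Ubuntu 16.04 LTS, the ZFS tools live in the universe repository under the `zfsutils-linux` package. A typical install looks like this:

```shell
# Install the ZFS userland utilities; the kernel module ships with Ubuntu.
sudo apt update
sudo apt install zfsutils-linux
```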

Ubuntu 17.04 and Later
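On Ubuntu 17.04 and later, the package name is the same, so the install is identical:

```shell
# Same package on newer releases
sudo apt update
sudo apt install zfsutils-linux
```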

After you have the utilities installed, you can create ZFS drives and partitions using the tools provided by ZFS.

Create ZFS Pool

Pools are the rough equivalent of RAID in ZFS. They are flexible and can easily be manipulated.


RAID0

RAID0 just pools your drives into what behaves like one giant drive. It can increase your drive speeds, but if one of your drives fails, you’re probably going to be out of luck.

To achieve RAID0 with ZFS, just create a plain pool.
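As a sketch, assuming two spare drives at /dev/sdb and /dev/sdc (placeholder device names) and an example pool name of mypool:

```shell
# Striped (RAID0-style) pool: all capacity, no redundancy.
# /dev/sdb and /dev/sdc are placeholders -- substitute your own devices.
sudo zpool create mypool /dev/sdb /dev/sdc
```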


RAID1/Mirror

You can achieve RAID1 functionality with the mirror keyword in ZFS. RAID1 creates a 1-to-1 copy of your drives, meaning your data is constantly duplicated; it can also increase read performance. Of course, half of your storage goes to the duplication.
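A minimal sketch, again with placeholder device names:

```shell
# Mirrored (RAID1-style) pool: each drive holds a full copy of the data.
sudo zpool create mypool mirror /dev/sdb /dev/sdc
```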


RAID5/RAIDZ1

ZFS implements RAID5 functionality as RAIDZ1. It requires a minimum of three drives and uses one drive’s worth of capacity for parity, leaving the rest (n-1 drives) usable. If one drive fails, the array will remain online, but the failed drive should be replaced ASAP.
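With three placeholder drives, a RAIDZ1 pool could be created like this (the vdev keyword is raidz, an alias for raidz1):

```shell
# RAIDZ1 pool: survives one drive failure; one drive's capacity goes to parity.
sudo zpool create mypool raidz /dev/sdb /dev/sdc /dev/sdd
```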


RAID6/RAIDZ2

ZFS implements RAID6 functionality as RAIDZ2. It works almost exactly like RAIDZ1 but requires a minimum of four drives and doubles the parity data (n-2 drives usable), allowing up to two drives to fail without bringing the array down.
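A sketch with four placeholder drives:

```shell
# RAIDZ2 pool: survives two drive failures; two drives' capacity goes to parity.
sudo zpool create mypool raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
```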

RAID10/Striped Mirror

RAID10 aims to be the best of both worlds, providing both a speed increase and data redundancy by striping across mirrors. You need an even number of drives (at least four) and will only have access to half of the total space. You can create a RAID10-style pool by specifying two mirrors in the same pool command.
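For example, with four placeholder drives, two mirror vdevs in one command give you a striped mirror:

```shell
# Striped mirror (RAID10-style): data is striped across two mirrored pairs.
sudo zpool create mypool mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
```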

ZFS Pool Status

There are also some management tools that you can use to work with your pools once you’ve created them. First, check the status of your pools.
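The status check is a single command:

```shell
# Show the health, layout, and any pending actions for your pools
sudo zpool status
```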


When you update ZFS, you’ll need to upgrade your pools, too. Your pools will notify you of any available upgrades when you check their status. To upgrade a pool, run the following command.
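Assuming the example pool name mypool:

```shell
# Upgrade one pool to the latest on-disk feature set
sudo zpool upgrade mypool
```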

You can also upgrade them all.
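The -a flag upgrades every pool in one go:

```shell
# Upgrade every pool on the system at once
sudo zpool upgrade -a
```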

Adding Drives

You can also add drives to your pools at any time. Tell zpool the name of the pool and the location of the drive, and it’ll take care of everything.
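For example, to grow the example pool with another placeholder drive:

```shell
# Add /dev/sdf as a new top-level vdev; the pool's capacity grows immediately.
sudo zpool add mypool /dev/sdf
```

Note that a drive added this way becomes a new stripe across the pool; it does not join an existing mirror or RAIDZ vdev.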

ZFS in File Browser

ZFS creates a directory in the root filesystem for your pools.  You can browse to them by name using your GUI file manager or the CLI.
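For a pool named mypool (the example name used above), the default mountpoint is /mypool:

```shell
# Pools are mounted at /<poolname> by default
ls /mypool
```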

ZFS is awesomely powerful, and there are plenty of other things that you can do with it, too, but these are the basics. It is an excellent filesystem for working with loads of storage, even if it is just a RAID array of hard drives that you use for your files. ZFS works excellently with NAS systems, too.

Regardless of how stable and robust ZFS is, it’s always best to back up your data when you implement something new on your hard drives.


  1. Isn’t ZFS a bit of an overkill for personal computers? Isn’t it designed for industrial-strength installations?

    You sing the praises of ZFS but omit mentioning the warts, such as:
    ZFS is a memory hog in many instances.
    ZFS can bog down if there isn’t sufficient free space available.
    While the expansion of the zpool is relatively easy, the shrinking of that pool is a royal PITA.

    Isn’t there some kind of a clash between ZFS’s CDDL and Linux’s GPL?

  2. dragonmouth asked, “Isn’t there some kind of a clash between ZFS’s CDDL and Linux’s GPL?”.

    Not according to Canonical’s lawyers. They are one of the small handful of distros that dare to ship ZFS.

    I think Red Hat should like… talk to Oracle and the Free Software Foundation to see if they could somehow work out an agreement… without necessarily changing from the CDDL and the GPL. Canonical doesn’t seem scared. Have they worked out an agreement?

  3. I was kind of hoping this article would cover sub-volumes, quotas, and when and how to turn off CoW.

    • Funny you should mention that; realistically those are the only aspects of btrfs that I care about (I leave crypto and RAID to LUKS and mdadm). I’m still shy of aes-xts and still prefer my tried and true cbc, even with the higher-strength ciphers. It has to do with how the initialization vectors are set up and performance; realistically xts seems strong, but I’m slow to adopt new ideas when it comes to crypto and understanding what attack vectors there are.

      Getting back to subvolumes, snapshots, and quotas: docker makes use of a btrfs filesystem’s ability to create subvolumes and copy-on-write snapshots to create guests without duplicating a lot of data, much the same way as aufs but more efficiently. The performance is very appreciable. Snapshots are also immensely useful on an operating system such as Gentoo, where making changes to your OS down to the metal is ideal when you want to experience bleeding-edge builds and optimizations at the level of the machine itself, without any potential hindrance from virtualization. Personally I find that KVM/QEMU really gets close enough to the metal, but even for a guest BTRFS is really nice to have.

      I experimented with using BTRFS’s software RAID in the context of creating a guest with a small amount of virtual disk space allocated and extending it by hot-adding another disk post-install; it actually works after a rebalance. It’s been a while since I tried it, but I thought it was an interesting idea.

    • Honestly, on that note, the aspects of btrfs that I actually care about (subvolumes, etc.) make me wonder why I would want to use ZFS over BTRFS now. They have a better grasp on software-native RAID and encryption, but I don’t trust those things enough to use them anyway.

  4. The sections covering RAIDZ1 and RAIDZ2 are inaccurate. They do not require drives to be installed in multiples of 3 or 4. I’ve run both modes with 7 drives. What it boils down to is this: for RAIDZ1 or RAIDZ2 to be effective and worthwhile you should have a MINIMUM of 3 or 4 (respectively).

    • Jeff is correct, both that the information for RAIDZ1 and RAIDZ2 is inaccurate and that the minimums listed are truly minimums.

      The principle for RAIDZ1 is one drive of capacity for parity (i.e. n-1 for usable space). RAIDZ2 uses two drives of capacity for parity, with n-2 for usable space.

