
openzfs-docs's Introduction

This repo is archived. Alternate processes for contributing ZFS changes include following the illumos procedures, or opening a PR against ZFSonLinux.

The reasoning behind this is outlined in an email to the [email protected] mailing list, which is reproduced below:

The OpenZFS code repository on github (http://github.com/openzfs/openzfs) is a clone of the illumos repo, with basically identical code. The OpenZFS repo made it easier to contribute ZFS code to illumos, by leveraging the github pull request and code review processes, and by automatically building illumos and running the ZFS Test Suite.

Unfortunately, the automated systems have atrophied and we lack the effort and interest to maintain it. Meanwhile, the illumos code review and contribution process has been working well for a lot of ZFS changes (notably including ports from Linux).

Since the utility of this repo has decreased, and the volunteer workforce isn't available to maintain it, we will be archiving http://github.com/openzfs/openzfs in the coming week. Thank you to everyone who helped maintain this infrastructure, and to those who leveraged it to contribute over 500 commits to ZFS on illumos! Alternate processes for contributing ZFS changes (including those in open PR's) include following the illumos procedures, or opening a PR against ZFSonLinux.


openzfs-docs's Issues

Debian: Should /boot/efi be mounted during boot?

Over the last few days I successfully migrated my server to ZFS, root and all. Thanks for the really helpful documentation of this tedious process!

System is running latest Debian Testing.

I stumbled upon an issue where the /boot/efi partition is not mounted during boot, and I see the following error messages:

Dez 30 12:23:13 cloudfoo systemd[1]: dev-disk-by\x2duuid-E032\x2dA5C1.device: Job dev-disk-by\x2duuid-E032\x2dA5C1.device/start timed out.
Dez 30 12:23:13 cloudfoo systemd[1]: Timed out waiting for device /dev/disk/by-uuid/E032-A5C1.
Dez 30 12:23:13 cloudfoo systemd[1]: Dependency failed for File System Check on /dev/disk/by-uuid/E032-A5C1.
Dez 30 12:23:13 cloudfoo systemd[1]: Dependency failed for /boot/efi.
Dez 30 12:23:13 cloudfoo systemd[1]: boot-efi.mount: Job boot-efi.mount/start failed with result 'dependency'.
Dez 30 12:23:13 cloudfoo systemd[1]: systemd-fsck@dev-disk-by\x2duuid-E032\x2dA5C1.service: Job systemd-fsck@dev-disk-by\x2duuid-E032\x2dA5C1.service/start failed with result 'dependency'.
Dez 30 12:23:13 cloudfoo systemd[1]: dev-disk-by\x2duuid-E032\x2dA5C1.device: Job dev-disk-by\x2duuid-E032\x2dA5C1.device/start failed with result 'timeout'.

I assume that (for whatever reason) it's too early to mount the EFI partition. As a workaround I added x-systemd.requires=zfs-import-bpool.service to fstab and increased the timeout to 2 seconds, resulting in the following line:

/dev/disk/by-uuid/E032-A5C1 /boot/efi vfat nofail,x-systemd.device-timeout=2,x-systemd.requires=zfs-import-bpool.service 0 1

On the next boot the error message disappeared, and /boot/efi was successfully mounted.

I'm not entirely sure that this workaround is the right way to handle it, maybe someone else has any thoughts?

Upon further reading, it seems that persistently mounting /boot/efi is not such a great idea due to potential filesystem corruption. The recommended solution there is:

/dev/disk/by-uuid/E032-A5C1 /boot/efi vfat x-systemd.idle-timeout=1min,x-systemd.automount,noauto 0 1

Ubuntu 20.04 ZFS Root: boot pool naming -not- arbitrary

Currently 20.04 ZFS Root docs state:

   - The pool name is arbitrary. If changed, the new name must be used
     consistently. The bpool convention originated in this HOWTO.

It turns out the bpool name is actually required. The /etc/grub.d/10_linux_zfs scripts installed by grub-common contain a couple hard-coded references to bpool that are used when scanning for boot entries.

I found this the hard way by using <hostname>-bpool for my boot pool and having grub fail to boot. Thankfully I remembered just enough grub-fu to insert the correct modules, pass manual boot args, and then debug the issue.

It seems like there are at least several potential fixes:

  • Work with the Ubuntu grub maintainers to update that script to support arbitrary boot pool names
  • Document some sort of workaround patch in the docs (such as a sed replacement of bpool in /etc/grub.d/10_linux_zfs; see the sketch below)
  • Update the docs to state that the bpool naming convention is actually necessary.
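
A hypothetical version of that sed workaround (untested sketch; "myhost-bpool" stands in for whatever pool name you chose, and the edit would need re-applying whenever grub-common is upgraded):

sed -i 's/\bbpool\b/myhost-bpool/g' /etc/grub.d/10_linux_zfs   # patch the hard-coded pool name
update-grub                                                    # regenerate grub.cfg using the patched script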

I'd personally prefer the first option, but I think that might also involve updating the guide to run the apt dist-upgrade step prior to the initial grub install. Otherwise debootstrap wouldn't pull in the updated grub-common (I think?).

In any event, I'm happy to help post a PR with doc updates, but would be interested in getting thoughts/feedback on the described situation first. Thanks!

Ubuntu 20.04 on root with "persistent datasets" created at install - `zfs-mount-generator` doesn't run on first boot?

I recently read this article (and the related ones) about zsys, and in it they talk about the notion of "persistent datasets" for zsys. I was inspired to move my Ubuntu server installation to ZFS on root (single disk, no encryption) so that I could play around with zsys, and I wanted to replicate what they call the "server layout", i.e.:

$ zsysctl show --full
[…]
System Datasets:
 - bpool/BOOT/ubuntu_e2wti1
 - rpool/ROOT/ubuntu_e2wti1
 - rpool/ROOT/ubuntu_e2wti1/usr
 - rpool/ROOT/ubuntu_e2wti1/var
 - rpool/ROOT/ubuntu_e2wti1/var/lib/AccountsService
 - rpool/ROOT/ubuntu_e2wti1/var/lib/NetworkManager
 - rpool/ROOT/ubuntu_e2wti1/var/lib/apt
 - rpool/ROOT/ubuntu_e2wti1/var/lib/dpkg
User Datasets:
[…]
Persistent Datasets:
 - rpool/srv
 - rpool/usr/local
 - rpool/var/lib
 - rpool/var/games
 - rpool/var/log
 - rpool/var/mail
 - rpool/var/snap
 - rpool/var/spool
 - rpool/var/www

So, I decided to deviate a little from the instructions and create a layout as close to the above as I could. I ended up with the below datasets:

root@ubuntu:~# zfs list -o name,canmount,mountpoint,com.ubuntu.zsys:bootfs
NAME                                              CANMOUNT  MOUNTPOINT                    COM.UBUNTU.ZSYS:BOOTFS
bpool                                                  off  /mnt/boot                     -
bpool/BOOT                                             off  none                          -
bpool/BOOT/ubuntu_5235te                            noauto  /mnt/boot                     -
rpool                                                  off  /mnt                          -
rpool/ROOT                                             off  none                          -
rpool/ROOT/ubuntu_5235te                            noauto  /mnt                          yes
rpool/ROOT/ubuntu_5235te/srv                            on  /mnt/srv                      yes
rpool/ROOT/ubuntu_5235te/usr                            on  /mnt/usr                      no
rpool/ROOT/ubuntu_5235te/var                            on  /mnt/var                      no
rpool/ROOT/ubuntu_5235te/var/lib                       off  /mnt/var/lib                  no
rpool/ROOT/ubuntu_5235te/var/lib/AccountsService        on  /mnt/var/lib/AccountsService  no
rpool/ROOT/ubuntu_5235te/var/lib/NetworkManager         on  /mnt/var/lib/NetworkManager   no
rpool/ROOT/ubuntu_5235te/var/lib/apt                    on  /mnt/var/lib/apt              no
rpool/ROOT/ubuntu_5235te/var/lib/dpkg                   on  /mnt/var/lib/dpkg             no
rpool/USERDATA                                         off  /mnt                          -
rpool/USERDATA/root_5235te                              on  /mnt/root                     -
rpool/usr                                              off  /mnt/usr                      -
rpool/usr/local                                         on  /mnt/usr/local                -
rpool/var                                              off  /mnt/var                      -
rpool/var/games                                         on  /mnt/var/games                -
rpool/var/lib                                           on  /mnt/var/lib                  -
rpool/var/lib/docker                                    on  /mnt/var/lib/docker           -
rpool/var/log                                           on  /mnt/var/log                  -
rpool/var/mail                                          on  /mnt/var/mail                 -
rpool/var/snap                                          on  /mnt/var/snap                 -
rpool/var/spool                                         on  /mnt/var/spool                -
rpool/var/www                                           on  /mnt/var/www                  -

Based on my understanding of the zfs-mount-generator and zsys, the above configuration would be a valid installation. I followed the rest of the guide to the letter, including setting up the zfs-mount-generator. When I got to the point where I rebooted into the new installation for the first time (i.e. not using the live USB), I got dropped into an EFI options menu from which I could only get to the grub prompt. I couldn't figure out how to get the system to boot. I was so surprised by this that I thought I had made a mistake, and I went through the entire installation process again with the same result.

Thinking about this, the only thing I can come up with is that somehow the zfs-mount-generator is not used on first boot, so things are mounted in the wrong order and the system can't boot. Once I went back and used the exact dataset layout from the guide, things worked fine, so I'm pretty sure it wasn't some other problem with my installation. I've also (after the first boot) moved some datasets around to more closely replicate the above, and it boots fine. So it seems to be a first-boot-only problem. My current dataset layout is below:

$ zfs list -o name,canmount,mountpoint,com.ubuntu.zsys:bootfs | grep pool
NAME                                                                                       CANMOUNT  MOUNTPOINT                    COM.UBUNTU.ZSYS:BOOTFS
bpool                                                                                       off  /boot                                               -
bpool/BOOT                                                                                  off  none                                                -
bpool/BOOT/ubuntu_v49l6x                                                                     on  /boot                                               -
rpool                                                                                       off  /                                                   -
rpool/ROOT                                                                                  off  none                                                -
rpool/ROOT/ubuntu_v49l6x                                                                     on  /                                                   yes
rpool/ROOT/ubuntu_v49l6x/usr                                                                off  /usr                                                no
rpool/ROOT/ubuntu_v49l6x/var                                                                off  /var                                                no
rpool/ROOT/ubuntu_v49l6x/var/lib                                                             on  /var/lib                                            no
rpool/ROOT/ubuntu_v49l6x/var/lib/AccountsService                                             on  /var/lib/AccountsService                            no
rpool/ROOT/ubuntu_v49l6x/var/lib/NetworkManager                                              on  /var/lib/NetworkManager                             no
rpool/ROOT/ubuntu_v49l6x/var/lib/apt                                                         on  /var/lib/apt                                        no
rpool/ROOT/ubuntu_v49l6x/var/lib/dpkg                                                        on  /var/lib/dpkg                                       no
rpool/ROOT/ubuntu_v49l6x/var/log                                                             on  /var/log                                            no
rpool/ROOT/ubuntu_v49l6x/var/spool                                                           on  /var/spool                                          no
rpool/USERDATA                                                                              off  /                                                   -
rpool/USERDATA/michael_20a1ei                                                                on  /home/michael                                       -
rpool/USERDATA/root_v49l6x                                                                   on  /root                                               -
rpool/srv                                                                                    on  /srv                                                no
rpool/usr                                                                                   off  /usr                                                -
rpool/usr/local                                                                              on  /usr/local                                          -
rpool/var                                                                                   off  /var                                                -
rpool/var/games                                                                              on  /var/games                                          -
rpool/var/lib                                                                               off  /var/lib                                            -
rpool/var/lib/docker                                                                         on  /var/lib/docker                                     -
rpool/var/mail                                                                               on  /var/mail                                           -
rpool/var/snap                                                                               on  /var/snap                                           -
rpool/var/www                                                                                on  /var/www                                            -

Am I correct on this? Is there any way to fix that so that you could successfully complete the first boot?

Failing a way to fix that, I think it would be great to have a warning in the installation instructions to keep people like me from shooting themselves in the foot.

Question: grub install for mirror setup (ubuntu)

This is not an issue, just a question about a step in the doc that I cannot google, nor figure out why it is needed, so your help would be very much appreciated. Thank you!

From the doc:
For UEFI booting, install GRUB to the ESP:

grub-install --target=x86_64-efi --efi-directory=/boot/efi \
    --bootloader-id=ubuntu --recheck --no-floppy

For a mirror or raidz topology, run this for the additional disk(s), incrementing the “2” to “3” and so on for both /boot/efi2 and ubuntu-2:

cp -a /boot/efi/EFI /boot/efi2
grub-install --target=x86_64-efi --efi-directory=/boot/efi2 \
    --bootloader-id=ubuntu-2 --recheck --no-floppy

Why the cp -a /boot/efi/EFI /boot/efi2?
I see that grub-install installs all the files into the new EFI directories, and /boot/efi2/EFI/ubuntu2/grub.cfg is correctly identical across the installations, pointing to the right grub path. However, without the cp, grub starts in console mode.

Why?

Thank you

Missing step to populate /boot/efi on mirrored disks in Ubuntu 20.04 guide?

I'm setting up a 2-way mirror. At Step 6 (First Boot), "Install GRUB to additional disks", there's a line to run dpkg-reconfigure grub-efi-amd64 to populate /boot/efi on all drives. When I run it, I'm not given an option to select multiple partitions; it only shows screens for grub options.

I only have a single line for /dev/vda1 in /etc/fstab, so it's not clear to me how grub would know there are multiple partitions to use. Do I need to manually copy the EFI files to the other partition as mentioned in the boot errata? I'm happy to file a PR against the docs, but I'm not sure what exactly, if anything, needs to be changed.
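
For what it's worth, the manual copy from the errata would presumably follow the mirror steps quoted in the previous question (assuming the second ESP is already formatted and mounted at /boot/efi2; adjust device names and the "-2" suffix to your setup):

cp -a /boot/efi/EFI /boot/efi2
grub-install --target=x86_64-efi --efi-directory=/boot/efi2 \
    --bootloader-id=ubuntu-2 --recheck --no-floppy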

Debian Buster Root on ZFS: zed requires /run/lock to populate zfs-list.cache files (which does not exist when mounting /run as tmpfs)

While following the Root on ZFS guide for Debian Buster, I was unable to have zed populate the /etc/zfs/zfs-list.cache/{rpool,bpool} files until I ran mkdir /run/lock.

It seems the fix referenced in #97 (comment) caused this issue (originally reported in openzfs/zfs#9945) to resurface.

@rlaager, would you add mkdir /mnt/run/lock to the steps preceding chroot, or is there a better solution?
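
For reference, a minimal sketch of the workaround before chroot (assuming the new system is mounted at /mnt, as in the guide):

mkdir -p /mnt/run/lock            # create the lock directory zed expects
# or, if /run is to be a tmpfs inside the chroot:
mount -t tmpfs tmpfs /mnt/run
mkdir /mnt/run/lock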

Ubuntu 20.04 installation issue from guide

I know a GitHub issue isn't the right place but I'm hoping that this issue will spur adding some more "if this happens, do this" cases to the documentation. I recently followed the 20.04 install guide and rather than doing a debootstrap, I rsync'd over an existing system. At the reboot at step 6.4 I was prompted with:

(screenshot: zfs-boot-failure prompt)

Ultimately I fixed this with the following:

zpool import -R /root -N 1622245071594617821
zpool import -N 9080151117160437766
zfs mount rpool/ROOT/ubuntu_cgk50j
zfs mount bpool/BOOT/ubuntu_cgk50j
zfs mount -a
exit

For a split second I see /ROOT/ubuntu/cgk50j: No such file or directory after the exit and then the system boots normally.

This unfortunately happens every time I boot and my /etc/zfs/zfs-list.cache/* files have a /root prefix that needs to be stripped before I reboot. I've tried to nuke /etc/zfs/zpool.cache (idea from openzfs/zfs#3918) but that doesn't seem to cure it.
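
For reference, the prefix-stripping I do before rebooting is modelled on the guide's sed for the /mnt prefix (a sketch; /root here is the altroot used in the rescue import above):

sed -Ei "s|/root/?|/|" /etc/zfs/zfs-list.cache/*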

Upgrading Root on ZFS from Ubuntu 18.04 to 20.04?

Looking through the documentation for 20.04, I see that there have been some significant changes from 18.04 -- for example, legacy mount points are no longer necessary.

I have a Ubuntu machine that I set up following the Ubuntu 18.04 Root on ZFS guide, and everything went smoothly when I upgraded Ubuntu to 20.04. Naturally, everything is still set up according to the 18.04 guide. With a lot of trial-and-error, I'm sure I could rework it to mostly look like what you'd get from following the 20.04 guide, but it would be better to have someone who actually knew what they were doing provide some guidance.

Or should I just leave it the way it is?

Ubuntu 20.04 Root on ZFS guide fails into grub prompt

System information

Type Version/Name
Distribution Name Ubuntu
Distribution Version 20.04 LTS
Linux Kernel 5.4.0-42-generic
Architecture x86_64
ZFS Version 0.8.3-1ubuntu12.1
SPL Version 0.8.3-1ubuntu12.1

After following the Ubuntu 20.04 Root on ZFS guide for LUKS, single-disk, without any swap, and BIOS-booting, every result boots into a grub rescue prompt where nothing can be read in the boot pool. I initially tried this for UEFI which had the same result, then repeated the procedure for BIOS 3 more times to try to fix the issue or address any missed instructions.

I will likely try the 18.04 instructions next, which use the bpool service, to see if this remedies the issue on 20.04. Any help is much appreciated.

Here is what I see in the grub prompt:

grub> ls
(hd0) (hd0,gpt5) (hd0,gpt4) (hd0,gpt3) (hd0,gpt1)
grub> ls (hd0,gpt5)/
error: unknown filesystem.
grub> (hd0,gpt4)/
error: unknown filesystem
grub> ls (hd0,gpt3)/
@/ BOOT/
grub> ls (hd0,gpt3)/BOOT/
@/ ubuntu_uLmhPo/
grub> ls (hd0,gpt3)/BOOT/ubuntu_uLmhPo/
@/
grub> ls (hd0,gpt3)/BOOT/ubuntu_uLmhPo/@/
error: compression algorithm 78 not supported
(message repeated 2 more times)
error: compression algorithm 103 not supported
(message repeated 3 more times)
grub> ls (hd0,gpt1)/
grub/
grub> ls (hd0,gpt1)/grub/
i386-pc/ fonts/ grub.cfg grubenv unicode.pf2 gfxblacklist.txt

Raspberry Pi How-To Typo

@rlaager, I have the following issue with the Ubuntu 20.04 Root on ZFS for Raspberry Pi HOWTO: Please note the typo in Step 1 #1 - the xd -d should be xz -d:

curl -O https://cdimage.ubuntu.com/releases/20.04.1/release/ubuntu-20.04.1-preinstalled-server-arm64+raspi.img.xz
xd -d ubuntu-20.04.1-preinstalled-server-arm64+raspi.img.xz

Thanks for this how-to! It worked great after I stopped trying to second-guess it.

Proposed items to add to "Debian Buster Root on ZFS" guide

Hello,
here are some proposals to add to this great guide to cover more common system usages:

If you use mysql/mariadb:
zfs create rpool/var/lib/mysql

If you use postgresql:
zfs create rpool/var/lib/postgresql

If you use virtualization (virt-manager, libvirt, ...):
zfs create -o canmount=off rpool/var/lib/libvirt
zfs create rpool/var/lib/libvirt/images

Install the console-setup package in the chroot environment (for keyboard layout configuration):
apt install console-setup

Best regards, Robert

Nexenta has additional feature flags, include in feature matrix?

While Nexenta is not directly part of the OpenZFS project, there have been code exchanges over time. They use several of the feature flags that originated in OpenZFS, but have also added some of their own. These additional flags, if active, could affect interoperability. Adding Nexenta's feature flags to the matrix would add a bit of noise, but would give a more comprehensive view for people migrating to/from Nexenta products.

The flags are:
com.nexenta:vdev_properties
com.nexenta:cos_properties
com.nexenta:meta_devices
com.nexenta:wbc

Their manual can be found at:
https://github.com/Nexenta/illumos-nexenta/blob/release-5.3/usr/src/man/man5/zpool-features.5

Are there other ZFS feature flags that are unaccounted for?

Ubuntu 20.04 MIRROR Root on ZFS: shouldn't bpool get cloned on restore?

I am a ZFS newbie, so my apologies if I say something stupid, especially when trying to explain what happens.

I followed the instructions for mirror root and it went well. I installed and removed a few different packages and kernels, and I could see the entries in the GRUB history menu. Then I restored to an old state that was still using a kernel I had recently removed, and everything got restored as expected. Then I made some other change on the system and rebooted. I was expecting it to automatically boot into the state I had just been using, but instead it booted into the old one.

After retrying a few times, I discovered that the only way to get back to that state (the one originating after the first restore) is to go to the grub history and pick the last restored point again. Then it boots into the latest state, the kernel is the right one, the newly installed software is there... all OK, BUT it looks like I can't update the GRUB menu at all.

Also, I manually saved a state successfully and was expecting to find it in the grub history menu on reboot, but it was nowhere to be found. Even after a manual update-grub (which succeeded) there was no new entry: the grub menu was frozen in the past.

It looks like while rpool persists the changes in the new state, bpool is effectively read-only, so the GRUB menu never gets updated.

The zsysctl history seems to confirm that: rpool got cloned on restore but bpool didn't.

I suspect that in single-disk installations a read-only bpool is not a problem: since grub is on the EFI partition, it gets updated as usual. But since the mirror installation has GRUB on bpool, shouldn't zsys clone it on restore as it does with rpool?

Did I miss something?

Check for new module params in new PRs of openzfs/zfs

So we can alert contributors that they should update the documentation in openzfs-docs too.

  • think about suppressing this warning, maybe by linking the PR with another PR in the docs
  • warn on param removal? Work on a "deprecation" method in the docs

Table formatting in "ZIO Schedulers.rst"

Due to line-wrapping in the raw table code, spaces are inserted inside of the tunable names, which should each be considered a single, unbreakable word. We might also want to format the tunable names as code.

(screenshot of the rendered table with wrapped tunable names)
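
One possible direction (just a sketch, not a tested patch against that file): switch the affected rows to a Sphinx list-table, which has no fixed-width grid to wrap, and mark the names as inline literals at the same time. Something like:

.. list-table::
   :header-rows: 1

   * - Tunable
     - Default
   * - ``zfs_vdev_async_read_max_active``
     - 3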

Side note: what tool do folks use to edit the ASCII-art tables?

Need to update zfs documentation for CEPH configuration

The "see here" link to http://docs.ceph.com/docs/jewel/rados/configuration/filesystem-recommendations/#not-recommended, after the phrase "The officially recommended workaround" in the https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html#ceph-configuration-ceph-conf "CEPH Configuration (ceph.conf}" section of the OpenZFS document is no longer a working link.

The related CEPH documentation that likely had that material seems to have been updated as well, so I presume that the OpenZFS documentation should be updated correspondingly. However, I don't know enough about either topic to suggest updates myself - sorry.

For the current related CEPH documentation, see https://docs.huihoo.com/ceph/v0.80.5/rados/configuration/filesystem-recommendations/index.html#filesystems

zfs and zfs-testing repos for Fedora 33

Hi everyone,

Can you please add zfs and zfs-testing repos to zfs-release for Fedora release 33? It is out now and ships with kernel 5.8, which is compatible with the latest versions of ZFS on Linux, and I'm convinced that I and other users wouldn't want things to break when upgrading to Fedora 33.

Thanks a lot.

Ubuntu 20.04 Root on ZFS Guide - Mounting boot when rescuing using a Live CD

In the Guide "Ubuntu 20.04 Root on ZFS", section "Rescuing using a Live CD" , mounting boot in the chrooted environment gives an error (because boot is not found in fstab). I think that way of mounting boot is a leftover from the 18.04 guide. I was able to fix the issue by mounting boot outside the chrooted environment (in the same way as root is mounted). I.e.:

    @@ -1180,25 +1180,25 @@
     Mount everything correctly::
     
       zpool export -a
       zpool import -N -R /mnt rpool
       zpool import -N -R /mnt bpool
       zfs load-key -a
       # Replace “UUID” as appropriate; use zfs list to find it:
       zfs mount rpool/ROOT/ubuntu_UUID
    +  zfs mount bpool/BOOT/ubuntu_UUID
       zfs mount -a
     
     If needed, you can chroot into your installed environment::
     
       mount --rbind /dev  /mnt/dev
       mount --rbind /proc /mnt/proc
       mount --rbind /sys  /mnt/sys
       chroot /mnt /bin/bash --login
    -  mount /boot
       mount -a

Ubuntu 20.04 Root on ZFS - No mirrored ESP Partition

@rlaager, I have the following issue with the Ubuntu 20.04 Root on ZFS HOWTO:

Hi,
I am at step 6.5 "dpkg-reconfigure grub-efi-amd64". I can run the command and questions are asked, but there is no way to choose the EFI partitions from a list. I have a mirror setup with two SSDs and each has an EFI partition, a BOOT partition and a ROOT partition.
These are the questions I am asked:
(screenshots of the dpkg-reconfigure questions)

And this is the result:
(screenshot of the result)

Is it possible that this is connected to my /etc/fstab setup?
(screenshot of /etc/fstab)

Thanks for your help.

Debian ZFS on Root ZED Issues

@rlaager

When following the instructions on the wiki to install ZFS on root on Debian Buster, I get hung up at step 5.7, "Fix filesystem mount ordering". /etc/zfs/zfs-list.cache/bpool and /etc/zfs/zfs-list.cache/rpool are always empty for me, even after running zfs set canmount=on bpool/BOOT/debian or zfs set canmount=noauto rpool/ROOT/debian as suggested.

I tried running history_event-zfs-list-cacher.sh directly, but then this complains about a missing /zed-functions.sh even though it is in the same directory.

Do you have any other debugging tips to try to resolve this issue?

This is a Debian Buster 10.7 live install USB with ZFS Event Daemon 0.8.6-1~bpo10+1. I decided to use EFI boot rather than MBR.

Compilation Error

zfs does not compile on the following platform. It was working with zfs 0.8.3. Help appreciated.

CentOS 7.8, zfs 0.8.6

yum install http://download.zfsonlinux.org/epel/zfs-release.el7_9.noarch.rpm
yum install  zfs

CC [M]  /var/lib/dkms/zfs/0.8.6/build/module/zfs/qat.o
/var/lib/dkms/zfs/0.8.6/build/module/zfs/zpl_file.c: In function ‘zpl_aio_read’:
/var/lib/dkms/zfs/0.8.6/build/module/zfs/zpl_file.c:344:2: error: implicit declaration of function ‘generic_segment_checks’ [-Werror=implicit-function-declaration]
  ret = generic_segment_checks(iovp, &nr_segs, &count, VERIFY_WRITE);
  ^
/var/lib/dkms/zfs/0.8.6/build/module/zfs/zpl_file.c: At top level:
/var/lib/dkms/zfs/0.8.6/build/module/zfs/zpl_file.c:530:2: error: #error "Unknown direct IO interface"
 #error "Unknown direct IO interface"
  ^
/var/lib/dkms/zfs/0.8.6/build/module/zfs/zpl_file.c:1026:15: error: ‘zpl_direct_IO’ undeclared here (not in a function)
  .direct_IO = zpl_direct_IO,
               ^
  CC [M]  /var/lib/dkms/zfs/0.8.6/build/module/zfs/qat_compress.o
cc1: some warnings being treated as errors
make[5]: *** [/var/lib/dkms/zfs/0.8.6/build/module/zfs/zpl_file.o] Error

Add rescue docs

  • readonly import
  • failmode=continue (otherwise a typical user will get a panic or hang and conclude that "zfs is gibberish, it's too easy to break a pool and lose data"); this needs to be described well in the basics too
  • rollback transactions
  • import with missing vdevs (rough command sketch of these below)
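
A rough sketch of the commands these items would cover ("tank" is a placeholder pool name; flags should be double-checked against the zpool man pages before they go into the docs):

zpool import -o readonly=on tank    # read-only import
zpool set failmode=continue tank    # return errors instead of hanging the pool on I/O failure
zpool import -F tank                # recovery mode: discard the last few transactions
zpool import -m tank                # import even with missing (log) vdevs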

Fedora ROOT ON ZFS Guide

It would be nice if we could have a Fedora Root on ZFS guide on the wiki.

System information

Type Version/Name
Distribution Name Fedora
Distribution Version 32
Linux Kernel
Architecture
ZFS Version
SPL Version


Debian/Ubuntu Root on ZFS: /root world-readable

The /root dataset is created with default world-readable permissions (755). Debootstrap would create /root with 700 permissions, but it does not change the permissions if it already exists. So I think a chmod 700 /mnt/root should be added to the guides.

Thank you!

@rlaager

bpool/BOOT/ubuntu_UUID not mounted

@rlaager, I have the following issue with the Ubuntu 20.04 Root on ZFS HOWTO:

First of all, thank you for the great HOWTO. I ran into the following issue and I'm not sure what I did wrong. After rebooting, only the bpool/BOOT/ubuntu_UUID/grub dataset is mounted, but not its parent bpool/BOOT/ubuntu_UUID. The canmount option on the bpool/BOOT/ubuntu_UUID dataset is set to noauto, and I can't figure out whether I missed something about where this should get mounted or if there is another problem.

I'd be grateful for any help

unsupported features in default install

@rlaager, I have the following issue with the Debian Buster Root on ZFS HOWTO:

This didn't work straight from the instructions; it made an unbootable system:

cannot import 'rpool': unsupported version or feature
This pool uses the following feature(s) not supported by this system:
com.datto:encryption
org.zfsonlinux:project_quota
com.delphix:spacemap_v2

I ended up having to redo everything, including the boot pool, with
zpool create - .... feature@...=disabled .....

I think this needs a warning/note in the guide
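
Presumably the recreate step would look something like this (a sketch only; the feature short names are my mapping of the GUIDs listed in the error message, and the remaining options follow the guide):

zpool create \
    -o feature@encryption=disabled \
    -o feature@project_quota=disabled \
    -o feature@spacemap_v2=disabled \
    ... rpool ${DISK}-part4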

Add man pages

  • Add man pages
  • Integrate CSS
  • Parse links to existing pages and add hyperlinks on them
  • Differ pages per version/platform
  • Change "Edit on github" for generated pages into something more appropriate than "page not found"
  • Trigger rebuild on man pages changes in /openzfs/zfs
  • Review and fix syntax in man pages

Fail to install ZFS on RH 6

Hi,

I followed these steps:

wget http://download.zfsonlinux.org/epel/zfs-release.el6.noarch.rpm
rpm -i zfs-release.el6.noarch.rpm 

edited /etc/yum.repos.d/zfs.repo to enable kmod

Then:

yum install zfs

But:

# modprobe zfs
FATAL: Module zfs not found.

And I see that the modules got installed in the wrong directory:

# rpm -ql kmod-zfs
/lib/modules/2.6.32-754.29.1.el6.x86_64
/lib/modules/2.6.32-754.29.1.el6.x86_64/extra
/lib/modules/2.6.32-754.29.1.el6.x86_64/extra/zfs

My kernel is

# uname -r
2.6.32-696.23.1.el6.x86_64

What can I do? I cannot upgrade the kernel on that machine; this is the default kernel for RH 6.

Thanks,
Dmitry
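
As a data point, the mismatch above is because kmod-zfs is built against the latest el6 kernel (2.6.32-754) while the running kernel is older (2.6.32-696). One possible direction, if the matching kernel-devel package is still available to you (untested sketch):

yum remove zfs kmod-zfs
# enable the [zfs] (DKMS) section and disable [zfs-kmod] in /etc/yum.repos.d/zfs.repo, then:
yum install kernel-devel-$(uname -r) zfs
modprobe zfs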

Ubuntu 20.04 Root on ZFS: Mistake in how bpool is configured causes boot to get garbled, ends up with unbootable machine

@rlaager

Firstly, thank you SO MUCH for the Ubuntu 20.04 Root on ZFS HOWTO. Using qemu, last month I installed Ubuntu 20.04 root on ZFS on two budget Intel Atom dedicated servers from their rescue boot; they work surprisingly well, considering.

However, two days ago one of the servers suddenly vanished. It took some effort to figure out why, but I narrowed it down to a problem in the current HOWTO. I used the HOWTO from August, so after the current Erratum.

Right now, you say:

zpool create \
    -o ashift=12 -d \
    -o feature@async_destroy=enabled \
    -o feature@bookmarks=enabled \
    -o feature@embedded_data=enabled \
    -o feature@empty_bpobj=enabled \
    -o feature@enabled_txg=enabled \
    -o feature@extensible_dataset=enabled \
    -o feature@filesystem_limits=enabled \
    -o feature@hole_birth=enabled \
    -o feature@large_blocks=enabled \
    -o feature@lz4_compress=enabled \
    -o feature@spacemap_histogram=enabled \
    -o feature@zpool_checkpoint=enabled \
    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
    -O mountpoint=/boot -R /mnt \
    bpool ${DISK}-part3
...
zfs create -o canmount=noauto -o mountpoint=/boot \
    bpool/BOOT/ubuntu_$UUID

Note that two separate datasets both have a mountpoint of /boot.

Now, I'm not sure exactly how it happened, but I believe that on some boot or other of that server, the boot mounting service got confused and didn't mount /boot. The system booted just fine though. However, when unattended-upgrades ran at some point, it called update-grub, which wrote into the root ZFS pool, which has feature flags grub can't parse, and BOOM, bye bye server.

So, firstly, can I suggest not setting the same mountpoint on two datasets?
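
For instance (a sketch only; I don't know whether other parts of the guide depend on the current inheritance), the pool itself could be created with mountpoint=none and only the boot environment dataset given /boot:

zpool create \
    ... (feature and other -O options as above) ... \
    -O mountpoint=none -R /mnt \
    bpool ${DISK}-part3
zfs create -o canmount=noauto -o mountpoint=/boot \
    bpool/BOOT/ubuntu_$UUID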

Secondly, can I suggest that you recommend in the guide that people reboot a few times and make SURE that /boot, /boot/efi, and /boot/efi/grub are all coming up every time?

Finally, thirdly, I got badly caught out on my first install by missing the step which makes space for the MBR grub. May I suggest that you fuse the EFI and MBR partitioning instructions into one set which works fine for both schemes, so it's a single config right up to when you choose to install UEFI or MBR grub, and that's the only difference?

Thanks once again for the instructions, and for taking all that time to write and maintain them. Indeed, if ZFS native encryption in Ubuntu 20.04's ZFS weren't so slow, I'd recommend a lot more hard-coded-key encryption, plus PAM-unlocked home drive encryption, so if your remote server ever dies, you don't leak all your secrets. However, ZFS native encryption is indeed very, very slow in Ubuntu 20.04. It gets much faster if you choose the GCM variant in future ZFS versions.

Ubuntu 20.04 Troubleshooting section should include commands for mounting encrypted rpool

I am a bit of a beginner with ZFS, so apologies for any incorrect terminology.

If you have followed the process for installing an encrypted rpool at installation time by modifying the installer as described in the overview section, you subsequently need to use zfs load-key rpool (or something similar) before mounting it. The troubleshooting section should include the correct command, whatever it is.
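
Based on the "Rescuing using a Live CD" steps quoted in an earlier issue above, the sequence would presumably be something like this (replace "UUID" as appropriate; use zfs list to find it):

zpool import -N -R /mnt rpool
zfs load-key -a        # or: zfs load-key rpool
zfs mount rpool/ROOT/ubuntu_UUID
zfs mount -a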

url fragments don't work after whatever you did to change the documentation urls

This URL, which used to scroll to the "zfs_arc_dnode_limit" paragraph in the wiki, https://github.com/zfsonlinux/zfs/wiki/ZFS-on-Linux-Module-Parameters#zfs_arc_dnode_limit_percent, now shows me a message that the URL has changed; then, when I click the new URL, it takes me to the start of the new page without scrolling to the parameter.

I expect that when something changes internally, that change should not be reflected in the user-facing area (i.e. a URL may change, but the content and behaviour should remain the same - it should still scroll me to the URL fragment "#zfs_arc_dnode_limit_percent").

Invalid vdev specification

Hello,
I'm trying to install Debian on a ZFS pool, so I'm following the Debian Buster steps, but I'm stuck on "Create the boot pool". When I execute the command I get the following error:

invalid vdev specification
use '-f' to override the following errors:
/dev/disk/by-id/nvme-XXXX-part3 contains a filesystem of type 'swap'

According to parted, the filesystem type of partition 3 is indeed swap:

root@debian:~# parted $DISK
GNU Parted 3.2
Using /dev/nvme1n1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: KBG30ZMV256G TOSHIBA (nvme)
Disk /dev/nvme1n1: 256GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 2      1049kB  538MB   537MB   fat32                 boot, esp
 3      538MB   1612MB  1074MB  linux-swap(v1)
 4      1612MB  256GB   254GB

Also, according to the Arch wiki, -o ashift=12 is for disks with a 4096-byte sector size. But my NVMe drive shows 512B, so perhaps I should use -o ashift=9?
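
For reference, a possible way forward (a sketch; wipefs clears the leftover swap signature instead of forcing with -f, and the sysfs path assumes the disk is nvme1n1 as in the parted output above):

wipefs -a ${DISK}-part3                              # remove the old swap signature
cat /sys/block/nvme1n1/queue/physical_block_size     # see what the drive really reports
# ashift=12 (4096) is commonly kept even for NVMe drives that report 512-byte sectors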

Document remote unlock for Debian Buster Root on ZFS

Debian Buster and later should support ZFS decrypt via dropbear-initramfs. This would be helpful to document in the Root on ZFS instructions, as remote unlock is particularly relevant for that use case.

Doc: https://github.com/openzfs/openzfs-docs/blob/master/docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.rst

I'm interested particularly in native ZFS encryption. There is already some documentation for doing this with LUKS keys.
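
A rough sketch of what the documented steps might cover for the native-encryption case (untested; package and path names are for Debian Buster, and the exact unlock command inside the initramfs depends on the zfs-initramfs version):

apt install dropbear-initramfs
# authorize an SSH key for the initramfs environment (path per Debian's dropbear-initramfs package)
echo "ssh-ed25519 AAAA... user@host" >> /etc/dropbear-initramfs/authorized_keys
update-initramfs -u -k all
# after reboot, ssh into the initramfs and load the key, e.g.:
#   zfs load-key -a    (newer zfs-initramfs versions ship a zfsunlock helper instead)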

Fresh Debian 10.7 "converting" to root zpool issues encountered

While "converting" Debian 10.7 fresh installations to zpool (2nd disk to be mirrored later), @rlaager, I have the following issue with the Debian Buster Root on ZFS HOWTO:

My process:
0) Done a "clean" Debian installation (as I would expect from a hosting provider's Dedicated servers)
00) instead of debootstrap, I "just" do a rsync -vaPx / /mnt (repeating for /boot etc. as need be)

  1. It seems that /run needs to be mount --rbind'ed (or at least a tmpfs) before the chroot, else the zed step fails because the /var/lock -> /run/lock symlink is not valid (see the sketch below).

  2. Secondary zpools aren't mounted because of the bpool service; see openzfs/zfs#8549 (comment) for the advice to add lines to preserve the zpool.cache in the bpool service file.
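
A minimal sketch of the extra step for point 1 (assuming the target is mounted at /mnt as in the HOWTO):

mount --rbind /run /mnt/run      # or: mount -t tmpfs tmpfs /mnt/run
mkdir -p /mnt/run/lock           # so the /var/lock -> /run/lock symlink resolves inside the chroot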
