├── docs ├── .nojekyll ├── _static │ ├── img │ │ ├── favicon.ico │ │ ├── raidz_draid.png │ │ ├── logo │ │ │ ├── logo_main.png │ │ │ ├── zof-logo.png │ │ │ ├── 320px-Open-ZFS-Secondary-Logo-Colour-halfsize.png │ │ │ ├── 480px-Open-ZFS-Secondary-Logo-Colour-halfsize.png │ │ │ └── 800px-Open-ZFS-Secondary-Logo-Colour-halfsize.png │ │ └── draid-resilver-hours.png │ ├── js │ │ └── redirect.js │ └── css │ │ ├── theme_overrides.css │ │ └── mandoc.css ├── setup.py ├── Getting Started │ ├── Fedora.rst │ ├── RHEL and CentOS.rst │ ├── Slackware │ │ ├── index.rst │ │ └── Root on ZFS.rst │ ├── Ubuntu │ │ └── index.rst │ ├── index.rst │ ├── Alpine Linux │ │ └── index.rst │ ├── openSUSE │ │ └── index.rst │ ├── Arch Linux │ │ └── index.rst │ ├── Debian │ │ ├── index.rst │ │ └── Debian GNU Linux initrd documentation.rst │ ├── NixOS │ │ ├── index.rst │ │ └── Root on ZFS.rst │ ├── Fedora │ │ └── index.rst │ ├── FreeBSD.rst │ └── RHEL-based distro │ │ └── index.rst ├── msg │ ├── index.rst │ ├── ZFS-8000-14 │ │ └── index.rst │ ├── ZFS-8000-EY │ │ └── index.rst │ ├── ZFS-8000-6X │ │ └── index.rst │ ├── ZFS-8000-HC │ │ └── index.rst │ ├── ZFS-8000-JQ │ │ └── index.rst │ ├── ZFS-8000-A5 │ │ └── index.rst │ ├── ZFS-8000-5E │ │ └── index.rst │ ├── ZFS-8000-3C │ │ └── index.rst │ ├── ZFS-8000-8A │ │ └── index.rst │ ├── ZFS-8000-72 │ │ └── index.rst │ ├── ZFS-8000-4J │ │ └── index.rst │ ├── ZFS-8000-2Q │ │ └── index.rst │ ├── ZFS-8000-K4 │ │ └── index.rst │ └── ZFS-8000-9P │ │ └── index.rst ├── requirements.txt ├── Basic Concepts │ ├── index.rst │ ├── Feature Flags.rst │ ├── RAIDZ.rst │ ├── Troubleshooting.rst │ ├── VDEVs.rst │ └── Checksums.rst ├── Performance and Tuning │ ├── index.rst │ └── ZFS Transaction Delay.rst ├── .gitignore ├── _TableOfContents.rst ├── 404.rst ├── Developer Resources │ ├── index.rst │ ├── Git and GitHub for beginners.rst │ └── Building ZFS.rst ├── Project and Community │ ├── Admin Documentation.rst │ ├── Donate.rst │ ├── index.rst │ ├── Mailing Lists.rst │ ├── Signing Keys.rst │ └── FAQ hole birth.rst ├── Makefile ├── index.rst ├── License.rst └── conf.py ├── Makefile ├── scripts ├── zfs_root_gen_bash.py ├── zfs_root_guide_test.sh └── man_pages.py ├── tox.ini ├── .github └── workflows │ ├── pull_request.yml │ ├── publish.yml │ └── test_zfs_root_guide.yml ├── .readthedocs.yaml └── README.rst /docs/.nojekyll: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | help: 2 | $(MAKE) -C docs help 3 | 4 | %: 5 | $(MAKE) -C docs $@ 6 | -------------------------------------------------------------------------------- /docs/_static/img/favicon.ico: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openzfs/openzfs-docs/HEAD/docs/_static/img/favicon.ico -------------------------------------------------------------------------------- /docs/_static/img/raidz_draid.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openzfs/openzfs-docs/HEAD/docs/_static/img/raidz_draid.png -------------------------------------------------------------------------------- /docs/_static/img/logo/logo_main.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openzfs/openzfs-docs/HEAD/docs/_static/img/logo/logo_main.png 
-------------------------------------------------------------------------------- /docs/_static/img/logo/zof-logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openzfs/openzfs-docs/HEAD/docs/_static/img/logo/zof-logo.png -------------------------------------------------------------------------------- /docs/setup.py: -------------------------------------------------------------------------------- 1 | import setuptools 2 | 3 | 4 | setuptools.setup(setup_requires=["pbr"], pbr=True, include_package_data=True) 5 | -------------------------------------------------------------------------------- /docs/_static/img/draid-resilver-hours.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openzfs/openzfs-docs/HEAD/docs/_static/img/draid-resilver-hours.png -------------------------------------------------------------------------------- /docs/_static/js/redirect.js: -------------------------------------------------------------------------------- 1 | if (location.host == 'openzfs.readthedocs.io') { 2 | window.location.replace("https://openzfs.github.io"); 3 | } 4 | -------------------------------------------------------------------------------- /docs/Getting Started/Fedora.rst: -------------------------------------------------------------------------------- 1 | :orphan: 2 | 3 | Fedora 4 | ======================= 5 | 6 | This page has been moved to `here `__. 7 | 8 | -------------------------------------------------------------------------------- /docs/msg/index.rst: -------------------------------------------------------------------------------- 1 | ZFS Messages 2 | ============ 3 | 4 | .. toctree:: 5 | :maxdepth: 2 6 | :caption: Contents: 7 | :glob: 8 | 9 | ZFS-*/index 10 | -------------------------------------------------------------------------------- /docs/requirements.txt: -------------------------------------------------------------------------------- 1 | sphinx>=8 2 | sphinx-issues 3 | sphinx-notfound-page>=0.5 4 | sphinx-rtd-theme 5 | sphinxext-rediraffe 6 | sphinx_last_updated_by_git 7 | gitpython 8 | -------------------------------------------------------------------------------- /docs/Basic Concepts/index.rst: -------------------------------------------------------------------------------- 1 | Basic Concepts 2 | ============== 3 | 4 | .. toctree:: 5 | :maxdepth: 2 6 | :caption: Contents: 7 | :glob: 8 | 9 | * 10 | -------------------------------------------------------------------------------- /docs/Performance and Tuning/index.rst: -------------------------------------------------------------------------------- 1 | Performance and Tuning 2 | ====================== 3 | 4 | .. 
toctree:: 5 | :maxdepth: 2 6 | :caption: Contents: 7 | :glob: 8 | 9 | * 10 | -------------------------------------------------------------------------------- /docs/_static/img/logo/320px-Open-ZFS-Secondary-Logo-Colour-halfsize.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openzfs/openzfs-docs/HEAD/docs/_static/img/logo/320px-Open-ZFS-Secondary-Logo-Colour-halfsize.png -------------------------------------------------------------------------------- /docs/_static/img/logo/480px-Open-ZFS-Secondary-Logo-Colour-halfsize.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openzfs/openzfs-docs/HEAD/docs/_static/img/logo/480px-Open-ZFS-Secondary-Logo-Colour-halfsize.png -------------------------------------------------------------------------------- /docs/_static/img/logo/800px-Open-ZFS-Secondary-Logo-Colour-halfsize.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openzfs/openzfs-docs/HEAD/docs/_static/img/logo/800px-Open-ZFS-Secondary-Logo-Colour-halfsize.png -------------------------------------------------------------------------------- /docs/Getting Started/RHEL and CentOS.rst: -------------------------------------------------------------------------------- 1 | :orphan: 2 | 3 | RHEL and CentOS 4 | ======================= 5 | 6 | This page has been moved to `RHEL-based distro `__. 7 | -------------------------------------------------------------------------------- /docs/.gitignore: -------------------------------------------------------------------------------- 1 | # sphinx build folder 2 | _build 3 | 4 | # OS generated files # 5 | ###################### 6 | .DS_Store 7 | ehthumbs.db 8 | Icon? 9 | Thumbs.db 10 | 11 | # OpenZFS generated folders 12 | man 13 | -------------------------------------------------------------------------------- /docs/_TableOfContents.rst: -------------------------------------------------------------------------------- 1 | .. toctree:: 2 | :maxdepth: 2 3 | :glob: 4 | 5 | Getting Started/index 6 | Project and Community/index 7 | Developer Resources/index 8 | Performance and Tuning/index 9 | Basic Concepts/index 10 | man/index 11 | msg/index 12 | License 13 | -------------------------------------------------------------------------------- /docs/404.rst: -------------------------------------------------------------------------------- 1 | :orphan: 2 | 3 | 404 Page not found. 4 | =================== 5 | 6 | Please use the left menu or the search box to find the page you are looking for. 7 | 8 | .. raw:: html 9 | 10 | 18 | -------------------------------------------------------------------------------- /docs/Developer Resources/index.rst: -------------------------------------------------------------------------------- 1 | Developer Resources 2 | =================== 3 | 4 | .. toctree:: 5 | :maxdepth: 2 6 | :caption: Contents: 7 | :glob: 8 | 9 | Custom Packages 10 | Building ZFS 11 | OpenZFS Documentation 12 | Git and GitHub for beginners 13 | -------------------------------------------------------------------------------- /scripts/zfs_root_gen_bash.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # 3 | # Copyright 2023 Maurice Zhou 4 | # 5 | # Released without warranty under the terms of the 6 | # Apache License 2.0. 
7 | 8 | import pylit 9 | 10 | pylit.defaults.code_block_markers['shell'] = '::' 11 | pylit.defaults.text_extensions = [".rst"] 12 | 13 | pylit.main() 14 | -------------------------------------------------------------------------------- /docs/Project and Community/Admin Documentation.rst: -------------------------------------------------------------------------------- 1 | Admin Documentation 2 | =================== 3 | 4 | - `Aaron Toponce's ZFS on Linux User 5 | Guide `__ 6 | - `OpenZFS System 7 | Administration `__ 8 | - `Oracle Solaris ZFS Administration 9 | Guide `__ 10 | -------------------------------------------------------------------------------- /docs/Project and Community/Donate.rst: -------------------------------------------------------------------------------- 1 | Donate 2 | ====== 3 | 4 | OpenZFS operates under the umbrella of 5 | `Software In The Public Interest (SPI) `_. 6 | SPI is a 501(c)(3) non-profit organization that provides financial, legal, and 7 | organizational services for open source software projects. OpenZFS accepts 8 | donations via SPI at our 9 | `donation page `_. We thank you for 10 | your support! 11 | -------------------------------------------------------------------------------- /docs/_static/css/theme_overrides.css: -------------------------------------------------------------------------------- 1 | /* override table width restrictions */ 2 | @media screen and (min-width: 767px) { 3 | 4 | .wy-table-responsive table td { 5 | /* !important prevents the common CSS stylesheets from overriding 6 | this as on RTD they are loaded after this stylesheet */ 7 | white-space: normal !important; 8 | } 9 | 10 | .wy-table-responsive { 11 | overflow: visible !important; 12 | } 13 | 14 | .wy-nav-content { 15 | max-width: 1500px !important; 16 | } 17 | } 18 | -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | [tox] 2 | envlist = py3,pep8 3 | skipdist = True 4 | skip_install = True 5 | 6 | [testenv] 7 | basepython = python3 8 | skipdist = True 9 | skip_install = True 10 | deps = pytest 11 | commands = pytest {posargs} 12 | 13 | [testenv:pep8] 14 | deps = flake8 15 | commands = 16 | flake8 17 | 18 | [testenv:develop] 19 | usedevelop = true 20 | basepython = python3 21 | deps = -r{toxinidir}/docs/requirements.txt 22 | esbonio>=0.12.0 23 | commands = {posargs} 24 | 25 | [testenv:esbonio] 26 | usedevelop = true 27 | basepython = python3 28 | deps = esbonio>=0.12.0 29 | commands = {posargs} 30 | -------------------------------------------------------------------------------- /docs/Getting Started/Slackware/index.rst: -------------------------------------------------------------------------------- 1 | .. highlight:: sh 2 | 3 | Slackware 4 | ========= 5 | 6 | .. contents:: Table of Contents 7 | :local: 8 | 9 | Installation 10 | ------------ 11 | 12 | In order to build and install the kernel modules and userspace tools, use the 13 | openzfs SlackBuild script (for 15.0, it's at https://slackbuilds.org/repository/15.0/system/openzfs/). No special options are required. 14 | 15 | 16 | Root on ZFS 17 | ----------- 18 | 19 | ZFS can be used as the root file system for Slackware. 20 | An installation guide is available here: 21 | 22 | .. 
toctree:: 23 | :maxdepth: 1 24 | :glob: 25 | 26 | *Root on ZFS 27 | -------------------------------------------------------------------------------- /.github/workflows/pull_request.yml: -------------------------------------------------------------------------------- 1 | name: Pull Request Docs Check 2 | 3 | on: [pull_request] 4 | 5 | jobs: 6 | build: 7 | runs-on: ubuntu-latest 8 | steps: 9 | - uses: actions/checkout@v4 10 | with: 11 | fetch-depth: 0 12 | - name: Prepare 13 | run: | 14 | sudo apt-get update -y 15 | sudo apt-get install -y git python3-pip mandoc 16 | pip install -r docs/requirements.txt 17 | - name: Gen_man_pages 18 | run: make man 19 | - name: Gen_feature_matrix 20 | run: make feature_matrix 21 | - name: Gen_sphinx 22 | run: make html 23 | - uses: actions/upload-artifact@v4 24 | with: 25 | name: DocumentationHTML 26 | path: docs/_build/html/ 27 | -------------------------------------------------------------------------------- /docs/Getting Started/Ubuntu/index.rst: -------------------------------------------------------------------------------- 1 | Ubuntu 2 | ====== 3 | 4 | .. contents:: Table of Contents 5 | :local: 6 | 7 | Installation 8 | ------------ 9 | 10 | .. note:: 11 | If you want to use ZFS as your root filesystem, see the 12 | `Root on ZFS`_ links below instead. 13 | 14 | On Ubuntu, ZFS is included in the default Linux kernel packages. 15 | To install the ZFS utilities, first make sure ``universe`` is enabled in 16 | ``/etc/apt/sources.list``:: 17 | 18 | deb http://archive.ubuntu.com/ubuntu main universe 19 | 20 | Then install ``zfsutils-linux``:: 21 | 22 | apt update 23 | apt install zfsutils-linux 24 | 25 | Root on ZFS 26 | ----------- 27 | .. toctree:: 28 | :maxdepth: 1 29 | :glob: 30 | 31 | * 32 | -------------------------------------------------------------------------------- /docs/Getting Started/index.rst: -------------------------------------------------------------------------------- 1 | Getting Started 2 | =============== 3 | 4 | To get started with OpenZFS refer to the provided documentation for your 5 | distribution. It will cover the recommended installation method and any 6 | distribution specific information. First time OpenZFS users are 7 | encouraged to check out Aaron Toponce's `excellent 8 | documentation `__. 9 | 10 | .. 
toctree:: 11 | :maxdepth: 3 12 | :glob: 13 | 14 | Alpine Linux/index 15 | Arch Linux/index 16 | Debian/index 17 | Fedora/index 18 | FreeBSD 19 | Gentoo 20 | NixOS/index 21 | openSUSE/index 22 | RHEL-based distro/index 23 | Slackware/index 24 | Ubuntu/index 25 | -------------------------------------------------------------------------------- /.readthedocs.yaml: -------------------------------------------------------------------------------- 1 | # Read the Docs configuration file for Sphinx projects 2 | # See https://docs.readthedocs.io/en/stable/config-file/v2.html for details 3 | 4 | # Required 5 | version: 2 6 | 7 | # Set the OS, Python version and other tools we might need 8 | build: 9 | os: ubuntu-24.04 10 | tools: 11 | python: "3.12" 12 | 13 | # Build documentation in the "docs/" directory with Sphinx 14 | sphinx: 15 | configuration: docs/conf.py 16 | # Fail on all warnings to avoid broken references 17 | # fail_on_warning: true 18 | 19 | # Optionally build docs in additional formats such as PDF and ePub 20 | # formats: 21 | # - pdf 22 | # - epub 23 | 24 | # Optional but recommended, declare the Python requirements needed 25 | # to build the documentation 26 | # See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html 27 | python: 28 | install: 29 | - requirements: docs/requirements.txt 30 | -------------------------------------------------------------------------------- /docs/Getting Started/Alpine Linux/index.rst: -------------------------------------------------------------------------------- 1 | Alpine Linux 2 | ============ 3 | 4 | Contents 5 | -------- 6 | .. toctree:: 7 | :maxdepth: 1 8 | :glob: 9 | 10 | * 11 | 12 | Installation 13 | ------------ 14 | 15 | Note: this is for installing ZFS on an existing Alpine 16 | installation. To use ZFS as the root file system, 17 | see below. 18 | 19 | #. Install ZFS package:: 20 | 21 | apk add zfs zfs-lts 22 | 23 | #. Load kernel module:: 24 | 25 | modprobe zfs 26 | 27 | Automatic zpool importing and mount 28 | ----------------------------------- 29 | 30 | To avoid needing to manually import and mount zpools 31 | after the system boots, be sure to enable the 32 | related services. 33 | 34 | #. Import pools on boot:: 35 | 36 | rc-update add zfs-import default 37 | 38 | #. Mount pools on boot:: 39 | 40 | rc-update add zfs-mount default 41 | 42 | Root on ZFS 43 | ----------- 44 | .. toctree:: 45 | :maxdepth: 1 46 | :glob: 47 | 48 | * 49 | -------------------------------------------------------------------------------- /docs/Getting Started/openSUSE/index.rst: -------------------------------------------------------------------------------- 1 | .. highlight:: sh 2 | 3 | openSUSE 4 | ======== 5 | 6 | .. contents:: Table of Contents 7 | :local: 8 | 9 | Installation 10 | ------------ 11 | 12 | If you want to use ZFS as your root filesystem, see the `Root on ZFS`_ 13 | links below instead. 14 | 15 | ZFS packages are not included in the official openSUSE repositories, but the `filesystems project of openSUSE 16 | `__ 17 | provides such packages, including OpenZFS. 18 | 19 | openSUSE has three main distribution branches, called Tumbleweed, Leap, and SLE. ZFS packages are available for all three. 20 | 21 | 22 | External Links 23 | -------------- 24 | 25 | * `openSUSE OpenZFS page `__ 26 | 27 | Root on ZFS 28 | ----------- 29 | .. 
toctree:: 30 | :maxdepth: 1 31 | :glob: 32 | 33 | *Root on ZFS 34 | 35 | 36 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | .. image:: docs/_static/img/logo/480px-Open-ZFS-Secondary-Logo-Colour-halfsize.png 2 | .. highlight:: sh 3 | 4 | OpenZFS Documentation 5 | ===================== 6 | 7 | Public link: https://openzfs.github.io/openzfs-docs/ 8 | 9 | Building Locally 10 | ---------------- 11 | 12 | Install Prerequisites 13 | ~~~~~~~~~~~~~~~~~~~~~ 14 | 15 | The dependencies are available via pip:: 16 | 17 | # For Debian based distros 18 | sudo apt install python3-pip 19 | # For RPM-based distros 20 | sudo yum install python3-pip 21 | # For openSUSE 22 | sudo zypper in python3-pip 23 | 24 | pip3 install -r docs/requirements.txt 25 | # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: 26 | PATH=$HOME/.local/bin:$PATH 27 | 28 | Or, you can use tox:: 29 | 30 | tox -e develop 31 | . .tox/develop/bin/activate 32 | 33 | Build 34 | ~~~~~ 35 | 36 | :: 37 | 38 | cd docs 39 | make html 40 | # HTML files will be generated in: _build/html 41 | -------------------------------------------------------------------------------- /.github/workflows/publish.yml: -------------------------------------------------------------------------------- 1 | name: Publish Github Pages 2 | 3 | on: 4 | push: 5 | branches: 6 | - master 7 | workflow_dispatch: 8 | schedule: 9 | # every night, we have external data (man pages, feature matrix, etc) 10 | - cron: "5 3 * * *" 11 | 12 | jobs: 13 | deploy: 14 | runs-on: ubuntu-latest 15 | steps: 16 | - uses: actions/checkout@v4 17 | with: 18 | fetch-depth: 0 19 | - name: Prepare 20 | run: | 21 | sudo apt-get update -y 22 | sudo apt-get install -y git python3-pip mandoc 23 | pip install -r docs/requirements.txt 24 | - name: Gen_man_pages 25 | run: make man 26 | - name: Gen_feature_matrix 27 | run: make feature_matrix 28 | - name: Gen_sphinx 29 | run: make html 30 | - name: Deploy 31 | uses: peaceiris/actions-gh-pages@v3 32 | with: 33 | github_token: ${{ secrets.GITHUB_TOKEN }} 34 | publish_dir: ./docs/_build/html 35 | force_orphan: true 36 | -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Minimal makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | SOURCEDIR = . 8 | BUILDDIR = _build 9 | 10 | # Put it first so that "make" without argument is like "make help". 11 | help: 12 | @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 13 | 14 | .PHONY: help Makefile 15 | 16 | html: Makefile 17 | @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 18 | 19 | # Catch-all target: route all unknown targets to Sphinx using the new 20 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 21 | %: Makefile 22 | @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 23 | 24 | # Gen man pages 25 | ZFS_GIT_REPO = https://github.com/openzfs/zfs.git 26 | ZFS_GIT_DIR = ./_build/zfs 27 | 28 | .PHONY: man 29 | man: 30 | if [ ! 
-d $(ZFS_GIT_DIR) ]; then \ 31 | git clone $(ZFS_GIT_REPO) ./_build/zfs ; \ 32 | else \ 33 | git -C $(ZFS_GIT_DIR) fetch --all && git -C $(ZFS_GIT_DIR) fetch --tags --force ; \ 34 | fi 35 | ../scripts/man_pages.py ./ 36 | 37 | .PHONY: feature_matrix 38 | feature_matrix: 39 | ../scripts/compatibility_matrix.py ./_build/ 40 | -------------------------------------------------------------------------------- /docs/index.rst: -------------------------------------------------------------------------------- 1 | OpenZFS Documentation 2 | ===================== 3 | 4 | Welcome to the OpenZFS Documentation. This resource provides documentation for 5 | users and developers working with (or contributing to) the OpenZFS 6 | project. New users or system administrators should refer to the 7 | documentation for their favorite platform to get started. 8 | 9 | +----------------------+----------------------+----------------------+ 10 | | :doc:`Getting Started| :doc:`Project and | :doc:`Developer | 11 | | <./Getting | Community <./Project | Resources ` | and Community/index>`| Resources/index>` | 13 | +======================+======================+======================+ 14 | | How to get started | About the project | Technical | 15 | | with OpenZFS on your | and how to | documentation | 16 | | favorite platform | contribute | discussing the | 17 | | | | OpenZFS | 18 | | | | implementation | 19 | +----------------------+----------------------+----------------------+ 20 | 21 | 22 | Table of Contents: 23 | ------------------ 24 | .. include:: _TableOfContents.rst 25 | -------------------------------------------------------------------------------- /docs/Project and Community/index.rst: -------------------------------------------------------------------------------- 1 | Project and Community 2 | ===================== 3 | 4 | OpenZFS is storage software which combines the functionality of 5 | traditional filesystems, volume manager, and more. OpenZFS includes 6 | protection against data corruption, support for high storage capacities, 7 | efficient data compression, snapshots and copy-on-write clones, 8 | continuous integrity checking and automatic repair, remote replication 9 | with ZFS send and receive, and RAID-Z. 10 | 11 | OpenZFS brings together developers from the illumos, Linux, FreeBSD and 12 | OS X platforms, and a wide range of companies -- both online and at the 13 | annual OpenZFS Developer Summit. High-level goals of the project include 14 | raising awareness of the quality, utility and availability of 15 | open-source implementations of ZFS, encouraging open communication about 16 | ongoing efforts toward improving open-source variants of ZFS, and 17 | ensuring consistent reliability, functionality and performance of all 18 | distributions of ZFS. 19 | 20 | .. toctree:: 21 | :maxdepth: 2 22 | :caption: Contents: 23 | :glob: 24 | 25 | Admin Documentation 26 | Donate 27 | FAQ 28 | Mailing Lists 29 | Signing Keys 30 | Issue Tracker 31 | Releases 32 | Roadmap 33 | -------------------------------------------------------------------------------- /docs/Getting Started/Arch Linux/index.rst: -------------------------------------------------------------------------------- 1 | .. highlight:: sh 2 | 3 | Arch Linux 4 | ============ 5 | 6 | Contents 7 | -------- 8 | .. toctree:: 9 | :maxdepth: 1 10 | :glob: 11 | 12 | * 13 | 14 | Support 15 | ------- 16 | Reach out to the community using the :ref:`mailing_lists` or IRC at 17 | `#zfsonlinux `__ on `Libera Chat 18 | `__. 
19 | 20 | If you have a bug report or feature request 21 | related to this HOWTO, please `file a new issue and mention @ne9z 22 | `__. 23 | 24 | Overview 25 | -------- 26 | Due to license incompatibility, 27 | ZFS is not available in the official Arch Linux repositories. 28 | 29 | ZFS support is provided by the third-party `archzfs repo `__. 30 | 31 | Installation 32 | ------------ 33 | 34 | See `Archlinux Wiki `__. 35 | 36 | Root on ZFS 37 | ----------- 38 | ZFS can be used as the root file system for Arch Linux. 39 | An installation guide is available: 40 | 41 | .. toctree:: 42 | :maxdepth: 1 43 | :glob: 44 | 45 | * 46 | 47 | Contribute 48 | ---------- 49 | #. Fork and clone `this repo `__. 50 | 51 | #. Install the tools:: 52 | 53 | sudo pacman -S --needed python-pip make 54 | 55 | pip3 install -r docs/requirements.txt 56 | 57 | # Add ~/.local/bin to your "${PATH}", e.g. by adding this to ~/.bashrc: 58 | [ -d "${HOME}"/.local/bin ] && export PATH="${HOME}"/.local/bin:"${PATH}" 59 | 60 | #. Make your changes. 61 | 62 | #. Test:: 63 | 64 | cd docs 65 | make html 66 | sensible-browser _build/html/index.html 67 | 68 | #. ``git commit --signoff`` to a branch, ``git push``, and create a pull 69 | request. Mention @ne9z. 70 | -------------------------------------------------------------------------------- /docs/Basic Concepts/Feature Flags.rst: -------------------------------------------------------------------------------- 1 | Feature Flags 2 | ============= 3 | 4 | ZFS on-disk formats were originally versioned with a single number, 5 | which increased whenever the format changed. The numbered approach was 6 | suitable when development of ZFS was driven by a single organisation. 7 | 8 | For distributed development of OpenZFS, version numbering was 9 | unsuitable. Any change to the number would have required agreement, 10 | across all implementations, of each change to the on-disk format. 11 | 12 | OpenZFS feature flags – an alternative to traditional version numbering 13 | – allow **a uniquely named pool property for each change to the on-disk 14 | format**. This approach supports: 15 | 16 | - format changes that are independent 17 | - format changes that depend on each other. 18 | 19 | Compatibility 20 | ------------- 21 | 22 | Where all *features* that are used by a pool are supported by multiple 23 | implementations of OpenZFS, the on-disk format is portable across those 24 | implementations. 25 | 26 | Features that are exclusive when enabled should be periodically ported 27 | to all distributions. 28 | 29 | Reference materials 30 | ------------------- 31 | 32 | `ZFS Feature Flags `_ 33 | (Christopher Siden, 2012-01, in the Internet 34 | Archive Wayback Machine) in particular: "… Legacy version numbers still 35 | exist for pool versions 1-28 …". 36 | 37 | `zpool-features(7) man page <../man/7/zpool-features.7.html>`_ - OpenZFS 38 | 39 | `zpool-features `__ (5) – illumos 40 | 41 | Feature flags implementation per OS 42 | ----------------------------------- 43 | 44 | .. raw:: html 45 | 46 |
47 | 48 | .. raw:: html 49 | :file: ../_build/zfs_feature_matrix.html 50 | 51 | .. raw:: html 52 | 53 |
54 | -------------------------------------------------------------------------------- /docs/Getting Started/Debian/index.rst: -------------------------------------------------------------------------------- 1 | .. highlight:: sh 2 | 3 | Debian 4 | ====== 5 | 6 | .. contents:: Table of Contents 7 | :local: 8 | 9 | Installation 10 | ------------ 11 | 12 | If you want to use ZFS as your root filesystem, see the `Root on ZFS`_ 13 | links below instead. 14 | 15 | ZFS packages are included in the `contrib repository 16 | `__. The 17 | `backports repository `__ 18 | often provides newer releases of ZFS. You can use it as follows. 19 | 20 | Add the backports repository:: 21 | 22 | vi /etc/apt/sources.list.d/trixie-backports.list 23 | 24 | .. code-block:: sourceslist 25 | 26 | deb http://deb.debian.org/debian trixie-backports main contrib non-free-firmware 27 | deb-src http://deb.debian.org/debian trixie-backports main contrib non-free-firmware 28 | 29 | :: 30 | 31 | vi /etc/apt/preferences.d/90_zfs 32 | 33 | .. code-block:: control 34 | 35 | Package: src:zfs-linux 36 | Pin: release n=trixie-backports 37 | Pin-Priority: 990 38 | 39 | Install the packages:: 40 | 41 | apt update 42 | apt install dpkg-dev linux-headers-generic linux-image-generic 43 | apt install zfs-dkms zfsutils-linux 44 | 45 | **Caution**: If you are in a poorly configured environment (e.g. certain VM or container consoles), when apt attempts to pop up a message on first install, it may fail to notice that a real console is unavailable, and instead appear to hang indefinitely. To circumvent this, you can prefix the ``apt install`` commands with ``DEBIAN_FRONTEND=noninteractive``, like this:: 46 | 47 | DEBIAN_FRONTEND=noninteractive apt install zfs-dkms zfsutils-linux 48 | 49 | Root on ZFS 50 | ----------- 51 | .. toctree:: 52 | :maxdepth: 1 53 | :glob: 54 | 55 | *Root on ZFS 56 | 57 | Related topics 58 | -------------- 59 | .. toctree:: 60 | :maxdepth: 1 61 | 62 | Debian GNU Linux initrd documentation 63 | -------------------------------------------------------------------------------- /docs/License.rst: -------------------------------------------------------------------------------- 1 | License 2 | ======= 3 | 4 | - The OpenZFS software is licensed under the Common Development and Distribution License 5 | (`CDDL `__) unless otherwise noted. 6 | 7 | - The OpenZFS documentation content is licensed under a Creative Commons Attribution-ShareAlike 8 | license (`CC BY-SA 3.0 `__) 9 | unless otherwise noted. 10 | 11 | - OpenZFS is an associated project of SPI (`Software in the Public Interest 12 | `__). SPI is a 501(c)(3) nonprofit 13 | organization which handles the donations, finances, and legal holdings of the project. 14 | 15 | .. note:: 16 | The Linux Kernel is licensed under the GNU General Public License 17 | Version 2 (`GPLv2 `__). While 18 | both licenses (the CDDL and the GPLv2) are free open source licenses, they are 19 | restrictive licenses. The combination of them causes problems because it 20 | prevents using pieces of code exclusively available under one license 21 | with pieces of code exclusively available under the other in the same binary. 22 | In the case of the Linux Kernel, this prevents us from distributing OpenZFS 23 | as part of the Linux Kernel binary. However, there is nothing in either license 24 | that prevents distributing it in the form of a binary module or in the form 25 | of source code. 
26 | 27 | Additional reading and opinions: 28 | 29 | - `Software Freedom Law 30 | Center `__ 31 | - `Software Freedom 32 | Conservancy `__ 33 | - `Free Software 34 | Foundation `__ 35 | - `Encouraging closed source 36 | modules `__ 37 | 38 | CC BY-SA 3.0: |Creative Commons License| 39 | 40 | .. |Creative Commons License| image:: https://i.creativecommons.org/l/by-sa/3.0/88x31.png 41 | :target: http://creativecommons.org/licenses/by-sa/3.0/ 42 | -------------------------------------------------------------------------------- /docs/Getting Started/NixOS/index.rst: -------------------------------------------------------------------------------- 1 | .. highlight:: sh 2 | 3 | NixOS 4 | ===== 5 | 6 | Contents 7 | -------- 8 | .. toctree:: 9 | :maxdepth: 1 10 | :glob: 11 | 12 | * 13 | 14 | Support 15 | ------- 16 | Reach out to the community using the :ref:`mailing_lists` or IRC at 17 | `#zfsonlinux `__ on `Libera Chat 18 | `__. 19 | 20 | If you have a bug report or feature request 21 | related to this HOWTO, please `file a new issue and mention @ne9z 22 | `__. 23 | 24 | Installation 25 | ------------ 26 | 27 | Note: this is for installing ZFS on an existing 28 | NixOS installation. To use ZFS as the root file system, 29 | see below. 30 | 31 | The NixOS live image ships with ZFS support by default. 32 | 33 | Note that you need to apply these settings even if you don't need 34 | to boot from ZFS. The kernel module 'zfs.ko' will not be available 35 | to modprobe until you make these changes and reboot. 36 | 37 | #. Edit ``/etc/nixos/configuration.nix`` and add the following 38 | options:: 39 | 40 | boot.supportedFilesystems = [ "zfs" ]; 41 | boot.zfs.forceImportRoot = false; 42 | networking.hostId = "yourHostId"; 43 | 44 | The hostId can be generated with:: 45 | 46 | head -c4 /dev/urandom | od -A none -t x4 47 | 48 | #. Apply configuration changes:: 49 | 50 | nixos-rebuild boot 51 | 52 | #. Reboot:: 53 | 54 | reboot 55 | 56 | Root on ZFS 57 | ----------- 58 | .. toctree:: 59 | :maxdepth: 1 60 | :glob: 61 | 62 | * 63 | 64 | Contribute 65 | ---------- 66 | 67 | You can contribute to this documentation. Fork this repo, edit the 68 | documentation, then open a pull request. 69 | 70 | #. To test your changes locally, use the devShell in this repo:: 71 | 72 | git clone https://github.com/ne9z/nixos-live openzfs-docs-dev 73 | cd openzfs-docs-dev 74 | nix develop ./openzfs-docs-dev/#docs 75 | 76 | #. Inside the openzfs-docs repo, build pages:: 77 | 78 | make html 79 | 80 | #. Look for errors and warnings in the make output. If there are no 81 | errors:: 82 | 83 | xdg-open _build/html/index.html 84 | 85 | #. ``git commit --signoff`` to a branch, ``git push``, and create a 86 | pull request. Mention @ne9z. 87 | -------------------------------------------------------------------------------- /docs/Project and Community/Mailing Lists.rst: -------------------------------------------------------------------------------- 1 | .. 
_mailing_lists: 2 | 3 | Mailing Lists 4 | ============= 5 | 6 | +----------------------+----------------------+----------------------+ 7 | |              | Description | List Archive | 8 | |             List     | | | 9 | |                      | | | 10 | +======================+======================+======================+ 11 | | `zfs-announce\ | A low-traffic list | `archive | 12 | | @list.zfsonlinux.\ | for announcements | `__ | 15 | | ups/zfs-announce>`__ | | | 16 | +----------------------+----------------------+----------------------+ 17 | | `zfs-discuss\ | A user discussion | `archive | 18 | | @list.zfsonlinux\ | list for issues | `__ | 21 | | oups/zfs-discuss>`__ | usability | | 22 | +----------------------+----------------------+----------------------+ 23 | | `zfs-\ | A development list | `archive | 24 | | devel@list.zfsonlin\ | for developers to | `__ | 27 | | groups/zfs-devel>`__ | | | 28 | +----------------------+----------------------+----------------------+ 29 | | `devel\ | A | `archive `__ | 32 | | iki/Mailing_list>`__ | developers to review | | 33 | | | ZFS code and | | 34 | | | architecture changes | | 35 | | | from all platforms | | 36 | +----------------------+----------------------+----------------------+ 37 | -------------------------------------------------------------------------------- /docs/Project and Community/Signing Keys.rst: -------------------------------------------------------------------------------- 1 | Signing Keys 2 | ============ 3 | 4 | All tagged ZFS on Linux 5 | `releases `__ are signed by 6 | the official maintainer for that branch. These signatures are 7 | automatically verified by GitHub and can be checked locally by 8 | downloading the maintainers public key. 9 | 10 | Maintainers 11 | ----------- 12 | 13 | Release branch (spl/zfs-\*-release) 14 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 15 | 16 | | **Maintainer:** `Ned Bass `__ 17 | | **Download:** 18 | `pgp.mit.edu `__ 19 | | **Key ID:** C77B9667 20 | | **Fingerprint:** 29D5 610E AE29 41E3 55A2 FE8A B974 67AA C77B 9667 21 | 22 | | **Maintainer:** `Tony Hutter `__ 23 | | **Download:** 24 | `pgp.mit.edu `__ 25 | | **Key ID:** D4598027 26 | | **Fingerprint:** 4F3B A9AB 6D1F 8D68 3DC2 DFB5 6AD8 60EE D459 8027 27 | 28 | Master branch (master) 29 | ~~~~~~~~~~~~~~~~~~~~~~ 30 | 31 | | **Maintainer:** `Brian Behlendorf `__ 32 | | **Download:** 33 | `pgp.mit.edu `__ 34 | | **Key ID:** C6AF658B 35 | | **Fingerprint:** C33D F142 657E D1F7 C328 A296 0AB9 E991 C6AF 658B 36 | 37 | Checking the Signature of a Git Tag 38 | ----------------------------------- 39 | 40 | First import the public key listed above in to your key ring. 41 | 42 | :: 43 | 44 | $ gpg --keyserver pgp.mit.edu --recv C6AF658B 45 | gpg: requesting key C6AF658B from hkp server pgp.mit.edu 46 | gpg: key C6AF658B: "Brian Behlendorf " not changed 47 | gpg: Total number processed: 1 48 | gpg: unchanged: 1 49 | 50 | After the public key is imported the signature of a git tag can be 51 | verified as shown. 
52 | 53 | :: 54 | 55 | $ git tag --verify zfs-0.6.5 56 | object 7a27ad00ae142b38d4aef8cc0af7a72b4c0e44fe 57 | type commit 58 | tag zfs-0.6.5 59 | tagger Brian Behlendorf 1441996302 -0700 60 | 61 | ZFS Version 0.6.5 62 | gpg: Signature made Fri 11 Sep 2015 11:31:42 AM PDT using DSA key ID C6AF658B 63 | gpg: Good signature from "Brian Behlendorf " 64 | gpg: aka "Brian Behlendorf (LLNL) " 65 | -------------------------------------------------------------------------------- /docs/Getting Started/Fedora/index.rst: -------------------------------------------------------------------------------- 1 | Fedora 2 | ====== 3 | 4 | Contents 5 | -------- 6 | .. toctree:: 7 | :maxdepth: 1 8 | :glob: 9 | 10 | * 11 | 12 | Installation 13 | ------------ 14 | 15 | Note: this is for installing ZFS on an existing Fedora 16 | installation. To use ZFS as the root file system, 17 | see below. 18 | 19 | #. If ``zfs-fuse`` from the official Fedora repo is installed, 20 | remove it first. It is not maintained and should not be used 21 | under any circumstance:: 22 | 23 | rpm -e --nodeps zfs-fuse 24 | 25 | #. Add ZFS repo:: 26 | 27 | dnf install -y https://zfsonlinux.org/fedora/zfs-release-3-0$(rpm --eval "%{dist}").noarch.rpm 28 | 29 | A list of old zfs-release RPMs is available `here `__. 30 | 31 | #. Install kernel headers:: 32 | 33 | dnf install -y kernel-devel-$(uname -r | awk -F'-' '{print $1}') 34 | 35 | The ``kernel-devel`` package must be installed before the ``zfs`` package. 36 | 37 | #. Install ZFS packages:: 38 | 39 | dnf install -y zfs 40 | 41 | #. Load kernel module:: 42 | 43 | modprobe zfs 44 | 45 | If the kernel module cannot be loaded, your kernel version 46 | might not yet be supported by OpenZFS. 47 | 48 | One option is to install an LTS kernel from COPR, provided by a third party. 49 | Use it at your own risk:: 50 | 51 | # this is a third-party repo! 52 | # you have been warned. 53 | # 54 | # select a kernel from 55 | # https://copr.fedorainfracloud.org/coprs/kwizart/ 56 | 57 | dnf copr enable -y kwizart/kernel-longterm-VERSION 58 | dnf install -y kernel-longterm kernel-longterm-devel 59 | 60 | Reboot into the new LTS kernel, then load the kernel module:: 61 | 62 | modprobe zfs 63 | 64 | #. By default ZFS kernel modules are loaded upon detecting a pool. 65 | To always load the modules at boot:: 66 | 67 | echo zfs > /etc/modules-load.d/zfs.conf 68 | 69 | #. By default ZFS may be removed by kernel package updates. 70 | To prevent this, lock the kernel to versions supported by ZFS:: 71 | 72 | echo 'zfs' > /etc/dnf/protected.d/zfs.conf 73 | 74 | Pending non-kernel updates can still be applied:: 75 | 76 | dnf update --exclude=kernel* 77 | 78 | Testing Repo 79 | -------------------- 80 | 81 | The testing repository, which is disabled by default, contains 82 | the latest version of OpenZFS, which is under active development. 83 | These packages 84 | **should not** be used on production systems. 85 | 86 | :: 87 | 88 | dnf config-manager --enable zfs-testing 89 | dnf install zfs 90 | 91 | Root on ZFS 92 | ----------- 93 | .. 
toctree:: 94 | :maxdepth: 1 95 | :glob: 96 | 97 | * 98 | -------------------------------------------------------------------------------- /.github/workflows/test_zfs_root_guide.yml: -------------------------------------------------------------------------------- 1 | name: "Test installation guides" 2 | on: 3 | push: 4 | branches: 5 | - master 6 | paths: 7 | - 'docs/Getting Started/NixOS/Root on ZFS.rst' 8 | - 'docs/Getting Started/RHEL-based distro/Root on ZFS.rst' 9 | - 'docs/Getting Started/Alpine Linux/Root on ZFS.rst' 10 | - 'docs/Getting Started/Arch Linux/Root on ZFS.rst' 11 | - 'docs/Getting Started/Fedora/Root on ZFS.rst' 12 | - 'docs/Getting Started/zfs_root_maintenance.rst' 13 | 14 | pull_request: 15 | paths: 16 | - 'docs/Getting Started/NixOS/Root on ZFS.rst' 17 | - 'docs/Getting Started/RHEL-based distro/Root on ZFS.rst' 18 | - 'docs/Getting Started/Alpine Linux/Root on ZFS.rst' 19 | - 'docs/Getting Started/Arch Linux/Root on ZFS.rst' 20 | - 'docs/Getting Started/Fedora/Root on ZFS.rst' 21 | - 'docs/Getting Started/zfs_root_maintenance.rst' 22 | 23 | workflow_dispatch: 24 | 25 | jobs: 26 | build: 27 | runs-on: ubuntu-latest 28 | steps: 29 | - uses: actions/checkout@v4 30 | - name: Install shellcheck 31 | run: | 32 | sudo apt install --yes shellcheck 33 | - name: Run shellcheck on test entry point 34 | run: | 35 | sh -n ./scripts/zfs_root_guide_test.sh 36 | shellcheck --check-sourced --enable=all --shell=dash --severity=style --format=tty \ 37 | ./scripts/zfs_root_guide_test.sh 38 | - name: Install pylit 39 | run: | 40 | set -vexuf 41 | sudo apt-get update -y 42 | sudo apt-get install -y python3-pip 43 | sudo pip install pylit 44 | - name: Install ZFS and partitioning tools 45 | run: | 46 | set -vexuf 47 | sudo add-apt-repository --yes universe 48 | sudo apt install --yes zfsutils-linux 49 | sudo apt install --yes qemu-utils 50 | sudo modprobe zfs 51 | sudo apt install --yes git jq parted 52 | sudo apt install --yes whois curl 53 | sudo apt install --yes arch-install-scripts 54 | - uses: cachix/install-nix-action@v20 55 | with: 56 | nix_path: nixpkgs=channel:nixos-unstable 57 | - name: Test NixOS guide 58 | run: | 59 | sudo PATH="${PATH}" NIX_PATH="${NIX_PATH}" ./scripts/zfs_root_guide_test.sh nixos 60 | - name: Test Alpine Linux guide 61 | run: | 62 | sudo ./scripts/zfs_root_guide_test.sh alpine 63 | - name: Test Arch Linux guide 64 | run: | 65 | sudo ./scripts/zfs_root_guide_test.sh archlinux 66 | - name: Test Fedora guide 67 | run: | 68 | sudo ./scripts/zfs_root_guide_test.sh fedora 69 | - name: Test RHEL guide 70 | run: | 71 | sudo ./scripts/zfs_root_guide_test.sh rhel 72 | - uses: actions/upload-artifact@v4 73 | with: 74 | name: installation-scripts 75 | path: | 76 | *.sh 77 | -------------------------------------------------------------------------------- /docs/Project and Community/FAQ hole birth.rst: -------------------------------------------------------------------------------- 1 | :orphan: 2 | 3 | FAQ Hole birth 4 | ============== 5 | 6 | Short explanation 7 | ~~~~~~~~~~~~~~~~~ 8 | 9 | The hole_birth feature has/had bugs, the result of which is that, if you 10 | do a ``zfs send -i`` (or ``-R``, since it uses ``-i``) from an affected 11 | dataset, the receiver will not see any checksum or other errors, but the 12 | resulting destination snapshot will not match the source. 13 | 14 | ZoL versions 0.6.5.8 and 0.7.0-rc1 (and above) default to ignoring the 15 | faulty metadata which causes this issue *on the sender side*. 
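On Linux, you can confirm whether this workaround is active by inspecting the ZFS module parameter (a minimal check, assuming OpenZFS/ZoL 0.7 or later, where the tunable is exposed under ``/sys``)::

    # 1 means holes are always sent regardless of recorded birth_time
    cat /sys/module/zfs/parameters/send_holes_without_birth_time

    # enable it explicitly; affects subsequent zfs send runs
    echo 1 > /sys/module/zfs/parameters/send_holes_without_birth_time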
16 | 17 | FAQ 18 | ~~~ 19 | 20 | I have a pool with hole_birth enabled, how do I know if I am affected? 21 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 22 | 23 | It is technically possible to calculate whether you have any affected 24 | files, but it requires scraping zdb output for each file in each 25 | snapshot in each dataset, which is a combinatoric nightmare. (If you 26 | really want it, there is a proof of concept 27 | `here `__.) 28 | 29 | Is there any less painful way to fix this if we have already received an affected snapshot? 30 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 31 | 32 | No, the data you need was simply not present in the send stream, 33 | unfortunately, and cannot feasibly be rewritten in place. 34 | 35 | Long explanation 36 | ~~~~~~~~~~~~~~~~ 37 | 38 | hole_birth is a feature to speed up ZFS send -i - in particular, ZFS 39 | used to not store metadata on when "holes" (sparse regions) in files 40 | were created, so every zfs send -i needed to include every hole. 41 | 42 | hole_birth, as the name implies, added tracking for the txg (transaction 43 | group) when a hole was created, so that zfs send -i could only send 44 | holes that had a birth_time between (starting snapshot txg) and (ending 45 | snapshot txg), and life was wonderful. 46 | 47 | Unfortunately, hole_birth had a number of edge cases where it could 48 | "forget" to set the birth_time of holes, causing it to 49 | record the birth_time as 0 (the value used prior to hole_birth, and 50 | essentially equivalent to "since file creation"). 51 | 52 | This meant that, when you did a zfs send -i, since zfs send does not 53 | have any knowledge of the surrounding snapshots when sending a given 54 | snapshot, it would see the creation txg as 0, conclude "oh, it is 0, I 55 | must have already sent this before", and not include it. 56 | 57 | This means that, on the receiving side, it does not know those holes 58 | should exist, and does not create them. This leads to differences 59 | between the source and the destination. 60 | 61 | ZoL versions 0.6.5.8 and 0.7.0-rc1 (and above) default to ignoring this 62 | metadata and always sending holes with birth_time 0, configurable using 63 | the tunable known as ``ignore_hole_birth`` or 64 | ``send_holes_without_birth_time``. The latter is what OpenZFS 65 | standardized on. ZoL version 0.6.5.8 only has the former, but for any 66 | ZoL version with ``send_holes_without_birth_time``, they point to the 67 | same value, so changing either will work. 68 | -------------------------------------------------------------------------------- /docs/msg/ZFS-8000-14/index.rst: -------------------------------------------------------------------------------- 1 | .. 2 | CDDL HEADER START 3 | 4 | The contents of this file are subject to the terms of the 5 | Common Development and Distribution License (the "License"). 6 | You may not use this file except in compliance with the License. 7 | 8 | You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE 9 | or http://www.opensolaris.org/os/licensing. 10 | See the License for the specific language governing permissions 11 | and limitations under the License. 12 | 13 | When distributing Covered Code, include this CDDL HEADER in each 14 | file and include the License file at usr/src/OPENSOLARIS.LICENSE. 
15 | If applicable, add the following below this CDDL HEADER, with the 16 | fields enclosed by brackets "[]" replaced with your own identifying 17 | information: Portions Copyright [yyyy] [name of copyright owner] 18 | 19 | CDDL HEADER END 20 | 21 | Portions Copyright 2007 Sun Microsystems, Inc. 22 | 23 | .. highlight:: none 24 | 25 | Message ID: ZFS-8000-14 26 | ======================= 27 | 28 | Corrupt ZFS cache 29 | ----------------- 30 | 31 | +-------------------------+--------------------------------------+ 32 | | **Type:** | Error | 33 | +-------------------------+--------------------------------------+ 34 | | **Severity:** | Critical | 35 | +-------------------------+--------------------------------------+ 36 | | **Description:** | The ZFS cache file is corrupted. | 37 | +-------------------------+--------------------------------------+ 38 | | **Automated Response:** | No automated response will be taken. | 39 | +-------------------------+--------------------------------------+ 40 | | **Impact:** | ZFS filesystems are not available. | 41 | +-------------------------+--------------------------------------+ 42 | 43 | .. rubric:: Suggested Action for System Administrator 44 | 45 | ZFS keeps a list of active pools on the filesystem to avoid having to 46 | scan all devices when the system is booted. If this file is corrupted, 47 | then normally active pools will not be automatically opened. The pools 48 | can be recovered using the ``zpool import`` command: 49 | 50 | :: 51 | 52 | # zpool import 53 | pool: test 54 | id: 12743384782310107047 55 | state: ONLINE 56 | action: The pool can be imported using its name or numeric identifier. 57 | config: 58 | 59 | test ONLINE 60 | sda9 ONLINE 61 | 62 | This will automatically scan ``/dev`` for any devices part of a pool. 63 | If devices have been made available in an alternate location, use the 64 | ``-d`` option to ``zpool import`` to search for devices in a different 65 | directory. 66 | 67 | Once you have determined which pools are available for import, you 68 | can import the pool explicitly by specifying the name or numeric 69 | identifier: 70 | 71 | :: 72 | 73 | # zpool import test 74 | 75 | Alternately, you can import all available pools by specifying the ``-a`` 76 | option. Once a pool has been imported, the ZFS cache will be repaired 77 | so that the pool will appear normally in the future. 78 | 79 | .. rubric:: Details 80 | 81 | The Message ID: ``ZFS-8000-14`` indicates a corrupted ZFS cache file. 82 | Take the documented action to resolve the problem. 83 | -------------------------------------------------------------------------------- /docs/msg/ZFS-8000-EY/index.rst: -------------------------------------------------------------------------------- 1 | .. 2 | CDDL HEADER START 3 | 4 | The contents of this file are subject to the terms of the 5 | Common Development and Distribution License (the "License"). 6 | You may not use this file except in compliance with the License. 7 | 8 | You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE 9 | or http://www.opensolaris.org/os/licensing. 10 | See the License for the specific language governing permissions 11 | and limitations under the License. 12 | 13 | When distributing Covered Code, include this CDDL HEADER in each 14 | file and include the License file at usr/src/OPENSOLARIS.LICENSE. 
15 | If applicable, add the following below this CDDL HEADER, with the 16 | fields enclosed by brackets "[]" replaced with your own identifying 17 | information: Portions Copyright [yyyy] [name of copyright owner] 18 | 19 | CDDL HEADER END 20 | 21 | Portions Copyright 2007 Sun Microsystems, Inc. 22 | 23 | .. highlight:: none 24 | 25 | Message ID: ZFS-8000-EY 26 | ======================= 27 | 28 | ZFS label hostid mismatch 29 | ------------------------- 30 | 31 | +-------------------------+---------------------------------------------------+ 32 | | **Type:** | Error | 33 | +-------------------------+---------------------------------------------------+ 34 | | **Severity:** | Major | 35 | +-------------------------+---------------------------------------------------+ 36 | | **Description:** | The ZFS pool was last accessed by another system. | 37 | +-------------------------+---------------------------------------------------+ 38 | | **Automated Response:** | No automated response will be taken. | 39 | +-------------------------+---------------------------------------------------+ 40 | | **Impact:** | ZFS filesystems are not available. | 41 | +-------------------------+---------------------------------------------------+ 42 | 43 | .. rubric:: Suggested Action for System Administrator 44 | 45 | The pool has been written to from another host, and was not cleanly 46 | exported from the other system. Actively importing a pool on multiple 47 | systems will corrupt the pool and leave it in an unrecoverable state. 48 | To determine which system last accessed the pool, run the ``zpool 49 | import`` command: 50 | 51 | :: 52 | 53 | # zpool import 54 | pool: test 55 | id: 14702934086626715962 56 | state: ONLINE 57 | status: The pool was last accessed by another system. 58 | action: The pool can be imported using its name or numeric identifier and 59 | the '-f' flag. 60 | see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY 61 | config: 62 | 63 | test ONLINE 64 | c0t0d0 ONLINE 65 | 66 | # zpool import test 67 | cannot import 'test': pool may be in use from other system, it was last 68 | accessed by 'tank' (hostid: 0x1435718c) on Fri Mar 9 15:42:47 2007 69 | use '-f' to import anyway 70 | 71 | If you are certain that the pool is not being actively accessed by 72 | another system, then you can use the ``-f`` option to ``zpool import`` to 73 | forcibly import the pool. 74 | 75 | .. rubric:: Details 76 | 77 | The Message ID: ``ZFS-8000-EY`` indicates that the pool cannot be 78 | imported as it was last accessed by another system. Take the 79 | documented action to resolve the problem. 80 | -------------------------------------------------------------------------------- /docs/msg/ZFS-8000-6X/index.rst: -------------------------------------------------------------------------------- 1 | .. 2 | CDDL HEADER START 3 | 4 | The contents of this file are subject to the terms of the 5 | Common Development and Distribution License (the "License"). 6 | You may not use this file except in compliance with the License. 7 | 8 | You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE 9 | or http://www.opensolaris.org/os/licensing. 10 | See the License for the specific language governing permissions 11 | and limitations under the License. 12 | 13 | When distributing Covered Code, include this CDDL HEADER in each 14 | file and include the License file at usr/src/OPENSOLARIS.LICENSE. 
15 | If applicable, add the following below this CDDL HEADER, with the 16 | fields enclosed by brackets "[]" replaced with your own identifying 17 | information: Portions Copyright [yyyy] [name of copyright owner] 18 | 19 | CDDL HEADER END 20 | 21 | Portions Copyright 2007 Sun Microsystems, Inc. 22 | 23 | .. highlight:: none 24 | 25 | Message ID: ZFS-8000-6X 26 | ======================= 27 | 28 | Missing top level device 29 | ------------------------ 30 | 31 | +-------------------------+--------------------------------------------+ 32 | | **Type:** | Error | 33 | +-------------------------+--------------------------------------------+ 34 | | **Severity:** | Critical | 35 | +-------------------------+--------------------------------------------+ 36 | | **Description:** | One or more top level devices are missing. | 37 | +-------------------------+--------------------------------------------+ 38 | | **Automated Response:** | No automated response will be taken. | 39 | +-------------------------+--------------------------------------------+ 40 | | **Impact:** | The pool cannot be imported. | 41 | +-------------------------+--------------------------------------------+ 42 | 43 | .. rubric:: Suggested Action for System Administrator 44 | 45 | Run ``zpool import`` to list which pool cannot be imported: 46 | 47 | :: 48 | 49 | # zpool import 50 | pool: test 51 | id: 13783646421373024673 52 | state: FAULTED 53 | status: One or more devices are missing from the system. 54 | action: The pool cannot be imported. Attach the missing devices and try again. 55 | see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-6X 56 | config: 57 | 58 | test FAULTED missing device 59 | c0t0d0 ONLINE 60 | 61 | Additional devices are known to be part of this pool, though their 62 | exact configuration cannot be determined. 63 | 64 | ZFS attempts to store enough configuration data on the devices such 65 | that the configuration is recoverable from any subset of devices. In 66 | some cases, particularly when an entire toplevel virtual device is 67 | not attached to the system, ZFS will be unable to determine the 68 | complete configuration. It will always detect that these devices are 69 | missing, even if it cannot identify all of the devices. 70 | 71 | The pool cannot be imported until the unknown missing device is 72 | attached to the system. If the device has been made available in an 73 | alternate location, use the ``-d`` option to ``zpool import`` to search 74 | for devices in a different directory. If the missing device is 75 | unavailable, then the pool cannot be imported. 76 | 77 | .. rubric:: Details 78 | 79 | The Message ID: ``ZFS-8000-6X`` indicates one or more top level 80 | devices are missing from the configuration. 81 | -------------------------------------------------------------------------------- /docs/msg/ZFS-8000-HC/index.rst: -------------------------------------------------------------------------------- 1 | .. 2 | CDDL HEADER START 3 | 4 | The contents of this file are subject to the terms of the 5 | Common Development and Distribution License (the "License"). 6 | You may not use this file except in compliance with the License. 7 | 8 | You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE 9 | or http://www.opensolaris.org/os/licensing. 10 | See the License for the specific language governing permissions 11 | and limitations under the License. 
12 | 13 | When distributing Covered Code, include this CDDL HEADER in each 14 | file and include the License file at usr/src/OPENSOLARIS.LICENSE. 15 | If applicable, add the following below this CDDL HEADER, with the 16 | fields enclosed by brackets "[]" replaced with your own identifying 17 | information: Portions Copyright [yyyy] [name of copyright owner] 18 | 19 | CDDL HEADER END 20 | 21 | Portions Copyright 2007 Sun Microsystems, Inc. 22 | 23 | .. highlight:: none 24 | 25 | Message ID: ZFS-8000-HC 26 | ======================= 27 | 28 | ZFS pool I/O failures 29 | --------------------- 30 | 31 | +-------------------------+-----------------------------------------+ 32 | | **Type:** | Error | 33 | +-------------------------+-----------------------------------------+ 34 | | **Severity:** | Major | 35 | +-------------------------+-----------------------------------------+ 36 | | **Description:** | The ZFS pool has experienced currently | 37 | | | unrecoverable I/O failures. | 38 | +-------------------------+-----------------------------------------+ 39 | | **Automated Response:** | No automated response will be taken. | 40 | +-------------------------+-----------------------------------------+ 41 | | **Impact:** | Read and write I/Os cannot be serviced. | 42 | +-------------------------+-----------------------------------------+ 43 | 44 | .. rubric:: Suggested Action for System Administrator 45 | 46 | The pool has experienced I/O failures. Since the ZFS pool property 47 | ``failmode`` is set to 'wait', all I/Os (reads and writes) are blocked. 48 | See the zpoolprops(8) manpage for more information on the ``failmode`` 49 | property. Manual intervention is required for I/Os to be serviced. 50 | 51 | You can see which devices are affected by running ``zpool status -x``: 52 | 53 | :: 54 | 55 | # zpool status -x 56 | pool: test 57 | state: FAULTED 58 | status: There are I/O failures. 59 | action: Make sure the affected devices are connected, then run 'zpool clear'. 60 | see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC 61 | scrub: none requested 62 | config: 63 | 64 | NAME STATE READ WRITE CKSUM 65 | test FAULTED 0 13 0 insufficient replicas 66 | c0t0d0 FAULTED 0 7 0 experienced I/O failures 67 | c0t1d0 ONLINE 0 0 0 68 | 69 | errors: 1 data errors, use '-v' for a list 70 | 71 | After you have made sure the affected devices are connected, run ``zpool 72 | clear`` to allow I/O to the pool again: 73 | 74 | :: 75 | 76 | # zpool clear test 77 | 78 | If I/O failures continue to happen, then applications and commands for the pool 79 | may hang. At this point, a reboot may be necessary to allow I/O to the pool 80 | again. 81 | 82 | .. rubric:: Details 83 | 84 | The Message ID: ``ZFS-8000-HC`` indicates that the pool has experienced I/O 85 | failures. Take the documented action to resolve the problem. 86 | -------------------------------------------------------------------------------- /docs/msg/ZFS-8000-JQ/index.rst: -------------------------------------------------------------------------------- 1 | .. 2 | CDDL HEADER START 3 | 4 | The contents of this file are subject to the terms of the 5 | Common Development and Distribution License (the "License"). 6 | You may not use this file except in compliance with the License. 7 | 8 | You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE 9 | or http://www.opensolaris.org/os/licensing. 10 | See the License for the specific language governing permissions 11 | and limitations under the License. 
12 |
13 | When distributing Covered Code, include this CDDL HEADER in each
14 | file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 | If applicable, add the following below this CDDL HEADER, with the
16 | fields enclosed by brackets "[]" replaced with your own identifying
17 | information: Portions Copyright [yyyy] [name of copyright owner]
18 |
19 | CDDL HEADER END
20 |
21 | Portions Copyright 2007 Sun Microsystems, Inc.
22 |
23 | .. highlight:: none
24 |
25 | Message ID: ZFS-8000-JQ
26 | =======================
27 |
28 | ZFS pool I/O failures
29 | ---------------------
30 |
31 | +-------------------------+----------------------------------------+
32 | | **Type:**               | Error                                  |
33 | +-------------------------+----------------------------------------+
34 | | **Severity:**           | Major                                  |
35 | +-------------------------+----------------------------------------+
36 | | **Description:**        | The ZFS pool has experienced currently |
37 | |                         | unrecoverable I/O failures.            |
38 | +-------------------------+----------------------------------------+
39 | | **Automated Response:** | No automated response will be taken.   |
40 | +-------------------------+----------------------------------------+
41 | | **Impact:**             | Write I/Os cannot be serviced.         |
42 | +-------------------------+----------------------------------------+
43 |
44 | .. rubric:: Suggested Action for System Administrator
45 |
46 | The pool has experienced I/O failures. Since the ZFS pool property
47 | ``failmode`` is set to 'continue', read I/Os will continue to be
48 | serviced, but write I/Os are blocked. See the zpoolprops(8) manpage for
49 | more information on the ``failmode`` property. Manual intervention is
50 | required for write I/Os to be serviced. You can see which devices are
51 | affected by running ``zpool status -x``:
52 |
53 | ::
54 |
55 |      # zpool status -x
56 |        pool: test
57 |       state: FAULTED
58 |      status: There are I/O failures.
59 |      action: Make sure the affected devices are connected, then run 'zpool clear'.
60 |         see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-JQ
61 |       scrub: none requested
62 |      config:
63 |
64 |              NAME        STATE     READ WRITE CKSUM
65 |              test        FAULTED      0    13     0  insufficient replicas
66 |                sda9      FAULTED      0     7     0  experienced I/O failures
67 |                sdb9      ONLINE       0     0     0
68 |
69 |      errors: 1 data errors, use '-v' for a list
70 |
71 | After you have made sure the affected devices are connected, run
72 | ``zpool clear`` to allow write I/O to the pool again:
73 |
74 | ::
75 |
76 |      # zpool clear test
77 |
78 | If I/O failures continue to happen, then applications and commands
79 | for the pool may hang. At this point, a reboot may be necessary to
80 | allow I/O to the pool again.
81 |
82 | .. rubric:: Details
83 |
84 | The Message ID: ``ZFS-8000-JQ`` indicates that the pool has
85 | experienced I/O failures. Take the documented action to resolve the
86 | problem.
87 | -------------------------------------------------------------------------------- /docs/Basic Concepts/RAIDZ.rst: -------------------------------------------------------------------------------- 1 | RAIDZ
2 | =====
3 |
4 | Introduction
5 | ~~~~~~~~~~~~
6 |
7 | RAIDZ is a variation on RAID-5 that allows for better distribution of parity
8 | and eliminates the RAID-5 “write hole” (in which data and parity become
9 | inconsistent after a power loss).
10 | Data and parity are striped across all disks within a raidz group.
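For illustration, a raidz group is created by passing the desired vdev type and the member disks to ``zpool create``. This is only a sketch: the pool name and disk names below are placeholders, and the available parity levels are described in the next paragraph.

::

   # zpool create tank raidz1 sda sdb sdc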
11 |
12 | A raidz group can have single, double, or triple parity, meaning that the raidz
13 | group can sustain one, two, or three failures, respectively, without losing any
14 | data. The ``raidz1`` vdev type specifies a single-parity raidz group; the ``raidz2``
15 | vdev type specifies a double-parity raidz group; and the ``raidz3`` vdev type
16 | specifies a triple-parity raidz group. The ``raidz`` vdev type is an alias for
17 | ``raidz1``.
18 |
19 | A raidz group of N disks of size X with P parity disks can hold
20 | approximately (N-P)*X bytes and can withstand P devices failing without
21 | losing data. The minimum number of devices in a raidz group is one more
22 | than the number of parity disks. The recommended number is between 3 and 9
23 | to help increase performance.
24 |
25 |
26 | Space efficiency
27 | ~~~~~~~~~~~~~~~~
28 |
29 | The actual space used by a block in RAIDZ depends on several factors:
30 |
31 | - the minimal write size is the disk sector size (which can be set via the ``ashift`` vdev parameter)
32 |
33 | - stripe width in RAIDZ is dynamic: a stripe holds at least one part of a data block, and at most
34 |   ``disk count`` minus ``parity count`` parts of a data block
35 |
36 | - one block of data of size ``recordsize`` is
37 |   split equally into ``sector size`` parts
38 |   and written across the stripes of the RAIDZ vdev
39 | - each stripe of data holds a part of the block
40 |
41 | - in addition to the data, one, two, or three parts of parity are written per stripe,
42 |   one per disk; so, for a raidz2 of 5 disks, a full stripe has 3 parts of data and
43 |   2 parts of parity
44 |
45 | Due to these factors, if ``recordsize`` is less than or equal to the sector size,
46 | then RAIDZ's parity overhead is effectively equal to that of a mirror with the same redundancy.
47 | For example, for a raidz1 of 3 disks with ``ashift=12`` and ``recordsize=4K``,
48 | we will allocate on disk:
49 |
50 | - one 4K block of data
51 |
52 | - one 4K parity block
53 |
54 | and the usable space ratio will be 50%, the same as with a two-way mirror.
55 |
56 |
57 | Another example, for ``ashift=12`` and ``recordsize=128K`` on a raidz1 of 3 disks:
58 |
59 | - the total stripe width is 3
60 |
61 | - one stripe can have up to 2 data parts of 4K size because of the 1 parity block
62 |
63 | - we will have 128K/8K = 16 stripes, each with 8K of data and 4K of parity
64 |
65 | - 16 stripes of 12K each means we write 192K to store 128K of data
66 |
67 | so the usable space ratio in this case will be 66%.
68 |
69 |
70 | The more disks a RAIDZ vdev has, the wider the stripe and the greater the space
71 | efficiency.
72 |
73 | You can find the actual parity cost per RAIDZ size here:
74 |
75 | .. raw:: html
76 |
77 |
78 |
79 | (`source `__)
80 |
81 |
82 | Performance considerations
83 | ~~~~~~~~~~~~~~~~~~~~~~~~~~
84 |
85 | Write
86 | ^^^^^
87 |
88 | A stripe spans all drives in the array. Writing one block writes a part of the stripe onto each disk.
89 | In the worst case, a RAIDZ vdev therefore has the write IOPS of the slowest disk in the array, because the write of every stripe part must complete on each disk.
90 | -------------------------------------------------------------------------------- /docs/msg/ZFS-8000-A5/index.rst: -------------------------------------------------------------------------------- 1 | ..
2 | CDDL HEADER START
3 |
4 | The contents of this file are subject to the terms of the
5 | Common Development and Distribution License (the "License").
6 | You may not use this file except in compliance with the License.
7 | 8 | You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE 9 | or http://www.opensolaris.org/os/licensing. 10 | See the License for the specific language governing permissions 11 | and limitations under the License. 12 | 13 | When distributing Covered Code, include this CDDL HEADER in each 14 | file and include the License file at usr/src/OPENSOLARIS.LICENSE. 15 | If applicable, add the following below this CDDL HEADER, with the 16 | fields enclosed by brackets "[]" replaced with your own identifying 17 | information: Portions Copyright [yyyy] [name of copyright owner] 18 | 19 | CDDL HEADER END 20 | 21 | Portions Copyright 2007 Sun Microsystems, Inc. 22 | 23 | .. highlight:: none 24 | 25 | Message ID: ZFS-8000-A5 26 | ======================= 27 | 28 | Incompatible version 29 | -------------------- 30 | 31 | +-------------------------+------------------------------------------------+ 32 | | **Type:** | Error | 33 | +-------------------------+------------------------------------------------+ 34 | | **Severity:** | Major | 35 | +-------------------------+------------------------------------------------+ 36 | | **Description:** | The on-disk version is not compatible with the | 37 | | | running system. | 38 | +-------------------------+------------------------------------------------+ 39 | | **Automated Response:** | No automated response will occur. | 40 | +-------------------------+------------------------------------------------+ 41 | | **Impact:** | The pool is unavailable. | 42 | +-------------------------+------------------------------------------------+ 43 | 44 | .. rubric:: Suggested Action for System Administrator 45 | 46 | If this error is seen during ``zpool import``, see the section below. 47 | Otherwise, run ``zpool status -x`` to determine which pool is faulted: 48 | 49 | :: 50 | 51 | # zpool status -x 52 | pool: test 53 | state: FAULTED 54 | status: The ZFS version for the pool is incompatible with the software running 55 | on this system. 56 | action: Destroy and re-create the pool. 57 | scrub: none requested 58 | config: 59 | 60 | NAME STATE READ WRITE CKSUM 61 | test FAULTED 0 0 0 incompatible version 62 | mirror ONLINE 0 0 0 63 | sda9 ONLINE 0 0 0 64 | sdb9 ONLINE 0 0 0 65 | 66 | errors: No known errors 67 | 68 | The pool cannot be used on this system. Either move the storage to 69 | the system where the pool was originally created, upgrade the current 70 | system software to a more recent version, or destroy the pool and 71 | re-create it from backup. 72 | 73 | If this error is seen during import, the pool cannot be imported on 74 | the current system. The disks must be attached to the system which 75 | originally created the pool, and imported there. 76 | 77 | The list of currently supported versions can be displayed using 78 | ``zpool upgrade -v``. 79 | 80 | .. rubric:: Details 81 | 82 | The Message ID: ``ZFS-8000-A5`` indicates a version mismatch exists 83 | between the running system and the on-disk data. 84 | -------------------------------------------------------------------------------- /docs/msg/ZFS-8000-5E/index.rst: -------------------------------------------------------------------------------- 1 | .. 2 | CDDL HEADER START 3 | 4 | The contents of this file are subject to the terms of the 5 | Common Development and Distribution License (the "License"). 6 | You may not use this file except in compliance with the License. 7 | 8 | You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE 9 | or http://www.opensolaris.org/os/licensing. 
10 | See the License for the specific language governing permissions
11 | and limitations under the License.
12 |
13 | When distributing Covered Code, include this CDDL HEADER in each
14 | file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 | If applicable, add the following below this CDDL HEADER, with the
16 | fields enclosed by brackets "[]" replaced with your own identifying
17 | information: Portions Copyright [yyyy] [name of copyright owner]
18 |
19 | CDDL HEADER END
20 |
21 | Portions Copyright 2007 Sun Microsystems, Inc.
22 |
23 | .. highlight:: none
24 |
25 | Message ID: ZFS-8000-5E
26 | =======================
27 |
28 | Corrupted device label in non-replicated configuration
29 | ------------------------------------------------------
30 |
31 | +-------------------------+--------------------------------------------------+
32 | | **Type:**               | Error                                            |
33 | +-------------------------+--------------------------------------------------+
34 | | **Severity:**           | Critical                                         |
35 | +-------------------------+--------------------------------------------------+
36 | | **Description:**        | A device could not be opened due to a missing or |
37 | |                         | invalid device label and no replicas are         |
38 | |                         | available.                                       |
39 | +-------------------------+--------------------------------------------------+
40 | | **Automated Response:** | No automated response will be taken.             |
41 | +-------------------------+--------------------------------------------------+
42 | | **Impact:**             | The pool is no longer available.                 |
43 | +-------------------------+--------------------------------------------------+
44 |
45 | .. rubric:: Suggested Action for System Administrator
46 |
47 | .. rubric:: For an active pool:
48 |
49 | If this error was encountered while running ``zpool import``, please see the
50 | section below. Otherwise, run ``zpool status -x`` to determine which pool has
51 | experienced a failure:
52 |
53 | ::
54 |
55 |      # zpool status -x
56 |        pool: test
57 |       state: FAULTED
58 |      status: One or more devices could not be used because the label is missing
59 |              or invalid. There are insufficient replicas for the pool to continue
60 |              functioning.
61 |      action: Destroy and re-create the pool from a backup source.
62 |         see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
63 |       scrub: none requested
64 |      config:
65 |
66 |              NAME        STATE     READ WRITE CKSUM
67 |              test        FAULTED      0     0     0  insufficient replicas
68 |                c0t0d0    FAULTED      0     0     0  corrupted data
69 |                c0t0d1    ONLINE       0     0     0
70 |
71 |      errors: No known data errors
72 |
73 | The device listed as FAULTED with 'corrupted data' cannot be opened due to a
74 | corrupt label. ZFS will be unable to use the pool, and all data within the
75 | pool is irrevocably lost. The pool must be destroyed and recreated from an
76 | appropriate backup source. Using replicated configurations will prevent this
77 | from happening in the future.
78 |
79 | .. rubric:: For an exported pool:
80 |
81 | If this error is encountered during ``zpool import``, the action is the same.
82 | The pool cannot be imported - all data is lost and must be restored from an
83 | appropriate backup source.
84 |
85 | .. rubric:: Details
86 |
87 | The Message ID: ``ZFS-8000-5E`` indicates a device which was unable to be
88 | opened by the ZFS subsystem.
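As the suggested action above notes, a replicated configuration is the main defense against this failure mode. As a minimal sketch (the pool and disk names are placeholders), the replacement pool could be created as a mirror so that a single corrupted label no longer makes the whole pool unavailable:

::

   # zpool create tank mirror sda sdb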
89 | -------------------------------------------------------------------------------- /scripts/zfs_root_guide_test.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | # working directory: root of repo 3 | set -vxuef 4 | 5 | distro="${1}" 6 | 7 | # clean up previous tests 8 | find /dev/mapper/ -name '*-part4' -print0 \ 9 | | xargs -t -0I'{}' sh -vxc "swapoff '{}' && cryptsetup close '{}'" 10 | 11 | find . -mindepth 1 -maxdepth 1 -type d -name 'rootfs-*' \ 12 | | while read -r dir; do 13 | grep "$(pwd || true)/${dir##./}" /proc/mounts \ 14 | | cut -f2 -d' ' | sort | tac \ 15 | | xargs -t -I '{}' sh -vxc "if test -d '{}'; then umount -Rl '{}'; fi" 16 | done 17 | find /dev -mindepth 1 -maxdepth 1 -type l -name 'loop*' -exec rm {} + 18 | zpool export -a 19 | losetup --detach-all 20 | 21 | # download alpine linux chroot 22 | # it is easier to install rhel with Alpine Linux live media 23 | # which has native zfs support 24 | if ! test -f rootfs.tar.gz; then 25 | curl --fail-early --fail -Lo rootfs.tar.gz https://dl-cdn.alpinelinux.org/alpine/v3.19/releases/x86_64/alpine-minirootfs-3.19.0-x86_64.tar.gz 26 | curl --fail-early --fail -Lo rootfs.tar.gz.sig https://dl-cdn.alpinelinux.org/alpine/v3.19/releases/x86_64/alpine-minirootfs-3.19.0-x86_64.tar.gz.asc 27 | gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify rootfs.tar.gz.sig 28 | fi 29 | mkdir rootfs-"${distro}" 30 | tar --auto-compress --extract --file rootfs.tar.gz --directory ./rootfs-"${distro}" 31 | 32 | # Create empty disk image 33 | qemu-img create -f raw "${distro}"_disk1.img 16G 34 | qemu-img create -f raw "${distro}"_disk2.img 16G 35 | losetup --partscan "$(losetup -f || true)" "${distro}"_disk1.img 36 | losetup --partscan "$(losetup -f || true)" "${distro}"_disk2.img 37 | 38 | run_test () { 39 | local path="${1}" 40 | local distro="${2}" 41 | sed 's|.. 
ifconfig:: zfs_root_test|::|g' \
42 |         "${path}" > "${distro}".rst
43 |     sed -i '/highlight:: sh/d' "${distro}".rst
44 |
45 |     # Generate installation script from documentation
46 |     python scripts/zfs_root_gen_bash.py "${distro}".rst "${distro}".sh
47 |
48 |     # Postprocess script for bash
49 |     sed -i 's|^ *::||g' "${distro}".sh
50 |     # ensure heredocs work
51 |     sed -i 's|^ *ZFS_ROOT_GUIDE_TEST|ZFS_ROOT_GUIDE_TEST|g' "${distro}".sh
52 |     sed -i 's|^ *ZFS_ROOT_NESTED_CHROOT|ZFS_ROOT_NESTED_CHROOT|g' "${distro}".sh
53 |     sed -i 's|^ *EOF|EOF|g' "${distro}".sh
54 |
55 |     # check whether the generated script has syntax errors
56 |     sh -n "${distro}".sh
57 |
58 |     ## !shellcheck does not handle nested chroots
59 |     # create another file with < "${distro}"-shellcheck.sh
61 |     shellcheck \
62 |         --check-sourced \
63 |         --enable=all \
64 |         --shell=dash \
65 |         --severity=style \
66 |         --format=tty \
67 |         "${distro}"-shellcheck.sh
68 |
69 |     # make the installation script executable and run it
70 |     chmod a+x "${distro}".sh
71 |     ./"${distro}".sh "${distro}"
72 | }
73 |
74 |
75 | case "${distro}" in
76 |     ("nixos")
77 |         run_test 'docs/Getting Started/NixOS/Root on ZFS.rst' "${distro}"
78 |         ;;
79 |
80 |     ("rhel")
81 |         run_test 'docs/Getting Started/RHEL-based distro/Root on ZFS.rst' "${distro}"
82 |         ;;
83 |     ("alpine")
84 |         run_test 'docs/Getting Started/Alpine Linux/Root on ZFS.rst' "${distro}"
85 |         ;;
86 |
87 |     ("archlinux")
88 |         run_test 'docs/Getting Started/Arch Linux/Root on ZFS.rst' "${distro}"
89 |         ;;
90 |
91 |     ("fedora")
92 |         run_test 'docs/Getting Started/Fedora/Root on ZFS.rst' "${distro}"
93 |         ;;
94 |
95 |     ("maintenance")
96 |         grep -B1000 'MAINTENANCE SCRIPT ENTRY POINT' 'docs/Getting Started/Alpine Linux/Root on ZFS.rst' > test_maintenance.rst
97 |         cat 'docs/Getting Started/zfs_root_maintenance.rst' >> test_maintenance.rst
98 |         grep -A1000 'MAINTENANCE SCRIPT ENTRY POINT' 'docs/Getting Started/Alpine Linux/Root on ZFS.rst' >> test_maintenance.rst
99 |         run_test './test_maintenance.rst' "${distro}"
100 |         ;;
101 |     (*)
102 |         echo "no distro specified"
103 |         exit 1
104 |         ;;
105 | esac
106 | -------------------------------------------------------------------------------- /docs/Getting Started/FreeBSD.rst: -------------------------------------------------------------------------------- 1 | FreeBSD
2 | =======
3 |
4 | |ZoF-logo|
5 |
6 | Installation on FreeBSD
7 | -----------------------
8 |
9 | OpenZFS is available pre-packaged as:
10 |
11 | - the zfs-2.0-release branch, in the FreeBSD base system from FreeBSD 13.0-CURRENT forward
12 | - the master branch, in the FreeBSD ports tree as sysutils/openzfs and sysutils/openzfs-kmod from FreeBSD 12.1 forward
13 |
14 | The rest of this document describes the use of OpenZFS either from ports/pkg or built manually from sources for development.
15 |
16 | The ZFS utilities will be installed in /usr/local/sbin/, so make sure
17 | your PATH gets adjusted accordingly.
18 |
19 | To load the module at boot, put ``openzfs_load="YES"`` in
20 | /boot/loader.conf, and remove ``zfs_load="YES"`` if migrating a ZFS
21 | install.
22 |
23 | Beware that the FreeBSD boot loader does not allow booting from root
24 | pools with encryption active (even if it is not in use), so do not try
25 | encryption on a pool you boot from.
26 |
27 | Development on FreeBSD
28 | ----------------------
29 |
30 | The following dependencies are required to build OpenZFS on FreeBSD:
31 |
32 | - FreeBSD sources in /usr/src or elsewhere specified by SYSDIR in env.
33 |   If you don't have the sources installed, you can install them with
34 |   git.
35 |
36 |   Install source for FreeBSD 12:
37 |   ::
38 |
39 |       git clone -b stable/12 https://git.FreeBSD.org/src.git /usr/src
40 |
41 |   Install source for FreeBSD Current:
42 |   ::
43 |
44 |       git clone https://git.FreeBSD.org/src.git /usr/src
45 |
46 | - Packages for build:
47 |   ::
48 |
49 |       pkg install \
50 |           autoconf \
51 |           automake \
52 |           autotools \
53 |           git \
54 |           gmake
55 |
56 | - Optional packages for build:
57 |   ::
58 |
59 |       pkg install python
60 |       pkg install devel/py-sysctl # needed for arcstat, arc_summary, dbufstat
61 |
62 | - Packages for checks and tests:
63 |   ::
64 |
65 |       pkg install \
66 |           base64 \
67 |           bash \
68 |           checkbashisms \
69 |           fio \
70 |           hs-ShellCheck \
71 |           ksh93 \
72 |           pamtester \
73 |           devel/py-flake8 \
74 |           sudo
75 |
76 |   Your preferred python version may be substituted. The user for
77 |   running tests must have NOPASSWD sudo permission.
78 |
79 | To build and install:
80 |
81 | ::
82 |
83 |    # as user
84 |    git clone https://github.com/openzfs/zfs
85 |    cd zfs
86 |    ./autogen.sh
87 |    env MAKE=gmake ./configure
88 |    gmake -j`sysctl -n hw.ncpu`
89 |    # as root
90 |    gmake install
91 |
92 | To use the OpenZFS kernel module when FreeBSD starts, edit ``/boot/loader.conf``:
93 |
94 | Replace the line:
95 |
96 | ::
97 |
98 |    zfs_load="YES"
99 |
100 | with:
101 |
102 | ::
103 |
104 |    openzfs_load="YES"
105 |
106 | The stock FreeBSD ZFS binaries are installed in /sbin. OpenZFS binaries are installed to /usr/local/sbin when installed from ports/pkg or manually from the source. To use OpenZFS binaries, adjust your path so /usr/local/sbin is listed before /sbin. Otherwise the native ZFS binaries will be used.
107 |
108 | For example, change the ``PATH`` setting in ~/.profile, ~/.bashrc, or ~/.cshrc from this:
109 |
110 | ::
111 |
112 |    PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:~/bin
113 |
114 | To this:
115 |
116 | ::
117 |
118 |    PATH=/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin:~/bin
119 |
120 | For rapid development it can be convenient to do a UFS install instead
121 | of ZFS when setting up the work environment. That way the module can be
122 | unloaded and loaded without rebooting.
123 |
127 | Though not required, ``WITHOUT_ZFS`` is a useful build option in FreeBSD
128 | to avoid building and installing the legacy zfs tools and kmod - see
129 | ``src.conf(5)``.
130 |
131 | Some tests require fdescfs to be mounted on /dev/fd. This can be done
132 | temporarily with:
133 | ::
134 |
135 |    mount -t fdescfs fdescfs /dev/fd
136 |
137 | or an entry can be added to /etc/fstab.
138 | ::
139 |
140 |    fdescfs /dev/fd fdescfs rw 0 0
141 |
142 | .. |ZoF-logo| image:: /_static/img/logo/zof-logo.png
143 | -------------------------------------------------------------------------------- /docs/Basic Concepts/Troubleshooting.rst: -------------------------------------------------------------------------------- 1 | Troubleshooting
2 | ===============
3 |
4 | .. todo::
5 |    This page is a draft.
6 |
7 | This page contains tips for troubleshooting ZFS on Linux and what info
8 | developers might want for bug triage.
9 |
10 | - `About Log Files <#about-log-files>`__
11 |
12 |   - `Generic Kernel Log <#generic-kernel-log>`__
13 |   - `ZFS Kernel Module Debug
14 |     Messages <#zfs-kernel-module-debug-messages>`__
15 |
16 | - `Unkillable Process <#unkillable-process>`__
17 | - `ZFS Events <#zfs-events>`__
18 |
19 | --------------
20 |
21 | About Log Files
22 | ---------------
23 |
24 | Log files can be very useful for troubleshooting. In some cases,
25 | interesting information is stored in multiple log files that are
26 | correlated to system events.
27 |
28 | Pro tip: logging infrastructure tools like *elasticsearch*, *fluentd*,
29 | *influxdb*, or *splunk* can simplify log analysis and event correlation.
30 |
31 | Generic Kernel Log
32 | ~~~~~~~~~~~~~~~~~~
33 |
34 | Typically, Linux kernel log messages are available from ``dmesg -T``,
35 | ``/var/log/syslog``, or wherever kernel log messages are sent (e.g. by
36 | ``rsyslogd``).
37 |
38 | ZFS Kernel Module Debug Messages
39 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
40 |
41 | The ZFS kernel modules use an internal log buffer for detailed logging
42 | information. This log information is available in the pseudo file
43 | ``/proc/spl/kstat/zfs/dbgmsg`` for ZFS builds where the ZFS module parameter
44 | `zfs_dbgmsg_enable =
45 | 1 `__ is set.
46 |
47 | --------------
48 |
49 | Unkillable Process
50 | ------------------
51 |
52 | Symptom: a ``zfs`` or ``zpool`` command appears hung, does not return, and
53 | is not killable
54 |
55 | Likely cause: kernel thread hung or panic
56 |
57 | Log files of interest: `Generic Kernel Log <#generic-kernel-log>`__,
58 | `ZFS Kernel Module Debug Messages <#zfs-kernel-module-debug-messages>`__
59 |
60 | Important information: if a kernel thread is stuck, then a backtrace of
61 | the stuck thread can appear in the logs. In some cases, the stuck thread is
62 | not logged until the deadman timer expires. See also `debug
63 | tunables `__
64 |
65 | --------------
66 |
67 | ZFS Events
68 | ----------
69 |
70 | ZFS uses an event-based messaging interface for communication of
71 | important events to other consumers running on the system. The ZFS Event
72 | Daemon (zed) is a userland daemon that listens for these events and
73 | processes them. zed is extensible, so you can write shell scripts or
74 | other programs that subscribe to events and take action. For example,
75 | the script usually installed at ``/etc/zfs/zed.d/all-syslog.sh`` writes
76 | a formatted event message to ``syslog``. See the man page for ``zed(8)``
77 | for more information.
78 |
79 | A history of events is also available via the ``zpool events`` command.
80 | This history begins at ZFS kernel module load and includes events from
81 | any pool. These events are stored in RAM and limited in count to a value
82 | determined by the kernel tunable
83 | `zfs_event_len_max `__.
84 | ``zed`` has an internal throttling mechanism to prevent overconsumption
85 | of system resources while processing ZFS events.
86 |
87 | More detailed information about events is observable using
88 | ``zpool events -v``. The contents of the verbose events are subject to
89 | change, based on the event and the information available at the time of the
90 | event.
91 |
92 | Each event has a class identifier used for filtering event types.
93 | Commonly seen events are those related to pool management with class
94 | ``sysevent.fs.zfs.*``, including import, export, configuration updates,
95 | and ``zpool history`` updates.
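For example, to skim the event history for pool-management events only, the class field can be filtered with standard tools (a sketch using the class names described above):

::

   # zpool events -H | grep 'sysevent.fs.zfs'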
96 |
97 | Events related to errors are reported as class ``ereport.*``. These can
98 | be invaluable for troubleshooting. Some faults can cause multiple
99 | ereports as various layers of the software deal with the fault. For
100 | example, on a simple pool without parity protection, a faulty disk could
101 | cause an ``ereport.io`` during a read from the disk that results in an
102 | ``ereport.fs.zfs.checksum`` at the pool level. These events are also
103 | reflected by the error counters observed in ``zpool status``. If you see
104 | checksum or read/write errors in ``zpool status``, then there should be
105 | one or more corresponding ereports in the ``zpool events`` output.
106 | -------------------------------------------------------------------------------- /docs/Getting Started/Debian/Debian GNU Linux initrd documentation.rst: -------------------------------------------------------------------------------- 1 | Debian GNU Linux initrd documentation
2 | =====================================
3 |
4 | Supported boot parameters
5 | *************************
6 |
7 | - rollback=<on|yes|1> Do a rollback of the specified snapshot.
8 | - zfs_debug=<on|yes|1> Debug the initrd script.
9 | - zfs_force=<on|yes|1> Force importing the pool. Should not be
10 |   necessary.
11 | - zfs=<off|no|0> Don't try to import ANY pool, mount ANY filesystem or
12 |   even load the module.
13 | - rpool=<pool> Use this pool for the root pool.
14 | - bootfs=<pool>/<dataset> Use this dataset for the root filesystem.
15 | - root=<pool>/<dataset> Use this dataset for the root filesystem.
16 | - root=ZFS=<pool>/<dataset> Use this dataset for the root filesystem.
17 | - root=zfs:<pool>/<dataset> Use this dataset for the root filesystem.
18 | - root=zfs:AUTO Try to detect both pool and rootfs.
19 |
20 | In all these cases, <dataset> could also be <dataset>@<snapshot>.
21 |
22 | The reason there are so many supported boot options to get the root
23 | filesystem is that there are a lot of different ways to boot ZFS out
24 | there, and I wanted to make sure I supported them all.
25 |
26 | Pool imports
27 | ************
28 |
29 | Import using /dev/disk/by-\*
30 | ----------------------------
31 |
32 | The initrd will, if the variable USE_DISK_BY_ID is set in the file
33 | /etc/default/zfs, try to import using the /dev/disk/by-\* links. It will try
34 | to import in this order:
35 |
36 | 1. /dev/disk/by-vdev
37 | 2. /dev/disk/by-\*
38 | 3. /dev
39 |
40 | Import using cache file
41 | -----------------------
42 |
43 | If all of these imports fail (or if USE_DISK_BY_ID is unset), it will
44 | then try to import using the cache file.
45 |
46 | Last ditch attempt at importing
47 | -------------------------------
48 |
49 | If that ALSO fails, it will try one more time, without any -d or -c
50 | options.
51 |
52 | Booting
53 | *******
54 |
55 | Booting from snapshot:
56 | ----------------------
57 |
58 | Enter the snapshot for the root= parameter like in this example:
59 |
60 | ::
61 |
62 |    linux /BOOT/debian@/boot/vmlinuz-5.10.0-9-amd64 root=ZFS=rpool/ROOT/debian@some_snapshot ro
63 |
64 | This will clone the snapshot rpool/ROOT/debian@some_snapshot into the
65 | filesystem rpool/ROOT/debian_some_snapshot and use that as the root
66 | filesystem. The original filesystem and snapshot are left alone in this
67 | case.
68 |
69 | **BEWARE** that it will first destroy, blindly, the
70 | rpool/ROOT/debian_some_snapshot filesystem before trying to clone the
71 | snapshot into it again. So if you've booted from the same snapshot
72 | previously and done some changes in that root filesystem, they will be
73 | undone by the destruction of the filesystem.
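After booting from a snapshot this way, it can be worth confirming what was cloned before making changes; for example:

::

   # zfs list -r -t filesystem,snapshot rpool/ROOT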
74 |
75 | Snapshot rollback
76 | -----------------
77 |
78 | From version 0.6.4-1-3 it is now also possible to specify rollback=1 to
79 | do a rollback of the snapshot instead of cloning it. **BEWARE** that
80 | this will destroy *all* snapshots done after the specified snapshot!
81 |
82 | Select snapshot dynamically
83 | ---------------------------
84 |
85 | From version 0.6.4-1-3 it is now also possible to specify a NULL
86 | snapshot name (such as root=rpool/ROOT/debian@), and if so, the initrd
87 | script will discover all snapshots below that filesystem (sans the ``@``),
88 | and output a list of snapshots for the user to choose from.
89 |
90 | Booting from native encrypted filesystem
91 | ----------------------------------------
92 |
93 | Although there is currently no support for native encryption in ZFS On
94 | Linux, there is a patch floating around 'out there', and the initrd
95 | supports loading the key and unlocking such an encrypted filesystem.
96 |
97 | Separated filesystems
98 | ---------------------
99 |
100 | Descended filesystems
101 | ~~~~~~~~~~~~~~~~~~~~~
102 |
103 | If there are separate filesystems (for example a separate dataset for
104 | /usr), the snapshot boot code will try to find the snapshot under each
105 | filesystem and clone (or roll back) them.
106 |
107 | Example:
108 |
109 | ::
110 |
111 |    rpool/ROOT/debian@some_snapshot
112 |    rpool/ROOT/debian/usr@some_snapshot
113 |
114 | These will create the following filesystems respectively (if not doing a
115 | rollback):
116 |
117 | ::
118 |
119 |    rpool/ROOT/debian_some_snapshot
120 |    rpool/ROOT/debian/usr_some_snapshot
121 |
122 | The initrd code will use the mountpoint option (if any) in the original
123 | (without the snapshot part) dataset to find *where* it should mount the
124 | dataset. Or it will use the name of the dataset below the root
125 | filesystem (rpool/ROOT/debian in this example) for the mount point.
126 | -------------------------------------------------------------------------------- /docs/msg/ZFS-8000-3C/index.rst: -------------------------------------------------------------------------------- 1 | ..
2 | CDDL HEADER START
3 |
4 | The contents of this file are subject to the terms of the
5 | Common Development and Distribution License (the "License").
6 | You may not use this file except in compliance with the License.
7 |
8 | You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 | or http://www.opensolaris.org/os/licensing.
10 | See the License for the specific language governing permissions
11 | and limitations under the License.
12 |
13 | When distributing Covered Code, include this CDDL HEADER in each
14 | file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 | If applicable, add the following below this CDDL HEADER, with the
16 | fields enclosed by brackets "[]" replaced with your own identifying
17 | information: Portions Copyright [yyyy] [name of copyright owner]
18 |
19 | CDDL HEADER END
20 |
21 | Portions Copyright 2007 Sun Microsystems, Inc.
22 |
23 | ..
highlight:: none 24 | 25 | Message ID: ZFS-8000-3C 26 | ======================= 27 | 28 | Missing device in non-replicated configuration 29 | ---------------------------------------------- 30 | 31 | +-------------------------+--------------------------------------------------+ 32 | | **Type:** | Error | 33 | +-------------------------+--------------------------------------------------+ 34 | | **Severity:** | Critical | 35 | +-------------------------+--------------------------------------------------+ 36 | | **Description:** | A device could not be opened and no replicas are | 37 | | | available. | 38 | +-------------------------+--------------------------------------------------+ 39 | | **Automated Response:** | No automated response will be taken. | 40 | +-------------------------+--------------------------------------------------+ 41 | | **Impact:** | The pool is no longer available. | 42 | +-------------------------+--------------------------------------------------+ 43 | 44 | .. rubric:: Suggested Action for System Administrator 45 | 46 | .. rubric:: For an active pool: 47 | 48 | If this error was encountered while running ``zpool import``, please 49 | see the section below. Otherwise, run ``zpool status -x`` to determine 50 | which pool has experienced a failure: 51 | 52 | :: 53 | 54 | # zpool status -x 55 | pool: test 56 | state: FAULTED 57 | status: One or more devices could not be opened. There are insufficient 58 | replicas for the pool to continue functioning. 59 | action: Attach the missing device and online it using 'zpool online'. 60 | see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-3C 61 | scrub: none requested 62 | config: 63 | 64 | NAME STATE READ WRITE CKSUM 65 | test FAULTED 0 0 0 insufficient replicas 66 | c0t0d0 ONLINE 0 0 0 67 | c0t0d1 FAULTED 0 0 0 cannot open 68 | 69 | errors: No known data errors 70 | 71 | If the device has been temporarily detached from the system, attach 72 | the device to the system and run ``zpool status`` again. The pool 73 | should automatically detect the newly attached device and resume 74 | functioning. You may have to mount the filesystems in the pool 75 | explicitly using ``zfs mount -a``. 76 | 77 | If the device is no longer available and cannot be reattached to the 78 | system, then the pool must be destroyed and re-created from a backup 79 | source. 80 | 81 | .. rubric:: For an exported pool: 82 | 83 | If this error is encountered during a ``zpool import``, it means that 84 | one of the devices is not attached to the system: 85 | 86 | :: 87 | 88 | # zpool import 89 | pool: test 90 | id: 10121266328238932306 91 | state: FAULTED 92 | status: One or more devices are missing from the system. 93 | action: The pool cannot be imported. Attach the missing devices and try again. 94 | see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-3C 95 | config: 96 | 97 | test FAULTED insufficient replicas 98 | c0t0d0 ONLINE 99 | c0t0d1 FAULTED cannot open 100 | 101 | The pool cannot be imported until the missing device is attached to 102 | the system. If the device has been made available in an alternate 103 | location, use the ``-d`` option to ``zpool import`` to search for devices 104 | in a different directory. If the missing device is unavailable, then 105 | the pool cannot be imported. 106 | 107 | .. rubric:: Details 108 | 109 | The Message ID: ``ZFS-8000-3C`` indicates a device which was unable 110 | to be opened by the ZFS subsystem. 
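As a concrete form of the suggested action above, if the missing device has reappeared under a different device directory, the import search path can be supplied explicitly; for example (the directory shown is illustrative):

::

   # zpool import -d /dev/disk/by-id test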
111 | -------------------------------------------------------------------------------- /docs/msg/ZFS-8000-8A/index.rst: -------------------------------------------------------------------------------- 1 | ..
2 | CDDL HEADER START
3 |
4 | The contents of this file are subject to the terms of the
5 | Common Development and Distribution License (the "License").
6 | You may not use this file except in compliance with the License.
7 |
8 | You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 | or http://www.opensolaris.org/os/licensing.
10 | See the License for the specific language governing permissions
11 | and limitations under the License.
12 |
13 | When distributing Covered Code, include this CDDL HEADER in each
14 | file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 | If applicable, add the following below this CDDL HEADER, with the
16 | fields enclosed by brackets "[]" replaced with your own identifying
17 | information: Portions Copyright [yyyy] [name of copyright owner]
18 |
19 | CDDL HEADER END
20 |
21 | Portions Copyright 2007 Sun Microsystems, Inc.
22 |
23 | .. highlight:: none
24 |
25 | Message ID: ZFS-8000-8A
26 | =======================
27 |
28 | Corrupted data
29 | --------------
30 |
31 | +-------------------------+----------------------------------------------+
32 | | **Type:**               | Error                                        |
33 | +-------------------------+----------------------------------------------+
34 | | **Severity:**           | Critical                                     |
35 | +-------------------------+----------------------------------------------+
36 | | **Description:**        | A file or directory could not be read due to |
37 | |                         | corrupt data.                                |
38 | +-------------------------+----------------------------------------------+
39 | | **Automated Response:** | No automated response will be taken.         |
40 | +-------------------------+----------------------------------------------+
41 | | **Impact:**             | The file or directory is unavailable.        |
42 | +-------------------------+----------------------------------------------+
43 |
44 | .. rubric:: Suggested Action for System Administrator
45 |
46 | Run ``zpool status -x`` to determine which pool is damaged:
47 |
48 | ::
49 |
50 |      # zpool status -x
51 |        pool: test
52 |       state: ONLINE
53 |      status: One or more devices has experienced an error and no valid replicas
54 |              are available. Some filesystem data is corrupt, and applications
55 |              may have been affected.
56 |      action: Destroy the pool and restore from backup.
57 |         see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
58 |       scrub: none requested
59 |      config:
60 |
61 |              NAME        STATE     READ WRITE CKSUM
62 |              test        ONLINE       0     0     2
63 |                c0t0d0    ONLINE       0     0     2
64 |                c0t0d1    ONLINE       0     0     0
65 |
66 |      errors: 1 data errors, use '-v' for a list
67 |
68 | Unfortunately, the data cannot be repaired, and the only way to
69 | repair it is to restore the pool from backup. Applications
70 | attempting to access the corrupted data will get an error (EIO), and
71 | data may be permanently lost.
72 |
73 | The list of affected files can be retrieved by using the ``-v`` option to
74 | ``zpool status``:
75 |
76 | ::
77 |
78 |      # zpool status -xv
79 |        pool: test
80 |       state: ONLINE
81 |      status: One or more devices has experienced an error and no valid replicas
82 |              are available. Some filesystem data is corrupt, and applications
83 |              may have been affected.
84 |      action: Destroy the pool and restore from backup.
85 |         see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
86 |       scrub: none requested
87 |      config:
88 |
89 |              NAME        STATE     READ WRITE CKSUM
90 |              test        ONLINE       0     0     2
91 |                c0t0d0    ONLINE       0     0     2
92 |                c0t0d1    ONLINE       0     0     0
93 |
94 |      errors: Permanent errors have been detected in the following files:
95 |
96 |              /export/example/foo
97 |
98 | Damaged files may or may not be removable, depending on the
99 | type of corruption. If the corruption is within the plain data, the
100 | file should be removable. If the corruption is in the file metadata,
101 | then the file cannot be removed, though it can be moved to an
102 | alternate location. In either case, the data should be restored from
103 | a backup source. It is also possible for the corruption to be within
104 | pool-wide metadata, resulting in entire datasets being unavailable.
105 | If this is the case, the only option is to destroy the pool and
106 | re-create the datasets from backup.
107 |
108 | .. rubric:: Details
109 |
110 | The Message ID: ``ZFS-8000-8A`` indicates corrupted data exists in
111 | the current pool.
112 | -------------------------------------------------------------------------------- /docs/msg/ZFS-8000-72/index.rst: -------------------------------------------------------------------------------- 1 | ..
2 | CDDL HEADER START
3 |
4 | The contents of this file are subject to the terms of the
5 | Common Development and Distribution License (the "License").
6 | You may not use this file except in compliance with the License.
7 |
8 | You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 | or http://www.opensolaris.org/os/licensing.
10 | See the License for the specific language governing permissions
11 | and limitations under the License.
12 |
13 | When distributing Covered Code, include this CDDL HEADER in each
14 | file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 | If applicable, add the following below this CDDL HEADER, with the
16 | fields enclosed by brackets "[]" replaced with your own identifying
17 | information: Portions Copyright [yyyy] [name of copyright owner]
18 |
19 | CDDL HEADER END
20 |
21 | Portions Copyright 2007 Sun Microsystems, Inc.
22 |
23 | .. highlight:: none
24 |
25 | Message ID: ZFS-8000-72
26 | =======================
27 |
28 | Corrupted pool metadata
29 | -----------------------
30 |
31 | +-------------------------+-------------------------------------------+
32 | | **Type:**               | Error                                     |
33 | +-------------------------+-------------------------------------------+
34 | | **Severity:**           | Critical                                  |
35 | +-------------------------+-------------------------------------------+
36 | | **Description:**        | The metadata required to open the pool is |
37 | |                         | corrupt.                                  |
38 | +-------------------------+-------------------------------------------+
39 | | **Automated Response:** | No automated response will be taken.      |
40 | +-------------------------+-------------------------------------------+
41 | | **Impact:**             | The pool is no longer available.          |
42 | +-------------------------+-------------------------------------------+
43 |
44 | .. rubric:: Suggested Action for System Administrator
45 |
46 | Even though all the devices are available, the on-disk data has been
47 | corrupted such that the pool cannot be opened. If a recovery action
48 | is presented, the pool can be returned to a usable state. Otherwise,
49 | all data within the pool is lost, and the pool must be destroyed and
50 | restored from an appropriate backup source.
ZFS includes built-in
51 | metadata replication to prevent this from happening even for
52 | unreplicated pools, but running in a replicated configuration will
53 | further decrease the chances of it happening in the future.
54 |
55 | If this error is encountered during ``zpool import``, see the section
56 | below. Otherwise, run ``zpool status -x`` to determine which pool is
57 | faulted and if a recovery option is available:
58 |
59 | ::
60 |
61 |      # zpool status -x
62 |        pool: test
63 |          id: 13783646421373024673
64 |       state: FAULTED
65 |      status: The pool metadata is corrupted and cannot be opened.
66 |      action: Recovery is possible, but will result in some data loss.
67 |              Returning the pool to its state as of Mon Sep 28 10:24:39 2009
68 |              should correct the problem. Approximately 59 seconds of data
69 |              will have to be discarded, irreversibly. Recovery can be
70 |              attempted by executing 'zpool clear -F test'. A scrub of the pool
71 |              is strongly recommended following a successful recovery.
72 |         see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-72
73 |      config:
74 |
75 |              NAME        STATE     READ WRITE CKSUM
76 |              test        FAULTED      0     0     2  corrupted data
77 |                c0t0d0    ONLINE       0     0     2
78 |                c0t0d1    ONLINE       0     0     2
79 |
80 | If recovery is unavailable, the recommended action will be:
81 |
82 | ::
83 |
84 |      action: Destroy the pool and restore from backup.
85 |
86 | If this error is encountered during ``zpool import``, and if no recovery option
87 | is mentioned, the pool is unrecoverable and cannot be imported. The pool must
88 | be restored from an appropriate backup source. If a recovery option is
89 | available, the output from ``zpool import`` will look something like the
90 | following:
91 |
92 | ::
93 |
94 |      # zpool import share
95 |      cannot import 'share': I/O error
96 |              Recovery is possible, but will result in some data loss.
97 |              Returning the pool to its state as of Sun Sep 27 12:31:07 2009
98 |              should correct the problem. Approximately 53 seconds of data
99 |              will have to be discarded, irreversibly. Recovery can be
100 |              attempted by executing 'zpool import -F share'. A scrub of the pool
101 |              is strongly recommended following a successful recovery.
102 |
103 | Recovery actions are requested with the ``-F`` option to either ``zpool
104 | clear`` or ``zpool import``. Recovery will result in some data loss,
105 | because it reverts the pool to an earlier state. A dry-run recovery
106 | check can be performed by adding the ``-n`` option, which checks whether recovery
107 | is possible without actually reverting the pool to its earlier state.
108 |
109 | .. rubric:: Details
110 |
111 | The Message ID: ``ZFS-8000-72`` indicates a pool was unable to be
112 | opened due to a detected corruption in the pool metadata.
113 | -------------------------------------------------------------------------------- /docs/msg/ZFS-8000-4J/index.rst: -------------------------------------------------------------------------------- 1 | ..
2 | CDDL HEADER START
3 |
4 | The contents of this file are subject to the terms of the
5 | Common Development and Distribution License (the "License").
6 | You may not use this file except in compliance with the License.
7 |
8 | You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 | or http://www.opensolaris.org/os/licensing.
10 | See the License for the specific language governing permissions
11 | and limitations under the License.
12 |
13 | When distributing Covered Code, include this CDDL HEADER in each
14 | file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 | If applicable, add the following below this CDDL HEADER, with the
16 | fields enclosed by brackets "[]" replaced with your own identifying
17 | information: Portions Copyright [yyyy] [name of copyright owner]
18 |
19 | CDDL HEADER END
20 |
21 | Portions Copyright 2007 Sun Microsystems, Inc.
22 |
23 | .. highlight:: none
24 |
25 | Message ID: ZFS-8000-4J
26 | =======================
27 |
28 | Corrupted device label in a replicated configuration
29 | ----------------------------------------------------
30 |
31 | +-------------------------+--------------------------------------------------+
32 | | **Type:**               | Error                                            |
33 | +-------------------------+--------------------------------------------------+
34 | | **Severity:**           | Major                                            |
35 | +-------------------------+--------------------------------------------------+
36 | | **Description:**        | A device could not be opened due to a missing or |
37 | |                         | invalid device label.                            |
38 | +-------------------------+--------------------------------------------------+
39 | | **Automated Response:** | A hot spare will be activated if available.      |
40 | +-------------------------+--------------------------------------------------+
41 | | **Impact:**             | The pool is no longer providing the configured   |
42 | |                         | level of replication.                            |
43 | +-------------------------+--------------------------------------------------+
44 |
45 | .. rubric:: Suggested Action for System Administrator
46 |
47 | .. rubric:: For an active pool:
48 |
49 | If this error was encountered while running ``zpool import``, please
50 | see the section below. Otherwise, run ``zpool status -x`` to determine
51 | which pool has experienced a failure:
52 |
53 | ::
54 |
55 |      # zpool status -x
56 |        pool: test
57 |       state: DEGRADED
58 |      status: One or more devices could not be used because the label is missing or
59 |              invalid. Sufficient replicas exist for the pool to continue
60 |              functioning in a degraded state.
61 |      action: Replace the device using 'zpool replace'.
62 |         see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
63 |       scrub: none requested
64 |      config:
65 |
66 |              NAME        STATE     READ WRITE CKSUM
67 |              test        DEGRADED     0     0     0
68 |                mirror    DEGRADED     0     0     0
69 |                  c0t0d0  ONLINE       0     0     0
70 |                  c0t0d1  FAULTED      0     0     0  corrupted data
71 |
72 |      errors: No known data errors
73 |
74 | If the device has been temporarily detached from the system, attach
75 | the device to the system and run ``zpool status`` again. The pool
76 | should automatically detect the newly attached device and resume
77 | functioning.
78 |
79 | If the device is no longer available, it can be replaced using ``zpool
80 | replace``:
81 |
82 | ::
83 |
84 |      # zpool replace test c0t0d1 c0t0d2
85 |
86 | If the device has been replaced by another disk in the same physical
87 | slot, then the device can be replaced using a single argument to the
88 | ``zpool replace`` command:
89 |
90 | ::
91 |
92 |      # zpool replace test c0t0d1
93 |
94 | ZFS will begin migrating data to the new device as soon as the
95 | replace is issued. Once the resilvering completes, the original
96 | device (if different from the replacement) will be removed, and the
97 | pool will be restored to the ONLINE state.
98 |
99 | .. rubric:: For an exported pool:
100 |
101 | If this error is encountered while running ``zpool import``, the pool
102 | can still be imported despite the failure:
103 |
104 | ::
105 |
106 |      # zpool import
107 |        pool: test
108 |          id: 5187963178597328409
109 |       state: DEGRADED
110 |      status: One or more devices contains corrupted data.
The fault tolerance of 111 | the pool may be compromised if imported. 112 | action: The pool can be imported using its name or numeric identifier. 113 | see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J 114 | config: 115 | 116 | test DEGRADED 117 | mirror DEGRADED 118 | c0t0d0 ONLINE 119 | c0t0d1 FAULTED corrupted data 120 | 121 | To import the pool, run ``zpool import``: 122 | 123 | :: 124 | 125 | # zpool import test 126 | 127 | Once the pool has been imported, the damaged device can be replaced 128 | according to the above procedure. 129 | 130 | .. rubric:: Details 131 | 132 | The Message ID: ``ZFS-8000-4J`` indicates a device which was unable 133 | to be opened by the ZFS subsystem. 134 | -------------------------------------------------------------------------------- /docs/msg/ZFS-8000-2Q/index.rst: -------------------------------------------------------------------------------- 1 | .. 2 | CDDL HEADER START 3 | 4 | The contents of this file are subject to the terms of the 5 | Common Development and Distribution License (the "License"). 6 | You may not use this file except in compliance with the License. 7 | 8 | You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE 9 | or http://www.opensolaris.org/os/licensing. 10 | See the License for the specific language governing permissions 11 | and limitations under the License. 12 | 13 | When distributing Covered Code, include this CDDL HEADER in each 14 | file and include the License file at usr/src/OPENSOLARIS.LICENSE. 15 | If applicable, add the following below this CDDL HEADER, with the 16 | fields enclosed by brackets "[]" replaced with your own identifying 17 | information: Portions Copyright [yyyy] [name of copyright owner] 18 | 19 | CDDL HEADER END 20 | 21 | Portions Copyright 2007 Sun Microsystems, Inc. 22 | 23 | .. highlight:: none 24 | 25 | Message ID: ZFS-8000-2Q 26 | ======================= 27 | 28 | Missing device in replicated configuration 29 | ------------------------------------------ 30 | 31 | +-------------------------+--------------------------------------------------+ 32 | | **Type:** | Error | 33 | +-------------------------+--------------------------------------------------+ 34 | | **Severity:** | Major | 35 | +-------------------------+--------------------------------------------------+ 36 | | **Description:** | A device in a replicated configuration could not | 37 | | | be opened. | 38 | +-------------------------+--------------------------------------------------+ 39 | | **Automated Response:** | A hot spare will be activated if available. | 40 | +-------------------------+--------------------------------------------------+ 41 | | **Impact:** | The pool is no longer providing the configured | 42 | | | level of replication. | 43 | +-------------------------+--------------------------------------------------+ 44 | 45 | .. rubric:: Suggested Action for System Administrator 46 | 47 | .. rubric:: For an active pool: 48 | 49 | If this error was encountered while running ``zpool import``, please 50 | see the section below. Otherwise, run ``zpool status -x`` to determine 51 | which pool has experienced a failure: 52 | 53 | :: 54 | 55 | # zpool status -x 56 | pool: test 57 | state: DEGRADED 58 | status: One or more devices could not be opened. Sufficient replicas exist for 59 | the pool to continue functioning in a degraded state. 60 | action: Attach the missing device and online it using 'zpool online'. 
61 |         see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-2Q
62 |       scrub: none requested
63 |      config:
64 |
65 |              NAME        STATE     READ WRITE CKSUM
66 |              test        DEGRADED     0     0     0
67 |                mirror    DEGRADED     0     0     0
68 |                  c0t0d0  ONLINE       0     0     0
69 |                  c0t0d1  FAULTED      0     0     0  cannot open
70 |
71 |      errors: No known data errors
72 |
73 | Determine which device failed to open by looking for a FAULTED device
74 | with an additional 'cannot open' message. If this device has been
75 | inadvertently removed from the system, attach the device and bring it
76 | online with ``zpool online``:
77 |
78 | ::
79 |
80 |      # zpool online test c0t0d1
81 |
82 | If the device is no longer available, the device can be replaced
83 | using the ``zpool replace`` command:
84 |
85 | ::
86 |
87 |      # zpool replace test c0t0d1 c0t0d2
88 |
89 | If the device has been replaced by another disk in the same physical
90 | slot, then the device can be replaced using a single argument to the
91 | ``zpool replace`` command:
92 |
93 | ::
94 |
95 |      # zpool replace test c0t0d1
96 |
97 | Existing data will be resilvered to the new device. Once the
98 | resilvering completes, the faulted device will be removed from the pool.
99 |
100 | .. rubric:: For an exported pool:
101 |
102 | If this error is encountered during a ``zpool import``, it means that
103 | one of the devices is not attached to the system:
104 |
105 | ::
106 |
107 |      # zpool import
108 |        pool: test
109 |          id: 10121266328238932306
110 |       state: DEGRADED
111 |      status: One or more devices are missing from the system.
112 |      action: The pool can be imported despite missing or damaged devices. The
113 |              fault tolerance of the pool may be compromised if imported.
114 |         see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-2Q
115 |      config:
116 |
117 |              test          DEGRADED
118 |                mirror      DEGRADED
119 |                  c0t0d0    ONLINE
120 |                  c0t0d1    FAULTED  cannot open
121 |
122 | Unlike when the pool is active on the system, the device cannot be
123 | replaced while the pool is exported. If the device can be attached to
124 | the system, attach the device and run ``zpool import`` again.
125 |
126 | Alternatively, the pool can be imported as-is, though it will be
127 | placed in the DEGRADED state due to a missing device. The device will
128 | be marked as UNAVAIL. Once the pool has been imported, the missing
129 | device can be replaced as described above.
130 |
131 | .. rubric:: Details
132 |
133 | The Message ID: ``ZFS-8000-2Q`` indicates a device which was unable
134 | to be opened by the ZFS subsystem.
135 | -------------------------------------------------------------------------------- /docs/msg/ZFS-8000-K4/index.rst: -------------------------------------------------------------------------------- 1 | ..
2 | CDDL HEADER START
3 |
4 | The contents of this file are subject to the terms of the
5 | Common Development and Distribution License (the "License").
6 | You may not use this file except in compliance with the License.
7 |
8 | You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 | or http://www.opensolaris.org/os/licensing.
10 | See the License for the specific language governing permissions
11 | and limitations under the License.
12 |
13 | When distributing Covered Code, include this CDDL HEADER in each
14 | file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 |    If applicable, add the following below this CDDL HEADER, with the
16 |    fields enclosed by brackets "[]" replaced with your own identifying
17 |    information: Portions Copyright [yyyy] [name of copyright owner]
18 | 
19 |    CDDL HEADER END
20 | 
21 |    Portions Copyright 2007 Sun Microsystems, Inc.
22 | 
23 | .. highlight:: none
24 | 
25 | Message ID: ZFS-8000-K4
26 | =======================
27 | 
28 | ZFS intent log read failure
29 | ---------------------------
30 | 
31 | +-------------------------+--------------------------------------------+
32 | | **Type:**               | Error                                      |
33 | +-------------------------+--------------------------------------------+
34 | | **Severity:**           | Major                                      |
35 | +-------------------------+--------------------------------------------+
36 | | **Description:**        | A ZFS intent log device could not be read. |
37 | +-------------------------+--------------------------------------------+
38 | | **Automated Response:** | No automated response will be taken.       |
39 | +-------------------------+--------------------------------------------+
40 | | **Impact:**             | The intent log(s) cannot be replayed.      |
41 | +-------------------------+--------------------------------------------+
42 | 
43 | .. rubric:: Suggested Action for System Administrator
44 | 
45 | A ZFS intent log record could not be read due to an error. This may
46 | be due to a missing or broken log device, or a device within the pool
47 | may be experiencing I/O errors. The pool itself is not corrupt but is
48 | missing some pool changes that happened shortly before a power loss
49 | or system failure. These are pool changes that applications had
50 | requested to be written synchronously but that had not yet been
51 | committed in the pool. This transaction group commit currently occurs every five
52 | seconds, and so typically at most five seconds' worth of synchronous
53 | writes have been lost. ZFS itself cannot determine whether the lost
54 | pool changes are critical to the applications that were running at the
55 | time of the system failure. This is a decision the administrator must
56 | make. You may want to consider mirroring log devices. First determine
57 | which pool is in error:
58 | 
59 | ::
60 | 
61 |    # zpool status -x
62 |      pool: test
63 |     state: FAULTED
64 |    status: One or more of the intent logs could not be read.
65 |            Waiting for administrator intervention to fix the faulted pool.
66 |    action: Either restore the affected device(s) and run 'zpool online',
67 |            or ignore the intent log records by running 'zpool clear'.
68 |     scrub: none requested
69 |    config:
70 | 
71 |            NAME        STATE     READ WRITE CKSUM
72 |            test        FAULTED      0     0     0  bad intent log
73 |              c3t2d0    ONLINE       0     0     0
74 |            logs        FAULTED      0     0     0  bad intent log
75 |              c5t3d0    UNAVAIL      0     0     0  cannot open
76 | 
77 | There are two courses of action to resolve this problem.
78 | If, from the applications' perspective, the validity of the pool
79 | requires those changes, then the log devices must be recovered. Make sure
80 | power and cables are connected and that the affected device is
81 | online. Then run ``zpool online`` and then ``zpool clear``:
82 | 
83 | ::
84 | 
85 |    # zpool online test c5t3d0
86 |    # zpool clear test
87 |    # zpool status test
88 |      pool: test
89 |     state: ONLINE
90 |     scrub: none requested
91 |    config:
92 | 
93 |            NAME        STATE     READ WRITE CKSUM
94 |            test        ONLINE       0     0     0
95 |              c3t2d0    ONLINE       0     0     0
96 |            logs        ONLINE       0     0     0
97 |              c5t3d0    ONLINE       0     0     0
98 | 
99 |    errors: No known data errors
100 | 
101 | The second alternative action is to ignore the most recent pool
102 | changes that could not be read.
To do this run ``zpool clear``: 103 | 104 | :: 105 | 106 | # zpool clear test 107 | # zpool status test 108 | pool: test 109 | state: DEGRADED 110 | status: One or more devices could not be opened. Sufficient replicas exist for 111 | the pool to continue functioning in a degraded state. 112 | action: Attach the missing device and online it using 'zpool online'. 113 | see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-2Q 114 | scrub: none requested 115 | config: 116 | 117 | NAME STATE READ WRITE CKSUM 118 | test DEGRADED 0 0 0 119 | c3t2d0 ONLINE 0 0 0 120 | logs DEGRADED 0 0 0 121 | c5t3d0 UNAVAIL 0 0 0 cannot open 122 | 123 | errors: No known data errors 124 | 125 | Future log records will not use a failed log device but will be 126 | written to the main pool. You should fix or replace any failed log 127 | devices. 128 | 129 | .. rubric:: Details 130 | 131 | The Message ID: ``ZFS-8000-K4`` indicates that a log device is 132 | missing or cannot be read. 133 | -------------------------------------------------------------------------------- /docs/Basic Concepts/VDEVs.rst: -------------------------------------------------------------------------------- 1 | VDEVs 2 | ===== 3 | 4 | What is a VDEV? 5 | ~~~~~~~~~~~~~~~ 6 | 7 | A vdev (virtual device) is a fundamental building block of ZFS storage pools. It represents a logical grouping of physical storage devices, such as hard drives, SSDs, or partitions. 8 | 9 | What is a leaf vdev? 10 | ~~~~~~~~~~~~~~~~~~~~ 11 | 12 | A leaf vdev is the most basic type of vdev, which directly corresponds to a physical storage device. It is the endpoint of the storage hierarchy in ZFS. 13 | 14 | What is a top-level vdev? 15 | ~~~~~~~~~~~~~~~~~~~~~~~~~ 16 | 17 | Top-level vdevs are the direct children of the root vdev. They can be single devices or logical groups that aggregate multiple leaf vdevs (like mirrors or RAIDZ groups). ZFS dynamically stripes data across all top-level vdevs in a pool. 18 | 19 | What is a root vdev? 20 | ~~~~~~~~~~~~~~~~~~~~ 21 | 22 | The root vdev is the top of the pool hierarchy. It aggregates all top-level vdevs into a single logical storage unit (the pool). 23 | 24 | What are the different types of vdevs? 25 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 26 | 27 | OpenZFS supports several types of vdevs. Top-level vdevs carry data and provide redundancy: 28 | 29 | * **Striped Disk(s)**: A vdev consisting of one or more physical devices striped together (like RAID 0). It provides no redundancy and will lead to data loss if a drive fails. 30 | * **Mirror**: A vdev that stores the same data on two or more drives for redundancy. 31 | * `RAIDZ <./RAIDZ.html>`__: A vdev that uses parity to provide fault tolerance, similar to traditional RAID 5/6. There are three levels of RAIDZ: 32 | 33 | * **RAIDZ1**: Single parity, similar to RAID 5. Requires at least 2 disks (3+ recommended), can tolerate one drive failure. 34 | * **RAIDZ2**: Double parity, similar to RAID 6. Requires at least 3 disks (5+ recommended), can tolerate two drive failures. 35 | * **RAIDZ3**: Triple parity. Requires at least 4 disks (7+ recommended), can tolerate three drive failures. 36 | 37 | * `dRAID <./dRAID%20Howto.html>`__: Distributed RAID. A vdev that provides distributed parity and hot spares across multiple drives, allowing for much faster rebuild performance after a failure. 38 | 39 | Auxiliary vdevs provide specific functionality: 40 | 41 | * **Spare**: A drive that acts as a hot spare, automatically replacing a failed drive in another vdev. 
42 | * **Cache (L2ARC)**: A Level 2 ARC vdev used for caching frequently accessed data to improve random read performance.
43 | * **Log (SLOG)**: A separate log vdev (SLOG) used to store the ZFS Intent Log (ZIL) for improved synchronous write performance.
44 | * **Special**: A vdev dedicated to storing metadata, and optionally small file blocks and the Dedup Table (DDT).
45 | * **Dedup**: A vdev dedicated strictly to storing the Deduplication Table (DDT).
46 | 
47 | How do vdevs relate to storage pools?
48 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
49 | 
50 | Vdevs are the building blocks of ZFS storage pools. A storage pool (zpool) is created by combining one or more top-level vdevs. The overall performance, capacity, and redundancy of the storage pool depend on the configuration and types of vdevs used.
51 | 
52 | Here is an example layout as seen in `zpool-status(8) `_ output
53 | for a pool with two RAIDZ1 top-level vdevs and 10 leaf vdevs:
54 | 
55 | ::
56 | 
57 |    datapoolname            (root vdev)
58 |      raidz1-0              (top-level vdev)
59 |        /dev/dsk/disk0      (leaf vdev)
60 |        /dev/dsk/disk1      (leaf vdev)
61 |        /dev/dsk/disk2      (leaf vdev)
62 |        /dev/dsk/disk3      (leaf vdev)
63 |        /dev/dsk/disk4      (leaf vdev)
64 |      raidz1-1              (top-level vdev)
65 |        /dev/dsk/disk5      (leaf vdev)
66 |        /dev/dsk/disk6      (leaf vdev)
67 |        /dev/dsk/disk7      (leaf vdev)
68 |        /dev/dsk/disk8      (leaf vdev)
69 |        /dev/dsk/disk9      (leaf vdev)
70 | 
71 | How does ZFS handle vdev failures?
72 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
73 | 
74 | ZFS is designed to handle vdev failures gracefully. If a vdev fails, ZFS can continue to operate using the remaining vdevs in the pool,
75 | provided that the redundancy level of the pool allows for it (e.g., in a mirror, RAIDZ, or dRAID configuration).
76 | When there is still enough redundancy in the pool, ZFS will mark the failed vdev as "faulted" and will recover data from the remaining vdevs.
77 | Administrators can replace a failed vdev with a new one using `zpool-replace(8) `_, and ZFS will automatically resilver (rebuild)
78 | the data onto the new vdev to return the pool to a healthy state.
79 | 
80 | How do I manage vdevs in ZFS?
81 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
82 | 
83 | Vdevs are managed using the `zpool(8) `_ command-line utility. Common operations include:
84 | 
85 | * **Creating a pool**: `zpool create` allows you to specify the vdev layout. See `zpool-create(8) `_.
86 | * **Adding vdevs**: `zpool add` attaches new top-level vdevs to an existing pool, expanding its capacity and performance (by increasing stripe width). See `zpool-add(8) `_.
87 | * **Removing vdevs**: `zpool remove` can remove certain types of top-level vdevs, evacuating their data to other vdevs. See `zpool-remove(8) `_.
88 | * **Replacing drives**: `zpool replace` swaps a failed or undersized drive with a new one. See `zpool-replace(8) `_.
89 | * **Monitoring status**: `zpool status` shows the health and layout of all vdevs. See `zpool-status(8) `_.
90 | * **Monitoring performance**: `zpool iostat` displays I/O statistics for the pool and individual vdevs. See `zpool-iostat(8) `_.
--------------------------------------------------------------------------------
/docs/_static/css/mandoc.css:
--------------------------------------------------------------------------------
1 | /* $Id: mandoc.css,v 1.46 2019/06/02 16:57:13 schwarze Exp $ */
2 | /*
3 |  * Standard style sheet for mandoc(1) -Thtml and man.cgi(8).
4 |  *
5 |  * Written by Ingo Schwarze .
6 |  * I place this file into the public domain.
7 | * Permission to use, copy, modify, and distribute it for any purpose 8 | * with or without fee is hereby granted, without any conditions. 9 | */ 10 | 11 | /* 12 | * Edited by George Melikov 13 | * to be integrated with sphinx RTD theme. 14 | */ 15 | 16 | /* override */ 17 | .man_container code { 18 | overflow-x: initial; 19 | background: none; 20 | border: none; 21 | font-size: 100%; 22 | } 23 | 24 | /* OpenZFS styles */ 25 | .man_container .head { 26 | max-width: 640px; 27 | width: 100%; 28 | } 29 | .man_container .head .head-vol { 30 | text-align: center; 31 | } 32 | .man_container .head .head-rtitle { 33 | text-align: right; 34 | } 35 | .man_container .foot td { 36 | padding: 1em; 37 | } 38 | 39 | /* Fix for Chrome */ 40 | .man_container dl dt { 41 | display: initial !important; 42 | color: black !important; 43 | } 44 | 45 | /* Next CSS rules come from upstream file as is, only with needed changes */ 46 | 47 | /* Sections and paragraphs. */ 48 | 49 | .manual-text { 50 | margin-left: 0em; } 51 | .Nd { } 52 | section.Sh { } 53 | h1.Sh { margin-top: 1.2em; 54 | margin-bottom: 0.6em; } 55 | section.Ss { } 56 | h2.Ss { margin-top: 1.2em; 57 | margin-bottom: 0.6em; 58 | font-size: 105%; } 59 | .Pp { margin: 0.6em 0em; } 60 | .Sx { } 61 | .Xr { } 62 | 63 | /* Displays and lists. */ 64 | 65 | .Bd { } 66 | .Bd-indent { margin-left: 3.8em; } 67 | 68 | .Bl-bullet { list-style-type: disc; 69 | padding-left: 1em; } 70 | .Bl-bullet > li { } 71 | .Bl-dash { list-style-type: none; 72 | padding-left: 0em; } 73 | .Bl-dash > li:before { 74 | content: "\2014 "; } 75 | .Bl-item { list-style-type: none; 76 | padding-left: 0em; } 77 | .Bl-item > li { } 78 | .Bl-compact > li { 79 | margin-top: 0em; } 80 | 81 | .Bl-enum { padding-left: 2em; } 82 | .Bl-enum > li { } 83 | .Bl-compact > li { 84 | margin-top: 0em; } 85 | 86 | .Bl-diag { } 87 | .Bl-diag > dt { 88 | font-style: normal; 89 | font-weight: bold; } 90 | .Bl-diag > dd { 91 | margin-left: 0em; } 92 | .Bl-hang { } 93 | .Bl-hang > dt { } 94 | .Bl-hang > dd { 95 | margin-left: 5.5em; } 96 | .Bl-inset { } 97 | .Bl-inset > dt { } 98 | .Bl-inset > dd { 99 | margin-left: 0em; } 100 | .Bl-ohang { } 101 | .Bl-ohang > dt { } 102 | .Bl-ohang > dd { 103 | margin-left: 0em; } 104 | .Bl-tag { margin-top: 0.6em; 105 | margin-left: 5.5em; } 106 | .Bl-tag > dt { 107 | float: left; 108 | margin-top: 0em; 109 | margin-left: -5.5em; 110 | padding-right: 0.5em; 111 | vertical-align: top; } 112 | .Bl-tag > dd { 113 | clear: right; 114 | width: 100%; 115 | margin-top: 0em; 116 | margin-left: 0em; 117 | margin-bottom: 0.6em; 118 | vertical-align: top; 119 | overflow: auto; } 120 | .Bl-compact { margin-top: 0em; } 121 | .Bl-compact > dd { 122 | margin-bottom: 0em; } 123 | .Bl-compact > dt { 124 | margin-top: 0em; } 125 | 126 | .Bl-column { } 127 | .Bl-column > tbody > tr { } 128 | .Bl-column > tbody > tr > td { 129 | margin-top: 1em; } 130 | .Bl-compact > tbody > tr > td { 131 | margin-top: 0em; } 132 | 133 | .Rs { font-style: normal; 134 | font-weight: normal; } 135 | .RsA { } 136 | .RsB { font-style: italic; 137 | font-weight: normal; } 138 | .RsC { } 139 | .RsD { } 140 | .RsI { font-style: italic; 141 | font-weight: normal; } 142 | .RsJ { font-style: italic; 143 | font-weight: normal; } 144 | .RsN { } 145 | .RsO { } 146 | .RsP { } 147 | .RsQ { } 148 | .RsR { } 149 | .RsT { text-decoration: underline; } 150 | .RsU { } 151 | .RsV { } 152 | 153 | .eqn { } 154 | .tbl td { vertical-align: middle; } 155 | 156 | .HP { margin-left: 3.8em; 157 | text-indent: -3.8em; } 158 | 159 | /* 
Semantic markup for command line utilities. */
160 | 
161 | table.Nm { }
162 | code.Nm { font-style: normal;
163 | font-weight: bold;
164 | font-family: inherit; }
165 | .Fl { font-style: normal;
166 | font-weight: bold;
167 | font-family: inherit; }
168 | .Cm { font-style: normal;
169 | font-weight: bold;
170 | font-family: inherit; }
171 | .Ar { font-style: italic;
172 | font-weight: normal; }
173 | .Op { display: inline; }
174 | .Ic { font-style: normal;
175 | font-weight: bold;
176 | font-family: inherit; }
177 | .Ev { font-style: normal;
178 | font-weight: normal;
179 | font-family: monospace; }
180 | .Pa { font-style: italic;
181 | font-weight: normal; }
182 | 
183 | /* Semantic markup for function libraries. */
184 | 
185 | .Lb { }
186 | code.In { font-style: normal;
187 | font-weight: bold;
188 | font-family: inherit; }
189 | a.In { }
190 | .Fd { font-style: normal;
191 | font-weight: bold;
192 | font-family: inherit; }
193 | .Ft { font-style: italic;
194 | font-weight: normal; }
195 | .Fn { font-style: normal;
196 | font-weight: bold;
197 | font-family: inherit; }
198 | .Fa { font-style: italic;
199 | font-weight: normal; }
200 | .Vt { font-style: italic;
201 | font-weight: normal; }
202 | .Va { font-style: italic;
203 | font-weight: normal; }
204 | .Dv { font-style: normal;
205 | font-weight: normal;
206 | font-family: monospace; }
207 | .Er { font-style: normal;
208 | font-weight: normal;
209 | font-family: monospace; }
210 | 
211 | /* Various semantic markup. */
212 | 
213 | .An { }
214 | .Lk { }
215 | .Mt { }
216 | .Cd { font-style: normal;
217 | font-weight: bold;
218 | font-family: inherit; }
219 | .Ad { font-style: italic;
220 | font-weight: normal; }
221 | .Ms { font-style: normal;
222 | font-weight: bold; }
223 | .St { }
224 | .Ux { }
225 | 
226 | /* Physical markup. */
227 | 
228 | .Bf { display: inline; }
229 | .No { font-style: normal;
230 | font-weight: normal; }
231 | .Em { font-style: italic;
232 | font-weight: normal; }
233 | .Sy { font-style: normal;
234 | font-weight: bold; }
235 | .Li { font-style: normal;
236 | font-weight: normal;
237 | font-family: monospace; }
238 | 
239 | /* Tooltip support. */
240 | 
241 | h1.Sh, h2.Ss { position: relative; }
242 | .An, .Ar, .Cd, .Cm, .Dv, .Em, .Er, .Ev, .Fa, .Fd, .Fl, .Fn, .Ft,
243 | .Ic, code.In, .Lb, .Lk, .Ms, .Mt, .Nd, code.Nm, .Pa, .Rs,
244 | .St, .Sx, .Sy, .Va, .Vt, .Xr {
245 | display: inline-block;
246 | position: relative; }
247 | 
248 | /* Overrides to avoid excessive margins on small devices. */
249 | 
250 | @media (max-width: 37.5em) {
251 | .manual-text {
252 | margin-left: 0.5em; }
253 | h1.Sh, h2.Ss { margin-left: 0em; }
254 | .Bd-indent { margin-left: 2em; }
255 | .Bl-hang > dd {
256 | margin-left: 2em; }
257 | .Bl-tag { margin-left: 2em; }
258 | .Bl-tag > dt {
259 | margin-left: -2em; }
260 | .HP { margin-left: 2em;
261 | text-indent: -2em; }
262 | }
263 | 
--------------------------------------------------------------------------------
/docs/Performance and Tuning/ZFS Transaction Delay.rst:
--------------------------------------------------------------------------------
1 | OpenZFS Transaction Delay
2 | =========================
3 | 
4 | OpenZFS write operations are delayed when the backend storage isn't able to
5 | accommodate the rate of incoming writes. This delay process is known as
6 | the OpenZFS write throttle. As different hardware and workloads have different
7 | performance characteristics, tuning the write throttle is hardware and workload
8 | specific.
9 | 
10 | Writes are grouped into transactions. Transactions are grouped into transaction groups.
11 | When a transaction group is synced to disk, all transactions in that group are
12 | considered complete. When a delay is applied to a transaction, it delays the transaction's
13 | assignment into a transaction group.
14 | 
15 | To check whether the write throttle is active on a pool, monitor the dmu_tx_delay
16 | and/or dmu_tx_dirty_delay kstat counters.
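On Linux these counters are exposed through the kstat interface. A minimal way
to read them (a sketch; the ``dmu_tx`` kstat path below is the usual Linux
location, and the field layout can vary between OpenZFS versions):

::

   # Steadily growing values indicate that the write throttle is
   # actively delaying transactions.
   awk '/^dmu_tx_delay|^dmu_tx_dirty_delay/ { print $1, $3 }' \
       /proc/spl/kstat/zfs/dmu_tx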
17 | 
18 | If there is already a write transaction waiting, the delay is relative
19 | to when that transaction will finish waiting. Thus the calculated delay
20 | time is independent of the number of threads concurrently executing
21 | transactions.
22 | 
23 | If there is only one waiter, the delay is relative to when the
24 | transaction started, rather than the current time. This credits the
25 | transaction for "time already served." For example, if a write
26 | transaction requires reading indirect blocks first, then the delay is
27 | counted at the start of the transaction, just prior to the indirect
28 | block reads.
29 | 
30 | The minimum time for a transaction to take is calculated as:
31 | 
32 | ::
33 | 
34 |    min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
35 |    min_time is then capped at 100 milliseconds
36 | 
37 | In this formula, dirty is the current amount of dirty data, min is the threshold at which delays begin, and max is zfs_dirty_data_max; when dirty sits exactly halfway between min and max, min_time works out to exactly zfs_delay_scale (the "midpoint" marked in the plots below). The delay has two degrees of freedom that can be adjusted via tunables:
38 | 
39 | 1. The percentage of dirty data at which we start to delay is defined by
40 |    zfs_delay_min_dirty_percent. This should typically be at or above
41 |    zfs_vdev_async_write_active_max_dirty_percent so delays occur after
42 |    writing at full speed has failed to keep up with the incoming write
43 |    rate.
44 | 2. The scale of the curve is defined by zfs_delay_scale. Roughly
45 |    speaking, this variable determines the amount of delay at the
46 |    midpoint of the curve.
47 | 
48 | ::
49 | 
50 |    delay
51 |     10ms +-------------------------------------------------------------*+
52 |          |                                                             *|
53 |      9ms +                                                             *+
54 |          |                                                             *|
55 |      8ms +                                                             *+
56 |          |                                                            * |
57 |      7ms +                                                            * +
58 |          |                                                            * |
59 |      6ms +                                                            * +
60 |          |                                                            * |
61 |      5ms +                                                           *  +
62 |          |                                                           *  |
63 |      4ms +                                                           *  +
64 |          |                                                           *  |
65 |      3ms +                                                          *   +
66 |          |                                                          *   |
67 |      2ms +                                              (midpoint)  *   +
68 |          |                                              |     **        |
69 |      1ms +                                              v  ***          +
70 |          |      zfs_delay_scale ---------->        ********             |
71 |        0 +-------------------------------------*********----------------+
72 |          0%                    <- zfs_dirty_data_max ->               100%
73 | 
74 | Note that since the delay is added to the outstanding time remaining on
75 | the most recent transaction, the delay is effectively the inverse of
76 | IOPS. Here the midpoint of 500 microseconds translates to 2000 IOPS. The
77 | shape of the curve was chosen such that small changes in the amount of
78 | accumulated dirty data in the first 3/4 of the curve yield relatively
79 | small differences in the amount of delay.
80 | 
81 | The effects can be easier to understand when the amount of delay is
82 | represented on a log scale:
83 | 
84 | ::
85 | 
86 |    delay
87 |    100ms +-------------------------------------------------------------++
88 |          +                                                              +
89 |          |                                                              |
90 |          +                                                             *+
91 |     10ms +                                                             *+
92 |          +                                                           **  +
93 |          |                                             (midpoint)  **    |
94 |          +                                             |         **      +
95 |      1ms +                                             v     ****        +
96 |          +      zfs_delay_scale ---------->       *****                  +
97 |          |                                    ****                       |
98 |          +                                ****                           +
99 |    100us +                              **                               +
100 |          +                             *                                 +
101 |          |                            *                                  |
102 |          +                           *                                   +
103 |     10us +                          *                                    +
104 |          +                                                               +
105 |          |                                                               |
106 |          +                                                               +
107 |          +--------------------------------------------------------------+
108 |          0%                    <- zfs_dirty_data_max ->               100%
109 | 
110 | Note here that only as the amount of dirty data approaches its limit
111 | does the delay start to increase rapidly.
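Both knobs are exposed as Linux module parameters. A minimal sketch for
inspecting and adjusting them follows; the value written below is purely
illustrative, not a recommendation:

::

   # Nanoseconds of delay at the midpoint of the curve (default 500000)
   cat /sys/module/zfs/parameters/zfs_delay_scale

   # Percentage of zfs_dirty_data_max at which delays begin
   cat /sys/module/zfs/parameters/zfs_delay_min_dirty_percent

   # Example only: steepen the curve at runtime; persist the change
   # with an options line in /etc/modprobe.d/zfs.conf instead
   echo 1000000 > /sys/module/zfs/parameters/zfs_delay_scale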
The goal of a properly tuned 112 | system should be to keep the amount of dirty data out of that range by 113 | first ensuring that the appropriate limits are set for the I/O scheduler 114 | to reach optimal throughput on the backend storage, and then by changing 115 | the value of zfs_delay_scale to increase the steepness of the curve. 116 | 117 | 118 | 119 | Code reference: `dmu_tx.c `_ 120 | -------------------------------------------------------------------------------- /docs/msg/ZFS-8000-9P/index.rst: -------------------------------------------------------------------------------- 1 | .. 2 | CDDL HEADER START 3 | 4 | The contents of this file are subject to the terms of the 5 | Common Development and Distribution License (the "License"). 6 | You may not use this file except in compliance with the License. 7 | 8 | You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE 9 | or http://www.opensolaris.org/os/licensing. 10 | See the License for the specific language governing permissions 11 | and limitations under the License. 12 | 13 | When distributing Covered Code, include this CDDL HEADER in each 14 | file and include the License file at usr/src/OPENSOLARIS.LICENSE. 15 | If applicable, add the following below this CDDL HEADER, with the 16 | fields enclosed by brackets "[]" replaced with your own identifying 17 | information: Portions Copyright [yyyy] [name of copyright owner] 18 | 19 | CDDL HEADER END 20 | 21 | Portions Copyright 2007 Sun Microsystems, Inc. 22 | 23 | .. highlight:: none 24 | 25 | Message ID: ZFS-8000-9P 26 | ======================= 27 | 28 | Failing device in replicated configuration 29 | ------------------------------------------ 30 | 31 | +-------------------------+----------------------------------------------------+ 32 | | **Type:** | Error | 33 | +-------------------------+----------------------------------------------------+ 34 | | **Severity:** | Minor | 35 | +-------------------------+----------------------------------------------------+ 36 | | **Description:** | A device has experienced uncorrectable errors in a | 37 | | | replicated configuration. | 38 | +-------------------------+----------------------------------------------------+ 39 | | **Automated Response:** | ZFS has attempted to repair the affected data. | 40 | +-------------------------+----------------------------------------------------+ 41 | | **Impact:** | The system is unaffected, though errors may | 42 | | | indicate future failure. Future errors may cause | 43 | | | ZFS to automatically fault the device. | 44 | +-------------------------+----------------------------------------------------+ 45 | 46 | .. rubric:: Suggested Action for System Administrator 47 | 48 | Run ``zpool status -x`` to determine which pool has experienced errors: 49 | 50 | :: 51 | 52 | # zpool status 53 | pool: test 54 | state: ONLINE 55 | status: One or more devices has experienced an unrecoverable error. An 56 | attempt was made to correct the error. Applications are unaffected. 57 | action: Determine if the device needs to be replaced, and clear the errors 58 | using 'zpool online' or replace the device with 'zpool replace'. 59 | see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P 60 | scrub: none requested 61 | config: 62 | 63 | NAME STATE READ WRITE CKSUM 64 | test ONLINE 0 0 0 65 | mirror ONLINE 0 0 0 66 | c0t0d0 ONLINE 0 0 2 67 | c0t0d1 ONLINE 0 0 0 68 | 69 | errors: No known data errors 70 | 71 | Find the device with a non-zero error count for READ, WRITE, or 72 | CKSUM. 
This indicates that the device has experienced a read I/O 73 | error, write I/O error, or checksum validation error. Because the 74 | device is part of a mirror or RAID-Z device, ZFS was able to recover 75 | from the error and subsequently repair the damaged data. 76 | 77 | If these errors persist over a period of time, ZFS may determine the 78 | device is faulty and mark it as such. However, these error counts may 79 | or may not indicate that the device is unusable. It depends on how 80 | the errors were caused, which the administrator can determine in 81 | advance of any ZFS diagnosis. For example, the following cases will 82 | all produce errors that do not indicate potential device failure: 83 | 84 | - A network attached device lost connectivity but has now 85 | recovered 86 | - A device suffered from a bit flip, an expected event over long 87 | periods of time 88 | - An administrator accidentally wrote over a portion of the disk 89 | using another program 90 | 91 | In these cases, the presence of errors does not indicate that the 92 | device is likely to fail in the future, and therefore does not need 93 | to be replaced. If this is the case, then the device errors should be 94 | cleared using ``zpool clear``: 95 | 96 | :: 97 | 98 | # zpool clear test c0t0d0 99 | 100 | On the other hand, errors may very well indicate that the device has 101 | failed or is about to fail. If there are continual I/O errors to a 102 | device that is otherwise attached and functioning on the system, it 103 | most likely needs to be replaced. The administrator should check the 104 | system log for any driver messages that may indicate hardware 105 | failure. If it is determined that the device needs to be replaced, 106 | then the ``zpool replace`` command should be used: 107 | 108 | :: 109 | 110 | # zpool replace test c0t0d0 c0t0d2 111 | 112 | This will attach the new device to the pool and begin resilvering 113 | data to it. Once the resilvering process is complete, the old device 114 | will automatically be removed from the pool, at which point it can 115 | safely be removed from the system. If the device needs to be replaced 116 | in-place (because there are no available spare devices), the original 117 | device can be removed and replaced with a new device, at which point 118 | a different form of ``zpool replace`` can be used: 119 | 120 | :: 121 | 122 | # zpool replace test c0t0d0 123 | 124 | This assumes that the original device at 'c0t0d0' has been replaced 125 | with a new device under the same path, and will be replaced 126 | appropriately. 127 | 128 | You can monitor the progress of the resilvering operation by using 129 | the ``zpool status -x`` command: 130 | 131 | :: 132 | 133 | # zpool status -x 134 | pool: test 135 | state: DEGRADED 136 | status: One or more devices is currently being replaced. The pool may not be 137 | providing the necessary level of replication. 138 | action: Wait for the resilvering operation to complete 139 | scrub: resilver in progress, 0.14% done, 0h0m to go 140 | config: 141 | 142 | NAME STATE READ WRITE CKSUM 143 | test ONLINE 0 0 0 144 | mirror ONLINE 0 0 0 145 | replacing ONLINE 0 0 0 146 | c0t0d0 ONLINE 0 0 3 147 | c0t0d2 ONLINE 0 0 0 58.5K resilvered 148 | c0t0d1 ONLINE 0 0 0 149 | 150 | errors: No known data errors 151 | 152 | .. rubric:: Details 153 | 154 | The Message ID: ``ZFS-8000-9P`` indicates a device has exceeded the 155 | acceptable limit of errors allowed by the system. See document 156 | `203768 `__ 157 | for additional information. 
158 | 
--------------------------------------------------------------------------------
/docs/Getting Started/RHEL-based distro/index.rst:
--------------------------------------------------------------------------------
1 | RHEL-based distro
2 | =======================
3 | 
4 | Contents
5 | --------
6 | .. toctree::
7 |    :maxdepth: 1
8 |    :glob:
9 | 
10 |    *
11 | 
12 | `DKMS`_ and `kABI-tracking kmod`_ style packages are provided for x86_64 RHEL-
13 | and CentOS-based distributions from the OpenZFS repository. These packages
14 | are updated as new versions are released. Only the repository for the current
15 | minor version of each current major release is updated with new packages.
16 | 
17 | To simplify installation, a *zfs-release* package is provided which includes
18 | a zfs.repo configuration file and public signing key. All official OpenZFS
19 | packages are signed using this key, and by default yum or dnf will verify a
20 | package's signature before allowing it to be installed. Users are strongly
21 | encouraged to verify the authenticity of the OpenZFS public key using
22 | the fingerprint listed here.
23 | 
24 | | **Key location:** /etc/pki/rpm-gpg/RPM-GPG-KEY-openzfs (previously -zfsonlinux)
25 | | **Current release packages:** `EL7`_, `EL8`_, `EL9`_
26 | | **Archived release packages:** `see repo page `__
27 | 
28 | | **Signing key1 (EL8 and older, Fedora 36 and older)**
29 |   `pgp.mit.edu `__ /
30 |   `direct link `__
31 | | **Fingerprint:** C93A FFFD 9F3F 7B03 C310 CEB6 A9D5 A1C0 F14A B620
32 | 
33 | | **Signing key2 (EL9+, Fedora 37+)**
34 |   `pgp.mit.edu `__ /
35 |   `direct link `__
36 | | **Fingerprint:** 7DC7 299D CF7C 7FD9 CD87 701B A599 FD5E 9DB8 4141
37 | 
38 | For EL 7 run::
39 | 
40 |    yum install https://zfsonlinux.org/epel/zfs-release-2-3$(rpm --eval "%{dist}").noarch.rpm
41 | 
42 | and for EL 8-10::
43 | 
44 |    dnf install https://zfsonlinux.org/epel/zfs-release-3-0$(rpm --eval "%{dist}").noarch.rpm
45 | 
46 | After installing the *zfs-release* package and verifying the public key,
47 | users can opt to install either the DKMS or kABI-tracking kmod style packages.
48 | DKMS packages are recommended for users running a non-distribution kernel or
49 | for users who wish to apply local customizations to OpenZFS. For most users
50 | the kABI-tracking kmod packages are recommended in order to avoid needing to
51 | rebuild OpenZFS for every kernel update.
52 | 
53 | DKMS
54 | ----
55 | 
56 | To install DKMS style packages issue the following commands. First add the
57 | `EPEL repository`_ which provides DKMS by installing the *epel-release*
58 | package, then the *kernel-devel* and *zfs* packages. Note that it is
59 | important to make sure that the matching *kernel-devel* package is installed
60 | for the running kernel since DKMS requires it to build OpenZFS.
61 | 
62 | For EL6 and 7, separately run::
63 | 
64 |    yum install -y epel-release
65 |    yum install -y kernel-devel
66 |    yum install -y zfs
67 | 
68 | And for EL8 and newer, separately run::
69 | 
70 |    dnf install -y epel-release
71 |    dnf install -y kernel-devel
72 |    dnf install -y zfs
73 | 
74 | .. note::
75 |    When switching from DKMS to kABI-tracking kmods first uninstall the
76 |    existing DKMS packages. This should remove the kernel modules for all
77 |    installed kernels, then the kABI-tracking kmods can be installed as
78 |    described in the section below.
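Once the *zfs* package is installed, you can confirm that DKMS built and
installed the module for the running kernel (a quick sanity check; it assumes
the stock ``dkms`` and ``modinfo`` tools are available)::

   dkms status zfs          # should report the module as "installed"
   modinfo -F version zfs   # version of the zfs module the kernel will load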
79 | 
80 | kABI-tracking kmod
81 | ------------------
82 | 
83 | By default the *zfs-release* package is configured to install DKMS style
84 | packages so they will work with a wide range of kernels. In order to
85 | install the kABI-tracking kmods the default repository must be switched
86 | from *zfs* to *zfs-kmod*. Keep in mind that the kABI-tracking kmods are
87 | only verified to work with the distribution-provided, non-Stream kernel.
88 | 
89 | For EL6 and 7 run::
90 | 
91 |    yum-config-manager --disable zfs
92 |    yum-config-manager --enable zfs-kmod
93 |    yum install zfs
94 | 
95 | And for EL8 and newer::
96 | 
97 |    dnf config-manager --disable zfs
98 |    dnf config-manager --enable zfs-kmod
99 |    dnf install zfs
100 | 
101 | By default the OpenZFS kernel modules are automatically loaded when a ZFS
102 | pool is detected. If you would prefer to always load the modules at boot
103 | time, you can create a configuration file for that in ``/etc/modules-load.d``::
104 | 
105 |    echo zfs >/etc/modules-load.d/zfs.conf
106 | 
107 | .. note::
108 |    When updating to a new EL minor release the existing kmod
109 |    packages may not work due to upstream kABI changes in the kernel.
110 |    The configuration of the current release package may have already made an
111 |    updated package available, but the package manager may not know to install
112 |    that package if the version number isn't newer. When upgrading, users
113 |    should verify that the *kmod-zfs* package is providing suitable kernel
114 |    modules, reinstalling the *kmod-zfs* package if necessary.
115 | 
116 | Previous minor EL releases
117 | --------------------------
118 | 
119 | The current release package uses `"${releasever}"` rather than specifying a
120 | particular minor release as previous release packages did. Typically
121 | `"${releasever}"` will resolve to just the major version (e.g. `8`), and the
122 | resulting repository URL will be aliased to the current minor version (e.g.
123 | `8.7`), but you can specify `--releasever` to use previous repositories. ::
124 | 
125 |    [vagrant@localhost ~]$ dnf list available --showduplicates kmod-zfs
126 |    Last metadata expiration check: 0:00:08 ago on tor 31 jan 2023 17:50:05 UTC.
127 |    Available Packages
128 |    kmod-zfs.x86_64      2.1.6-1.el8      zfs-kmod
129 |    kmod-zfs.x86_64      2.1.7-1.el8      zfs-kmod
130 |    kmod-zfs.x86_64      2.1.8-1.el8      zfs-kmod
131 |    kmod-zfs.x86_64      2.1.9-1.el8      zfs-kmod
132 |    [vagrant@localhost ~]$ dnf list available --showduplicates --releasever=8.6 kmod-zfs
133 |    Last metadata expiration check: 0:16:13 ago on tor 31 jan 2023 17:34:10 UTC.
134 |    Available Packages
135 |    kmod-zfs.x86_64      2.1.4-1.el8      zfs-kmod
136 |    kmod-zfs.x86_64      2.1.5-1.el8      zfs-kmod
137 |    kmod-zfs.x86_64      2.1.5-2.el8      zfs-kmod
138 |    kmod-zfs.x86_64      2.1.6-1.el8      zfs-kmod
139 |    [vagrant@localhost ~]$
140 | 
141 | In the above example, the former packages were built for EL8.7, and the latter for EL8.6.
142 | 
143 | Testing Repositories
144 | --------------------
145 | 
146 | In addition to the primary *zfs* repository a *zfs-testing* repository
147 | is available. This repository, which is disabled by default, contains
148 | the latest version of OpenZFS which is under active development. These
149 | packages are made available in order to get feedback from users regarding
150 | the functionality and stability of upcoming releases. These packages
151 | **should not** be used on production systems. Packages from the testing
152 | repository can be installed as follows.
153 | 154 | For EL6 and 7 run:: 155 | 156 | yum-config-manager --enable zfs-testing 157 | yum install kernel-devel zfs 158 | 159 | And for EL8 and newer:: 160 | 161 | dnf config-manager --enable zfs-testing 162 | dnf install kernel-devel zfs 163 | 164 | .. note:: 165 | Use *zfs-testing* for DKMS packages and *zfs-testing-kmod* 166 | for kABI-tracking kmod packages. 167 | 168 | Root on ZFS 169 | ----------- 170 | .. toctree:: 171 | :maxdepth: 1 172 | :glob: 173 | 174 | * 175 | 176 | .. _kABI-tracking kmod: https://elrepoproject.blogspot.com/2016/02/kabi-tracking-kmod-packages.html 177 | .. _DKMS: https://en.wikipedia.org/wiki/Dynamic_Kernel_Module_Support 178 | .. _EL7: https://zfsonlinux.org/epel/zfs-release-2-3.el7.noarch.rpm 179 | .. _EL8: https://zfsonlinux.org/epel/zfs-release-2-3.el8.noarch.rpm 180 | .. _EL9: https://zfsonlinux.org/epel/zfs-release-2-3.el9.noarch.rpm 181 | .. _EPEL repository: https://fedoraproject.org/wiki/EPEL 182 | -------------------------------------------------------------------------------- /docs/conf.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # 3 | # Configuration file for the Sphinx documentation builder. 4 | # 5 | # This file does only contain a selection of the most common options. For a 6 | # full list see the documentation: 7 | # http://www.sphinx-doc.org/en/master/config 8 | 9 | from pathlib import Path 10 | 11 | import sphinx_rtd_theme # noqa 12 | 13 | # -- Path setup -------------------------------------------------------------- 14 | 15 | # If extensions (or modules to document with autodoc) are in another directory, 16 | # add these directories to sys.path here. If the directory is relative to the 17 | # documentation root, use os.path.abspath to make it absolute, like shown here. 18 | # 19 | # import os 20 | # import sys 21 | # sys.path.insert(0, os.path.abspath('.')) 22 | 23 | 24 | # -- Project information ----------------------------------------------------- 25 | 26 | project = u'OpenZFS' 27 | copyright = u'%Y, OpenZFS' 28 | html_last_updated_fmt = '' 29 | author = u'OpenZFS' 30 | 31 | # The short X.Y version 32 | version = u'' 33 | # The full version, including alpha/beta/rc tags 34 | release = u'' 35 | 36 | 37 | # -- General configuration --------------------------------------------------- 38 | 39 | # If your documentation needs a minimal Sphinx version, state it here. 40 | # 41 | # needs_sphinx = '1.0' 42 | 43 | # Add any Sphinx extension module names here, as strings. They can be 44 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 45 | # ones. 46 | extensions = [ 47 | 'sphinx.ext.todo', 48 | 'sphinx.ext.githubpages', 49 | 'sphinx.ext.intersphinx', 50 | 'sphinx.ext.ifconfig', 51 | "sphinx_issues", 52 | "sphinx_rtd_theme", 53 | "notfound.extension", 54 | "sphinxext.rediraffe", 55 | "sphinx_last_updated_by_git", 56 | ] 57 | 58 | # Add any paths that contain templates here, relative to this directory. 59 | templates_path = ['_templates'] 60 | 61 | # The suffix(es) of source filenames. 62 | # You can specify multiple suffix as a list of string: 63 | # 64 | # source_suffix = ['.rst', '.md'] 65 | source_suffix = { 66 | '.rst': 'restructuredtext', 67 | '.md': 'markdown', 68 | } 69 | 70 | # The master toctree document. 71 | master_doc = 'index' 72 | 73 | # The language for content autogenerated by Sphinx. Refer to documentation 74 | # for a list of supported languages. 75 | # 76 | # This is also used if you do content translation via gettext catalogs. 
77 | # Usually you set "language" from the command line for these cases. 78 | language = 'en' 79 | 80 | # List of patterns, relative to source directory, that match files and 81 | # directories to ignore when looking for source files. 82 | # This pattern also affects html_static_path and html_extra_path. 83 | exclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store'] 84 | 85 | # The name of the Pygments (syntax highlighting) style to use. 86 | pygments_style = None 87 | 88 | 89 | # https://www.sphinx-doc.org/en/master/usage/extensions/ifconfig.html 90 | # hide commands for tests in user documentation 91 | def setup(app): 92 | app.add_config_value('zfs_root_test', default=True, rebuild='env') 93 | 94 | zfs_root_test = False 95 | 96 | 97 | # -- Options for HTML output ------------------------------------------------- 98 | 99 | # The theme to use for HTML and HTML Help pages. See the documentation for 100 | # a list of builtin themes. 101 | # 102 | html_theme = 'sphinx_rtd_theme' 103 | 104 | # Theme options are theme-specific and customize the look and feel of a theme 105 | # further. For a list of options available for each theme, see the 106 | # documentation. 107 | # 108 | html_theme_options = { 109 | 'logo_only': True, 110 | 'style_nav_header_background': '#29667e', 111 | } 112 | 113 | html_favicon = '_static/img/favicon.ico' 114 | 115 | html_logo = '_static/img/logo/logo_main.png' 116 | 117 | # Add any paths that contain custom static files (such as style sheets) here, 118 | # relative to this directory. They are copied after the builtin static files, 119 | # so a file named "default.css" will overwrite the builtin "default.css". 120 | html_static_path = ['_static'] 121 | 122 | html_context = { 123 | 'css_files': [ 124 | '_static/css/theme.css', # set explicitly for local tests 125 | '_static/css/theme_overrides.css', # override wide tables in RTD theme 126 | '_static/css/mandoc.css', 127 | ], 128 | 'display_github': True, 129 | 'github_user': 'openzfs', 130 | 'github_repo': 'openzfs-docs', 131 | 'github_version': 'master/docs/' 132 | } 133 | 134 | html_js_files = [ 135 | 'js/redirect.js', 136 | ] 137 | 138 | # Custom sidebar templates, must be a dictionary that maps document names 139 | # to template names. 140 | # 141 | # The default sidebars (for documents that don't match any pattern) are 142 | # defined by theme itself. Builtin themes are using these templates by 143 | # default: ``['localtoc.html', 'relations.html', 'sourcelink.html', 144 | # 'searchbox.html']``. 145 | # 146 | # html_sidebars = {} 147 | 148 | 149 | # -- Options for HTMLHelp output --------------------------------------------- 150 | 151 | # Output file base name for HTML help builder. 152 | htmlhelp_basename = 'OpenZFSdoc' 153 | 154 | 155 | # -- Options for LaTeX output ------------------------------------------------ 156 | 157 | latex_elements = { 158 | # The paper size ('letterpaper' or 'a4paper'). 159 | # 160 | # 'papersize': 'letterpaper', 161 | 162 | # The font size ('10pt', '11pt' or '12pt'). 163 | # 164 | # 'pointsize': '10pt', 165 | 166 | # Additional stuff for the LaTeX preamble. 167 | # 168 | # 'preamble': '', 169 | 170 | # Latex figure (float) alignment 171 | # 172 | # 'figure_align': 'htbp', 173 | } 174 | 175 | # Grouping the document tree into LaTeX files. List of tuples 176 | # (source start file, target name, title, 177 | # author, documentclass [howto, manual, or own class]). 
178 | latex_documents = [ 179 | (master_doc, 'OpenZFS.tex', u'OpenZFS Documentation', 180 | u'OpenZFS', 'manual'), 181 | ] 182 | 183 | 184 | # -- Options for manual page output ------------------------------------------ 185 | 186 | # One entry per manual page. List of tuples 187 | # (source start file, name, description, authors, manual section). 188 | man_pages = [ 189 | (master_doc, 'openzfs', u'OpenZFS Documentation', 190 | [author], 1) 191 | ] 192 | 193 | 194 | # -- Options for Texinfo output ---------------------------------------------- 195 | 196 | # Grouping the document tree into Texinfo files. List of tuples 197 | # (source start file, target name, title, author, 198 | # dir menu entry, description, category) 199 | texinfo_documents = [ 200 | (master_doc, 'OpenZFS', u'OpenZFS Documentation', 201 | author, 'OpenZFS', 'One line description of project.', 202 | 'Miscellaneous'), 203 | ] 204 | 205 | 206 | # -- Options for Epub output ------------------------------------------------- 207 | 208 | # Bibliographic Dublin Core info. 209 | epub_title = project 210 | 211 | # The unique identifier of the text. This can be a ISBN number 212 | # or the project homepage. 213 | # 214 | # epub_identifier = '' 215 | 216 | # A unique identification for the text. 217 | # 218 | # epub_uid = '' 219 | 220 | # A list of files that should not be packed into the epub file. 221 | epub_exclude_files = ['search.html'] 222 | 223 | 224 | # -- Extension configuration ------------------------------------------------- 225 | 226 | # -- Options for todo extension ---------------------------------------------- 227 | 228 | # If true, `todo` and `todoList` produce output, else they produce nothing. 229 | todo_include_todos = True 230 | 231 | # GitHub repo 232 | issues_github_path = "openzfs/zfs" 233 | 234 | # equivalent to 235 | issues_uri = "https://github.com/openzfs/zfs/issues/{issue}" 236 | issues_pr_uri = "https://github.com/openzfs/zfs/pull/{pr}" 237 | issues_commit_uri = "https://github.com/openzfs/zfs/commit/{commit}" 238 | 239 | # Get absolute paths in 404 240 | notfound_urls_prefix = '/openzfs-docs/' 241 | 242 | # Redirects 243 | rediraffe_redirects = { 244 | "Performance and Tuning/Async Write.rst": "Performance and Tuning/ZIO Scheduler.rst", 245 | } 246 | 247 | # Old man pages location -> to new master branch location 248 | redirect_folders = { 249 | "man/1/": "man/master/1/", 250 | "man/4/": "man/master/4/", 251 | "man/5/": "man/master/5/", 252 | "man/7/": "man/master/7/", 253 | "man/8/": "man/master/8/", 254 | } 255 | 256 | # rediraffe doesn't support wildcards or regex, so fill redirects manually 257 | for old, new in redirect_folders.items(): 258 | for newpath in Path(new).rglob("**/*"): 259 | if newpath.suffix in [".rst"]: 260 | oldpath = str(newpath).replace(new, old, 1) 261 | rediraffe_redirects[oldpath] = str(newpath) 262 | -------------------------------------------------------------------------------- /docs/Basic Concepts/Checksums.rst: -------------------------------------------------------------------------------- 1 | Checksums and Their Use in ZFS 2 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 3 | 4 | End-to-end checksums are a key feature of ZFS and an important 5 | differentiator for ZFS over other RAID implementations and filesystems. 
6 | Advantages of end-to-end checksums include:
7 | 
8 | - detects data corruption upon reading from media
9 | - blocks that are detected as corrupt are automatically repaired if
10 |   possible, by using the RAID protection in suitably configured pools,
11 |   or redundant copies (see the zfs ``copies`` property)
12 | - periodic scrubs can check data to detect and repair latent media
13 |   degradation (bit rot) and corruption from other sources
14 | - checksums on ZFS replication streams, ``zfs send`` and
15 |   ``zfs receive``, ensure the data received is not corrupted by
16 |   intervening storage or transport mechanisms
17 | 
18 | Checksum Algorithms
19 | ^^^^^^^^^^^^^^^^^^^
20 | 
21 | The checksum algorithms in ZFS can be changed for datasets (filesystems
22 | or volumes). The checksum algorithm used for each block is stored in the
23 | block pointer (metadata). The block checksum is calculated when the
24 | block is written, so changing the algorithm only affects writes
25 | occurring after the change.
26 | 
27 | The checksum algorithm for a dataset can be changed by setting the
28 | ``checksum`` property:
29 | 
30 | .. code:: bash
31 | 
32 |    zfs set checksum=sha256 pool_name/dataset_name
33 | 
34 | +-----------+--------------+------------------------+-------------------------+
35 | | Checksum  | Ok for dedup | Compatible with        | Notes                   |
36 | |           | and nopwrite?| other ZFS              |                         |
37 | |           |              | implementations?       |                         |
38 | +===========+==============+========================+=========================+
39 | | on        | see notes    | yes                    | ``on`` is a             |
40 | |           |              |                        | shorthand for           |
41 | |           |              |                        | ``fletcher4``           |
42 | |           |              |                        | for non-deduped         |
43 | |           |              |                        | datasets and            |
44 | |           |              |                        | ``sha256`` for          |
45 | |           |              |                        | deduped                 |
46 | |           |              |                        | datasets                |
47 | +-----------+--------------+------------------------+-------------------------+
48 | | off       | no           | yes                    | Do not use              |
49 | |           |              |                        | ``off``                 |
50 | +-----------+--------------+------------------------+-------------------------+
51 | | fletcher2 | no           | yes                    | Deprecated              |
52 | |           |              |                        | implementation          |
53 | |           |              |                        | of Fletcher             |
54 | |           |              |                        | checksum, use           |
55 | |           |              |                        | ``fletcher4``           |
56 | |           |              |                        | instead                 |
57 | +-----------+--------------+------------------------+-------------------------+
58 | | fletcher4 | no           | yes                    | Fletcher                |
59 | |           |              |                        | algorithm, also         |
60 | |           |              |                        | used for                |
61 | |           |              |                        | ``zfs send``            |
62 | |           |              |                        | streams                 |
63 | +-----------+--------------+------------------------+-------------------------+
64 | | sha256    | yes          | yes                    | Default for             |
65 | |           |              |                        | deduped                 |
66 | |           |              |                        | datasets                |
67 | +-----------+--------------+------------------------+-------------------------+
68 | | noparity  | no           | yes                    | Do not use              |
69 | |           |              |                        | ``noparity``            |
70 | +-----------+--------------+------------------------+-------------------------+
71 | | sha512    | yes          | requires pool          | salted                  |
72 | |           |              | feature                | ``sha512``              |
73 | |           |              | ``org.illumos:sha512`` | currently not           |
74 | |           |              |                        | supported for           |
75 | |           |              |                        | any filesystem          |
76 | |           |              |                        | on the boot             |
77 | |           |              |                        | pools                   |
78 | +-----------+--------------+------------------------+-------------------------+
79 | | skein     | yes          | requires pool          | salted                  |
80 | |           |              | feature                | ``skein``               |
81 | |           |              | ``org.illumos:skein``  | currently not           |
82 | |           |              |                        | supported for           |
83 | |           |              |                        | any filesystem          |
84 | |           |              |                        | on the boot             |
85 | |           |              |                        | pools                   |
86 | +-----------+--------------+------------------------+-------------------------+
87 | | edonr     | see notes    | requires pool          | salted                  |
88 | |           |              | feature                | ``edonr``               |
89 | |           |              | ``org.illumos:edonr``  | currently not           |
90 | |           |              |                        | supported for           |
91 | |           |              |                        | any filesystem          |
92 | |           |              |                        | on the boot             |
93 | |           |              |                        | pools                   |
94 | |           |              |                        |                         |
95 | |           |              |                        | In an abundance of      |
96 | |           |              |                        | caution, Edon-R requires|
97 | |           |              |                        | verification when used  |
98 | |           |              |                        | with dedup, so it will  |
99 | |           |              |                        | automatically use       |
100 | |           |              |                        | ``verify``.             |
101 | |           |              |                        |                         |
102 | +-----------+--------------+------------------------+-------------------------+
103 | | blake3    | yes          | requires pool          | salted                  |
104 | |           |              | feature                | ``blake3``              |
105 | |           |              | ``org.openzfs:blake3`` | currently not           |
106 | |           |              |                        | supported for           |
107 | |           |              |                        | any filesystem          |
108 | |           |              |                        | on the boot             |
109 | |           |              |                        | pools                   |
110 | +-----------+--------------+------------------------+-------------------------+
111 | 
112 | Checksum Accelerators
113 | ^^^^^^^^^^^^^^^^^^^^^
114 | 
115 | ZFS has the ability to offload checksum operations to Intel
116 | QuickAssist Technology (QAT) adapters.
117 | 
118 | Checksum Microbenchmarks
119 | ^^^^^^^^^^^^^^^^^^^^^^^^
120 | 
121 | Some ZFS features use microbenchmarks when the ``zfs.ko`` kernel module
122 | is loaded to determine the optimal algorithm for checksums. The results
123 | of the microbenchmarks are observable in the ``/proc/spl/kstat/zfs``
124 | directory. The winning algorithm is reported as the "fastest" and
125 | becomes the default. The default can be overridden by setting zfs module
126 | parameters.
127 | 
128 | ========= ==================================== ========================
129 | Checksum  Results Filename                     ``zfs`` module parameter
130 | ========= ==================================== ========================
131 | Fletcher4 /proc/spl/kstat/zfs/fletcher_4_bench zfs_fletcher_4_impl
132 | all-other /proc/spl/kstat/zfs/chksum_bench     zfs_blake3_impl,
133 |                                                zfs_sha256_impl,
134 |                                                zfs_sha512_impl
135 | ========= ==================================== ========================
136 | 
137 | Disabling Checksums
138 | ^^^^^^^^^^^^^^^^^^^
139 | 
140 | While it may be tempting to disable checksums to improve CPU
141 | performance, it is widely considered by the ZFS community to be an
142 | extraordinarily bad idea. Don't disable checksums.
143 | 
--------------------------------------------------------------------------------
/docs/Developer Resources/Git and GitHub for beginners.rst:
--------------------------------------------------------------------------------
1 | Git and GitHub for beginners (ZoL edition)
2 | ==========================================
3 | 
4 | This is a very basic rundown of how to use Git and GitHub to make
5 | changes.
6 | 
7 | Recommended reading: `ZFS on Linux
8 | CONTRIBUTING.md `__
9 | 
10 | First time setup
11 | ----------------
12 | 
13 | If you've never used Git before, you'll need a little setup to start
14 | things off.
15 | 
16 | ::
17 | 
18 |    git config --global user.name "My Name"
19 |    git config --global user.email myemail@noreply.non
20 | 
21 | Cloning the initial repository
22 | ------------------------------
23 | 
24 | The easiest way to get started is to click the fork icon at the top of
25 | the main repository page. From there you need to download a copy of the
26 | forked repository to your computer:
27 | 
28 | ::
29 | 
30 |    git clone https://github.com/<user>/zfs.git
31 | 
32 | This sets the "origin" repository to your fork. This will come in handy
33 | when creating pull requests.
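At this point it is worth confirming the remote configuration; ``git remote -v``
should list only ``origin``, pointing at your fork (a quick sanity check)::

   git remote -v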
To make it easy to pull in changes from the "upstream"
34 | repository as they are made, it is very useful to establish the
35 | upstream repository as another remote (man git-remote):
36 | 
37 | ::
38 | 
39 |    cd zfs
40 |    git remote add upstream https://github.com/zfsonlinux/zfs.git
41 | 
42 | Preparing and making changes
43 | ----------------------------
44 | 
45 | In order to make changes, it is recommended to create a branch; this lets
46 | you work on several unrelated changes at once. It is also not
47 | recommended to make changes directly to the master branch unless you own the
48 | repository.
49 | 
50 | ::
51 | 
52 |    git checkout -b my-new-branch
53 | 
54 | From here you can make your changes and move on to the next step.
55 | 
56 | Recommended reading: `C Style and Coding Standards for
57 | SunOS `__,
58 | `ZFS on Linux Developer
59 | Resources `__,
60 | `OpenZFS Developer
61 | Resources `__
62 | 
63 | Testing your patches before pushing
64 | -----------------------------------
65 | 
66 | Before committing and pushing, you may want to test your patches. There
67 | are several tests you can run against your branch, such as style
68 | checking and functional tests. All pull requests go through these tests
69 | before being merged into the main repository; however, testing locally
70 | takes the load off the build/test servers. This step is optional but
71 | highly recommended. Note that the test suite should be run on a virtual
72 | machine or a host that does not currently use ZFS. You may need to
73 | install ``shellcheck`` and ``flake8`` for the ``checkstyle`` target to
74 | run correctly.
75 | 
76 | ::
77 | 
78 |    sh autogen.sh
79 |    ./configure
80 |    make checkstyle
81 | 
82 | Recommended reading: `Building
83 | ZFS `__, `ZFS Test
84 | Suite
85 | README `__
86 | 
87 | Committing your changes to be pushed
88 | ------------------------------------
89 | 
90 | When you are done making changes to your branch, there are a few more
91 | steps before you can make a pull request.
92 | 
93 | ::
94 | 
95 |    git commit --all --signoff
96 | 
97 | This command stages all modified and deleted files already tracked in your
98 | branch and opens an editor for the commit message. Here you need to describe
99 | your change and add a few things:
100 | 
101 | ::
102 | 
103 | 
104 |    # Please enter the commit message for your changes. Lines starting
105 |    # with '#' will be ignored, and an empty message aborts the commit.
106 |    # On branch my-new-branch
107 |    # Changes to be committed:
108 |    #   (use "git reset HEAD <file>..." to unstage)
109 |    #
110 |    #       modified:   hello.c
111 |    #
112 | 
113 | The first thing we need to add is the commit message. This is what is
114 | displayed in the git log, and should be a short description of the
115 | change. By style guidelines, this has to be less than 72 characters in
116 | length.
117 | 
118 | Underneath the commit message you can add more descriptive text to
119 | your commit. The lines in this section have to be less than 72
120 | characters.
121 | 
122 | When you are done, the commit should look like this:
123 | 
124 | ::
125 | 
126 |    Add hello command
127 | 
128 |    This is a test commit with a descriptive commit message.
129 |    This message can be more than one line as shown here.
130 | 
131 |    Signed-off-by: My Name <myemail@noreply.non>
132 |    Closes #9998
133 |    Issue #9999
134 |    # Please enter the commit message for your changes. Lines starting
135 |    # with '#' will be ignored, and an empty message aborts the commit.
136 |    # On branch my-new-branch
137 |    # Changes to be committed:
138 |    #   (use "git reset HEAD <file>..." to unstage)
139 |    #
140 |    #       modified:   hello.c
141 |    #
142 | 
143 | You can also reference issues and pull requests if you are filing a pull
144 | request for an existing issue as shown above. Save and exit the editor
145 | when you are done.
146 | 
147 | Pushing and creating the pull request
148 | -------------------------------------
149 | 
150 | Home stretch. You've made your change and made the commit. Now it's time
151 | to push it.
152 | 
153 | ::
154 | 
155 |    git push --set-upstream origin my-new-branch
156 | 
157 | This should ask you for your GitHub credentials and upload your changes
158 | to your repository.
159 | 
160 | The last step is to go to either your repository or the upstream
161 | repository on GitHub, where you should see a button for making a new pull
162 | request for your recently committed branch.
163 | 
164 | Correcting issues with your pull request
165 | ----------------------------------------
166 | 
167 | Sometimes things don't go as planned and you may need to update
168 | your pull request with a correction to either your commit message, or
169 | your changes. This can be accomplished by re-pushing your branch. If you
170 | need to make code changes or ``git add`` a file, you can do those now,
171 | along with the following:
172 | 
173 | ::
174 | 
175 |    git commit --amend
176 |    git push --force
177 | 
178 | This will return you to the commit editor screen, and push your changes
179 | over the top of the old ones. Do note that this will restart any
180 | build/test runs currently in progress, and excessive pushing can cause
181 | delays in the processing of all pull requests.
182 | 
183 | Maintaining your repository
184 | ---------------------------
185 | 
186 | When you wish to make changes in the future, you will want to have an
187 | up-to-date copy of the upstream repository to make your changes on. Here
188 | is how to keep it updated:
189 | 
190 | ::
191 | 
192 |    git checkout master
193 |    git pull upstream master
194 |    git push origin master
195 | 
196 | This will make sure you are on the master branch of the repository, grab
197 | the changes from upstream, then push them back to your repository.
198 | 
199 | Backporting a commit to a release branch
200 | ----------------------------------------
201 | 
202 | Users may want to backport commits from the ``master`` branch to one
203 | of the release branches. To do that, first check out the "staging"
204 | branch for the release version you want to pull the commit into.
205 | For example, if you want to backport commit XYZ from ``master`` into the
206 | future ``zfs-2.3.6`` release, first check out the ``zfs-2.3.6-staging``
207 | branch, and then pull the commit in with ``git cherry-pick XYZ`` (and
208 | resolve any merge conflicts). You can then open a PR with your backport
209 | branch against the ``zfs-2.3.6-staging`` branch (make sure to select
210 | this branch in the PR target dropdown box).
211 | 
212 | Please keep the commit message the same when you backport. This means
213 | keeping the author the same, and all of the attestation lines the
214 | same (Signed-off-by, Reviewed-by, etc). It is assumed those attestation
215 | lines refer to the original commit from ``master``, and not necessarily
216 | the backport commit (even though it may have needed changes to backport).
217 | 
218 | You may optionally add a ``Backported-by:`` line with your name if
219 | desired.
220 | 
221 | Sometimes you may need to add a small, version-specific commit that only
goes in the release branch.
211 | Please keep the commit message the same when you backport. This means 212 | keeping the author the same, and all of the attestation lines the 213 | same (Signed-off-by, Reviewed-by, etc.). It is assumed those attestation 214 | lines refer to the original commit from ``master``, and not necessarily 215 | the backport commit (even though it may have needed changes to backport). 216 | 217 | You may optionally add a ``Backported-by:`` line with your name. 218 | 219 | 220 | Sometimes you may need to add a small, version-specific commit that only 221 | goes in the release branch. For those cases, add a tag to the commit 222 | line with the release version, in the format of "[x.y.z-only]". For 223 | example: 224 | 225 | 226 | :: 227 | 228 | [2.3.6-only] Disable feature X by default 229 | 230 | Unlike the master branch, turn feature X off by default 231 | for the zfs-2.3.6 release. 232 | 233 | Signed-off-by: Tony Hutter 234 | 235 | Final words 236 | ----------- 237 | 238 | This is a very basic introduction to Git and GitHub, but should get you 239 | on your way to contributing to many open source projects. Not all 240 | projects have style requirements, and some may have different processes 241 | for getting changes committed, so please refer to their documentation to 242 | see if you need to do anything different. One topic we have not touched 243 | on is the ``git rebase`` command, which is a little too advanced for 244 | this wiki article. 245 | 246 | Additional resources: `GitHub Help `__, 247 | `Atlassian Git Tutorials `__ 248 | -------------------------------------------------------------------------------- /docs/Getting Started/NixOS/Root on ZFS.rst: -------------------------------------------------------------------------------- 1 | .. highlight:: sh 2 | 3 | .. ifconfig:: zfs_root_test 4 | 5 | # For the CI/CD test run of this guide, 6 | # Enable verbose logging of bash shell and fail immediately when 7 | # a command fails. 8 | set -vxeuf 9 | 10 | .. In this document, there are three types of code-block markups: 11 | ``::`` are commands intended for both the vm test and the users 12 | ``.. ifconfig:: zfs_root_test`` are commands intended only for vm test 13 | ``.. code-block:: sh`` are commands intended only for users 14 | 15 | NixOS Root on ZFS 16 | ======================================= 17 | 18 | **Customization** 19 | 20 | Unless stated otherwise, it is not recommended to customize system 21 | configuration before reboot. 22 | 23 | **UEFI support only** 24 | 25 | Only UEFI is supported by this guide. Make sure your computer is 26 | booted in UEFI mode. 27 | 28 | Preparation 29 | --------------------------- 30 | 31 | #. Download `NixOS Live Image 32 | `__ and boot from it. 33 | 34 | .. code-block:: sh 35 | 36 | sha256sum -c ./nixos-*.sha256 37 | 38 | dd if=input-file of=output-file bs=1M 39 | 40 | #. Connect to the Internet. 41 | #. Set the root password or populate ``/root/.ssh/authorized_keys``. 42 | #. Start the SSH server 43 | 44 | .. code-block:: sh 45 | 46 | systemctl restart sshd 47 | 48 | #. Connect from another computer 49 | 50 | .. code-block:: sh 51 | 52 | ssh root@192.168.1.91 53 | 54 | #. Target disk 55 | 56 | List available disks with 57 | 58 | .. code-block:: sh 59 | 60 | find /dev/disk/by-id/ 61 | 62 | If virtio is used as the disk bus, power off the VM and set serial numbers for the disks. 63 | For QEMU, use ``-drive format=raw,file=disk2.img,serial=AaBb``. 64 | For libvirt, edit the domain XML. See `this page 65 | `__ for examples. 66 | 67 | Declare disk array 68 | 69 | .. code-block:: sh 70 | 71 | DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR' 72 | 73 | For single disk installation, use 74 | 75 | .. code-block:: sh 76 | 77 | DISK='/dev/disk/by-id/disk1' 78 |
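   As a quick sanity check (an optional addition, not part of the
   original steps), you can resolve each identifier to its underlying
   device node before continuing:

   .. code-block:: sh

      for i in ${DISK}; do
        readlink -f "${i}"
      done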
79 | .. ifconfig:: zfs_root_test 80 | 81 | :: 82 | 83 | # install installation tools 84 | nix-env -f '<nixpkgs>' -iA nixos-install-tools 85 | 86 | # for github test run, use chroot and loop devices 87 | DISK="$(losetup --all| grep nixos | cut -f1 -d: | xargs -t -I '{}' printf '{} ')" 88 | 89 | # if there is no loopdev, then we are using qemu virtualized test 90 | # run, use sata disks instead 91 | if test -z "${DISK}"; then 92 | DISK=$(find /dev/disk/by-id -type l | grep -v DVD-ROM | grep -v -- -part | xargs -t -I '{}' printf '{} ') 93 | fi 94 | 95 | #. Set a mount point 96 | :: 97 | 98 | MNT=$(mktemp -d) 99 | 100 | #. Set partition size: 101 | 102 | Set the swap size in GB; set it to 1 if you don't want swap to 103 | take up too much space 104 | 105 | .. code-block:: sh 106 | 107 | SWAPSIZE=4 108 | 109 | .. ifconfig:: zfs_root_test 110 | 111 | # For the test run, use 1GB swap space to avoid hitting CI/CD 112 | # quota 113 | SWAPSIZE=1 114 | 115 | Set how much space should be left at the end of the disk, minimum 1GB 116 | 117 | :: 118 | 119 | RESERVE=1 120 | 121 | System Installation 122 | --------------------------- 123 | 124 | #. Partition the disks. 125 | 126 | Note: you must clear all existing partition tables and data structures from the target disks. 127 | 128 | For flash-based storage, this can be done with the ``blkdiscard`` command below: 129 | :: 130 | 131 | partition_disk () { 132 | local disk="${1}" 133 | blkdiscard -f "${disk}" || true 134 | 135 | parted --script --align=optimal "${disk}" -- \ 136 | mklabel gpt \ 137 | mkpart EFI 1MiB 4GiB \ 138 | mkpart rpool 4GiB -$((SWAPSIZE + RESERVE))GiB \ 139 | mkpart swap -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \ 140 | set 1 esp on \ 141 | 142 | partprobe "${disk}" 143 | } 144 | 145 | for i in ${DISK}; do 146 | partition_disk "${i}" 147 | done 148 | 149 | .. ifconfig:: zfs_root_test 150 | 151 | :: 152 | 153 | # When working with GitHub chroot runners, we are using loop 154 | # devices as installation target. However, the alias support for 155 | # loop device was just introduced in March 2023. See 156 | # https://github.com/systemd/systemd/pull/26693 157 | # For now, we will create the aliases manually as a workaround 158 | looppart="1 2 3 4 5" 159 | for i in ${DISK}; do 160 | for j in ${looppart}; do 161 | if test -e "${i}p${j}"; then 162 | ln -s "${i}p${j}" "${i}-part${j}" 163 | fi 164 | done 165 | done 166 | 167 | #. Set up temporary encrypted swap for this installation only. This is 168 | useful if the available memory is small:: 169 | 170 | for i in ${DISK}; do 171 | cryptsetup open --type plain --key-file /dev/random "${i}"-part3 "${i##*/}"-part3 172 | mkswap /dev/mapper/"${i##*/}"-part3 173 | swapon /dev/mapper/"${i##*/}"-part3 174 | done 175 | 176 | 177 | #. **LUKS only**: Set up an encrypted LUKS container for the root pool:: 178 | 179 | for i in ${DISK}; do 180 | # see PASSPHRASE PROCESSING section in cryptsetup(8) 181 | printf "YOUR_PASSWD" | cryptsetup luksFormat --type luks2 "${i}"-part2 - 182 | printf "YOUR_PASSWD" | cryptsetup luksOpen "${i}"-part2 luks-rpool-"${i##*/}"-part2 - 183 | done 184 |
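   (Optional check, an addition to the original guide.) Before building
   the pool on top of the LUKS containers, you can confirm they opened
   as expected:

   .. code-block:: sh

      for i in ${DISK}; do
        cryptsetup status luks-rpool-"${i##*/}"-part2
      done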
185 | #. Create root pool 186 | 187 | - Unencrypted 188 | 189 | .. code-block:: sh 190 | 191 | # shellcheck disable=SC2046 192 | zpool create \ 193 | -o ashift=12 \ 194 | -o autotrim=on \ 195 | -R "${MNT}" \ 196 | -O acltype=posixacl \ 197 | -O canmount=off \ 198 | -O dnodesize=auto \ 199 | -O normalization=formD \ 200 | -O relatime=on \ 201 | -O xattr=sa \ 202 | -O mountpoint=none \ 203 | rpool \ 204 | mirror \ 205 | $(for i in ${DISK}; do 206 | printf '%s ' "${i}-part2"; 207 | done) 208 | 209 | - LUKS encrypted 210 | 211 | :: 212 | 213 | # shellcheck disable=SC2046 214 | zpool create \ 215 | -o ashift=12 \ 216 | -o autotrim=on \ 217 | -R "${MNT}" \ 218 | -O acltype=posixacl \ 219 | -O canmount=off \ 220 | -O dnodesize=auto \ 221 | -O normalization=formD \ 222 | -O relatime=on \ 223 | -O xattr=sa \ 224 | -O mountpoint=none \ 225 | rpool \ 226 | mirror \ 227 | $(for i in ${DISK}; do 228 | printf '/dev/mapper/luks-rpool-%s ' "${i##*/}-part2"; 229 | done) 230 | 231 | If not using a multi-disk setup, remove ``mirror``. 232 | 233 | #. Create root system container: 234 | 235 | :: 236 | 237 | zfs create -o canmount=noauto -o mountpoint=legacy rpool/root 238 | 239 | Create system datasets, 240 | managing mountpoints with ``mountpoint=legacy`` 241 | :: 242 | 243 | zfs create -o mountpoint=legacy rpool/home 244 | mount -o X-mount.mkdir -t zfs rpool/root "${MNT}" 245 | mount -o X-mount.mkdir -t zfs rpool/home "${MNT}"/home 246 | 247 | #. Format and mount the ESPs. Only one of them is mounted as /boot; you need to set up mirroring afterwards 248 | :: 249 | 250 | for i in ${DISK}; do 251 | mkfs.vfat -n EFI "${i}"-part1 252 | done 253 | 254 | for i in ${DISK}; do 255 | mount -t vfat -o fmask=0077,dmask=0077,iocharset=iso8859-1,X-mount.mkdir "${i}"-part1 "${MNT}"/boot 256 | break 257 | done 258 | 259 | 260 | System Configuration 261 | --------------------------- 262 | 263 | #. Generate system configuration:: 264 | 265 | nixos-generate-config --root "${MNT}" 266 | 267 | #. Edit system configuration: 268 | 269 | .. code-block:: sh 270 | 271 | nano "${MNT}"/etc/nixos/hardware-configuration.nix 272 | 273 | #. Set networking.hostId: 274 | 275 | .. code-block:: sh 276 | 277 | networking.hostId = "abcd1234"; 278 | 279 | #. If using LUKS, add the output from the following command to the system 280 | configuration 281 | 282 | .. code-block:: sh 283 | 284 | tee < -------------------------------------------------------------------------------- /scripts/man_pages.py: -------------------------------------------------------------------------------- 5 | # 6 | # All Rights Reserved. 7 | # 8 | # Licensed under the Apache License, Version 2.0 (the "License"); you may 9 | # not use this file except in compliance with the License. You may obtain 10 | # a copy of the License at 11 | # 12 | # http://www.apache.org/licenses/LICENSE-2.0 13 | # 14 | # Unless required by applicable law or agreed to in writing, software 15 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT 16 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the 17 | # License for the specific language governing permissions and limitations 18 | # under the License.
19 | 20 | import argparse 21 | import logging 22 | import os 23 | import re 24 | import subprocess 25 | import sys 26 | 27 | import git 28 | 29 | LOG = logging.getLogger() 30 | logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) 31 | 32 | zfs_repo_url = 'https://github.com/openzfs/zfs/' 33 | 34 | MAN_SECTIONS = { 35 | '1': 'User Commands', 36 | '2': 'System Calls', 37 | '3': 'C Library Functions', 38 | '4': 'Devices and Special Files', 39 | '5': 'File Formats and Conventions', 40 | '6': 'Games', 41 | '7': 'Miscellaneous', 42 | '8': 'System Administration Commands', 43 | } 44 | 45 | MAN_SECTION_DIR = 'man' 46 | MAN_SECTION_NAME = 'Man Pages' 47 | 48 | BUILD_DIR = '_build' 49 | MAN_BUILD_DIR = os.path.join(BUILD_DIR, 'man') 50 | 51 | ZFS_GIT_REPO = 'https://github.com/openzfs/zfs.git' 52 | TAG_REGEX = re.compile( 53 | r'zfs-(?P' 54 | r'(?P[0-9]+)\.(?P[0-9]+)\.(?P[0-9]+))') 55 | 56 | LINKS_REGEX_TEMPLATE = (r' )class=\"Xr\"(?P.*?)>%s' 57 | r'\((?P<num>[1-9])\)<\/a>') 58 | LINKS_FINAL_REGEX = (r'<a href="../\g<num>/\g<name>.\g<num>.html" class="Xr"' 59 | r'>\g<name>(\g<num>)</a>') 60 | 61 | 62 | def add_hyperlinks(out_dir, version, pages): 63 | all_pages = [] 64 | for _section, section_pages in pages.items(): 65 | all_pages.extend([ 66 | os.path.splitext(page)[0] for page in section_pages]) 67 | tmp_regex = '(?P<name>' + "|".join(all_pages) + ')' 68 | html_regex = re.compile( 69 | LINKS_REGEX_TEMPLATE % tmp_regex, flags=re.MULTILINE) 70 | 71 | for section, pages in pages.items(): 72 | for page in pages: 73 | file_path = os.path.join( 74 | out_dir, MAN_BUILD_DIR, 75 | version, 'man' + section, page + '.html') 76 | with open(file_path, "r") as f: 77 | text = f.read() 78 | new_text = re.sub(html_regex, LINKS_FINAL_REGEX, text) 79 | if text != new_text: 80 | with open(file_path, "w") as f: 81 | LOG.debug('Crosslinks detected in %s, generate', 82 | file_path) 83 | text = f.write(new_text) 84 | 85 | 86 | def run(in_dir, out_dir, version, tag): 87 | pages = {num: [] for num in MAN_SECTIONS} 88 | for subdir, dirs, _ in os.walk(in_dir): 89 | for section in dirs: 90 | section_num = section.replace('man', '') 91 | section_suffix = '.' + section_num 92 | if section_num not in MAN_SECTIONS: 93 | continue 94 | out_section_dir = os.path.join( 95 | out_dir, MAN_BUILD_DIR, version, section) 96 | os.makedirs(out_section_dir, exist_ok=True) 97 | for page in os.listdir(os.path.join(subdir, section)): 98 | if not (page.endswith(section_suffix) or 99 | page.endswith(section_suffix + '.in')): 100 | continue 101 | LOG.debug('Generate %s page', page) 102 | stripped_page = page.rstrip('.in') 103 | page_file = os.path.join(out_section_dir, 104 | stripped_page + '.html') 105 | with open(page_file, "w") as f: 106 | subprocess.run( 107 | ['mandoc', '-T', 'html', '-O', 'fragment', 108 | os.path.join(subdir, section, page)], stdout=f, 109 | check=True) 110 | 111 | pages[section_num].append(stripped_page) 112 | break 113 | 114 | man_path = os.path.join(out_dir, MAN_SECTION_DIR, version) 115 | os.makedirs(man_path, exist_ok=True) 116 | 117 | # Index for version 118 | with open(os.path.join(man_path, 'index.rst'), "w") as f: 119 | f.write("""\ 120 | .. THIS FILE IS AUTOGENERATED, DO NOT EDIT! 121 | 122 | :github_url: {zfs_repo_url}blob/{tag}/man/ 123 | 124 | {name} 125 | {name_sub} 126 | .. 
toctree:: 127 | :maxdepth: 1 128 | :glob: 129 | 130 | */index 131 | """.format(zfs_repo_url=zfs_repo_url, 132 | name=version, 133 | tag=tag, 134 | name_sub="=" * len(version))) 135 | 136 | for section_num, section_pages in pages.items(): 137 | if not section_pages: 138 | continue 139 | rst_dir = os.path.join(out_dir, MAN_SECTION_DIR, version, section_num) 140 | os.makedirs(rst_dir, exist_ok=True) 141 | section_name = MAN_SECTIONS[section_num] 142 | section_name_with_num = '{name} ({num})'.format( 143 | name=section_name, num=section_num) 144 | with open(os.path.join(rst_dir, 'index.rst'), "w") as f: 145 | f.write("""\ 146 | .. THIS FILE IS AUTOGENERATED, DO NOT EDIT! 147 | 148 | :github_url: {zfs_repo_url}blob/{tag}/man/man{section_num}/ 149 | 150 | {name} 151 | {name_sub} 152 | .. toctree:: 153 | :maxdepth: 1 154 | :glob: 155 | 156 | * 157 | """.format(zfs_repo_url=zfs_repo_url, 158 | section_num=section_num, 159 | name=section_name_with_num, 160 | tag=tag, 161 | name_sub="=" * len(section_name_with_num),)) 162 | 163 | for page in section_pages: 164 | with open(os.path.join(rst_dir, page + '.rst'), "w") as f: 165 | f.write("""\ 166 | .. THIS FILE IS AUTOGENERATED, DO NOT EDIT! 167 | 168 | :github_url: {zfs_repo_url}blob/{tag}/man/man{section_num}/{name} 169 | 170 | {name} 171 | {name_sub} 172 | .. raw:: html 173 | 174 | <div class="man_container"> 175 | 176 | .. raw:: html 177 | :file: ../../../{build_dir}/{version}/man{section_num}/{name}.html 178 | 179 | .. raw:: html 180 | 181 | </div> 182 | """.format(zfs_repo_url=zfs_repo_url, 183 | name=page, 184 | build_dir=MAN_BUILD_DIR, 185 | version=version, 186 | tag=tag, 187 | section_num=section_num, 188 | name_sub="=" * len(page))) 189 | add_hyperlinks(out_dir, version, pages) 190 | 191 | 192 | def gen_index(out_dir, tags): 193 | # Global index 194 | with open(os.path.join(os.path.join(out_dir, MAN_SECTION_DIR), 195 | 'index.rst'), "w") as f: 196 | f.write("""\ 197 | .. THIS FILE IS AUTOGENERATED, DO NOT EDIT! 198 | 199 | {name} 200 | {name_sub} 201 | .. toctree:: 202 | :maxdepth: 1 203 | :glob: 204 | 205 | """.format(name=MAN_SECTION_NAME, name_sub="=" * len(MAN_SECTION_NAME)) 206 | ) 207 | for ver in reversed(tags.keys()): 208 | f.write("""\ 209 | {ver}/index 210 | """.format(ver=ver)) 211 | 212 | 213 | def prepare_repo(path): 214 | # TODO(gmelikov): check for actual tags fetch on remote repo updates 215 | repo_dir = os.path.join(path, BUILD_DIR) 216 | os.makedirs(repo_dir, exist_ok=True) 217 | try: 218 | repo = git.Repo(os.path.join(repo_dir, 'zfs')) 219 | LOG.info('zfs repo already cloned') 220 | for remote in repo.remotes: 221 | remote.fetch(tags=None) 222 | except (git.exc.NoSuchPathError, git.exc.InvalidGitRepositoryError): 223 | LOG.info('Clone zfs repo...') 224 | git.Git(repo_dir).clone(ZFS_GIT_REPO) 225 | 226 | 227 | def iterate_versions(out_dir): 228 | repo_path = os.path.join(BUILD_DIR, 'zfs') 229 | git_cmd = git.Git(repo_path) 230 | repo = git.Repo(os.path.join(BUILD_DIR, 'zfs')) 231 | # sort tags, some versions are not semvers so we'll use latest ones 232 | # (for ex. 
0.6.5.11) 233 | repo_tags = sorted(repo.tags, key=lambda t: t.commit.committed_datetime) 234 | tags = {} 235 | for tag in repo_tags: 236 | tag = str(tag) 237 | if 'rc' in tag: 238 | LOG.debug("Skip rc version %s", tag) 239 | continue 240 | version = TAG_REGEX.match(tag) 241 | if not version: 242 | LOG.info("Cannot parse %s version, skipping...", tag) 243 | continue 244 | ver_dict = {k: int(v) if 'version' not in k else v 245 | for k, v in version.groupdict().items()} 246 | # ignore pre-0.6 versions 247 | if ver_dict['major'] > 0 or ver_dict['minor'] > 5: 248 | # get only latest minor versions 249 | tags[f'v{ver_dict["major"]}.{ver_dict["minor"]}'] = tag 250 | 251 | tags['master'] = 'master' 252 | 253 | LOG.info('Tags to build: %r', tags) 254 | 255 | gen_index(out_dir, tags) 256 | 257 | for version, tag in tags.items(): 258 | git_cmd.checkout(tag) 259 | run(os.path.join(repo_path, 'man'), out_dir, version, tag) 260 | 261 | 262 | def main(): 263 | parser = argparse.ArgumentParser() 264 | parser.add_argument('out_dir', 265 | help='Sphinx docs dir') 266 | args = parser.parse_args() 267 | 268 | os.makedirs(os.path.join(args.out_dir, MAN_SECTION_DIR), exist_ok=True) 269 | 270 | prepare_repo(args.out_dir) 271 | 272 | iterate_versions(args.out_dir) 273 | 274 | 275 | if __name__ == '__main__': 276 | main() 277 | -------------------------------------------------------------------------------- /docs/Getting Started/Slackware/Root on ZFS.rst: -------------------------------------------------------------------------------- 1 | Slackware Root on ZFS 2 | ===================== 3 | 4 | This page shows some possible ways to configure Slackware to use zfs for the root filesystem. 5 | 6 | There are countless different ways to achieve such setup, particularly with the flexibility that zfs allows. We'll show only a simple recipe and give pointers for further customization. 7 | 8 | Kernel considerations 9 | --------------------- 10 | 11 | For this mini-HOWTO we'll be using the generic kernel and customize the stock initrd. 12 | 13 | If you use the huge kernel, you may want to switch to the generic kernel first, and install both the kernel-generic and mkinitrd packages. This makes things easier since we'll need an initrd. 14 | 15 | If you absolutely do not want to use an initrd, see "Other options" further down. 16 | 17 | 18 | The problem space 19 | ----------------- 20 | 21 | In order to have the root filesystem on zfs, two problems need to be addressed: 22 | 23 | #. The boot loader needs to be able to load the kernel and its initrd. 24 | 25 | #. The kernel (or, rather, the initrd) needs to be able to mount the zfs root filesystem and run /sbin/init. 26 | 27 | The second problem is relatively easy to deal with, and only requires slight modifications to the default Slackware initrd scripts. 28 | 29 | For the first problem, however, a variety of scenarios are possible; on a PC, for example, you might be booting: 30 | 31 | #. In UEFI mode, via an additional bootloader like elilo: here, the kernel and its initrd are on (read: have been copied to) the ESP, and the additional bootloader doesn't need to understand zfs. 32 | 33 | #. In UEFI mode, by booting the kernel straight from the firmware. All Slackware kernels are built with EFI_STUB=Y, so if you copy your kernel and initrd to the ESP and configure a boot entry with efibootmgr, you are all set (note that the kernel image must have a .efi extension). 34 | 35 | #. 
In legacy BIOS mode, using lilo or grub or similar: lilo doesn't understand zfs, and even the latest grub understands it only with some limitations (for example, no zstd compression). If you're stuck with legacy BIOS mode, the best option is to put /boot on a separate partition that your loader understands (for example, ext4). 36 | 37 | If you are not using a PC, things will likely be quite different, so refer to the relevant hardware documentation for your platform; on a Raspberry Pi, for example, the firmware loads the kernel and initrd from a FAT32 partition, so the situation is similar to a PC booting in UEFI mode. 38 | 39 | The simplest setup, discussed in this recipe, is the one using UEFI. As said above, if you boot in legacy BIOS mode, you will have to ensure that the boot loader of your choice can load the kernel image. 40 | 41 | 42 | Partition layout 43 | ---------------- 44 | 45 | Repartitioning an existing system disk in order to make room for a zfs root partition is left as an exercise to the reader (there's nothing specific to zfs about it). 46 | 47 | As a pointer: if you're starting from a whole-disk ext4 filesystem, you could use resize2fs to shrink it to half of the disk size and then relocate it to the second half of the disk with sfdisk. After that, you could create a ZFS partition before it, and copy stuff across using cp or rsync. This approach has the benefit of providing some kind of recovery mechanism in case stuff goes wrong. When you are happy with the final setup, you can then delete the ext4 partition and enlarge the ZFS one. 48 | 49 | In any case you will want to have a rescue CD-ROM at hand, and one that supports zfs out of the box. An Ubuntu live CD will do. 50 | 51 | For this recipe, we'll be assuming that we're booting in UEFI mode and there's a single disk configured like this: 52 | 53 | .. code-block:: sh 54 | 55 | /dev/sda1 # EFI system partition 56 | /dev/sda2 # zfs pool (contains the "root" filesystem) 57 | 58 | .. 59 | 60 | Since we are creating a zpool inside a disk partition (as opposed to using up a whole disk), make sure that the partition type is set correctly (for GPT, 54 or 67 are good choices). 61 | 62 | When creating the zfs filesystem, you will want to set "mountpoint=legacy" so that the filesystem can be mounted with "mount" in a traditional way; Slackware startup and shutdown scripts expect that. 63 | 64 | Back to our recipe, this is a working example: 65 | 66 | .. code-block:: sh 67 | 68 | zpool create -o ashift=12 -O mountpoint=none tank /dev/sda2 69 | zfs create -o mountpoint=legacy -o compression=zstd tank/root 70 | # add more as needed: 71 | # zfs create -o mountpoint=legacy [..] tank/home 72 | # zfs create -o mountpoint=legacy [..] tank/usr 73 | # zfs create -o mountpoint=legacy [..] tank/opt 74 | 75 | .. 76 | 77 | Tweak options to taste; while "mountpoint=legacy" is required for the root filesystem, it is not required for any additional filesystems. In the example above we applied it to all of them, but that's a matter of personal preference, as is setting "mountpoint=none" on the pool itself so it's not mounted anywhere by default (do note that zpool's "mountpoint=none" wants an uppercase "-O"). 78 | 79 | You can check your setup with: 80 | 81 | .. code-block:: sh 82 | 83 | zpool list 84 | zfs list 85 | 86 | .. 87 |
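For more detail than those summary listings, "zfs get" can confirm the properties you just set (shown here for the example pool; your output will vary):

.. code-block:: sh

   zfs get -r mountpoint,compression tank

..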
88 | Then, adjust /etc/fstab to something like this: 89 | 90 | .. code-block:: sh 91 | 92 | tank/root / zfs defaults 0 0 93 | # add more as needed: 94 | # tank/home /home zfs defaults 0 0 95 | # tank/usr /usr zfs defaults 0 0 96 | # tank/opt /opt zfs defaults 0 0 97 | 98 | .. 99 | 100 | This allows us to mount and unmount them as usual, once we have imported the pool with "zpool import tank". Which leads us to... 101 | 102 | 103 | Patch and rebuild the initrd 104 | ---------------------------- 105 | 106 | Since we're using the generic kernel, we already have a usable /boot/initrd-tree/ (if you don't, prepare one by running mkinitrd once). 107 | 108 | Copy the zfs userspace tools to it (/sbin/zfs isn't strictly necessary, but may be handy for rescuing a system that refuses to boot): 109 | 110 | .. code-block:: sh 111 | 112 | install -m755 /sbin/zpool /sbin/zfs /boot/initrd-tree/sbin/ 113 | 114 | .. 115 | 116 | Modify /boot/initrd-tree/init; locate the first "case" statement that sets ROOTDEV; it reads: 117 | 118 | .. code-block:: sh 119 | 120 | root=/dev/*) 121 | ROOTDEV=$(echo $ARG | cut -f2 -d=) 122 | ;; 123 | root=LABEL=*) 124 | ROOTDEV=$(echo $ARG | cut -f2- -d=) 125 | ;; 126 | root=UUID=*) 127 | ROOTDEV=$(echo $ARG | cut -f2- -d=) 128 | ;; 129 | .. 130 | 131 | Replace the three cases with: 132 | 133 | .. code-block:: sh 134 | 135 | root=*) 136 | ROOTDEV=$(echo $ARG | cut -f2 -d=) 137 | ;; 138 | 139 | .. 140 | 141 | This allows us to specify something like "root=tank/root" (if you look carefully at the script, you will notice that you can collapse the /dev/*, LABEL=*, UUID=* and the newly-added case into a single one). 142 | 143 | Further down in the script, locate the section that handles RESUMEDEV ("# Resume state from swap"), and insert the following just before it: 144 | 145 | .. code-block:: sh 146 | 147 | # Support for zfs root filesystem: 148 | if [ x"$ROOTFS" = xzfs ]; then 149 | POOL=${ROOTDEV%%/*} 150 | echo "Importing zfs pool: $POOL" 151 | zpool import -o cachefile=none -N $POOL 152 | fi 153 | 154 | .. 155 | 156 | Finally, rebuild the initrd with something like: 157 | 158 | .. code-block:: sh 159 | 160 | mkinitrd -m zfs 161 | 162 | .. 163 | 164 | It may make sense to use the "-o" option and create an initrd.gz under a different file name, just in case. Look at /boot/README.initrd for more details. 165 | 166 | Rebuilding the initrd should also copy in the necessary libraries (libzfs.so, etc.) under /lib/; verify it by running: 167 | 168 | .. code-block:: sh 169 | 170 | chroot /boot/initrd-tree /sbin/zpool --help 171 | 172 | .. 173 | 174 | When you're happy, remember to copy the new initrd.gz to the ESP partition. 175 | 176 | There are other ways to ensure that the zfs binaries and filesystem module are always built into the initrd - see man initrd. 177 | 178 |
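As an extra sanity check (an optional addition; this assumes the default /boot/initrd.gz output name), you can list the archive contents and confirm the zfs binaries made it in:

.. code-block:: sh

   zcat /boot/initrd.gz | cpio -it | grep -E 'sbin/(zfs|zpool)$'

..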
179 | Configure the boot loader 180 | ------------------------- 181 | 182 | Any of these three options will do: 183 | 184 | #. Append "rootfstype=zfs root=tank/root" to the boot loader configuration (e.g. elilo.conf or equivalent). 185 | #. Modify /boot/initrd-tree/rootdev and /boot/initrd-tree/rootfs in the previous step, then rebuild the initrd. 186 | #. When rebuilding the initrd, add "-f zfs -r tank/root". 187 | 188 | If you're using elilo, it should look something like this: 189 | 190 | .. code-block:: sh 191 | 192 | image=vmlinuz 193 | label=linux 194 | initrd=initrd.gz 195 | append="root=tank/root rootfstype=zfs" 196 | 197 | .. 198 | 199 | It should go without saying, but double-check that the file referenced by initrd is the one you just generated (e.g. if you're using the ESP, make sure you copy the newly-built initrd to it). 200 | 201 | 202 | Before rebooting 203 | ---------------- 204 | 205 | Make sure you have an emergency kernel around in case something goes wrong. 206 | If you upgrade the kernel or packages, make use of snapshots. 207 | 208 | 209 | Other options 210 | ------------- 211 | 212 | You can build zfs support right into the kernel. If you do so and do not want to use an initrd, you can embed a small initramfs in the kernel image that performs the "zpool import" step. 213 | 214 | 215 | Snapshots and boot environments 216 | ------------------------------- 217 | 218 | The modifications above also allow you to create a clone of the root filesystem and boot into it; something like this should work: 219 | 220 | .. code-block:: sh 221 | 222 | zfs snapshot tank/root@mysnapshot 223 | zfs clone tank/root@mysnapshot tank/root-clone 224 | zfs set mountpoint=legacy tank/root-clone 225 | zfs promote tank/root-clone 226 | 227 | .. 228 | 229 | Adjust boot parameters to mount "tank/root-clone" instead of "tank/root" (making a copy of the known-good kernel and initrd on the ESP is not a bad idea). 230 | 231 | 232 | Support 233 | ------- 234 | 235 | If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at `#zfsonlinux <ircs://irc.libera.chat/#zfsonlinux>`__ on `Libera Chat <https://libera.chat/>`__. If you have a bug report or feature request related to this HOWTO, please `file a new issue and mention @a-biardi <https://github.com/openzfs/openzfs-docs/issues/new?body=@a-biardi,%20re.%20Slackware%20Root%20on%20ZFS%20HOWTO>`__. 236 | -------------------------------------------------------------------------------- /docs/Developer Resources/Building ZFS.rst: -------------------------------------------------------------------------------- 1 | Building ZFS 2 | ============ 3 | 4 | GitHub Repositories 5 | ~~~~~~~~~~~~~~~~~~~ 6 | 7 | The official source for OpenZFS is maintained at GitHub by the 8 | `openzfs <https://github.com/openzfs/>`__ organization. The primary 9 | git repository for the project is the `zfs 10 | <https://github.com/openzfs/zfs>`__ repository. 11 | 12 | There are two main components in this repository: 13 | 14 | - **ZFS**: The ZFS repository contains a copy of the upstream OpenZFS 15 | code which has been adapted and extended for Linux and FreeBSD. The 16 | vast majority of the core OpenZFS code is self-contained and can be 17 | used without modification. 18 | 19 | - **SPL**: The SPL is a thin shim layer which is responsible for 20 | implementing the fundamental interfaces required by OpenZFS. It's 21 | this layer which allows OpenZFS to be used across multiple 22 | platforms. SPL used to be maintained in a separate repository, but 23 | was merged into the `zfs <https://github.com/openzfs/zfs>`__ 24 | repository in the ``0.8`` major release. 25 | 26 | Installing Dependencies 27 | ~~~~~~~~~~~~~~~~~~~~~~~ 28 | 29 | The first thing you'll need to do is prepare your environment by 30 | installing a full development tool chain. In addition, development 31 | headers for both the kernel and the following packages must be 32 | available. It is important to note that if the development kernel 33 | headers for the currently running kernel aren't installed, the modules 34 | won't compile properly. 35 | 36 | The following dependencies should be installed to build the latest ZFS 37 | 2.1 release. 38 |
39 | - **RHEL/CentOS 7**: 40 | 41 | .. code:: sh 42 | 43 | sudo yum install epel-release gcc make autoconf automake libtool rpm-build libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python python2-devel python-setuptools python-cffi libffi-devel git ncompress libcurl-devel 44 | sudo yum install --enablerepo=epel python-packaging dkms 45 | 46 | - **RHEL/CentOS 8, Fedora**: 47 | 48 | .. code:: sh 49 | 50 | sudo dnf install --skip-broken epel-release gcc make autoconf automake libtool rpm-build libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python3 python3-devel python3-setuptools python3-cffi libffi-devel git ncompress libcurl-devel 51 | sudo dnf install --skip-broken --enablerepo=epel --enablerepo=powertools python3-packaging dkms 52 | 53 | - **Debian, Ubuntu**: 54 | 55 | .. code:: sh 56 | 57 | sudo apt install alien autoconf automake build-essential debhelper-compat dh-autoreconf dh-dkms dh-python dkms fakeroot gawk git libaio-dev libattr1-dev libblkid-dev libcurl4-openssl-dev libelf-dev libffi-dev libpam0g-dev libssl-dev libtirpc-dev libtool libudev-dev linux-headers-generic parallel po-debconf python3 python3-all-dev python3-cffi python3-dev python3-packaging python3-setuptools python3-sphinx uuid-dev zlib1g-dev 58 | 59 | - **FreeBSD**: 60 | 61 | .. code:: sh 62 | 63 | pkg install autoconf automake autotools git gmake python devel/py-sysctl sudo 64 | 65 | Build Options 66 | ~~~~~~~~~~~~~ 67 | 68 | There are two options for building OpenZFS; the correct one largely 69 | depends on your requirements. 70 | 71 | - **Packages**: Often it can be useful to build custom packages from 72 | git which can be installed on a system. This is the best way to 73 | perform integration testing with systemd, dracut, and udev. The 74 | downside to using packages is that it greatly increases the time required 75 | to build, install, and test a change. 76 | 77 | - **In-tree**: Development can be done entirely in the SPL/ZFS source 78 | tree. This speeds up development by allowing developers to rapidly 79 | iterate on a patch. When working in-tree, developers can leverage 80 | incremental builds, load/unload kernel modules, execute utilities, 81 | and verify all their changes with the ZFS Test Suite. 82 | 83 | The remainder of this page focuses on the **in-tree** option, which is 84 | the recommended method of development for the majority of changes. See 85 | the :doc:`custom packages <./Custom Packages>` page for additional 86 | information on building custom packages. 87 | 88 | Developing In-Tree 89 | ~~~~~~~~~~~~~~~~~~ 90 | 91 | Clone from GitHub 92 | ^^^^^^^^^^^^^^^^^ 93 | 94 | Start by cloning the ZFS repository from GitHub. The repository has a 95 | **master** branch for development and a series of **\*-release** 96 | branches for tagged releases. After checking out the repository, your 97 | clone will default to the master branch. Tagged releases may be built 98 | by checking out the zfs-x.y.z tag with the matching version number, or 99 | the matching release branch. 100 | 101 | :: 102 | 103 | git clone https://github.com/openzfs/zfs 104 |
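For example, to build a tagged release instead of master, check out the corresponding tag (``git tag`` lists the available ones; the version below is just an illustration):

::

   cd zfs
   git tag --list 'zfs-*'
   git checkout zfs-2.1.5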
105 | Configure and Build 106 | ^^^^^^^^^^^^^^^^^^^ 107 | 108 | For developers working on a change, always create a new topic branch 109 | based off of master. This will make it easy to open a pull request with 110 | your change later. The master branch is kept stable with extensive 111 | `regression testing <http://build.zfsonlinux.org/>`__ of every pull 112 | request before and after it's merged. Every effort is made to catch 113 | defects as early as possible and to keep them out of the tree. 114 | Developers should be comfortable frequently rebasing their work against 115 | the latest master branch. 116 | 117 | In this example we'll use the master branch and walk through a stock 118 | **in-tree** build. Start by checking out the desired branch, then build 119 | the ZFS and SPL source in the traditional autotools fashion. 120 | 121 | :: 122 | 123 | cd ./zfs 124 | git checkout master 125 | sh autogen.sh 126 | ./configure 127 | make -s -j$(nproc) 128 | 129 | | **tip:** ``--with-linux=PATH`` and ``--with-linux-obj=PATH`` can be 130 | passed to configure to specify a kernel installed in a non-default 131 | location. 132 | | **tip:** ``--enable-debug`` can be passed to configure to enable all ASSERTs and 133 | additional correctness tests. 134 | 135 | **Optional** Build packages 136 | 137 | :: 138 | 139 | make rpm #Builds RPM packages for CentOS/Fedora 140 | make deb #Builds RPM converted DEB packages for Debian/Ubuntu 141 | make native-deb #Builds native DEB packages for Debian/Ubuntu 142 | 143 | | **tip:** Native Debian packages build with pre-configured paths for 144 | Debian and Ubuntu. It's best not to override the paths during 145 | configure. 146 | | **tip:** For native Debian packages, the ``KVERS``, ``KSRC`` and ``KOBJ`` 147 | environment variables can be exported to specify a kernel installed 148 | in a non-default location. 149 | 150 | .. note:: 151 | Support for native Debian packaging will be available starting from the 152 | openzfs-2.2 release. 153 | 154 | Install 155 | ^^^^^^^ 156 | 157 | You can run ``zfs-tests.sh`` without installing ZFS; see below. If you 158 | have reason to install ZFS after building it, pay attention to how your 159 | distribution handles kernel modules. On Ubuntu, for example, the modules 160 | from this repository install in the ``extra`` kernel module path, which 161 | is not in the standard ``depmod`` search path. Therefore, for the 162 | duration of your testing, edit ``/etc/depmod.d/ubuntu.conf`` and add 163 | ``extra`` to the beginning of the search path. 164 | 165 | You may then install using 166 | ``sudo make install; sudo ldconfig; sudo depmod``. You'd uninstall with 167 | ``sudo make uninstall; sudo ldconfig; sudo depmod``. You can install just 168 | the kernel modules with ``sudo make -C modules/ install``. 169 |
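For reference, after that ``depmod`` edit the Ubuntu configuration would look something like this (the stock contents of the file may differ between releases, so treat this as an illustration):

::

   # /etc/depmod.d/ubuntu.conf
   search extra updates ubuntu built-in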
170 | .. _running-zloopsh-and-zfs-testssh: 171 | 172 | Running zloop.sh and zfs-tests.sh 173 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 174 | 175 | If you wish to run the ZFS Test Suite (ZTS), then ``ksh`` and a few 176 | additional utilities must be installed. 177 | 178 | - **RHEL/CentOS 7:** 179 | 180 | .. code:: sh 181 | 182 | sudo yum install ksh bc bzip2 fio acl sysstat mdadm lsscsi parted attr nfs-utils samba rng-tools pax perf 183 | sudo yum install --enablerepo=epel dbench 184 | 185 | - **RHEL/CentOS 8, Fedora:** 186 | 187 | .. code:: sh 188 | 189 | sudo dnf install --skip-broken ksh bc bzip2 fio acl sysstat mdadm lsscsi parted attr nfs-utils samba rng-tools pax perf 190 | sudo dnf install --skip-broken --enablerepo=epel dbench 191 | 192 | - **Debian:** 193 | 194 | .. code:: sh 195 | 196 | sudo apt install ksh bc bzip2 fio acl sysstat mdadm lsscsi parted attr dbench nfs-kernel-server samba rng-tools pax linux-perf selinux-utils quota 197 | 198 | - **Ubuntu:** 199 | 200 | .. code:: sh 201 | 202 | sudo apt install ksh bc bzip2 fio acl sysstat mdadm lsscsi parted attr dbench nfs-kernel-server samba rng-tools pax linux-tools-common selinux-utils quota 203 | 204 | - **FreeBSD**: 205 | 206 | .. code:: sh 207 | 208 | pkg install base64 bash checkbashisms fio hs-ShellCheck ksh93 pamtester devel/py-flake8 sudo 209 | 210 | 211 | There are a few helper scripts provided in the top-level scripts 212 | directory designed to aid developers working with in-tree builds. 213 | 214 | - **zfs-helpers.sh:** Certain functionality (e.g. /dev/zvol/) depends on 215 | the ZFS-provided udev helper scripts being installed on the system. 216 | This script can be used to create symlinks on the system from the 217 | installation location to the in-tree helper. These links must be in 218 | place to successfully run the ZFS Test Suite. The **-i** and **-r** 219 | options can be used to install and remove the symlinks. 220 | 221 | :: 222 | 223 | sudo ./scripts/zfs-helpers.sh -i 224 | 225 | - **zfs.sh:** The freshly built kernel modules can be loaded using 226 | ``zfs.sh``. This script can later be used to unload the kernel 227 | modules with the **-u** option. 228 | 229 | :: 230 | 231 | sudo ./scripts/zfs.sh 232 | 233 | - **zloop.sh:** A wrapper to run ztest repeatedly with randomized 234 | arguments. The ztest command is a user space stress test designed to 235 | detect correctness issues by concurrently running a random set of 236 | test cases. If a crash is encountered, the ztest logs, any associated 237 | vdev files, and the core file (if one exists) are collected and moved to 238 | the output directory for analysis. 239 | 240 | :: 241 | 242 | sudo ./scripts/zloop.sh 243 | 244 | - **zfs-tests.sh:** A wrapper which can be used to launch the ZFS Test 245 | Suite. Three loopback devices are created on top of sparse files 246 | located in ``/var/tmp/`` and used for the regression test. Detailed 247 | directions for the ZFS Test Suite can be found in the 248 | `README <https://github.com/openzfs/zfs/tree/master/tests>`__ 249 | located in the top-level tests directory. 250 | 251 | :: 252 | 253 | ./scripts/zfs-tests.sh -vx 254 | 255 | **tip:** The **delegate** tests will be skipped unless group read 256 | permission is set on the zfs directory and its parents. 257 | --------------------------------------------------------------------------------