├── README.md
├── convert-volume-to-core-storage.sh
├── eject-disk.sh
├── list-core-storage.sh
├── list-volumes.sh
├── mount-disk.sh
├── partition-disk-with-gpt.sh
├── passepartout
│   ├── README.md
│   ├── mount.sh
│   ├── scrub.sh
│   ├── unmount.sh
│   └── zdb.txt
├── unmount-disk.sh
├── zfs-create-mirror-pool.sh
├── zfs-export.sh
├── zfs-import-list.sh
├── zfs-import.sh
├── zfs-list.sh
├── zfs-scrub.sh
└── zfs-status.sh
/README.md:
--------------------------------------------------------------------------------
1 | These are instructions for downloading, installing, and configuring [OpenZFS on
2 | OS X] on macOS 10.14 (Mojave). It was originally written for macOS 10.12
3 | (Sierra) and has since been updated.
4 | 
5 | [OpenZFS on OS X]: https://openzfsonosx.org/
6 | 
7 | # Background
8 | 
9 | **NOTE**: Everything here applies to my personal situation (e.g. w.r.t. disks
10 | and configuration choices). It is not comprehensive, but it might be helpful in
11 | figuring out your own situation.
12 | 
13 | Here is the context in which this is written:
14 | 
15 | * I have two external USB hard disk drives (HDDs), each 3 TB.
16 | 
17 |   While I would prefer solid-state drives (SSDs) for their quietness and
18 |   reliability, HDDs (esp. at 3 TB) are much cheaper.
19 | 
20 | * I want the disks to be mirrored to allow for the failure of one disk.
21 | 
22 |   Since I started using this setup, I've already had one failure. HDDs are
23 |   unreliable, and I can't expect one to be enough.
24 | 
25 | * I want any disk problems to be identified early.
26 | 
27 | It seems that ZFS is the best way to handle the above requirements.
28 | 
29 | Here are some other considerations I've had:
30 | 
31 | * I would like to encrypt some or all of the disks.
32 | 
33 |   I previously used the built-in support for encrypting HFS+ disks and installed
34 |   ZFS on top of that. (See the [Encryption Guide on OpenZFS on OS X].) This was
35 |   before ZFS had native encryption.
36 | 
37 | However, since then, I've discovered FUSE-based encryption such as
38 | [`gocryptfs`], [`securefs`], and [CryFS], and I've decided to use that to
39 | encrypt only a part of the data on disk. Consequently, I do not consider
40 | encryption in this document.
41 | 
42 | [Encryption Guide on OpenZFS on OS X]: https://openzfsonosx.org/wiki/Encryption
43 | [`gocryptfs`]: https://nuetzlich.net/gocryptfs/
44 | [`securefs`]: https://github.com/netheril96/securefs
45 | [CryFS]: https://www.cryfs.org/
46 | 
47 | # Instructions
48 | 
49 | ## Installing and Upgrading OpenZFS
50 | 
51 | I use [Homebrew] to install OpenZFS. It is regularly updated with the OpenZFS
52 | releases.
53 | 
54 | [Homebrew]: https://brew.sh/
55 | 
56 | ### Installing
57 | 
58 | First, update Homebrew:
59 | 
60 | ```
61 | $ brew update
62 | ```
63 | 
64 | Next, check the [`openzfs` cask] to make sure your macOS system is supported.
65 | Look at `depends_on`. Also, make a note of the version for the next step.
66 | 
67 | [`openzfs` cask]: https://github.com/caskroom/homebrew-cask/blob/master/Casks/openzfs.rb
68 | 
69 | ```
70 | $ brew cat openzfs --cask
71 | ```
72 | 
73 | Then, check the [OpenZFS Changelog] for the release notes of the version in the
74 | Homebrew cask.
75 | 
76 | [OpenZFS Changelog]: https://openzfsonosx.org/wiki/Changelog
77 | 
78 | Finally, install `openzfs`:
79 | 
80 | ```
81 | $ brew install openzfs --cask
82 | ```
83 | 
84 | ### Upgrading
85 | 
86 | First, update Homebrew:
87 | 
88 | ```
89 | $ brew update
90 | ```
91 | 
92 | Next, check if you have an outdated `openzfs` cask:
93 | 
94 | ```
95 | $ brew outdated --cask
96 | ```
97 | 
98 | Then, if your version is old and you want to upgrade, first read the [OpenZFS
99 | Changelog] to make sure everything you need will still work after an upgrade.
100 | (If everything is working now, you don't necessarily need to upgrade.)
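Homebrew's cask syntax has changed over the years (older installations use `brew cask outdated`; newer ones use `brew outdated --cask`). As a small, hypothetical sketch — the function name is mine, and the `brew` output is passed in as an argument rather than invoked directly, so the check is easy to test — the outdated check can be scripted like this:

```shell
#!/bin/bash

# Hypothetical helper: decide from the output of `brew outdated --cask`
# (or `brew cask outdated` on older Homebrew) whether the openzfs cask
# has an upgrade available. The command output is passed in as $1 so the
# parsing can be exercised without Homebrew installed.
check_openzfs_cask() {
  if printf '%s\n' "$1" | grep -q '^openzfs'; then
    echo "openzfs is outdated; read the Changelog before upgrading."
  else
    echo "openzfs is up to date."
  fi
}

# Typical use:
#   check_openzfs_cask "$(brew outdated --cask)"
```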
101 | 
102 | Finally, upgrade `openzfs`:
103 | 
104 | ```
105 | $ brew upgrade --cask openzfs
106 | ```
107 | 
108 | *Resources*:
109 | 
110 | * [Installation Guide on OpenZFS on OS X](https://openzfsonosx.org/wiki/Install)
111 | 
112 | ## Encrypting an external drive
113 | 
114 | These instructions are for OpenZFS on OS X 1.6.1, which does not have built-in
115 | encryption. In future versions of OpenZFS, we expect to be able to use the
116 | native encryption in ZFS.
117 | 
118 | There appear to be multiple ways to combine ZFS with encryption. To my
119 | (admittedly naive) eye, the following is the most convenient.
120 | 
121 | First, we need to see what volumes are available. After plugging in both of my
122 | new external USB hard drives, I ran this:
123 | 
124 | ```
125 | $ ./list-volumes.sh
126 | ```
127 | 
128 | In my case, the result was this:
129 | 
130 | ```
131 | /dev/disk0 (internal, physical):
132 |    #:                       TYPE NAME                 SIZE       IDENTIFIER
133 |    0:      GUID_partition_scheme                     *500.3 GB   disk0
134 |    1:                        EFI EFI                  209.7 MB   disk0s1
135 |    2:          Apple_CoreStorage Macintosh HD         499.4 GB   disk0s2
136 |    3:                 Apple_Boot Recovery HD          650.0 MB   disk0s3
137 | 
138 | /dev/disk1 (internal, virtual):
139 |    #:                       TYPE NAME                 SIZE       IDENTIFIER
140 |    0:                  Apple_HFS Macintosh HD        +499.1 GB   disk1
141 |                                  Logical Volume on disk0s2
142 |                                  33070EB3-F7FF-45A0-BF9C-079ABB4079CC
143 |                                  Unlocked Encrypted
144 | 
145 | /dev/disk2 (external, physical):
146 |    #:                       TYPE NAME                 SIZE       IDENTIFIER
147 |    0:     FDisk_partition_scheme                     *3.0 TB     disk2
148 |    1:             Windows_FAT_32 ADATA HM900          3.0 TB     disk2s1
149 | 
150 | /dev/disk3 (external, physical):
151 |    #:                       TYPE NAME                 SIZE       IDENTIFIER
152 |    0:     FDisk_partition_scheme                     *3.0 TB     disk3
153 |    1:               Windows_NTFS Transcend            3.0 TB     disk3s1
154 | ```
155 | 
156 | Now, armed with the knowledge that we're working with the physical volumes
157 | `/dev/disk2` and `/dev/disk3`, we need to repartition them to use the GUID
158 | Partition Table (GPT) scheme, which is required for Core Storage:
159 | 
160 | ```
161 | $ ./partition-disk-with-gpt.sh /dev/disk2 ADATA1
162 | $ ./partition-disk-with-gpt.sh /dev/disk3 Transcend1
163 | ```
164 | 
165 | After partitioning, we see the following volumes:
166 | 
167 | ```
168 | /dev/disk2 (external, physical):
169 |    #:                       TYPE NAME                 SIZE       IDENTIFIER
170 |    0:      GUID_partition_scheme                     *3.0 TB     disk2
171 |    1:                        EFI EFI                  314.6 MB   disk2s1
172 |    2:                  Apple_HFS ADATA1               3.0 TB     disk2s2
173 | 
174 | /dev/disk3 (external, physical):
175 |    #:                       TYPE NAME                 SIZE       IDENTIFIER
176 |    0:      GUID_partition_scheme                     *3.0 TB     disk3
177 |    1:                        EFI EFI                  314.6 MB   disk3s1
178 |    2:                  Apple_HFS Transcend1           3.0 TB     disk3s2
179 | ```
180 | 
181 | Next, we convert the `Apple_HFS` partitions to Core Storage, so that we can use
182 | its encryption.
183 | 
184 | To convert the partitions, run the following:
185 | 
186 | ```
187 | $ ./convert-volume-to-core-storage.sh disk2s2
188 | $ ./convert-volume-to-core-storage.sh disk3s2
189 | ```
190 | 
191 | We have now created encrypted logical volumes, which `./list-volumes.sh` shows
192 | as:
193 | 
194 | ```
195 | /dev/disk4 (external, virtual):
196 |    #:                       TYPE NAME                 SIZE       IDENTIFIER
197 |    0:                  Apple_HFS ADATA1              +3.0 TB     disk4
198 |                                  Logical Volume on disk2s2
199 |                                  FE33AD56-C280-410B-B54B-85382CA84D75
200 |                                  Unlocked Encrypted
201 | 
202 | /dev/disk5 (external, virtual):
203 |    #:                       TYPE NAME                 SIZE       IDENTIFIER
204 |    0:                  Apple_HFS Transcend1          +3.0 TB     disk5
205 |                                  Logical Volume on disk3s2
206 |                                  3A7DAF85-DBDB-49A7-AE9B-24D55CA27000
207 |                                  Unlocked Encrypted
208 | ```
209 | 
210 | You should now test your password on these volumes. One way is to unmount
211 | (eject) all volumes on the external drives, unplug the USB cables, and plug
212 | them back in. You can eject the disks as follows:
213 | 
214 | ```
215 | $ ./eject-disk.sh ADATA1
216 | $ ./eject-disk.sh Transcend1
217 | ```
218 | 
219 | After plugging them back in, you should be asked for your password.
220 | 
221 | At this point, you should let the encryption conversion carry on before doing
222 | anything else. You can check its status with:
223 | 
224 | ```
225 | $ ./list-core-storage.sh | grep Conversion
226 | ```
227 | 
228 | At first, I see:
229 | 
230 | ```
231 | Conversion Status:       Converting (forward)
232 | Conversion Progress:     1%
233 | ```
234 | 
235 | Note that this process can take a _very_ long time. It took around 4 days with
236 | my two 3 TB drives.
237 | 
238 | *Resources*:
239 | 
240 | * [Encryption Guide on OpenZFS on OS X](https://openzfsonosx.org/wiki/Encryption)
241 | * `man diskutil`
242 | 
243 | ## Creating a ZFS mirror pool
244 | 
245 | We're working with two disks, so we're going to create a ZFS mirror pool, in
246 | which the disks are mirror images of each other. In case one fails, the other
247 | has a full copy.
248 | 
249 | **IMPORTANT NOTE**: You should use the volume identifiers from `/var/run/disk`
250 | instead of the `/dev` names when referencing your volumes. USB drives can
251 | appear at arbitrary `/dev` device nodes depending on when they were connected,
252 | and I found that I lost ZFS pools after disconnecting and reconnecting the
253 | drives. I'm not sure which identifier is the best, but I decided to go with
254 | UUIDs as found in `/var/run/disk/by-id/media-$UUID`. A UUID can also be used
255 | with `diskutil`, which makes it convenient.
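Since everything below references volumes by these stable paths, a tiny helper — hypothetical, it simply encodes the `media-$UUID` naming scheme that `zfs-create-mirror-pool.sh` relies on — can build them from a UUID:

```shell
#!/bin/bash

# Build the stable by-id path for a volume UUID. Unlike /dev/diskN numbers,
# which can change each time a USB drive is replugged, this path stays fixed.
media_path() {
  printf '/var/run/disk/by-id/media-%s\n' "$1"
}

# Example:
#   media_path FE33AD56-C280-410B-B54B-85382CA84D75
#   # -> /var/run/disk/by-id/media-FE33AD56-C280-410B-B54B-85382CA84D75
```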
256 | 
257 | To get the volume UUIDs, run:
258 | 
259 | ```
260 | $ ./list-volumes.sh
261 | ```
262 | 
263 | Run the following script with the pool name first, followed by the two volume
264 | UUIDs to use for the pool:
265 | 
266 | ```
267 | $ ./zfs-create-mirror-pool.sh passepartout \
268 |     FE33AD56-C280-410B-B54B-85382CA84D75 \
269 |     3A7DAF85-DBDB-49A7-AE9B-24D55CA27000
270 | ```
271 | 
272 | If this completes without error, you can see the created pool with:
273 | 
274 | ```
275 | $ ./zfs-list.sh
276 | ```
277 | 
278 | *Resources*:
279 | 
280 | * [Zpool on OpenZFS on OS X](https://openzfsonosx.org/wiki/Zpool)
281 | * [Device names on OpenZFS on OS X](https://openzfsonosx.org/wiki/Device_names)
282 | 
283 | ## Importing a ZFS pool
284 | 
285 | Import the pool with:
286 | 
287 | ```
288 | $ ./zfs-import.sh passepartout
289 | ```
290 | 
291 | You can see the status of currently connected pools with:
292 | 
293 | ```
294 | $ ./zfs-status.sh
295 | ```
296 | 
297 | ## Setting user privileges on the ZFS volume
298 | 
299 | After the ZFS volume is mounted, write access is restricted to `root`, so you
300 | have to type your password every time you want to copy a file to the volume,
301 | for example. To avoid this, you can change the permissions to add write access
302 | for your own user:
303 | 
304 | 1. In the Finder, select the volume.
305 | 2. Get Info (⌘ I).
306 | 3. Click the closed lock button (🔒) at the bottom and type in your password if
307 |    requested.
308 | 4. Click the plus button (⊞) at the bottom to add a new user for permissions.
309 | 5. Select your user.
310 | 6. Change your user's permission to Read & Write.
311 | 7. Click the open lock button (🔓) at the bottom.
312 | 
313 | *Resources*:
314 | 
315 | * [Creating user privileges on OpenZFS on OS X](https://openzfsonosx.org/wiki/Creating_user_privileges)
316 | 
317 | ## Replacing a failed drive
318 | 
319 | When one of the two drives fails, it will show up in `zpool status`.
This is an 320 | example of how it looks when you only have one drive attached: 321 | 322 | ``` 323 | $ zpool status 324 | pool: passepartout 325 | state: DEGRADED 326 | status: One or more devices could not be opened. Sufficient replicas exist for 327 | the pool to continue functioning in a degraded state. 328 | action: Attach the missing device and online it using 'zpool online'. 329 | see: http://zfsonlinux.org/msg/ZFS-8000-2Q 330 | scan: scrub repaired 0 in 4h20m with 0 errors on Tue Jun 11 14:07:04 2019 331 | config: 332 | 333 | NAME STATE READ WRITE CKSUM 334 | passepartout DEGRADED 0 0 0 335 | mirror-0 DEGRADED 0 0 0 336 | 9350216444140675144 UNAVAIL 0 0 0 was /private/var/run/disk/by-id/media-FE33AD56-C280-410B-B54B-85382CA84D75 337 | media-3A7DAF85-DBDB-49A7-AE9B-24D55CA27000 ONLINE 0 0 0 338 | ``` 339 | 340 | To fix this: 341 | 342 | 1. Buy a new USB HDD about the same size (3TB). 343 | 2. Plug in both the working HDD and the new HDD. 344 | 3. Import the `passepartout` pool (as shown above). 345 | 4. Replace the failed drive (in this case: `9350216444140675144`) with the new 346 | drive (in this case: `/dev/disk2`): 347 | 348 | ``` 349 | $ sudo zpool replace -f passepartout 9350216444140675144 /dev/disk2 350 | ``` 351 | 352 | **NOTE**: I found the `-f` (“force”) flag to be required. Without it, the `zpool 353 | replace` command fails with: 354 | 355 | ``` 356 | invalid vdev specification 357 | use '-f' to override the following errors: 358 | /dev/disk2s1 is a EFI partition. Please see diskutil(8). 359 | ``` 360 | 361 | As far as I can tell, this error really means: “Are you sure you want to 362 | overwrite this disk?” And, yes, I did want to overwrite it. 
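To avoid retyping the long numeric GUID, you can extract it from the `zpool status` output mechanically. This is a sketch under the assumption that the output keeps the format shown above, with the old device's GUID as the first field of the `UNAVAIL` line:

```shell
#!/bin/bash

# Print the GUID of any failed (UNAVAIL) vdev from `zpool status` output
# read on stdin. Assumes the column layout shown above (NAME STATE READ
# WRITE CKSUM), with the old device's numeric GUID in the NAME column.
failed_vdev() {
  awk '$2 == "UNAVAIL" { print $1 }'
}

# Typical use (hypothetical session):
#   OLD=$(zpool status passepartout | failed_vdev)
#   sudo zpool replace -f passepartout "$OLD" /dev/disk2
```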
363 | 364 | *Resources*: 365 | 366 | * [Degraded Pool](https://openzfsonosx.org/wiki/DegradedPool) 367 | * [Dealing with Failed Devices](https://www.freebsd.org/doc/handbook/zfs-zpool.html#zfs-zpool-resilver) 368 | -------------------------------------------------------------------------------- /convert-volume-to-core-storage.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Exit on error 4 | set -e 5 | 6 | USAGE=$(cat <<-END 7 | Usage: $0 8 | 9 | Convert a disk partition (erasing everything!) to Core Storage. 10 | END 11 | ) 12 | 13 | if [[ $# -ne 1 ]] ; then 14 | echo "$USAGE" 15 | exit -1 16 | fi 17 | 18 | VOLUME=$1 19 | 20 | /usr/sbin/diskutil coreStorage convert $VOLUME -passphrase 21 | -------------------------------------------------------------------------------- /eject-disk.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Exit on error 4 | set -e 5 | 6 | USAGE=$(cat <<-END 7 | Usage: $0 8 | 9 | Eject a disk. 10 | END 11 | ) 12 | 13 | if [[ $# -ne 1 ]] ; then 14 | echo "$USAGE" 15 | exit -1 16 | fi 17 | 18 | DISK=$1 19 | 20 | /usr/sbin/diskutil eject $DISK 21 | -------------------------------------------------------------------------------- /list-core-storage.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | /usr/sbin/diskutil coreStorage list 4 | -------------------------------------------------------------------------------- /list-volumes.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | /usr/sbin/diskutil list 4 | -------------------------------------------------------------------------------- /mount-disk.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Exit on error 4 | set -e 5 | 6 | USAGE=$(cat <<-END 7 | Usage: $0 8 | 9 | Mount a disk. 
10 | END 11 | ) 12 | 13 | if [[ $# -ne 1 ]] ; then 14 | echo "$USAGE" 15 | exit -1 16 | fi 17 | 18 | DISK=$1 19 | 20 | /usr/sbin/diskutil mount $DISK 21 | -------------------------------------------------------------------------------- /partition-disk-with-gpt.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Exit on error 4 | set -e 5 | 6 | USAGE=$(cat <<-END 7 | Usage: $0 8 | 9 | Partition the volume (erasing everything!) using the GUID Partitioning Table 10 | (GPT) scheme required for Core Storage. 11 | END 12 | ) 13 | 14 | if [[ $# -ne 2 ]] ; then 15 | echo "$USAGE" 16 | exit -1 17 | fi 18 | 19 | VOLUME=$1 20 | PARTITION=$2 21 | 22 | /usr/sbin/diskutil eject $VOLUME || true 23 | /usr/sbin/diskutil partitionDisk $VOLUME GPT JHFS+ $PARTITION 100% 24 | 25 | # Notes: 26 | # 27 | # 1. We eject first because partitionDisk doesn't always work if the volume is 28 | # mounted. 29 | # 30 | # 2. The volumes must be given names (and not %noformat%) in order to be 31 | # formatted. 32 | # 33 | # 3. The partition intended for Core Storage (named ZFS here) must have a 34 | # journaled format. We use JHFS+ above. 35 | -------------------------------------------------------------------------------- /passepartout/README.md: -------------------------------------------------------------------------------- 1 | This directory contains instructions and scripts that I use for my particular 2 | OpenZFS installation on macOS 10.12 (Sierra) with two 3 TB USB 3.0 external 3 | drives. 4 | 5 | The drives are named ADATA1 and Transcend1. 6 | 7 | I created the ZFS pool with: 8 | 9 | ``` 10 | $ ../zfs-create-mirror-pool.sh passepartout \ 11 | FE33AD56-C280-410B-B54B-85382CA84D75 \ 12 | 3A7DAF85-DBDB-49A7-AE9B-24D55CA27000 13 | ``` 14 | 15 | After plugging the USB cables in, I use the following scripts to quickly mount 16 | (i.e. “import” in ZFS terminology) and unmount (“export”) the ZFS pool. 
17 | 18 | ``` 19 | $ ./mount.sh 20 | $ ./unmount.sh 21 | ``` 22 | 23 | The `zdb` output is in [`zdb.txt`](./zdb.txt). 24 | -------------------------------------------------------------------------------- /passepartout/mount.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ../zfs-import.sh passepartout 4 | -------------------------------------------------------------------------------- /passepartout/scrub.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ../zfs-scrub.sh passepartout 4 | -------------------------------------------------------------------------------- /passepartout/unmount.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ../zfs-export.sh passepartout 4 | -------------------------------------------------------------------------------- /passepartout/zdb.txt: -------------------------------------------------------------------------------- 1 | passepartout: 2 | version: 5000 3 | name: 'passepartout' 4 | state: 0 5 | txg: 490 6 | pool_guid: 15892756445732397228 7 | errata: 0 8 | hostid: 1584508811 9 | hostname: '' 10 | vdev_children: 1 11 | vdev_tree: 12 | type: 'root' 13 | id: 0 14 | guid: 15892756445732397228 15 | children[0]: 16 | type: 'mirror' 17 | id: 0 18 | guid: 4069135744084933496 19 | metaslab_array: 35 20 | metaslab_shift: 34 21 | ashift: 12 22 | asize: 2999768055808 23 | is_log: 0 24 | create_txg: 4 25 | children[0]: 26 | type: 'disk' 27 | id: 0 28 | guid: 9350216444140675144 29 | path: '/private/var/run/disk/by-id/volume-749047B3-3C91-3E70-A44E-7712615110E1' 30 | whole_disk: 0 31 | DTL: 42 32 | create_txg: 4 33 | children[1]: 34 | type: 'disk' 35 | id: 1 36 | guid: 13326125633090316357 37 | path: '/private/var/run/disk/by-id/volume-04FD2434-C92B-3C93-B72D-A562AEE2D392' 38 | whole_disk: 0 39 | DTL: 40 40 | create_txg: 4 41 | features_for_read: 42 | 
com.delphix:hole_birth 43 | com.delphix:embedded_data 44 | -------------------------------------------------------------------------------- /unmount-disk.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Exit on error 4 | set -e 5 | 6 | USAGE=$(cat <<-END 7 | Usage: $0 8 | Unmount a disk, including a Core Storage logical volume 9 | END 10 | ) 11 | 12 | if [[ $# -ne 1 ]] ; then 13 | echo "$USAGE" 14 | exit -1 15 | fi 16 | 17 | DISK=$1 18 | 19 | /usr/sbin/diskutil unmountDisk $DISK 20 | -------------------------------------------------------------------------------- /zfs-create-mirror-pool.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Exit on error 4 | set -e 5 | 6 | USAGE=$(cat <<-END 7 | Usage: $0 8 | Create a ZFS pool that mirrors two volumes 9 | END 10 | ) 11 | 12 | if [[ $# -ne 3 ]] ; then 13 | echo "$USAGE" 14 | exit -1 15 | fi 16 | 17 | POOL=$1 18 | VOLUME1_UUID=$2 19 | VOLUME2_UUID=$3 20 | 21 | 22 | # The volumes must be unmounted before zpool create. 23 | /usr/sbin/diskutil unmount $VOLUME1_UUID || true 24 | /usr/sbin/diskutil unmount $VOLUME2_UUID || true 25 | 26 | # Create the ZFS mirror pool. 27 | /usr/bin/sudo /usr/local/bin/zpool create -f \ 28 | -o ashift=12 \ 29 | -O casesensitivity=insensitive \ 30 | -O normalization=formD \ 31 | -O compression=lz4 \ 32 | -O atime=off \ 33 | -O cachefile=$PWD/zpool.cache \ 34 | $POOL \ 35 | mirror \ 36 | /var/run/disk/by-id/media-$VOLUME1_UUID \ 37 | /var/run/disk/by-id/media-$VOLUME2_UUID 38 | 39 | # Notes: 40 | # 41 | # ashift=12 42 | # Use 4K sectors. 43 | # See http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks 44 | # 45 | # casesensitivity=insensitive 46 | # Recommended for Mac OS X. Some applications won't work without it. 47 | # 48 | # normalization=formD 49 | # Recommended for Mac OS X. Some applications won't work without it. 
50 | #
51 | # compression=lz4
52 | #   The lz4 compression algorithm is a high-performance replacement for the lzjb
53 | #   algorithm. It features significantly faster compression and decompression,
54 | #   as well as a moderately higher compression ratio than lzjb.
55 | #
56 | # atime=off
57 | #   Controls whether the access time for files is updated when they are read.
58 | #   Turning this property off avoids producing write traffic when reading files
59 | #   and can result in significant performance gains, though it might confuse
60 | #   mailers and other similar utilities.
61 | 
--------------------------------------------------------------------------------
/zfs-export.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | # Exit on error
4 | set -e
5 | 
6 | USAGE=$(cat <<-END
7 | Usage: $0
8 | Unmount a ZFS pool
9 | END
10 | )
11 | 
12 | if [[ $# -ne 1 ]] ; then
13 |   echo "$USAGE"
14 |   exit -1
15 | fi
16 | 
17 | POOL=$1
18 | 
19 | /usr/bin/sudo /usr/local/bin/zpool export $POOL
20 | 
--------------------------------------------------------------------------------
/zfs-import-list.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | /usr/bin/sudo /usr/local/bin/zpool import
4 | 
--------------------------------------------------------------------------------
/zfs-import.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | # Exit on error
4 | set -e
5 | 
6 | USAGE=$(cat <<-END
7 | Usage: $0
8 | Mount a ZFS pool
9 | END
10 | )
11 | 
12 | if [[ $# -ne 1 ]] ; then
13 |   echo "$USAGE"
14 |   exit -1
15 | fi
16 | 
17 | POOL=$1
18 | 
19 | /usr/bin/sudo /usr/local/bin/zpool import -d /var/run/disk/by-id/ $POOL
20 | 
21 | # Notes:
22 | #
23 | # * An alternative would be to use a `cachefile` as follows:
24 | #
25 | #     /usr/bin/sudo /usr/local/bin/zpool import -c $PWD/zpool.cache $POOL
26 | 
-------------------------------------------------------------------------------- /zfs-list.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | /usr/local/bin/zfs list 4 | -------------------------------------------------------------------------------- /zfs-scrub.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | /usr/bin/sudo /usr/local/bin/zpool scrub $* 4 | -------------------------------------------------------------------------------- /zfs-status.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | /usr/local/bin/zpool status 4 | --------------------------------------------------------------------------------