├── CHANGELOG.md ├── FUNDING.yml ├── LICENSE ├── README.md ├── delayed_start.sh ├── screenshots ├── disk-spindown-config.png ├── system-dataset.png ├── task-delayed-oneliner.png └── task.png └── spindown_timer.sh /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | # Changelog 2 | 3 | ## Version 2.4.0 (2025-02-07) 4 | * Fix shutdown mode on TrueNAS SCALE 5 | * Add support for using `smartctl` to interact with drives 6 | * Allow selection of disk control tool (`camcontrol`, `hdparm`, `smartctl`) via CLI argument `-x` 7 | * Improve host system detection to distinguish between TrueNAS CORE and TrueNAS SCALE 8 | * Simplify active drive detection 9 | * Skip drive I/O detection loop if all monitored drives are already sleeping 10 | 11 | 12 | ## Version 2.3.0 (2024-08-26) 13 | * Introduce syslog mode (`-l`). If set, all output is logged to syslog instead of stdout/stderr. 14 | * Introduce one shot mode (`-o`). If set, the script performs exactly one I/O poll interval, then immediately spins down drives that were idle for the last `POLL_TIME` seconds, and exits. 15 | * Skip NVMe drives during drive detection. 16 | * Exit with an error, if no drives were found during drive detection. 17 | 18 | 19 | ## Version 2.2.0 (2023-02-20) 20 | * Introduce the check mode (`-c`) to display the current power mode of all monitored drives every `POLL_TIME` seconds. See [README.md > Using the check mode](https://github.com/ngandrass/truenas-spindown-timer#automatic-using-the-check-mode--c) for more details. 21 | 22 | 23 | ## Version 2.1.0 (2023-02-19) 24 | * New CLI argument to switch between `disk` and `zpool` operation mode: `-u ` 25 | * When no operation mode is explicitly given, the script works in `disk` mode. This completely ignores zfs pools and works as before. 26 | * When operation mode is set to `zpool` by supplying `-u zpool`, the script now operates on a per-zpool basis. 
I/O is monitored for the pool as a whole and disks are only spun down if the complete pool was idle for a given number of seconds. ZFS pools are either detected automatically or can be supplied manually (see help text for `-i` and `-m`). 27 | * Drives are referenced by GPTID (CORE) or partuuid (SCALE) in ZFS pool mode. 28 | 29 | 30 | ## Version 2.0.1 (2022-09-17) 31 | * Added support for TrueNAS SCALE using `hdparm` instead of `camcontrol`. The script automatically detects the environment it is run in. 32 | 33 | 34 | ## Version 1.3.2 (2022-01-10) 35 | * Include an option to shutdown the system after all monitored drives are idle for a specified number of seconds 36 | 37 | 38 | ## Version 1.3.1 (2019-10-24) 39 | * Do drive detection at script start to fix erorrs on specific SAS controllers (LSI 9305) 40 | 41 | 42 | ## Version 1.3.0 (2019-10-09) 43 | * Introduce manual mode [-m] to disable automatic drive detection 44 | * Improve script description in print_usage() block 45 | * Documentation of advanced features and usage 46 | 47 | 48 | ## Version 1.2.1 (2019-09-30) 49 | * Add info about how to ignore multiple drives to the scripts usage description 50 | 51 | 52 | ## Version 1.2.0 (2019-07-12) 53 | * Add experimental support for SCSI drives 54 | * Use `camcontrol epc` instead of sending raw disk commands during spincheck (Thanks to @bilditup1) 55 | 56 | 57 | ## Version 1.1.0 (2019-07-09) 58 | * Add detection of "da" prefixed devices (Thanks to @bilditup1) 59 | 60 | 61 | ## Version 1.0.0 (2019-07-04) 62 | * Initial release 63 | -------------------------------------------------------------------------------- /FUNDING.yml: -------------------------------------------------------------------------------- 1 | # Thank you! 
<3 2 | 3 | github: ngandrass 4 | custom: paypal.me/ngandrass 5 | ko_fi: ngandrass 6 | 7 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2025 Niels Gandraß 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # TrueNAS Spindown Timer 2 | 3 | [![Latest Version](https://img.shields.io/github/v/release/ngandrass/truenas-spindown-timer)](https://github.com/ngandrass/truenas-spindown-timer/releases) 4 | [![TrueNAS Version: CORE](https://img.shields.io/badge/TrueNAS%20Version-SCALE-blue)](https://github.com/ngandrass/truenas-spindown-timer/) 5 | [![TrueNAS Version: SCALE](https://img.shields.io/badge/TrueNAS%20Version-CORE-blue)](https://github.com/ngandrass/truenas-spindown-timer/) 6 | [![Maintenance Status](https://img.shields.io/maintenance/yes/9999)](https://github.com/ngandrass/truenas-spindown-timer/) 7 | [![License](https://img.shields.io/github/license/ngandrass/truenas-spindown-timer)](https://github.com/ngandrass/truenas-spindown-timer/blob/master/LICENSE) 8 | [![GitHub Issues](https://img.shields.io/github/issues/ngandrass/truenas-spindown-timer)](https://github.com/ngandrass/truenas-spindown-timer/issues) 9 | [![GitHub Pull Requests](https://img.shields.io/github/issues-pr/ngandrass/truenas-spindown-timer)](https://github.com/ngandrass/truenas-spindown-timer/pulls) 10 | [![Donate with PayPal](https://img.shields.io/badge/PayPal-donate-orange)](https://www.paypal.me/ngandrass) 11 | [![Sponsor with GitHub](https://img.shields.io/badge/GitHub-sponsor-orange)](https://github.com/sponsors/ngandrass) 12 | [![GitHub Stars](https://img.shields.io/github/stars/ngandrass/truenas-spindown-timer?style=social)](https://github.com/ngandrass/truenas-spindown-timer/stargazers) 13 | [![GitHub Forks](https://img.shields.io/github/forks/ngandrass/truenas-spindown-timer?style=social)](https://github.com/ngandrass/truenas-spindown-timer/network/members) 14 | [![GitHub 
Contributors](https://img.shields.io/github/contributors/ngandrass/truenas-spindown-timer?style=social)](https://github.com/ngandrass/truenas-spindown-timer/graphs/contributors) 15 | 16 | _Monitors drive I/O and forces HDD spindown after a given idle period. Resistant 17 | to S.M.A.R.T. reads._ 18 | 19 | Disk spindown has always been an issue for various TrueNAS / FreeNAS users. This 20 | script utilizes `iostat` to detect I/O operations (reads, writes) on each disk. 21 | If a disk was neither read nor written for a given period of time, it is 22 | considered idle and is spun down. 23 | 24 | Periodic reads of S.M.A.R.T. data performed by the smartctl service are excluded. 25 | This allows users to have S.M.A.R.T. reporting enabled while being able to 26 | automatically spin down disks. The script moreover is immune to the periodic 27 | disk temperature reads in newer versions of TrueNAS. 28 | 29 | 30 | ## Key Features 31 | 32 | * [Periodic S.M.A.R.T. reads do not interfere with spindown](#configure-disk-standby-settings) 33 | * [Support for ATA and SCSI devices](#verify-drive-spindown--optional-) 34 | * [Works with both TrueNAS Core and TrueNAS SCALE](#tested-truenas--freenas-versions) 35 | * [Can operate on single disks or whole ZFS pools](#operation-mode--disk-vs-zpool) 36 | * [Per-disk idle timer / Independent spindown](#operation-mode--disk-vs-zpool) 37 | * [Configurable idle timeout and poll interval](#using-separate-timeouts-for-different-drives) 38 | * [Different idle timeouts for different disks / ZFS pools](#using-separate-timeouts-for-different-drives) 39 | * [Automatic detection or explicit listing of drives / ZFS pools to monitor](#usage) 40 | * [Ignoring of specific drives / ZFS pools (e.g. 
SSD with system dataset)](#usage) 41 | * [Executable via `Tasks` as `Post-Init Script`, configurable via TrueNAS GUI](#automatic-start-at-boot) 42 | * [Allows script placement on encrypted pool](#delayed-start--script-placed-in-encrypted-pool-) 43 | * [Optional automatic shutdown after configurable idle time](#automatic-system-shutdown--s-timeout) 44 | 45 | 46 | ## Usage 47 | 48 | ``` 49 | Usage: 50 | $0 [-h] [-q] [-v] [-l] [-d] [-o] [-c] [-m] [-u ] [-t ] [-p ] [-i ] [-s ] [-x ] 51 | 52 | Monitors drive I/O and forces HDD spindown after a given idle period. 53 | Resistant to S.M.A.R.T. reads. 54 | 55 | Operation is supported on either drive level (MODE = disk) with plain device 56 | identifiers or zpool level (MODE = zpool) with zfs pool names. See -u for more 57 | information. A drive is considered idle and gets spun down if there has been no 58 | I/O operations on it for at least TIMEOUT seconds. I/O requests are detected 59 | within multiple intervals with a length of POLL_TIME seconds. Detected reads or 60 | writes reset the drives timer back to TIMEOUT. 61 | 62 | Options: 63 | -t TIMEOUT : Total spindown delay. Number of seconds a drive has to 64 | experience no I/O activity before it is spun down (default: 3600). 65 | -p POLL_TIME : I/O poll interval. Number of seconds to wait for I/O during a 66 | single monitoring period (default: 600). 67 | -s TIMEOUT : Shutdown timeout. If given and no drive is active for TIMEOUT 68 | seconds, the system will be shut down. 69 | -u MODE : Operation mode (default: disk). 70 | If set to 'disk', the script operates with disk identifiers 71 | (e.g. ada0) for all CLI arguments and monitors I/O using 72 | iostat directly. 73 | If set to 'zpool' the script operates with ZFS pool names 74 | (e.g. zfsdata) for all CLI arguments and monitors I/O using 75 | the iostat of zpool. 76 | -i DRIVE : In automatic drive detection mode (default): 77 | Ignores the given drive or zfs pool. 
78 | In manual mode [-m]: 79 | Only monitor the specified drives or zfs pools. Multiple 80 | drives or zfs pools can be given by repeating the -i option. 81 | -m : Manual drive detection mode. If set, automatic drive detection 82 | is disabled. 83 | CAUTION: This inverts the -i option, which can then be used to 84 | manually supply drives or zfs pools to monitor. All other drives 85 | or zfs pools will be ignored. 86 | -o : One shot mode. If set, the script performs exactly one I/O poll 87 | interval, then immediately spins down drives that were idle for 88 | the last seconds, and exits. This option ignores 89 | . It can be useful, if you want to invoke to script 90 | via cron. 91 | -c : Check mode. Outputs drive power state after each POLL_TIME 92 | seconds. 93 | -q : Quiet mode. Outputs are suppressed set. 94 | -v : Verbose mode. Prints additional information during execution. 95 | -l : Syslog logging. If set, all output is logged to syslog instead 96 | of stdout/stderr. 97 | -d : Dry run. No actual spindown is performed. 98 | -h : Print this help message. 99 | -x TOOL : Forces use of a specifiy tool for disk control. 100 | Supported tools are: "camcontrol", "hdparm", and "smartctl". 101 | If not specified, the first available tool (from left to right) 102 | will be automatically selected. 103 | 104 | Example usage: 105 | $0 106 | $0 -q -t 3600 -p 600 -i ada0 -i ada1 107 | $0 -q -m -i ada6 -i ada7 -i da0 108 | $0 -u zpool -i freenas-boot 109 | ``` 110 | 111 | ## Deployment and configuration 112 | 113 | The following steps describe how to configure TrueNAS and deploy the script. 114 | 115 | 116 | ### Configure disk standby settings 117 | 118 | To prevent the smartctl daemon or TrueNAS from interfering with spun down disks, 119 | open the TrueNAS GUI and navigate to `Storage > Disks`. 120 | 121 | For every disk you want to spin down, click the `Edit` button. 
Set the `HDD 122 | Standby` option to `Always On` and `Advanced Power Management` to level 128 or 123 | above. 124 | 125 | ![HDD standby settings](screenshots/disk-spindown-config.png) 126 | 127 | _Note: In older versions of FreeNAS it was required to set the S.M.A.R.T `Power 128 | Mode` to `Standby`. This setting was configured globally and was located under 129 | `Services > S.M.A.R.T. > Configure`._ 130 | 131 | 132 | ### System dataset placement 133 | 134 | Having the TrueNAS system dataset placed on a drive / ZFS pool prevents spindown. 135 | The system dataset should therefore be located on a disk that will not be spun 136 | down (e.g. the operating system SSD). 137 | 138 | The location of the system dataset can be configured under `System > System 139 | Dataset`. 140 | 141 | ![System Dataset settings](screenshots/system-dataset.png) 142 | 143 | 144 | ### Deploy script 145 | 146 | Copy the script to your NAS box and set the execution permission through `chmod 147 | +x spindown_timer.sh`. 148 | 149 | That's it! The script can now be run, e.g., in a `tmux` session. However, an 150 | automatic start during boot is highly recommended (see next section). 151 | 152 | 153 | ### Operation Mode: disk vs zpool 154 | 155 | The spindown timer support two operation modes, as selected by the `-u ` 156 | CLI argument (default: `disk`): 157 | 158 | - `disk`: Operates with plain disk identifiers (e.g. `ada0`, `sda`, ...). Each 159 | disk is monitored and spun down independently. Disks can even be spun down if 160 | they are not part of any ZFS pool (e.g. for hot standby). The CLI option `-i` 161 | will expect disk names as found in `/dev`. 162 | - `zpool`: Operates on a ZFS pool level. Disks are grouped by their associated 163 | ZFS pool. Disks are only spun down if all disks inside a ZFS pool (i.e. the 164 | ZFS pool as a whole) received no I/O for a given amount of time. The CLI 165 | option `-i` will expect pool names (e.g. `zfsdata`, `tank`, ...) 
as output by 166 | `zpool list`. 167 | 168 | You can also use a combination of both modes when running multiple instances of 169 | the script. See [Using separate timeouts for different drives](#using-separate-timeouts-for-different-drives) 170 | for more details. 171 | 172 | 173 | ### Automatic start at boot 174 | 175 | There are multiple ways to start the spindown timer after system boot. The 176 | easiest one is to register it as an `Init Script` via the TrueNAS GUI. This can 177 | be done by opening the GUI and navigating to `Tasks > Init/Shutdown Scripts`. 178 | Here, create a new `Post Init` task that executes `spindown_timer.sh` after 179 | boot. 180 | 181 | ![Spindown timer post init task](screenshots/task.png) 182 | 183 | _Note: Be sure to select `Command` as `Type`_ 184 | 185 | _Note: With FreeNAS-11.3 a `Timeout` was introduced. However, the spindown 186 | script is never terminated by FreeNAS, regardless of the configured value. 187 | Therefore, keep `Timeout` at the default value of 10 seconds for now._ 188 | 189 | 190 | #### Delayed start (Script placed in encrypted pool) 191 | 192 | If you placed the script at a location that is not available right after boot, a 193 | delayed start of the spindown timer is required. This, for example, applies when 194 | the script is located inside an encrypted pool, which needs to be unlocked prior 195 | to execution. 196 | 197 | To automatically delay the start until the script file becomes available, the 198 | helper script `delayed_start.sh` is provided. It takes the full path to the 199 | spindown timer script as the first argument. All additional arguments are passed 200 | to the called script once available. Example usage: `./delayed_start.sh 201 | /mnt/pool/spindown_timer.sh -t 3600 -p 600` 202 | 203 | The `delayed_start.sh` script, however, must again be placed in a location that 204 | is available right after boot. 
To circumvent this problem, you can also use the 205 | following one-liner directly within an `Init/Shutdown Script`, as shown in the 206 | screenshot below. Set `SCRIPT` to the path where the `spindown_timer.sh` file is 207 | stored and configure all desired call arguments by setting them via the `ARGS` 208 | variable. The `CHECK` variable determines the delay between execution attempts 209 | in seconds. 210 | 211 | ```bash 212 | /bin/bash -c 'SCRIPT="/mnt/pool/spindown_timer.sh"; ARGS="-t 3600 -p 600"; CHECK=60; while true; do if [ -f "${SCRIPT}" ]; then ${SCRIPT} ${ARGS}; break; else sleep ${CHECK}; fi; done' 213 | ``` 214 | 215 | ![Spindown timer delayed post init task](screenshots/task-delayed-oneliner.png) 216 | 217 | _Note: Be sure to select `Command` as `Type`_ 218 | 219 | _Note: With FreeNAS-11.3 a `Timeout` was introduced. However, the spindown 220 | script is never terminated by FreeNAS, regardless of the configured value. 221 | Therefore, keep `Timeout` at the default value of 10 seconds for now._ 222 | 223 | 224 | #### Verify autostart 225 | 226 | You can verify execution of the script either via a process manager like `htop` 227 | or simply by using the following command: `ps -aux | grep "spindown_timer.sh"` 228 | 229 | When using a delayed start, keep in mind that it can take up to `$CHECK` seconds 230 | before the script availability is updated and the spindown timer is finally 231 | executed. 232 | 233 | 234 | ### Verify drive spindown (optional) 235 | 236 | It can be useful to check the current power state of a drive. This can be done 237 | by using one of the following commands, depending on your device type. 238 | 239 | 240 | #### Automatic: Using the check mode (`-c`) 241 | 242 | The script features a check mode. If the CLI flag `-c` is supplied, the power 243 | state of all monitored drives is output every `POLL_TIME` seconds, as set via 244 | the `-p` option (default: 600 seconds). 
To monitor drive power states without 245 | performing actual spindowns, the dry run flag `-d` can be set. 246 | 247 | The following example checks the power state of all drives every 60 seconds and 248 | does perform no spindowns: 249 | 250 | ```bash 251 | ./spindown_timer.sh -d -c -p 60 252 | ``` 253 | 254 | In order to be able to check on the script at any given time, you can run it 255 | inside a tmux session, and add the `-c` option. An example would be: 256 | `tmux new-session -d -s spindown_timer "/path/to/spindown_timer.sh -d -c -p 60"` 257 | You can now see what the script has been doing by running the following in a 258 | shell: `tmux attach -t spindown_timer`. 259 | 260 | 261 | #### Manual: ATA drives 262 | 263 | The current power mode of an ATA drive can be checked using the command 264 | `camcontrol epc $drive -c status -P`, where `$drive` is the drive to check 265 | (e.g., `ada0`). 266 | 267 | It should return `Current power state: Standby_z(0x00)` for a spun down drive. 268 | 269 | 270 | #### Manual: SCSI drives 271 | 272 | The current power mode of a SCSI drive can be checked through reading the 273 | modepage `0x1a` using the command `camcontrol modepage $drive -m 0x1a`, where 274 | `$drive` is the drive to check (e.g., `da0`). 275 | 276 | A spun down drive should be in one of the standby states `Standby_y` or 277 | `Standby_z`. 278 | 279 | A detailed description of the available SCSI modes can be found in 280 | `/usr/share/misc/scsi_modes`. 281 | 282 | 283 | ## Advanced usage 284 | 285 | In the following section, advanced usage scenarios are described. 286 | 287 | 288 | ### Automatic drive detection vs manual mode [-m] 289 | 290 | In automatic mode (default) all drives of the system, excluding the ones 291 | specified using the `-i` switch, are monitored and spun down if idle. 292 | 293 | In scenarios where only a small subset of all available drives should be spun 294 | down, the manual mode can be used by setting the `-m` flag. 
This disables 295 | automatic drive detection and inverts the `-i` switch. It can then be used to 296 | specify all drives that should explicitly get monitored and spun down when idle. 297 | 298 | An example in which only the drives `ada3` and `ada6` are monitored looks like 299 | this: 300 | 301 | ```bash 302 | ./spindown_timer.sh -m -i ada3 -i ada6 303 | ``` 304 | 305 | To verify the drive selection, a list of all drives that are being monitored by 306 | the running script instance is printed directly after starting the script 307 | (except in quiet mode [-q]). 308 | 309 | 310 | ### Using separate timeouts for different drives 311 | 312 | It is possible to run multiple instances of the spindown timer script with 313 | independent `TIMEOUT` values for different drives simultaneously. Just make sure 314 | that the sets of monitored drives are distinct, i.e., each drive is managed by 315 | only one instance of the spindown timer script. 316 | 317 | In the following example, all drives except `ada0` and `ada1` are spun down 318 | after being idle for 3600 seconds. 
The drives `ada0` and `ada1` are instead 319 | already spun down after 600 seconds of being idle: 320 | 321 | ```bash 322 | ./spindown_timer.sh -t 3600 -i ada0 -i ada1 # Automatic drive detection 323 | ./spindown_timer.sh -m -t 600 -i ada0 -i ada1 # Manual mode 324 | ``` 325 | 326 | Another example is to operate on ZFS pool basis by default but spin down a set 327 | of additional disks that are not part of any ZFS pool yet (e.g., for hot 328 | standby): 329 | 330 | ```bash 331 | ./spindown_timer.sh -u zpool -i freenas-boot -i ssd # Spindown all ZFS pools except 'freenas-boot' and 'ssd' 332 | ./spindown_timer.sh -u disk -m -i ada8 -i ada9 # Additionally spin down disk drives 'ada8' and 'ada9' that are not part of any ZFS pool yet 333 | ``` 334 | 335 | To start all required spindown timer instances you can simply create multiple 336 | `Post Init Scripts`, as described above in the Section [Automatic start at 337 | boot](#automatic-start-at-boot). 338 | 339 | 340 | ### One shot mode [-o] 341 | 342 | In one shot mode, the script performs exactly one single I/O poll interval, then 343 | immediately spins down drives that were idle for the last `POLL_TIME` seconds, 344 | and exits. No continuous monitoring is performed in this mode. 345 | 346 | This option ignores any specified `TIMEOUT` value. It can be useful, if you want 347 | to invoke the script via `cron` to spin down drives at only specific times of the day. 348 | 349 | Notice: Make sure to have **only one instance of the spindown script running** 350 | for one set of disks at the same time. Otherwise, the script instances could 351 | interfere with each other and cause unexpected spin ups/downs. In other words: 352 | Make sure that your cron job triggers not faster than your `POLL_TIME` value. 
353 | 354 | 355 | ### Automatic system shutdown [-s TIMEOUT] 356 | 357 | When a timeout is given via the `-s` argument, the system will be shut down by 358 | the script if all monitored drives were idle for the specified number of 359 | seconds. This feature can be used to automatically shut down a system that might 360 | be woken via wake-on-LAN (WOL) later on. 361 | 362 | Setting `TIMEOUT` to 0 results in no shutdown. 363 | 364 | 365 | ## Tested TrueNAS / FreeNAS versions 366 | 367 | This script was successfully tested on the following OS versions: 368 | 369 | ### TrueNAS (Core) 370 | * `TrueNAS-13.0-U5.1 (Core)` 371 | * `TrueNAS-13.0-U4 (Core)` 372 | * `TrueNAS-13.0-U3.1 (Core)` 373 | * `TrueNAS-12.0-U8 (Core)` 374 | * `TrueNAS-12.0-U7 (Core)` 375 | * `TrueNAS-12.0-U6.1 (Core)` 376 | * `TrueNAS-12.0-U6 (Core)` 377 | * `TrueNAS-12.0-U5.1 (Core)` 378 | * `TrueNAS-12.0-U5 (Core)` 379 | * `TrueNAS-12.0-U3.1 (Core)` 380 | * `TrueNAS-12.0-U1.1 (Core)` 381 | * `TrueNAS-12.0-U1 (Core)` 382 | * `TrueNAS-12.0 (Core)` 383 | * `FreeNAS-11.3-U5` 384 | * `FreeNAS-11.3-U4.1` 385 | * `FreeNAS-11.3-U3.2` 386 | * `FreeNAS-11.3-U3.1` 387 | * `FreeNAS-11.3` 388 | * `FreeNAS-11.2-U7` 389 | * `FreeNAS-11.2-U4.1` 390 | 391 | ### TrueNAS SCALE 392 | * `TrueNAS SCALE 22.12.0` 393 | * `TrueNAS SCALE 22.02.3` 394 | 395 | _Intermediate OS versions not listed here have not been explicitly tested, but 396 | the script will most likely be compatible._ 397 | 398 | 399 | ## Warning 400 | 401 | Heavily spinning disk drives up and down increases disk wear. Before deploying 402 | this script, consider carefully which of your drives are frequently accessed and 403 | should therefore not be aggressively spun down. A good rule of thumb is to keep 404 | disk spin-ups below 5 per 24 hours. You can keep an eye on your drives 405 | `Load_Cycle_Count` and `Start_Stop_Count` S.M.A.R.T values to monitor the number 406 | of performed spin-ups. 
407 | 408 | **Please do not spin down your drives in an enterprise environment. Only 409 | consider using this technique with small NAS setups for home use, which idle 410 | most time of the day, and select a timeout value appropriate to your usage 411 | behavior.** 412 | 413 | Another typical use case is spinning down drives that are only used once a day 414 | (e.g., for mirroring of files or backups). 415 | 416 | 417 | ## Bug reports and contributions 418 | 419 | Bug report and contributions are very welcome! Feel free to open a new issue or 420 | submit a merge request :) 421 | 422 | 423 | ## Attributions 424 | 425 | The script is heavily inspired by: 426 | [https://serverfault.com/a/969252](https://serverfault.com/a/969252) 427 | 428 | 429 | ## Support 430 | 431 | My work helped you in some way, or you just like it? Awesome! 432 | 433 | If you want to support me, you can consider buying me a coffee/tea/mate. Thank 434 | You! <3 435 | 436 | 437 | Donate with PayPal 438 | 439 | 440 | [![ko-fi.com/ngandrass](https://www.ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/A0A3XX87) 441 | 442 | -------------------------------------------------------------------------------- /delayed_start.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # ################################################## 4 | # Waits until the script file $1 becomes available and starts 5 | # it with the given additional arguments ($2, $3, $4, ...) 
6 | # 7 | # See: https://github.com/ngandrass/truenas-spindown-timer 8 | # 9 | # 10 | # MIT License 11 | # 12 | # Copyright (c) 2022 Niels Gandraß 13 | # 14 | # Permission is hereby granted, free of charge, to any person obtaining a copy 15 | # of this software and associated documentation files (the "Software"), to deal 16 | # in the Software without restriction, including without limitation the rights 17 | # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 18 | # copies of the Software, and to permit persons to whom the Software is 19 | # furnished to do so, subject to the following conditions: 20 | # 21 | # The above copyright notice and this permission notice shall be included in all 22 | # copies or substantial portions of the Software. 23 | # 24 | # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 25 | # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 26 | # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 27 | # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 28 | # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 29 | # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 30 | # SOFTWARE. 
31 | # ################################################## 32 | 33 | CHECK_INTERVAL=60 # Interval at which to check if script became available 34 | 35 | SPINDOWN_TIMER_SCRIPT="$1"; shift 36 | SPINDOWN_TIMER_ARGS="$@" 37 | 38 | while true; do 39 | if [ -f "${SPINDOWN_TIMER_SCRIPT}" ]; then 40 | ${SPINDOWN_TIMER_SCRIPT} ${SPINDOWN_TIMER_ARGS} 41 | break 42 | else 43 | sleep ${CHECK_INTERVAL} 44 | fi 45 | done 46 | -------------------------------------------------------------------------------- /screenshots/disk-spindown-config.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ngandrass/truenas-spindown-timer/9b46e2de4b06184eb048768d2db6eccedf29f9b5/screenshots/disk-spindown-config.png -------------------------------------------------------------------------------- /screenshots/system-dataset.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ngandrass/truenas-spindown-timer/9b46e2de4b06184eb048768d2db6eccedf29f9b5/screenshots/system-dataset.png -------------------------------------------------------------------------------- /screenshots/task-delayed-oneliner.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ngandrass/truenas-spindown-timer/9b46e2de4b06184eb048768d2db6eccedf29f9b5/screenshots/task-delayed-oneliner.png -------------------------------------------------------------------------------- /screenshots/task.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ngandrass/truenas-spindown-timer/9b46e2de4b06184eb048768d2db6eccedf29f9b5/screenshots/task.png -------------------------------------------------------------------------------- /spindown_timer.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # 
################################################## 4 | # TrueNAS HDD Spindown Timer 5 | # Monitors drive I/O and forces HDD spindown after a given idle period. 6 | # 7 | # Version: 2.4.0 8 | # 9 | # See: https://github.com/ngandrass/truenas-spindown-timer 10 | # 11 | # 12 | # MIT License 13 | # 14 | # Copyright (c) 2025 Niels Gandraß 15 | # 16 | # Permission is hereby granted, free of charge, to any person obtaining a copy 17 | # of this software and associated documentation files (the "Software"), to deal 18 | # in the Software without restriction, including without limitation the rights 19 | # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 20 | # copies of the Software, and to permit persons to whom the Software is 21 | # furnished to do so, subject to the following conditions: 22 | # 23 | # The above copyright notice and this permission notice shall be included in all 24 | # copies or substantial portions of the Software. 25 | # 26 | # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 27 | # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 28 | # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 29 | # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 30 | # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 31 | # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 32 | # SOFTWARE. 
33 | # ################################################## 34 | 35 | VERSION=2.4.0 36 | TIMEOUT=3600 # Default timeout before considering a drive as idle 37 | POLL_TIME=600 # Default time to wait during a single iostat call 38 | IGNORED_DRIVES="" # Default list of drives that are never spun down 39 | MANUAL_MODE=0 # Default manual mode setting 40 | ONESHOT_MODE=0 # Default for one shot mode setting 41 | CHECK_MODE=0 # Default check mode setting 42 | QUIET=0 # Default quiet mode setting 43 | VERBOSE=0 # Default verbosity level 44 | LOG_TO_SYSLOG=0 # Default for logging target (stdout/stderr or syslog) 45 | DRYRUN=0 # Default for dryrun option 46 | SHUTDOWN_TIMEOUT=0 # Default shutdown timeout (0 == no shutdown) 47 | declare -A DRIVES # Associative array for detected drives 48 | declare -A ZFSPOOLS # Array for monitored ZFS pools 49 | declare -A DRIVES_BY_POOLS # Associative array mapping of pool names to list of disk identifiers (e.g. poolname => "ada0 ada1 ada2") 50 | declare -A DRIVEID_TO_DEV # Associative array with the drive id (e.g. GPTID) to a device identifier 51 | HOST_PLATFORM= # Detected type of the host os (FreeBSD for TrueNAS CORE or Linux for TrueNAS SCALE) 52 | DRIVEID_TYPE= # Default for type used for drive IDs ('gptid' (CORE) or 'partuuid' (SCALE)) 53 | OPERATION_MODE=disk # Default operation mode (disk or zpool) 54 | DISK_CTRL_TOOL= # Disk control tool to use (camcontrol, hdparm, or smartctl) 55 | 56 | ## 57 | # Prints the help/usage message 58 | ## 59 | function print_usage() { 60 | cat << EOF 61 | Usage: 62 | $0 [-h] [-q] [-v] [-l] [-d] [-o] [-c] [-m] [-u ] [-t ] [-p ] [-i ] [-s ] [-x ] 63 | 64 | Monitors drive I/O and forces HDD spindown after a given idle period. 65 | Resistant to S.M.A.R.T. reads. 66 | 67 | Operation is supported on either drive level (MODE = disk) with plain device 68 | identifiers or zpool level (MODE = zpool) with zfs pool names. See -u for more 69 | information. 
A drive is considered idle and gets spun down if there have been no 70 | I/O operations on it for at least TIMEOUT seconds. I/O requests are detected 71 | within multiple intervals with a length of POLL_TIME seconds. Detected reads or 72 | writes reset the drive's timer back to TIMEOUT. 73 | 74 | Options: 75 | -t TIMEOUT : Total spindown delay. Number of seconds a drive has to 76 | experience no I/O activity before it is spun down (default: 3600). 77 | -p POLL_TIME : I/O poll interval. Number of seconds to wait for I/O during a 78 | single monitoring period (default: 600). 79 | -s TIMEOUT : Shutdown timeout. If given and no drive is active for TIMEOUT 80 | seconds, the system will be shut down. 81 | -u MODE : Operation mode (default: disk). 82 | If set to 'disk', the script operates with disk identifiers 83 | (e.g. ada0) for all CLI arguments and monitors I/O using 84 | iostat directly. 85 | If set to 'zpool', the script operates with ZFS pool names 86 | (e.g. zfsdata) for all CLI arguments and monitors I/O using 87 | zpool iostat. 88 | -i DRIVE : In automatic drive detection mode (default): 89 | Ignores the given drive or zfs pool. 90 | In manual mode [-m]: 91 | Only monitor the specified drives or zfs pools. Multiple 92 | drives or zfs pools can be given by repeating the -i option. 93 | -m : Manual drive detection mode. If set, automatic drive detection 94 | is disabled. 95 | CAUTION: This inverts the -i option, which can then be used to 96 | manually supply drives or zfs pools to monitor. All other drives 97 | or zfs pools will be ignored. 98 | -o : One shot mode. If set, the script performs exactly one I/O poll 99 | interval, then immediately spins down drives that were idle for 100 | the last <POLL_TIME> seconds, and exits. This option ignores 101 | <TIMEOUT>. It can be useful if you want to invoke the script 102 | via cron. 103 | -c : Check mode. Outputs drive power state after each POLL_TIME 104 | seconds. 105 | -q : Quiet mode. Outputs are suppressed if set.
106 | -v : Verbose mode. Prints additional information during execution. 107 | -l : Syslog logging. If set, all output is logged to syslog instead 108 | of stdout/stderr. 109 | -d : Dry run. No actual spindown is performed. 110 | -h : Print this help message. 111 | -x TOOL : Forces use of a specific tool for disk control. 112 | Supported tools are: "camcontrol", "hdparm", and "smartctl". 113 | If not specified, the first available tool (from left to right) 114 | will be automatically selected. 115 | 116 | Example usage: 117 | $0 118 | $0 -q -t 3600 -p 600 -i ada0 -i ada1 119 | $0 -q -m -i ada6 -i ada7 -i da0 120 | $0 -u zpool -i freenas-boot 121 | EOF 122 | } 123 | 124 | ## 125 | # Writes argument $1 to stdout/syslog if $QUIET is not set 126 | # 127 | # Arguments: 128 | # $1 Message to write to stdout/syslog 129 | ## 130 | function log() { 131 | if [[ $QUIET -eq 0 ]]; then 132 | if [[ $LOG_TO_SYSLOG -eq 1 ]]; then 133 | echo "$1" | logger -i -t "spindown_timer" 134 | else 135 | echo "[$(date '+%F %T')] $1" 136 | fi 137 | fi 138 | } 139 | 140 | ## 141 | # Writes argument $1 to stdout/syslog if $VERBOSE is set and $QUIET is not set 142 | # 143 | # Arguments: 144 | # $1 Message to write to stdout/syslog 145 | ## 146 | function log_verbose() { 147 | if [[ $VERBOSE -eq 1 ]]; then 148 | log "$1" 149 | fi 150 | } 151 | 152 | ## 153 | # Writes argument $1 to stderr/syslog. Ignores $QUIET.
154 | # 155 | # Arguments: 156 | # $1 Message to write to stderr/syslog 157 | ## 158 | function log_error() { 159 | if [[ $LOG_TO_SYSLOG -eq 1 ]]; then 160 | echo "[ERROR]: $1" | logger -i -t "spindown_timer" 161 | else 162 | >&2 echo "[$(date '+%F %T')] [ERROR]: $1" 163 | fi 164 | } 165 | 166 | ## 167 | # Detects the host platform (FreeBSD (TrueNAS CORE) or Linux (TrueNAS SCALE)) 168 | ## 169 | function detect_host_platform() { 170 | if [[ "$(uname)" == "Linux" ]]; then 171 | HOST_PLATFORM=Linux 172 | elif [[ "$(uname)" == "FreeBSD" ]]; then 173 | HOST_PLATFORM=FreeBSD 174 | else 175 | log_error "Unsupported host OS type: $(uname). Assuming Linux for now ..." 176 | HOST_PLATFORM=Linux 177 | return 178 | fi 179 | 180 | log_verbose "Detected host OS type: $HOST_PLATFORM" 181 | } 182 | 183 | ## 184 | # Determines which disk control tool is available. This 185 | # differentiates between TrueNAS CORE and TrueNAS SCALE. 186 | # 187 | # Return: Command to use to control disks 188 | # 189 | ## 190 | detect_disk_ctrl_tool() { 191 | local SUPPORTED_DISK_CTRL_TOOLS 192 | SUPPORTED_DISK_CTRL_TOOLS=("camcontrol" "hdparm" "smartctl") 193 | 194 | # If a specific tool is given by the user (via -x), validate it 195 | if [[ " ${SUPPORTED_DISK_CTRL_TOOLS[@]} " =~ " ${DISK_CTRL_TOOL} " ]]; then 196 | # Check if the tool is available on the system 197 | if which "$DISK_CTRL_TOOL" &> /dev/null; then 198 | echo "$DISK_CTRL_TOOL" 199 | return 200 | else 201 | log_error "$DISK_CTRL_TOOL is not installed or not found."
202 | return 203 | fi 204 | fi 205 | 206 | # Do not perform autodetection if the user explicitly specified a tool that is not available 207 | if [[ -n $DISK_CTRL_TOOL ]]; then 208 | log_error "Unsupported disk control tool: $DISK_CTRL_TOOL" 209 | return 210 | fi 211 | 212 | # Auto-detect available tools if no specific tool was given by the user 213 | for tool in "${SUPPORTED_DISK_CTRL_TOOLS[@]}"; do 214 | if which "$tool" &> /dev/null; then 215 | # Return the first available tool 216 | echo "$tool" 217 | return 218 | fi 219 | done 220 | 221 | log_error "No supported disk control tool found." 222 | return 223 | } 224 | 225 | ## 226 | # Detects which type of drive IDs are used. 227 | # CORE uses glabel GPTIDs, SCALE uses partuuids 228 | ## 229 | function detect_driveid_type() { 230 | if [[ -n $(which glabel) ]]; then 231 | DRIVEID_TYPE=gptid 232 | elif [[ -d "/dev/disk/by-partuuid/" ]]; then 233 | DRIVEID_TYPE=partuuid 234 | else 235 | log_error "Cannot detect drive id type. Exiting..." 236 | exit 1 237 | fi 238 | 239 | log_verbose "Detected drive id type: $DRIVEID_TYPE" 240 | } 241 | 242 | ## 243 | # Populates the DRIVEID_TO_DEV associative array. Drive IDs are used as keys. 244 | # Must be called after detect_driveid_type() 245 | ## 246 | function populate_driveid_to_dev_array() { 247 | # Create mapping 248 | case $DRIVEID_TYPE in 249 | "gptid") 250 | # glabel present. Index by GPTID (CORE) 251 | log_verbose "Creating disk to dev mapping using: glabel" 252 | while read -r row; do 253 | local gptid=$(echo "$row" | cut -d ' ' -f1) 254 | local diskid=$(echo "$row" | cut -d ' ' -f3 | rev | cut -d 'p' -f2 | rev) 255 | 256 | if [[ "$gptid" = "gptid"* ]]; then 257 | DRIVEID_TO_DEV[$gptid]=$diskid 258 | fi 259 | done < <(glabel status | tail -n +2 | tr -s ' ') 260 | ;; 261 | "partuuid") 262 | # glabel absent.
Try to detect by partuuid (SCALE) 263 | log_verbose "Creating disk to dev mapping using: partuuid" 264 | while read -r row; do 265 | local partuuid=$(basename -- "${row}") 266 | local dev=$(basename -- "$(readlink -f "${row}")" | sed "s/[0-9]\+$//") 267 | DRIVEID_TO_DEV[$partuuid]=$dev 268 | done < <(find /dev/disk/by-partuuid/ -type l) 269 | ;; 270 | esac 271 | 272 | # Verbose logging 273 | if [ $VERBOSE -eq 1 ]; then 274 | log_verbose "Detected disk identifier to dev mappings:" 275 | for deviceid in "${!DRIVEID_TO_DEV[@]}"; do 276 | log_verbose "-> [$deviceid]=${DRIVEID_TO_DEV[$deviceid]}" 277 | done 278 | fi 279 | } 280 | 281 | ## 282 | # Registers a new drive in $DRIVES array and detects if it is an ATA or SCSI 283 | # drive. 284 | # 285 | # Arguments: 286 | # $1 Device identifier (e.g. ada0) 287 | ## 288 | function register_drive() { 289 | local drive="$1" 290 | if [ -z "$drive" ]; then 291 | log_error "Failed to register drive. Empty name received." 292 | return 1 293 | fi 294 | 295 | local DISK_IS_ATA 296 | case $DISK_CTRL_TOOL in 297 | "camcontrol") DISK_IS_ATA=$(camcontrol identify $drive |& grep -E "^protocol(.*)ATA");; 298 | "hdparm") DISK_IS_ATA=$(hdparm -I "/dev/$drive" |& grep -E "^ATA device");; 299 | "smartctl") DISK_IS_ATA=$(smartctl -i "/dev/$drive" |& grep -E "ATA V");; 300 | esac 301 | 302 | if [[ -n $DISK_IS_ATA ]]; then 303 | DRIVES[$drive]="ATA" 304 | else 305 | DRIVES[$drive]="SCSI" 306 | fi 307 | } 308 | 309 | ## 310 | # Detects all connected drives using plain iostat method and whether they are 311 | # ATA or SCSI drives. Drives listed in $IGNORED_DRIVES will be excluded. 312 | # 313 | # Note: This function populates the $DRIVES array directly.
314 | ## 315 | function detect_drives_disk() { 316 | local DRIVE_IDS 317 | 318 | # Detect relevant drive identifiers 319 | if [[ $MANUAL_MODE -eq 1 ]]; then 320 | # In manual mode the ignored drives become the explicitly monitored drives 321 | DRIVE_IDS=" ${IGNORED_DRIVES} " 322 | else 323 | DRIVE_IDS=`iostat -x | grep -E '^(ada|da|sd)' | awk '{printf $1 " "}'` 324 | DRIVE_IDS=" ${DRIVE_IDS} " # Space padding must be kept for pattern matching 325 | 326 | # Remove ignored drives 327 | for drive in ${IGNORED_DRIVES[@]}; do 328 | DRIVE_IDS=`sed "s/ ${drive} / /g" <<< ${DRIVE_IDS}` 329 | done 330 | fi 331 | 332 | # Detect protocol type (ATA or SCSI) for each drive and populate $DRIVES array 333 | for drive in ${DRIVE_IDS}; do 334 | register_drive "$drive" 335 | done 336 | } 337 | 338 | ## 339 | # Detects all connected drives using zpool list method and whether they are 340 | # ATA or SCSI drives. Drives listed in $IGNORED_DRIVES will be excluded. 341 | # 342 | # Note: This function populates the $DRIVES array directly. 343 | ## 344 | function detect_drives_zpool() { 345 | local DRIVE_IDS 346 | 347 | # Detect zfs pools 348 | if [[ $MANUAL_MODE -eq 1 ]]; then 349 | # Only use explicitly supplied pool names 350 | for poolname in $IGNORED_DRIVES; do 351 | ZFSPOOLS[${#ZFSPOOLS[@]}]="$poolname" 352 | log_verbose "Using zfs pool: $poolname" 353 | done 354 | else 355 | # Auto detect available pools 356 | local poolnames=$(zpool list -H -o name) 357 | 358 | # Remove ignored pools 359 | for ignored_pool in $IGNORED_DRIVES; do 360 | poolnames=${poolnames//$ignored_pool/} 361 | log_verbose "Ignoring zfs pool: $ignored_pool" 362 | done 363 | 364 | # Store remaining detected pools 365 | for poolname in $poolnames; do 366 | ZFSPOOLS[${#ZFSPOOLS[@]}]="$poolname" 367 | log_verbose "Detected zfs pool: $poolname" 368 | done 369 | fi 370 | 371 | # Index disks in detected pools 372 | for poolname in ${ZFSPOOLS[*]}; do 373 | local disks 374 | if !
disks=$(zpool list -H -v "$poolname"); then 375 | log_error "Failed to get information for zfs pool: $poolname. Are you sure it exists?" 376 | continue; 377 | fi 378 | 379 | log_verbose "Detecting disks in pool: $poolname" 380 | 381 | while read -r driveid; do 382 | # Remove invalid rows (Cannot be statically cut because of different pool geometries) 383 | case $DRIVEID_TYPE in 384 | "gptid") driveid=$(echo "$driveid" | grep -E "^gptid/.*$" | sed "s/^\(.*\)\.eli$/\1/") ;; 385 | "partuuid") driveid=$(echo "$driveid" | grep -E "^(\w+\-){2,}") ;; 386 | esac 387 | 388 | # Skip if current row is invalid after filtering above 389 | if [ -z "$driveid" ]; then 390 | continue 391 | fi 392 | 393 | # Skip nvme drives 394 | if [[ "${DRIVEID_TO_DEV[$driveid]}" == "nvme"* ]]; then 395 | log_verbose "-> Skipping NVMe drive: $driveid" 396 | continue 397 | fi 398 | 399 | log_verbose "-> Detected disk in pool $poolname: ${DRIVEID_TO_DEV[$driveid]} ($driveid)" 400 | register_drive "${DRIVEID_TO_DEV[$driveid]}" 401 | DRIVES_BY_POOLS[$poolname]="${DRIVES_BY_POOLS[$poolname]} ${DRIVEID_TO_DEV[$driveid]}" 402 | done < <(echo "$disks" | tr -s "\\t" " " | cut -d ' ' -f2) 403 | done 404 | } 405 | 406 | ## 407 | # Retrieves the list of identifiers (e.g. "ada0") for all monitored drives. 408 | # Drives listed in $IGNORED_DRIVES will be excluded. 409 | # 410 | # Note: Must be run after drive detection. 411 | ## 412 | function get_drives() { 413 | echo "${!DRIVES[@]}" 414 | } 415 | 416 | ## 417 | # Waits $1 seconds and returns a list of all drives that didn't 418 | # experience I/O operations during that period. 419 | # 420 | # Devices listed in $IGNORED_DRIVES will never get returned.
421 | # 422 | # Arguments: 423 | # $1 Seconds to listen for I/O before drives are considered idle 424 | ## 425 | function get_idle_drives() { 426 | # Wait for $1 seconds and get active drives 427 | local IOSTAT_OUTPUT 428 | local ACTIVE_DRIVES 429 | case $OPERATION_MODE in 430 | "disk") 431 | # Operation mode: disk. Detect IO using iostat 432 | IOSTAT_OUTPUT=$(iostat -x -z -d $1 2) 433 | case $HOST_PLATFORM in 434 | "FreeBSD") 435 | local CUT_OFFSET=$(grep -no "extended device statistics" <<< "$IOSTAT_OUTPUT" | tail -n1 | cut -d: -f1) 436 | CUT_OFFSET=$((CUT_OFFSET+2)) 437 | ;; 438 | "Linux") 439 | local CUT_OFFSET=$(grep -no "Device" <<< "$IOSTAT_OUTPUT" | tail -n1 | cut -d: -f1) 440 | CUT_OFFSET=$((CUT_OFFSET+1)) 441 | ;; 442 | esac 443 | ACTIVE_DRIVES=$(sed -n "${CUT_OFFSET},\$p" <<< "$IOSTAT_OUTPUT" | cut -d' ' -f1 | tr '\n' ' ') 444 | log_verbose "-> Active Drive(s): $ACTIVE_DRIVES" >&2 445 | ;; 446 | "zpool") 447 | # Operation mode: zpool. Detect IO using zpool iostat 448 | IOSTAT_OUTPUT=$(zpool iostat -H ${ZFSPOOLS[*]} $1 2) 449 | 450 | while read -r row; do 451 | local poolname=$(echo "$row" | cut -d ' ' -f1) 452 | local reads=$(echo "$row" | cut -d ' ' -f4) 453 | local writes=$(echo "$row" | cut -d ' ' -f5) 454 | 455 | if [ "$reads" != "0" ] || [ "$writes" != "0" ]; then 456 | ACTIVE_DRIVES="$ACTIVE_DRIVES ${DRIVES_BY_POOLS[$poolname]}" 457 | fi 458 | done < <(tail -n +$((${#ZFSPOOLS[@]}+1)) <<< "${IOSTAT_OUTPUT}" | tr -s "\\t" " ") 459 | ;; 460 | esac 461 | 462 | # Remove active drives from list to get idle drives 463 | local IDLE_DRIVES=" $(get_drives) " # Space padding must be kept for pattern matching 464 | for drive in ${ACTIVE_DRIVES}; do 465 | IDLE_DRIVES=`sed "s/ ${drive} / /g" <<< ${IDLE_DRIVES}` 466 | done 467 | 468 | echo ${IDLE_DRIVES} 469 | } 470 | 471 | ## 472 | # Checks if all non-ignored drives are idle.
473 | # 474 | # returns 0 if all drives are idle, 1 if at least one drive is spinning 475 | # 476 | # Arguments: 477 | # $1 list of idle drives as returned by get_idle_drives() 478 | ## 479 | function all_drives_are_idle() { 480 | local DRIVES=" $(get_drives) " 481 | 482 | for drive in ${DRIVES}; do 483 | if [[ ! $1 =~ $drive ]]; then 484 | return 1 485 | fi 486 | done 487 | 488 | return 0 489 | } 490 | 491 | ## 492 | # Determines whether the given drive $1 understands ATA commands 493 | # 494 | # Arguments: 495 | # $1 Device identifier of the drive 496 | ## 497 | function is_ata_drive() { 498 | if [[ ${DRIVES[$1]} == "ATA" ]]; then echo 1; else echo 0; fi 499 | } 500 | 501 | ## 502 | # Determines whether the given drive $1 is spinning 503 | # 504 | # Arguments: 505 | # $1 Device identifier of the drive 506 | ## 507 | function drive_is_spinning() { 508 | case $DISK_CTRL_TOOL in 509 | "camcontrol") 510 | # camcontrol differentiates between ATA and SCSI drives 511 | if [[ $(is_ata_drive $1) -eq 1 ]]; then 512 | if [[ -z $(camcontrol epc $1 -c status -P | grep 'Standby') ]]; then echo 1; else echo 0; fi 513 | else 514 | # Reads STANDBY values from the power condition mode page (0x1a). 
515 | # THIS IS EXPERIMENTAL AND UNTESTED due to the lack of SCSI drives :( 516 | # 517 | # See: /usr/share/misc/scsi_modes and the "SCSI Commands Reference Manual" 518 | if [[ -z $(camcontrol modepage $1 -m 0x1a |& grep -E "^STANDBY(.*)1") ]]; then echo 1; else echo 0; fi 519 | fi 520 | ;; 521 | "hdparm") 522 | # It is currently unknown if hdparm also needs to differentiate between ATA and SCSI drives 523 | if [[ -z $(hdparm -C "/dev/$1" | grep 'standby') ]]; then echo 1; else echo 0; fi 524 | ;; 525 | "smartctl") 526 | if [[ -z $(smartctl --nocheck standby -i "/dev/$1" | grep 'Device is in STANDBY mode') ]]; then echo 1; else echo 0; fi 527 | ;; 528 | esac 529 | } 530 | 531 | ## 532 | # Determines if all monitored drives are currently spun down 533 | ## 534 | function all_monitored_drives_are_spun_down() { 535 | for drive in "${!DRIVE_TIMEOUTS[@]}"; do 536 | if [[ $(drive_is_spinning "$drive") -eq 1 ]]; then 537 | echo 0 538 | return 539 | fi 540 | done 541 | 542 | echo 1 543 | return 544 | } 545 | 546 | ## 547 | # Prints the power state of all monitored drives 548 | ## 549 | function print_drive_power_states() { 550 | local powerstates="" 551 | 552 | for drive in $(get_drives); do 553 | powerstates="${powerstates} [$drive] => $(drive_is_spinning "$drive")" 554 | done 555 | 556 | log "Drive power states: ${powerstates:1}" 557 | } 558 | 559 | ## 560 | # Forces the spindown of the drive specified by parameter $1 561 | # 562 | # Arguments: 563 | # $1 Device identifier of the drive 564 | ## 565 | function spindown_drive() { 566 | if [[ $(drive_is_spinning $1) -eq 1 ]]; then 567 | if [[ $DRYRUN -eq 0 ]]; then 568 | case $DISK_CTRL_TOOL in 569 | "camcontrol") 570 | if [[ $(is_ata_drive $1) -eq 1 ]]; then 571 | # Spindown ATA drive 572 | camcontrol standby $1 573 | else 574 | # Spindown SCSI drive 575 | camcontrol stop $1 576 | fi 577 | ;; 578 | "hdparm") 579 | hdparm -q -y "/dev/$1" 580 | ;; 581 | "smartctl") 582 | smartctl --set=standby,now "/dev/$1" 583 | ;; 584
| esac 585 | 586 | log "Spun down idle drive: $1" 587 | else 588 | log "Would spin down idle drive: $1. No spindown was performed (dry run)." 589 | fi 590 | else 591 | log_verbose "Drive is already spun down: $1" 592 | fi 593 | } 594 | 595 | ## 596 | # Generates a list of all active timeouts 597 | ## 598 | function get_drive_timeouts() { 599 | echo -n "Drive timeouts: " 600 | for x in "${!DRIVE_TIMEOUTS[@]}"; do printf "[%s]=%s " "$x" "${DRIVE_TIMEOUTS[$x]}" ; done 601 | echo "" 602 | } 603 | 604 | ## 605 | # Main program loop 606 | ## 607 | function main() { 608 | log_verbose "Running HDD Spindown Timer version $VERSION" 609 | if [[ $DRYRUN -eq 1 ]]; then log "Performing a dry run..."; fi 610 | 611 | # Detect host platform and user 612 | detect_host_platform 613 | log_verbose "Running as user: $(whoami) (UID: $(id -u))" 614 | 615 | # Setup one shot mode, if selected 616 | if [[ $ONESHOT_MODE -eq 1 ]]; then 617 | TIMEOUT=$POLL_TIME 618 | log "Running in one shot mode... Notice: Timeout (-t) value will be ignored. Using poll time (-p) instead."; 619 | fi 620 | 621 | # Verify operation mode 622 | if [ "$OPERATION_MODE" != "disk" ] && [ "$OPERATION_MODE" != "zpool" ]; then 623 | log_error "Invalid operation mode: $OPERATION_MODE. Must be either 'disk' or 'zpool'." 624 | exit 1 625 | fi 626 | log_verbose "Operation mode: $OPERATION_MODE" 627 | 628 | # Determine disk control tool to use 629 | # (Differentiates between TrueNAS CORE and TrueNAS SCALE) 630 | DISK_CTRL_TOOL=$(detect_disk_ctrl_tool) 631 | if [[ -z $DISK_CTRL_TOOL ]]; then 632 | log_error "No applicable control tool found. Exiting..." 633 | exit 1 634 | fi 635 | log_verbose "Using disk control tool: ${DISK_CTRL_TOOL}" 636 | 637 | # Initially identify drives to monitor 638 | detect_driveid_type 639 | populate_driveid_to_dev_array 640 | detect_drives_$OPERATION_MODE 641 | 642 | if [[ ${#DRIVES[@]} -eq 0 ]]; then 643 | log_error "No drives to monitor detected. Exiting..."
644 | exit 1 645 | fi 646 | 647 | for drive in ${!DRIVES[@]}; do 648 | log_verbose "Detected drive ${drive} as ${DRIVES[$drive]} device" 649 | done 650 | 651 | log "Monitoring drives with a timeout of ${TIMEOUT} seconds: $(get_drives)" 652 | log "I/O check sample period: ${POLL_TIME} sec" 653 | 654 | if [ ${SHUTDOWN_TIMEOUT} -gt 0 ]; then 655 | log "System will be shut down after ${SHUTDOWN_TIMEOUT} seconds of inactivity" 656 | fi 657 | 658 | # Init timeout counters for all monitored drives 659 | declare -A DRIVE_TIMEOUTS 660 | for drive in $(get_drives); do 661 | DRIVE_TIMEOUTS[$drive]=${TIMEOUT} 662 | done 663 | log_verbose "$(get_drive_timeouts)" 664 | 665 | # Init shutdown counter 666 | SHUTDOWN_COUNTER=${SHUTDOWN_TIMEOUT} 667 | 668 | # Drive I/O monitoring loop 669 | while true; do 670 | if [ $CHECK_MODE -eq 1 ]; then 671 | print_drive_power_states 672 | fi 673 | 674 | if [[ $(all_monitored_drives_are_spun_down) -eq 1 ]]; then 675 | log_verbose "All monitored drives are already spun down, sleeping ${TIMEOUT} seconds ..." 676 | sleep ${TIMEOUT} 677 | continue 678 | fi 679 | 680 | local IDLE_DRIVES=$(get_idle_drives ${POLL_TIME}) 681 | 682 | # Update drive timeouts and spin down idle drives 683 | for drive in "${!DRIVE_TIMEOUTS[@]}"; do 684 | if [[ $IDLE_DRIVES =~ $drive ]]; then 685 | DRIVE_TIMEOUTS[$drive]=$((DRIVE_TIMEOUTS[$drive] - POLL_TIME)) 686 | 687 | if [[ ! ${DRIVE_TIMEOUTS[$drive]} -gt 0 ]]; then 688 | DRIVE_TIMEOUTS[$drive]=${TIMEOUT} 689 | spindown_drive ${drive} 690 | fi 691 | else 692 | DRIVE_TIMEOUTS[$drive]=${TIMEOUT} 693 | fi 694 | done 695 | 696 | # Handle shutdown timeout 697 | if [ ${SHUTDOWN_TIMEOUT} -gt 0 ]; then 698 | if all_drives_are_idle "${IDLE_DRIVES}"; then 699 | SHUTDOWN_COUNTER=$((SHUTDOWN_COUNTER - POLL_TIME)) 700 | if [[ ! 
${SHUTDOWN_COUNTER} -gt 0 ]]; then 701 | log_verbose "Shutting down system" 702 | case $HOST_PLATFORM in 703 | "FreeBSD") shutdown -p now ;; 704 | *) shutdown -h now ;; 705 | esac 706 | fi 707 | else 708 | SHUTDOWN_COUNTER=${SHUTDOWN_TIMEOUT} 709 | fi 710 | log_verbose "Shutdown timeout: ${SHUTDOWN_COUNTER}" 711 | fi 712 | 713 | # Handle one shot mode 714 | if [[ $ONESHOT_MODE -eq 1 ]]; then 715 | log_verbose "One shot mode: Exiting..." 716 | exit 0 717 | fi 718 | 719 | # Log updated drive timeouts 720 | log_verbose "$(get_drive_timeouts)" 721 | done 722 | } 723 | 724 | # Parse arguments 725 | while getopts ":hqvdlmoct:p:i:s:u:x:" opt; do 726 | case ${opt} in 727 | t ) TIMEOUT=${OPTARG} 728 | ;; 729 | p ) POLL_TIME=${OPTARG} 730 | ;; 731 | i ) IGNORED_DRIVES="$IGNORED_DRIVES ${OPTARG}" 732 | ;; 733 | s ) SHUTDOWN_TIMEOUT=${OPTARG} 734 | ;; 735 | o ) ONESHOT_MODE=1 736 | ;; 737 | c ) CHECK_MODE=1 738 | ;; 739 | q ) QUIET=1 740 | ;; 741 | v ) VERBOSE=1 742 | ;; 743 | l ) LOG_TO_SYSLOG=1 744 | ;; 745 | d ) DRYRUN=1 746 | ;; 747 | m ) MANUAL_MODE=1 748 | ;; 749 | u ) OPERATION_MODE=${OPTARG} 750 | ;; 751 | x ) DISK_CTRL_TOOL=${OPTARG} 752 | ;; 753 | h ) print_usage; exit 754 | ;; 755 | : ) print_usage; exit 756 | ;; 757 | \? ) print_usage; exit 758 | ;; 759 | esac 760 | done 761 | 762 | main # Start main program 763 | --------------------------------------------------------------------------------