├── .gitignore
├── LICENSE
├── README.md
├── images
│   ├── example_arc_1.png
│   ├── example_arc_2.png
│   ├── example_dataset_usage_1.png
│   ├── example_zfs_throughput.png
│   ├── macros.png
│   ├── trigger_prototypes_zpool.png
│   └── value_map_1.png
├── template
│   └── zol_template.xml
└── userparameters
    ├── ZoL_with_sudo.conf
    └── ZoL_without_sudo.conf

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | *.swp
2 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2019 AceSlash
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Monitor ZFS on Linux on Zabbix
2 |
3 | This template is a modified version of the original work done by pbergdolt and posted on the Zabbix forum a while ago: https://www.zabbix.com/forum/zabbix-cookbook/35336-zabbix-zfs-discovery-monitoring?t=43347 . The original home of this variant was https://share.zabbix.com/zfs-on-linux .
4 |
5 | I have maintained and modified this template over the years, across different versions of ZoL and on a large number of servers, so I'm pretty confident that it works ;)
6 |
7 | Thanks to external contributors, this template has been extended and is now more complete than ever. However, if a metric you need is missing, don't hesitate to open a ticket or, even better, create a PR!
8 |
9 | Tested Zabbix server versions include 4.0, 4.4, 5.0 and 5.2. The template shipped here is in 4.0 format so that it can be imported into all of those versions.
10 |
11 | This template will give you screens and graphs for memory usage, zpool usage and performance, dataset usage, etc. It includes triggers for low disk space (customizable via Zabbix's own macros), disk errors, etc.
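If you are unsure which ZoL release a host is running (the compatibility notes below refer to ZoL versions), the loaded module exposes its version in sysfs; this is the same file the template's OpenZFS version check reads:

```
cat /sys/module/zfs/version   # prints e.g. "0.8.3" (sample output)
```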
12 |
13 | Example of graphs:
14 | - Arc memory usage and hit rate:
15 | ![arc1](images/example_arc_1.png)
16 | - Complete breakdown of META and DATA usage:
17 | ![arc2](images/example_arc_2.png)
18 | - Dataset usage, with available space and a breakdown of used space: directly used space, space used by snapshots and space used by children:
19 | ![dataset](images/example_dataset_usage_1.png)
20 | - Zpool IO throughput:
21 | ![throughput](images/example_zfs_throughput.png)
22 |
23 | # Supported OS and ZoL version
24 | Any Linux variant should work. Versions tested by myself include:
25 | - Debian 8, 9, 10
26 | - Ubuntu 16.04, 18.04 and 20.04
27 | - CentOS 6 and 7
28 |
29 | As for the ZoL version, this template is intended for ZoL 0.7.0 or later, but it still works on the 0.6.x branch.
30 |
31 | # Installation on Zabbix server
32 |
33 | To use this template, follow these steps:
34 |
35 | ## Create the Value mapping "ZFS zpool scrub status"
36 | Go to:
37 | - Administration
38 | - General
39 | - Value mapping
40 |
41 | Then create a new value map named `ZFS zpool scrub status` with the following mappings:
42 |
43 | | Value | Mapped to |
44 | | ----- | --------- |
45 | | 0 | Scrub in progress |
46 | | 1 | No scrub in progress |
47 |
48 | ![value_map](images/value_map_1.png)
49 |
50 | ## Import the template
51 | Import the template found in the "template" directory of this repository, or download it directly from this link: [template](template/zol_template.xml)
52 |
53 | # Installation on the server you want to monitor
54 | ## Prerequisites
55 | The server needs a few very basic tools to run the user parameters:
56 | - awk
57 | - cat
58 | - grep
59 | - sed
60 | - tail
61 |
62 | Usually they are already present, so there is nothing to install.
63 | ## Add the userparameters file on the servers you want to monitor
64 |
65 | There are two different userparameters files in the "userparameters" directory of this repository.
66 |
67 | One uses sudo, and thus requires giving the zabbix user the corresponding rights; the other doesn't use sudo.
68 |
69 | On recent ZFS on Linux versions (e.g. 0.7.0+), you don't need sudo to run `zpool list` or `zfs list`, so just install the file [ZoL_without_sudo.conf](userparameters/ZoL_without_sudo.conf) and you are done.
70 |
71 | For older ZFS on Linux versions (e.g. the 0.6.x branch), you will need to grant some sudo rights together with the file [ZoL_with_sudo.conf](userparameters/ZoL_with_sudo.conf). On some distributions, ZoL already ships a file with all the necessary rights at `/etc/sudoers.d/zfs`, but its content is commented out; just remove the comments and any user will be able to list ZFS datasets and pools. For convenience, here is the content of the file once uncommented:
72 | ```
73 | ## Allow read-only ZoL commands to be called through sudo
74 | ## without a password. Remove the first '#' column to enable.
75 | ##
76 | ## CAUTION: Any syntax error introduced here will break sudo.
77 | ##
78 | ## Cmnd alias specification
79 | Cmnd_Alias C_ZFS = \
80 | /sbin/zfs "", /sbin/zfs help *, \
81 | /sbin/zfs get, /sbin/zfs get *, \
82 | /sbin/zfs list, /sbin/zfs list *, \
83 | /sbin/zpool "", /sbin/zpool help *, \
84 | /sbin/zpool iostat, /sbin/zpool iostat *, \
85 | /sbin/zpool list, /sbin/zpool list *, \
86 | /sbin/zpool status, /sbin/zpool status *, \
87 | /sbin/zpool upgrade, /sbin/zpool upgrade -v
88 |
89 | ## allow any user to use basic read-only ZFS commands
90 | ALL ALL = (root) NOPASSWD: C_ZFS
91 | ```
92 | If you don't know where your "userparameters" directory is, it is usually the `/etc/zabbix/zabbix_agentd.d` folder. If in doubt, just look in your `zabbix_agentd.conf` file for the line beginning with `Include=`: it shows where it is.
93 |
94 | ## Restart zabbix agent
95 | Once you have added the userparameters file, restart zabbix-agent so that it loads the new userparameters.
96 |
97 | # Customization of alert level by server
98 | This template includes macros that define when the "low disk space" triggers will fire.
99 |
100 | By default, you will find them on the macro page of this template:
101 | ![macros](images/macros.png)
102 |
103 | If you change them here, they will apply to every host linked to this template, which may not be such a good idea. Prefer changing the macros on specific hosts if needed.
104 |
105 | You can see how the macros are used by looking at the discovery rules, then "Trigger prototypes":
106 | ![macros](images/trigger_prototypes_zpool.png)
107 |
108 | # Important note about Zabbix active items
109 |
110 | This template uses Zabbix items of type `Zabbix agent (active)` (= active items). By default, most templates use `Zabbix agent` items (= passive items).
111 |
112 | If you want, you can convert all the items to `Zabbix agent` and everything will work, but you should really use active items because they are far more scalable. The official documentation doesn't really make this point clear (https://www.zabbix.com/documentation/4.0/manual/appendix/items/activepassive), but active items are optimized: the agent asks the server for the list of items the server wants, then sends them in batches periodically.
113 |
114 | On the other hand, for passive items, the Zabbix server must establish a connection for each item, ask for it, then wait for the answer: this results in more CPU, memory and network consumption on both the server and the agent.
115 |
116 | To make an active item work, you must ensure that you have a `ServerActive=your_zabbix_server_fqdn_or_ip` line in your agent config file (usually `/etc/zabbix/zabbix_agentd.conf`).
117 |
118 | You also need to configure the "Host name" in the Zabbix UI to match the output of the `hostname` command on the server (you can always adjust the "Visible name" in the Zabbix UI to anything you want), because the zabbix agent sends this information to the zabbix server. It basically tells the server "Hello, I am $(hostname), which items do you need from me?", so if there is a mismatch, the server will most likely answer "I don't know you!" ;-)
119 |
120 | Beyond a certain point, depending on your hardware, you *will have to use active items*.
121 |
122 | An old but still relevant blog post about high-performance Zabbix is available at https://blog.zabbix.com/scalable-zabbix-lessons-on-hitting-9400-nvps/2615/ .
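As a minimal sketch of the two settings above (the values are placeholders, adapt them to your environment):

```
# /etc/zabbix/zabbix_agentd.conf (excerpt)
ServerActive=zabbix.example.com   # your Zabbix server FQDN or IP
Hostname=db01                     # must match the "Host name" field in the Zabbix UI
```

After restarting the agent, you can also check any single key locally with `zabbix_agentd -t 'zfs.zpool.health[tank]'` (here `tank` is a hypothetical pool name) before the server starts collecting.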
123 |
--------------------------------------------------------------------------------
/images/example_arc_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Cosium/zabbix_zfs-on-linux/c0e9b094fce9c5baf91f9c0837fdbc1e5fc97b5e/images/example_arc_1.png
--------------------------------------------------------------------------------
/images/example_arc_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Cosium/zabbix_zfs-on-linux/c0e9b094fce9c5baf91f9c0837fdbc1e5fc97b5e/images/example_arc_2.png
--------------------------------------------------------------------------------
/images/example_dataset_usage_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Cosium/zabbix_zfs-on-linux/c0e9b094fce9c5baf91f9c0837fdbc1e5fc97b5e/images/example_dataset_usage_1.png
--------------------------------------------------------------------------------
/images/example_zfs_throughput.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Cosium/zabbix_zfs-on-linux/c0e9b094fce9c5baf91f9c0837fdbc1e5fc97b5e/images/example_zfs_throughput.png
--------------------------------------------------------------------------------
/images/macros.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Cosium/zabbix_zfs-on-linux/c0e9b094fce9c5baf91f9c0837fdbc1e5fc97b5e/images/macros.png
--------------------------------------------------------------------------------
/images/trigger_prototypes_zpool.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Cosium/zabbix_zfs-on-linux/c0e9b094fce9c5baf91f9c0837fdbc1e5fc97b5e/images/trigger_prototypes_zpool.png
--------------------------------------------------------------------------------
/images/value_map_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Cosium/zabbix_zfs-on-linux/c0e9b094fce9c5baf91f9c0837fdbc1e5fc97b5e/images/value_map_1.png
--------------------------------------------------------------------------------
/template/zol_template.xml:
--------------------------------------------------------------------------------
[Zabbix 4.0 XML export, dated 2021-01-04T21:27:59Z, defining the "ZFS on Linux" template in the "Templates" group. The XML markup was stripped when this dump was generated and most of its nearly 4,000 lines are missing; only the following pieces are recoverable:
- triggers: "Version of OpenZFS is now {ITEM.VALUE} on {HOST.NAME}" on the expression {ZFS on Linux:vfs.file.contents[/sys/module/zfs/version].diff(0)}>0; "ZFS ARC dnode size > 90% dnode max size on {HOST.NAME}" on {ZFS on Linux:zfs.arcstats[dnode_size].last()}>({ZFS on Linux:zfs.arcstats[arc_dnode_limit].last()}*0.9); "ZFS ARC meta size > {$ZFS_ARC_META_ALERT}% meta max size on {HOST.NAME}" on {ZFS on Linux:zfs.arcstats[arc_meta_used].last()}>({ZFS on Linux:zfs.arcstats[arc_meta_limit].last()}*0.01*{$ZFS_ARC_META_ALERT})
- graphs: "ZFS ARC arc_meta_used breakdown", "ZFS ARC breakdown", "ZFS ARC Cache Hit Ratio" and "ZFS ARC memory usage", drawn from the zfs.arcstats[...] items (data_size, metadata_size, dnode_size, hdr_size, dbuf_size, bonus_size, size, c_min, c_max) and zfs.arcstats_hit_ratio
- value map "ZFS zpool scrub status": 0 = Scrub in progress, 1 = No scrub in progress
See template/zol_template.xml in the repository for the complete file.]
--------------------------------------------------------------------------------
/userparameters/ZoL_with_sudo.conf:
--------------------------------------------------------------------------------
1 | # ZFS discovery and configuration
2 | # original template from pbergdolt (source = https://www.zabbix.com/forum/showthread.php?t=43347), modified by Slash
3 |
4 |
5 | # pool discovery
6 | UserParameter=zfs.pool.discovery,/usr/bin/sudo /sbin/zpool list -H -o name | sed -e '$ ! s/\(.*\)/{"{#POOLNAME}":"\1"},/' | sed -e '$ s/\(.*\)/{"{#POOLNAME}":"\1"}]}/' | sed -e '1 s/\(.*\)/{ \"data\":[\1/'
s/\(.*\)/{"{#POOLNAME}":"\1"},/' | sed -e '$ s/\(.*\)/{"{#POOLNAME}":"\1"}]}/' | sed -e '1 s/\(.*\)/{ \"data\":[\1/' 7 | # dataset discovery, called "fileset" in the zabbix template for legacy reasons 8 | UserParameter=zfs.fileset.discovery,/usr/bin/sudo /sbin/zfs list -H -o name | sed -e '$ ! s/\(.*\)/{"{#FILESETNAME}":"\1"},/' | sed -e '$ s/\(.*\)/{"{#FILESETNAME}":"\1"}]}/' | sed -e '1 s/\(.*\)/{ \"data\":[\1/' 9 | # vdev discovery 10 | UserParameter=zfs.vdev.discovery,/usr/bin/sudo /sbin/zpool list -Hv | grep '^[[:blank:]]' | egrep -v 'mirror|raidz' | awk '{print $1}' | sed -e '$ ! s/\(.*\)/{"{#VDEV}":"\1"},/' | sed -e '$ s/\(.*\)/{"{#VDEV}":"\1"}]}/' | sed -e '1 s/\(.*\)/{ \"data\":[\1/' 11 | 12 | # pool health 13 | UserParameter=zfs.zpool.health[*],/usr/bin/sudo /sbin/zpool list -H -o health $1 14 | 15 | # get any fs option 16 | UserParameter=zfs.get.fsinfo[*],/usr/bin/sudo /sbin/zfs get -o value -Hp $2 $1 17 | 18 | # compressratio need special treatment because of the "x" at the end of the number 19 | UserParameter=zfs.get.compressratio[*],/usr/bin/sudo /sbin/zfs get -o value -Hp compressratio $1 | sed "s/x//" 20 | 21 | # memory used by ZFS: sum of the SPL slab allocator's statistics 22 | # "There are a few things not included in that, like the page cache used by mmap(). But you can expect it to be relatively accurate." 23 | UserParameter=zfs.memory.used,echo $(( `cat /proc/spl/kmem/slab | tail -n +3 | awk '{ print $3 }' | tr "\n" "+" | sed "s/$/0/"` )) 24 | 25 | # get any global zfs parameters 26 | UserParameter=zfs.get.param[*],cat /sys/module/zfs/parameters/$1 27 | 28 | # ARC stats from /proc/spl/kstat/zfs/arcstats 29 | UserParameter=zfs.arcstats[*],awk '/^$1/ {printf $$3;}' /proc/spl/kstat/zfs/arcstats 30 | 31 | # detect if a scrub is in progress, 0 = in progress, 1 = not in progress 32 | UserParameter=zfs.zpool.scrub[*],/usr/bin/sudo /sbin/zpool status $1 | grep "scrub in progress" > /dev/null ; echo $? 33 | 34 | # vdev state 35 | UserParameter=zfs.vdev.state[*],/usr/bin/sudo /sbin/zpool status | grep "$1" | awk '{ print $$2 }' 36 | # vdev READ error counter 37 | UserParameter=zfs.vdev.error_counter.read[*],/usr/bin/sudo /sbin/zpool status | grep "$1" | awk '{ print $$3 }' | numfmt --from=si 38 | # vdev WRITE error counter 39 | UserParameter=zfs.vdev.error_counter.write[*],/usr/bin/sudo /sbin/zpool status | grep "$1" | awk '{ print $$4 }' | numfmt --from=si 40 | # vdev CHECKSUM error counter 41 | UserParameter=zfs.vdev.error_counter.cksum[*],/usr/bin/sudo /sbin/zpool status | grep "$1" | awk '{ print $$5 }' | numfmt --from=si 42 | -------------------------------------------------------------------------------- /userparameters/ZoL_without_sudo.conf: -------------------------------------------------------------------------------- 1 | # ZFS discovery and configuration 2 | # original template from pbergbolt (source = https://www.zabbix.com/forum/showthread.php?t=43347), modified by Slash 3 | 4 | 5 | # pool discovery 6 | UserParameter=zfs.pool.discovery,/sbin/zpool list -H -o name | sed -e '$ ! s/\(.*\)/{"{#POOLNAME}":"\1"},/' | sed -e '$ s/\(.*\)/{"{#POOLNAME}":"\1"}]}/' | sed -e '1 s/\(.*\)/{ \"data\":[\1/' 7 | # dataset discovery, called "fileset" in the zabbix template for legacy reasons 8 | UserParameter=zfs.fileset.discovery,/sbin/zfs list -H -o name | sed -e '$ ! 
s/\(.*\)/{"{#FILESETNAME}":"\1"},/' | sed -e '$ s/\(.*\)/{"{#FILESETNAME}":"\1"}]}/' | sed -e '1 s/\(.*\)/{ \"data\":[\1/' 9 | # vdev discovery 10 | UserParameter=zfs.vdev.discovery,/sbin/zpool list -Hv | grep '^[[:blank:]]' | egrep -v 'mirror|raidz' | awk '{print $1}' | sed -e '$ ! s/\(.*\)/{"{#VDEV}":"\1"},/' | sed -e '$ s/\(.*\)/{"{#VDEV}":"\1"}]}/' | sed -e '1 s/\(.*\)/{ \"data\":[\1/' 11 | 12 | # pool health 13 | UserParameter=zfs.zpool.health[*],/sbin/zpool list -H -o health $1 14 | 15 | # get any fs option 16 | UserParameter=zfs.get.fsinfo[*],/sbin/zfs get -o value -Hp $2 $1 17 | 18 | # compressratio need special treatment because of the "x" at the end of the number 19 | UserParameter=zfs.get.compressratio[*],/sbin/zfs get -o value -Hp compressratio $1 | sed "s/x//" 20 | 21 | # memory used by ZFS: sum of the SPL slab allocator's statistics 22 | # "There are a few things not included in that, like the page cache used by mmap(). But you can expect it to be relatively accurate." 23 | UserParameter=zfs.memory.used,echo $(( `cat /proc/spl/kmem/slab | tail -n +3 | awk '{ print $3 }' | tr "\n" "+" | sed "s/$/0/"` )) 24 | 25 | # get any global zfs parameters 26 | UserParameter=zfs.get.param[*],cat /sys/module/zfs/parameters/$1 27 | 28 | # ARC stats from /proc/spl/kstat/zfs/arcstats 29 | UserParameter=zfs.arcstats[*],awk '/^$1/ {printf $$3;}' /proc/spl/kstat/zfs/arcstats 30 | 31 | # detect if a scrub is in progress, 0 = in progress, 1 = not in progress 32 | UserParameter=zfs.zpool.scrub[*],/sbin/zpool status $1 | grep "scrub in progress" > /dev/null ; echo $? 33 | 34 | # vdev state 35 | UserParameter=zfs.vdev.state[*],/sbin/zpool status | grep "$1" | awk '{ print $$2 }' 36 | # vdev READ error counter 37 | UserParameter=zfs.vdev.error_counter.read[*],/sbin/zpool status | grep "$1" | awk '{ print $$3 }' | numfmt --from=si 38 | # vdev WRITE error counter 39 | UserParameter=zfs.vdev.error_counter.write[*],/sbin/zpool status | grep "$1" | awk '{ print $$4 }' | numfmt --from=si 40 | # vdev CHECKSUM error counter 41 | UserParameter=zfs.vdev.error_counter.cksum[*],/sbin/zpool status | grep "$1" | awk '{ print $$5 }' | numfmt --from=si 42 | --------------------------------------------------------------------------------