├── .gitignore
├── LICENSE
├── README.md
├── images
│   ├── example_arc_1.png
│   ├── example_arc_2.png
│   ├── example_dataset_usage_1.png
│   ├── example_zfs_throughput.png
│   ├── macros.png
│   ├── trigger_prototypes_zpool.png
│   └── value_map_1.png
├── template
│   └── zol_template.xml
└── userparameters
├── ZoL_with_sudo.conf
└── ZoL_without_sudo.conf
/.gitignore:
--------------------------------------------------------------------------------
1 | *.swp
2 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2019 AceSlash
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Monitor ZFS on Linux on Zabbix
2 |
3 | This template is a modified version of the original work by pbergdolt, posted on the Zabbix forum: https://www.zabbix.com/forum/zabbix-cookbook/35336-zabbix-zfs-discovery-monitoring?t=43347 . The original home of this variant was https://share.zabbix.com/zfs-on-linux .
4 |
5 | I have maintained and modified this template over the years, across different versions of ZoL and on a large number of servers, so I'm pretty confident that it works ;)
6 |
7 | Thanks to external contributors, this template has been extended and is now more complete than ever. If you find a metric that you need but is missing, don't hesitate to open an issue or, even better, create a PR!
8 |
9 | Tested Zabbix server versions include 4.0, 4.4, 5.0 and 5.2. The template shipped here is in 4.0 format so it can be imported into all of those versions.
10 |
11 | This template will give you screens and graphs for memory usage, zpool usage and performance, dataset usage, etc. It includes triggers for low disk space (customizable via Zabbix's own macros), disk errors, etc.
12 |
13 | Examples of graphs:
14 | - Arc memory usage and hit rate:
15 | ![ZFS ARC memory usage and hit rate](images/example_arc_1.png)
16 | - Complete breakdown of META and DATA usage:
17 | ![ZFS ARC META and DATA breakdown](images/example_arc_2.png)
18 | - Dataset usage, with available space, and breakdown of used space with directly used space, space used by snapshots and space used by children:
19 | ![ZFS dataset usage](images/example_dataset_usage_1.png)
20 | - Zpool IO throughput:
21 | ![ZFS zpool IO throughput](images/example_zfs_throughput.png)
22 |
23 | # Supported OS and ZoL version
24 | Any Linux variant should work. Versions I have tested myself include:
25 | - Debian 8, 9, 10
26 | - Ubuntu 16.04, 18.04 and 20.04
27 | - CentOS 6 and 7
28 |
29 | Regarding the ZoL version, this template is intended for ZoL 0.7.0 or later, but it still works on the 0.6.x branch.
30 |
31 | # Installation on Zabbix server
32 |
33 | To use this template, follow these steps:
34 |
35 | ## Create the Value mapping "ZFS zpool scrub status"
36 | Go to:
37 | - Administration
38 | - General
39 | - Value mapping
40 |
41 | Then create a new value map named `ZFS zpool scrub status` with the following mappings:
42 |
43 | | Value | Mapped to |
44 | | ----- | --------- |
45 | | 0 | Scrub in progress |
46 | | 1 | No scrub in progress |
47 |
48 | ![ZFS zpool scrub status value map](images/value_map_1.png)
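
If you prefer to script this step, the same value map can also be created through the Zabbix API (`valuemap.create`). Below is a minimal sketch, assuming a hypothetical frontend URL and a session token previously obtained with `user.login`:

```
# Hypothetical values: adjust the URL and token to your setup.
ZABBIX_URL="https://zabbix.example.com/api_jsonrpc.php"
ZABBIX_AUTH="session-token-from-user.login"

# Create the "ZFS zpool scrub status" value map with its two mappings.
curl -s -X POST -H 'Content-Type: application/json-rpc' "$ZABBIX_URL" -d '{
  "jsonrpc": "2.0",
  "method": "valuemap.create",
  "params": {
    "name": "ZFS zpool scrub status",
    "mappings": [
      {"value": "0", "newvalue": "Scrub in progress"},
      {"value": "1", "newvalue": "No scrub in progress"}
    ]
  },
  "auth": "'"$ZABBIX_AUTH"'",
  "id": 1
}'
```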
49 |
50 | ## Import the template
51 | Import the template located in the `template` directory of this repository (`template/zol_template.xml`).
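
Alternatively, the import can be scripted with the `configuration.import` API method. This is only a rough sketch, assuming `jq` is installed, the repository is checked out locally, and the same hypothetical URL and session token as above:

```
ZABBIX_URL="https://zabbix.example.com/api_jsonrpc.php"
ZABBIX_AUTH="session-token-from-user.login"

# Build the JSON-RPC request with the XML embedded as a string, then send it.
jq -n --arg src "$(cat template/zol_template.xml)" --arg auth "$ZABBIX_AUTH" '{
  jsonrpc: "2.0",
  method: "configuration.import",
  params: {
    format: "xml",
    rules: {
      templates:      {createMissing: true, updateExisting: true},
      applications:   {createMissing: true},
      items:          {createMissing: true, updateExisting: true},
      discoveryRules: {createMissing: true, updateExisting: true},
      triggers:       {createMissing: true, updateExisting: true},
      graphs:         {createMissing: true, updateExisting: true}
    },
    source: $src
  },
  auth: $auth,
  id: 1
}' | curl -s -X POST -H 'Content-Type: application/json-rpc' "$ZABBIX_URL" -d @-
```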
52 |
53 | # Installation on the server you want to monitor
54 | ## Prerequisites
55 | The server needs to have some very basic tools to run the user parameters:
56 | - awk
57 | - cat
58 | - grep
59 | - sed
60 | - tail
61 |
62 | They are usually already installed, so you most likely have nothing to do here.
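
A quick way to check that they are all present:

```
# Print the path of each required tool; an empty result means it is missing.
for tool in awk cat grep sed tail; do
    printf '%-5s -> %s\n' "$tool" "$(command -v "$tool")"
done
```
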
63 | ## Add the userparameters file on the servers you want to monitor
64 |
65 | There are 2 different userparameters files in the "userparameters" directory of this repository.
66 |
67 | One uses sudo, so you must give the zabbix user the corresponding rights; the other doesn't use sudo.
68 |
69 | On recent ZFS on Linux versions (e.g. 0.7.0+), you don't need sudo to run `zpool list` or `zfs list`, so just install the file `ZoL_without_sudo.conf` and you are done.
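
For example, assuming the default include directory (see below for how to locate yours):

```
cp userparameters/ZoL_without_sudo.conf /etc/zabbix/zabbix_agentd.d/
```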
70 |
71 | For older ZFS on Linux versions (e.g. 0.6.x), you will need to add some sudo rights with the file `ZoL_with_sudo.conf`. On some distributions, ZoL already ships a file with all the necessary rights at `/etc/sudoers.d/zfs`, but its content is commented out; just remove the comments and any user will be able to list ZFS datasets and pools. For reference, here is the content of that file once the rules are uncommented:
72 | ```
73 | ## Allow read-only ZoL commands to be called through sudo
74 | ## without a password. Remove the first '#' column to enable.
75 | ##
76 | ## CAUTION: Any syntax error introduced here will break sudo.
77 | ##
78 | ## Cmnd alias specification
79 | Cmnd_Alias C_ZFS = \
80 | /sbin/zfs "", /sbin/zfs help *, \
81 | /sbin/zfs get, /sbin/zfs get *, \
82 | /sbin/zfs list, /sbin/zfs list *, \
83 | /sbin/zpool "", /sbin/zpool help *, \
84 | /sbin/zpool iostat, /sbin/zpool iostat *, \
85 | /sbin/zpool list, /sbin/zpool list *, \
86 | /sbin/zpool status, /sbin/zpool status *, \
87 | /sbin/zpool upgrade, /sbin/zpool upgrade -v
88 |
89 | ## allow any user to use basic read-only ZFS commands
90 | ALL ALL = (root) NOPASSWD: C_ZFS
91 | ```
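
Once the rules are in place, a quick sanity check (assuming the usual binary paths and that the agent runs as the `zabbix` user):

```
# A syntax error in a sudoers file would break sudo entirely, so validate first.
visudo -c

# This should print the pool list without prompting for a password.
sudo -u zabbix sudo -n /sbin/zpool list -H -o name
```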
92 | If you don't know where your "userparameters" directory is, it is usually `/etc/zabbix/zabbix_agentd.d`. If in doubt, look in your `zabbix_agentd.conf` file for the line beginning with `Include=`: it shows where it is.
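
For example, assuming the usual config path:

```
grep '^Include=' /etc/zabbix/zabbix_agentd.conf
# Typical output: Include=/etc/zabbix/zabbix_agentd.d/*.conf
```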
93 |
94 | ## Restart zabbix agent
95 | Once you have added the userparameters file, restart zabbix-agent so that it loads the new user parameters.
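
On systemd-based distributions this is typically:

```
systemctl restart zabbix-agent

# On older init-based systems (e.g. CentOS 6):
# service zabbix-agent restart

# Optional: test one of the new user parameters locally (the pool name is hypothetical).
zabbix_agentd -t 'zfs.zpool.health[rpool]'
```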
96 |
97 | # Customization of alert level by server
98 | This template includes macros to define when the "low disk space" type triggers will fire.
99 |
100 | By default, you will find them on the macro page of this template:
101 | ![Template macros](images/macros.png)
102 |
103 | If you change them here, they will apply to every host linked to this template, which may not be what you want. Prefer overriding the macros on specific hosts if needed.
104 |
105 | You can see how the macros are used by looking at the discovery rules, then "Trigger prototypes":
106 | ![Zpool trigger prototypes](images/trigger_prototypes_zpool.png)
107 |
108 | # Important note about Zabbix active items
109 |
110 | This template uses Zabbix items of type `Zabbix agent (active)` (= active items). By default, most templates use `Zabbix agent` items (= passive items).
111 |
112 | If you want, you can convert all the items to `Zabbix agent` and everything will still work, but you should really use active items because they are far more scalable. The official documentation doesn't really make this point clear (https://www.zabbix.com/documentation/4.0/manual/appendix/items/activepassive), but active items are optimized: the agent asks the server for the list of items the server wants, then sends their values in batches periodically.
113 |
114 | With passive items, on the other hand, the Zabbix server must establish a connection for each item, ask for it and wait for the answer: this results in higher CPU, memory and network consumption on both the server and the agent.
115 |
116 | To make active items work, you must ensure that you have a `ServerActive=your_zabbix_server_fqdn_or_ip` line in your agent config file (usually `/etc/zabbix/zabbix_agentd.conf`).
117 |
118 | You also need to configure the "Host name" in the Zabbix UI to match the output of the `hostname` command on the server (you can always adjust the "Visible name" in the Zabbix UI to anything you want), because the Zabbix agent sends this name to the server. It basically tells the server "Hello, I am $(hostname), which items do you need from me?", so if there is a mismatch, the server will most likely answer "I don't know you!" ;-)
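
Putting both requirements together, a minimal excerpt of `/etc/zabbix/zabbix_agentd.conf` could look like this (server and host names are examples):

```
# Where the agent fetches its list of active checks and sends the values.
ServerActive=zabbix.example.com

# Must match the "Host name" configured in the Zabbix UI.
# If omitted, the agent falls back to the system hostname (HostnameItem).
Hostname=myserver01
```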
119 |
120 | Beyond a certain point, depending on your hardware, you *will have to use active items*.
121 |
122 | An old but still relevant blog post about high-performance Zabbix is available at https://blog.zabbix.com/scalable-zabbix-lessons-on-hitting-9400-nvps/2615/ .
123 |
--------------------------------------------------------------------------------
/images/example_arc_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Cosium/zabbix_zfs-on-linux/c0e9b094fce9c5baf91f9c0837fdbc1e5fc97b5e/images/example_arc_1.png
--------------------------------------------------------------------------------
/images/example_arc_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Cosium/zabbix_zfs-on-linux/c0e9b094fce9c5baf91f9c0837fdbc1e5fc97b5e/images/example_arc_2.png
--------------------------------------------------------------------------------
/images/example_dataset_usage_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Cosium/zabbix_zfs-on-linux/c0e9b094fce9c5baf91f9c0837fdbc1e5fc97b5e/images/example_dataset_usage_1.png
--------------------------------------------------------------------------------
/images/example_zfs_throughput.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Cosium/zabbix_zfs-on-linux/c0e9b094fce9c5baf91f9c0837fdbc1e5fc97b5e/images/example_zfs_throughput.png
--------------------------------------------------------------------------------
/images/macros.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Cosium/zabbix_zfs-on-linux/c0e9b094fce9c5baf91f9c0837fdbc1e5fc97b5e/images/macros.png
--------------------------------------------------------------------------------
/images/trigger_prototypes_zpool.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Cosium/zabbix_zfs-on-linux/c0e9b094fce9c5baf91f9c0837fdbc1e5fc97b5e/images/trigger_prototypes_zpool.png
--------------------------------------------------------------------------------
/images/value_map_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Cosium/zabbix_zfs-on-linux/c0e9b094fce9c5baf91f9c0837fdbc1e5fc97b5e/images/value_map_1.png
--------------------------------------------------------------------------------
/template/zol_template.xml:
--------------------------------------------------------------------------------
[Zabbix 4.0 XML template export dated 2021-01-04T21:27:59Z. The XML markup did not survive flattening; the recoverable content of the template is summarized below.]

Template "ZFS on Linux" — "OpenZFS (formerly ZFS on Linux) template. Home of the project: https://github.com/Cosium/zabbix_zfs-on-linux".
Group: Templates. Applications: ZFS, ZFS ARC, ZFS dataset, ZFS vdev, ZFS zpool.

Items (Zabbix agent, active):
- OpenZFS version: vfs.file.contents[/sys/module/zfs/version], every 1h.
- ZFS ARC stats, every 1m: zfs.arcstats[arc_dnode_limit], zfs.arcstats[arc_meta_limit], zfs.arcstats[arc_meta_used] (arc_meta_used = hdr_size + metadata_size + dbuf_size + dnode_size + bonus_size), zfs.arcstats[bonus_size], zfs.arcstats[c_max] (ARC max size), zfs.arcstats[c_min] (ARC minimum size), zfs.arcstats[data_size], zfs.arcstats[dbuf_size], zfs.arcstats[dnode_size], zfs.arcstats[hdr_size], zfs.arcstats[hits], zfs.arcstats[metadata_size], zfs.arcstats[mfu_hits], zfs.arcstats[mfu_size], zfs.arcstats[misses], zfs.arcstats[mru_hits], zfs.arcstats[mru_size], zfs.arcstats[size] (ARC current size).
- ZFS module parameters, every 1h: zfs.get.param[zfs_arc_dnode_limit_percent], zfs.get.param[zfs_arc_meta_limit_percent].

Calculated items:
- zfs.arcstats_hit_ratio (ZFS ARC Cache Hit Ratio, %): 100*(last(zfs.arcstats[hits])/(last(zfs.arcstats[hits])+count(zfs.arcstats[hits],#1,0)+last(zfs.arcstats[misses])))
- zfs.arcstats_total_read (ZFS ARC total read): last(zfs.arcstats[hits])+last(zfs.arcstats[misses])

Discovery rules:
- Zfs Dataset discovery (zfs.fileset.discovery, every 30m). Item prototypes: zfs.get.compressratio[{#FILESETNAME}] and zfs.get.fsinfo[{#FILESETNAME},...] for available, referenced, used, usedbychildren, usedbydataset and usedbysnapshots. Trigger prototypes: more than {$ZFS_AVERAGE_ALERT}% / {$ZFS_HIGH_ALERT}% / {$ZFS_DISASTER_ALERT}% used on dataset {#FILESETNAME}, based on used/(available+used). Graph prototype: "ZFS dataset {#FILESETNAME} usage".
- Zfs Pool discovery (zfs.pool.discovery, every 1h). Item prototypes: vfs.file.contents[/proc/spl/kstat/zfs/{#POOLNAME}/io] (master item), zfs.get.fsinfo[{#POOLNAME},available], zfs.get.fsinfo[{#POOLNAME},used], zfs.zpool.health[{#POOLNAME}], zfs.zpool.scrub[{#POOLNAME}] (mapped with "ZFS zpool scrub status"), plus the dependent items zfs.zpool.iostat.nread / nwritten / reads / writes[{#POOLNAME}] extracted from the io kstat with a regex preprocessing step. Trigger prototypes: more than {$ZPOOL_AVERAGE_ALERT}% / {$ZPOOL_HIGH_ALERT}% / {$ZPOOL_DISASTER_ALERT}% used on zpool {#POOLNAME}, zpool scrubbing for more than 12h / 24h, zpool health different from ONLINE. Graph prototypes: "ZFS zpool {#POOLNAME} IOPS", "ZFS zpool {#POOLNAME} space usage", "ZFS zpool {#POOLNAME} throughput".
- Zfs vdev discovery (zfs.vdev.discovery, every 1h). Item prototypes: zfs.vdev.error_counter.cksum[{#VDEV}], zfs.vdev.error_counter.read[{#VDEV}], zfs.vdev.error_counter.write[{#VDEV}] and the calculated zfs.vdev.error_total[{#VDEV}] (sum of the three counters). Trigger prototype: vdev {#VDEV} has encountered errors (error total > 0). Graph prototype: "ZFS vdev {#VDEV} errors".

Template triggers:
- Version of OpenZFS is now {ITEM.VALUE} on {HOST.NAME} (information).
- ZFS ARC dnode size > 90% dnode max size on {HOST.NAME} (high).
- ZFS ARC meta size > {$ZFS_ARC_META_ALERT}% meta max size on {HOST.NAME} (high).

Graphs: ZFS ARC memory usage, ZFS ARC Cache Hit Ratio, ZFS ARC breakdown, ZFS ARC arc_meta_used breakdown. Screens: "ZFS ARC" and "ZFS zpools".

Macros: {$ZFS_ARC_META_ALERT}=90, {$ZFS_AVERAGE_ALERT}=90, {$ZFS_DISASTER_ALERT}=99, {$ZFS_HIGH_ALERT}=95, {$ZPOOL_AVERAGE_ALERT}=85, {$ZPOOL_DISASTER_ALERT}=99, {$ZPOOL_HIGH_ALERT}=90.

Value map "ZFS zpool scrub status": 0 → Scrub in progress, 1 → No scrub in progress.
--------------------------------------------------------------------------------
/userparameters/ZoL_with_sudo.conf:
--------------------------------------------------------------------------------
1 | # ZFS discovery and configuration
2 | # original template from pbergbolt (source = https://www.zabbix.com/forum/showthread.php?t=43347), modified by Slash
3 |
4 |
5 | # pool discovery
6 | UserParameter=zfs.pool.discovery,/usr/bin/sudo /sbin/zpool list -H -o name | sed -e '$ ! s/\(.*\)/{"{#POOLNAME}":"\1"},/' | sed -e '$ s/\(.*\)/{"{#POOLNAME}":"\1"}]}/' | sed -e '1 s/\(.*\)/{ \"data\":[\1/'
7 | # dataset discovery, called "fileset" in the zabbix template for legacy reasons
8 | UserParameter=zfs.fileset.discovery,/usr/bin/sudo /sbin/zfs list -H -o name | sed -e '$ ! s/\(.*\)/{"{#FILESETNAME}":"\1"},/' | sed -e '$ s/\(.*\)/{"{#FILESETNAME}":"\1"}]}/' | sed -e '1 s/\(.*\)/{ \"data\":[\1/'
9 | # vdev discovery
10 | UserParameter=zfs.vdev.discovery,/usr/bin/sudo /sbin/zpool list -Hv | grep '^[[:blank:]]' | egrep -v 'mirror|raidz' | awk '{print $1}' | sed -e '$ ! s/\(.*\)/{"{#VDEV}":"\1"},/' | sed -e '$ s/\(.*\)/{"{#VDEV}":"\1"}]}/' | sed -e '1 s/\(.*\)/{ \"data\":[\1/'
11 |
12 | # pool health
13 | UserParameter=zfs.zpool.health[*],/usr/bin/sudo /sbin/zpool list -H -o health $1
14 |
15 | # get any fs option
16 | UserParameter=zfs.get.fsinfo[*],/usr/bin/sudo /sbin/zfs get -o value -Hp $2 $1
17 |
18 | # compressratio needs special treatment because of the "x" at the end of the number
19 | UserParameter=zfs.get.compressratio[*],/usr/bin/sudo /sbin/zfs get -o value -Hp compressratio $1 | sed "s/x//"
20 |
21 | # memory used by ZFS: sum of the SPL slab allocator's statistics
22 | # "There are a few things not included in that, like the page cache used by mmap(). But you can expect it to be relatively accurate."
23 | UserParameter=zfs.memory.used,echo $(( `cat /proc/spl/kmem/slab | tail -n +3 | awk '{ print $3 }' | tr "\n" "+" | sed "s/$/0/"` ))
24 |
25 | # get any global zfs parameters
26 | UserParameter=zfs.get.param[*],cat /sys/module/zfs/parameters/$1
27 |
28 | # ARC stats from /proc/spl/kstat/zfs/arcstats
29 | UserParameter=zfs.arcstats[*],awk '/^$1/ {printf $$3;}' /proc/spl/kstat/zfs/arcstats
30 |
31 | # detect if a scrub is in progress, 0 = in progress, 1 = not in progress
32 | UserParameter=zfs.zpool.scrub[*],/usr/bin/sudo /sbin/zpool status $1 | grep "scrub in progress" > /dev/null ; echo $?
33 |
34 | # vdev state
35 | UserParameter=zfs.vdev.state[*],/usr/bin/sudo /sbin/zpool status | grep "$1" | awk '{ print $$2 }'
36 | # vdev READ error counter
37 | UserParameter=zfs.vdev.error_counter.read[*],/usr/bin/sudo /sbin/zpool status | grep "$1" | awk '{ print $$3 }' | numfmt --from=si
38 | # vdev WRITE error counter
39 | UserParameter=zfs.vdev.error_counter.write[*],/usr/bin/sudo /sbin/zpool status | grep "$1" | awk '{ print $$4 }' | numfmt --from=si
40 | # vdev CHECKSUM error counter
41 | UserParameter=zfs.vdev.error_counter.cksum[*],/usr/bin/sudo /sbin/zpool status | grep "$1" | awk '{ print $$5 }' | numfmt --from=si
42 |
--------------------------------------------------------------------------------
/userparameters/ZoL_without_sudo.conf:
--------------------------------------------------------------------------------
1 | # ZFS discovery and configuration
2 | # original template from pbergbolt (source = https://www.zabbix.com/forum/showthread.php?t=43347), modified by Slash
3 |
4 |
5 | # pool discovery
6 | UserParameter=zfs.pool.discovery,/sbin/zpool list -H -o name | sed -e '$ ! s/\(.*\)/{"{#POOLNAME}":"\1"},/' | sed -e '$ s/\(.*\)/{"{#POOLNAME}":"\1"}]}/' | sed -e '1 s/\(.*\)/{ \"data\":[\1/'
7 | # dataset discovery, called "fileset" in the zabbix template for legacy reasons
8 | UserParameter=zfs.fileset.discovery,/sbin/zfs list -H -o name | sed -e '$ ! s/\(.*\)/{"{#FILESETNAME}":"\1"},/' | sed -e '$ s/\(.*\)/{"{#FILESETNAME}":"\1"}]}/' | sed -e '1 s/\(.*\)/{ \"data\":[\1/'
9 | # vdev discovery
10 | UserParameter=zfs.vdev.discovery,/sbin/zpool list -Hv | grep '^[[:blank:]]' | egrep -v 'mirror|raidz' | awk '{print $1}' | sed -e '$ ! s/\(.*\)/{"{#VDEV}":"\1"},/' | sed -e '$ s/\(.*\)/{"{#VDEV}":"\1"}]}/' | sed -e '1 s/\(.*\)/{ \"data\":[\1/'
11 |
12 | # pool health
13 | UserParameter=zfs.zpool.health[*],/sbin/zpool list -H -o health $1
14 |
15 | # get any fs option
16 | UserParameter=zfs.get.fsinfo[*],/sbin/zfs get -o value -Hp $2 $1
17 |
18 | # compressratio needs special treatment because of the "x" at the end of the number
19 | UserParameter=zfs.get.compressratio[*],/sbin/zfs get -o value -Hp compressratio $1 | sed "s/x//"
20 |
21 | # memory used by ZFS: sum of the SPL slab allocator's statistics
22 | # "There are a few things not included in that, like the page cache used by mmap(). But you can expect it to be relatively accurate."
23 | UserParameter=zfs.memory.used,echo $(( `cat /proc/spl/kmem/slab | tail -n +3 | awk '{ print $3 }' | tr "\n" "+" | sed "s/$/0/"` ))
24 |
25 | # get any global zfs parameters
26 | UserParameter=zfs.get.param[*],cat /sys/module/zfs/parameters/$1
27 |
28 | # ARC stats from /proc/spl/kstat/zfs/arcstats
29 | UserParameter=zfs.arcstats[*],awk '/^$1/ {printf $$3;}' /proc/spl/kstat/zfs/arcstats
30 |
31 | # detect if a scrub is in progress, 0 = in progress, 1 = not in progress
32 | UserParameter=zfs.zpool.scrub[*],/sbin/zpool status $1 | grep "scrub in progress" > /dev/null ; echo $?
33 |
34 | # vdev state
35 | UserParameter=zfs.vdev.state[*],/sbin/zpool status | grep "$1" | awk '{ print $$2 }'
36 | # vdev READ error counter
37 | UserParameter=zfs.vdev.error_counter.read[*],/sbin/zpool status | grep "$1" | awk '{ print $$3 }' | numfmt --from=si
38 | # vdev WRITE error counter
39 | UserParameter=zfs.vdev.error_counter.write[*],/sbin/zpool status | grep "$1" | awk '{ print $$4 }' | numfmt --from=si
40 | # vdev CHECKSUM error counter
41 | UserParameter=zfs.vdev.error_counter.cksum[*],/sbin/zpool status | grep "$1" | awk '{ print $$5 }' | numfmt --from=si
42 |
--------------------------------------------------------------------------------