
zabbix_zfs-on-linux's Introduction

Monitor ZFS on Linux on Zabbix

This template is a modified version of the original work done by pbergdolt and posted on the Zabbix forum a while ago: https://www.zabbix.com/forum/zabbix-cookbook/35336-zabbix-zfs-discovery-monitoring?t=43347 . The original home of this variant was https://share.zabbix.com/zfs-on-linux .

I have maintained and modified this template over the years, across different versions of ZoL and on a large number of servers, so I'm pretty confident that it works ;)

Thanks to external contributors, this template has been extended and is now more complete than ever. However, if you find that a metric you need is missing, don't hesitate to open a ticket or, even better, to create a PR!

Tested Zabbix server versions include 4.0, 4.4, 5.0 and 5.2. The template shipped here is in 4.0 format so that it can be imported into all of those versions.

This template gives you screens and graphs for memory usage, zpool usage and performance, dataset usage, etc. It includes triggers for low disk space (customizable via Zabbix's own macros), disk errors, etc.

Examples of graphs:

  • ARC memory usage and hit rate
  • Complete breakdown of META and DATA usage
  • Dataset usage, with available space and a breakdown of used space: space used directly, space used by snapshots and space used by children
  • Zpool IO throughput

Supported OS and ZoL version

Any Linux variant should work; the versions I have tested myself include:

  • Debian 8, 9, 10
  • Ubuntu 16.04, 18.04 and 20.04
  • CentOS 6 and 7

Regarding the ZoL version, this template is intended for ZoL 0.7.0 or later, but it still works on the 0.6.x branch.

Installation on Zabbix server

To use this template, follow these steps:

Create the Value mapping "ZFS zpool scrub status"

Go to:

  • Administration
  • General
  • Value mapping

Then create a new value map named ZFS zpool scrub status with the following mappings:

Value   Mapped to
0       Scrub in progress
1       No scrub in progress

Import the template

Import the template that is in the "template" directory of this repository or download it directly with this link: template

Installation on the server you want to monitor

Prerequisites

The server needs to have some very basic tools to run the user parameters:

  • awk
  • cat
  • grep
  • sed
  • tail

Usually they are already installed, so nothing extra is needed.
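A quick way to verify that they are all present (a minimal sketch; adjust the list if needed):

for cmd in awk cat grep sed tail; do
    command -v "$cmd" >/dev/null || echo "missing: $cmd"
done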

Add the userparameters file on the servers you want to monitor

There are 2 different userparameters files in the "userparameters" directory of this repository.

One uses sudo, so you must give the zabbix user the corresponding rights; the other doesn't use sudo.

On recent ZFS on Linux versions (e.g. 0.7.0+), you don't need sudo to run zpool list or zfs list, so just install the file ZoL_without_sudo.conf and you are done.
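For example, assuming you are in a clone of this repository and your agent uses the default include directory (see the note about Include= below if yours differs):

cp userparameters/ZoL_without_sudo.conf /etc/zabbix/zabbix_agentd.d/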

For older ZFS on Linux versions (e.g. 0.6.x), you will need to add some sudo rights with the file ZoL_with_sudo.conf. On some distributions, ZoL already ships a file with all the necessary rights at /etc/sudoers.d/zfs, but its content is commented out: just remove the comments and any user will be able to list ZFS datasets and pools. For convenience, here is the content of that file with the rules uncommented:

## Allow read-only ZoL commands to be called through sudo
## without a password. Remove the first '#' column to enable.
##
## CAUTION: Any syntax error introduced here will break sudo.
##
## Cmnd alias specification
Cmnd_Alias C_ZFS = \
  /sbin/zfs "", /sbin/zfs help *, \
  /sbin/zfs get, /sbin/zfs get *, \
  /sbin/zfs list, /sbin/zfs list *, \
  /sbin/zpool "", /sbin/zpool help *, \
  /sbin/zpool iostat, /sbin/zpool iostat *, \
  /sbin/zpool list, /sbin/zpool list *, \
  /sbin/zpool status, /sbin/zpool status *, \
  /sbin/zpool upgrade, /sbin/zpool upgrade -v

## allow any user to use basic read-only ZFS commands
ALL ALL = (root) NOPASSWD: C_ZFS
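If you go this route, it is worth checking the sudoers syntax and testing the access before relying on it. A minimal sketch, assuming the agent runs as the zabbix user:

visudo -cf /etc/sudoers.d/zfs                       # check the sudoers syntax
sudo -u zabbix sudo -n /sbin/zpool list -H -o name  # should list the pools without asking for a password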

If you don't know where your "userparameters" directory is, it is usually /etc/zabbix/zabbix_agentd.d. If in doubt, look in your zabbix_agentd.conf file for the line beginning with Include=; it shows where it is.
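For example, assuming the usual configuration path:

grep -i '^Include=' /etc/zabbix/zabbix_agentd.conf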

Restart zabbix agent

Once you have added the userparameters file, restart zabbix-agent so that it loads the new user parameters.
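For example, on a systemd-based distribution (the service name may differ; the test key is one of the discovery rules shipped with this template):

systemctl restart zabbix-agent
zabbix_agentd -t 'zfs.pool.discovery'   # should print the discovery JSON listing your pools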

Customization of alert level by server

This template includes macros that define when the "low disk space" triggers will fire.

By default, you will find them on the macro page of this template.

If you change them there, they will apply to every host linked to this template, which may not be such a good idea. Prefer changing the macros on specific hosts if needed.

You can see how the macros are used by looking at the discovery rules, then "Trigger prototypes".

Important note about Zabbix active items

This template uses Zabbix items of type Zabbix agent (active) (= active items). By default, most templates use Zabbix agent items (= passive items).

If you want, you can convert all the items to Zabbix agent and everything will work, but you should really use active items because they are far more scalable. The official documentation doesn't really make this point clear (https://www.zabbix.com/documentation/4.0/manual/appendix/items/activepassive), but active items are optimized: the agent asks the server for the list of items the server wants, then sends them in batches periodically.

For passive items, on the other hand, the Zabbix server must establish a connection for each item, ask for it, then wait for the answer: this results in more CPU, memory and network consumption on both the server and the agent.

To make an active item work, you must ensure that you have a ServerActive=your_zabbix_server_fqdn_or_ip line in your agent config file (usually /etc/zabbix/zabbix_agentd.conf).

You also need to configure the "Host name" in the Zabbix UI to be the same as the output of the hostname command on the server (you can always adjust the "Visible name" in the Zabbix UI to anything you want), because the Zabbix agent sends this information to the Zabbix server. It basically tells the server "Hello, I am $(hostname), which items do you need from me?", so if there is a mismatch here, the server will most likely answer "I don't know you!" ;-)
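Putting both requirements together, the relevant part of the agent configuration looks something like this (the hostnames are examples):

# /etc/zabbix/zabbix_agentd.conf
ServerActive=zabbix.example.com
Hostname=myserver.example.com    # must match the "Host name" configured in the Zabbix UI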

Beyond a certain point, depending on your hardware, you will have to use active items.

An old but still relevant blog post about high performance Zabbix is available at https://blog.zabbix.com/scalable-zabbix-lessons-on-hitting-9400-nvps/2615/ .

zabbix_zfs-on-linux's People

Contributors

aceslash, castorsky, stumbaumr, thopos


zabbix_zfs-on-linux's Issues

items can become unsupported e.g. through division by zero

Hi,
the calculated item "ZFS ARC Cache Hit Ratio" with the key zfs.arcstats_hit_ratio has become unsupported on one of my monitored hosts. The reason is given as "Cannot evaluate expression: division by zero."

It is calculated with this formula:
100*(last(zfs.arcstats[hits])/(last(zfs.arcstats[hits])+last(zfs.arcstats[misses])))

Would it be worth making this a little more sophisticated so that the divisor can never be zero?
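One possible workaround (a sketch, not the formula currently shipped in the template) is to add a tiny constant to the divisor so it can never be exactly zero:

100*(last(zfs.arcstats[hits])/(last(zfs.arcstats[hits])+last(zfs.arcstats[misses])+0.000001))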

Issue "Get iostats" in ZFS pool discovery

Hello,
I am using:

Target server: Ubuntu 20.04

version of zfs installed:  2.1.4-0ubuntu0.1
srcversion:     E308D3FB5CF298B22C93CCC
vermagic:       5.15.0-1026-aws SMP mod_unload modversions aarch64

zabbix_server (Zabbix) 5.4.12
zabbix_agentd (daemon) (Zabbix) 4.0.17

The item "Zpool {#POOLNAME}: Get iostats" in the ZFS pool discovery should have its value changed from
vfs.file.contents[/proc/spl/kstat/zfs/{#POOLNAME}/io]

to

vfs.file.contents[/proc/spl/kstat/zfs/{#POOLNAME}/iostats]

Once I change it, I am getting this error:
(error screenshot)

I can't modify the file manually as it is auto-generated by the kernel.

I have tried this too:
vfs.file.regexp[/proc/spl/kstat/zfs/rpool/iostats,"^[0-9]"]

It makes the agent switch off with this error:
(error screenshot)

any help?

thanks

FreeNAS

Hello,

Nice plugin, but unfortunately it does not work on FreeNAS: apart from the dataset discovery, almost everything else is broken.

cat: /proc/spl/kmem/slab: No such file or directory
arithmetic expression: expecting primary: "  "
cat: /sys/module/zfs/parameters/: No such file or directory
awk: can't open file /proc/spl/kstat/zfs/arcstats
 source line number 1
zfs.vdev.error_counter.cksum                  [t|

in


CKSUM
0
0

errors



in


CKSUM
0
0

errors]

Unfortunately there is only 1 other FreeNAS zabbix template and it does not even contain anything related to ZFS just useless stuff like CPU usage/jumps etc.

Here is some research of my own: on FreeNAS, the zabbix user needs sudo; for example, without it, zfs.vdev.discovery would only return 2 results.

Any help is welcome.

Does not alert on cksum errors

Hi,
Zabbix does not alert if the pool has an error in the CKSUM column.

root@prometheus26:~# zpool status
pool: rpool
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: http://zfsonlinux.org/msg/ZFS-8000-9P
scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
rpool       ONLINE       0     0     0
  mirror-0  ONLINE       0     0     0
    sda3    ONLINE       0     0     0
    sdb3    ONLINE       0     0     0
  mirror-1  ONLINE       0     0     0
    sdc     ONLINE       0     0     0
    sdd     ONLINE       0     0    33

Cannot send request: wrong discovery rule type.

When hitting the "Check now" button, I get an error message saying that it has a wrong discovery rule type.

Zabbix version: 4.2.8

I have followed the setup guide. I checked every step; it is all in place.

But maybe I missed something?

Monitoring cache/log drives

Greetings,
Thank you very much for this Zabbix template! I just put this on my systems a week ago and it's working great on my ZFS systems so far. Some of the things it has pointed out to me in warnings have gotten me to learn more about ZFS to better optimize my system, so I appreciate that a lot!

I did run into an issue with something I would like to monitor for future reference, but I am not sure of the best way to do it.

zpool status shows my configuration (cleaned up a bit; and it's idle right now):

config:

	NAME                                 STATE     READ WRITE CKSUM
	vmpool                               ONLINE       0     0     0
	  mirror-0                           ONLINE       0     0     0
	    ata-ST4000DM004                  ONLINE       0     0     0
	    ata-ST4000DM004                  ONLINE       0     0     0
	  mirror-1                           ONLINE       0     0     0
	    ata-ST3000DM007                  ONLINE       0     0     0
	    sda                              ONLINE       0     0     0
	logs	
	  sdb                                ONLINE       0     0     0
	cache
	  sdg                                ONLINE       0     0     0

For the workload on this box, the cache and log drives make a HUGE difference - as in between usable and barely-but-frustratingly-usable. Saturday night the log SSD went completely kaput. Like it just blinked out of the system (I've had many SSDs report they were running great only to insta-die on me, so I'm never surprised when they die anymore). Now, I've been running a weekly SMART check (also recorded into Zabbix), so it would have alerted me eventually that a drive was missing; however, what I noticed was that on Sunday the system was sluggish and by Monday morning it was painfully slow. ZFS just went back to using memory, so it was still reporting health OK! I ran out to pick up a replacement SSD and within minutes of me swapping the drive into the system as the new log device it was humming along great! That's when I went to Zabbix to figure out how to alert me faster should this happen again.

I've already changed my smart checks to daily from weekly so that will tell me if another SSD just insta-dies. However, I was hoping to see some information about the cache and log files as captured by this template. I only see the "CHECKSUM/READ/WRITE error counter" and "total number of errors" which (unsurprisingly to me) never flagged anything but zeros on the failed drive.

Thoughts on what values might be good to monitor for the cache/log drives? I was thinking about something like "the total number of drives in the ZFS pool just shrank!" or something like that.

I also thought about trying to scrape data out of zpool iostat -v pool_name. Under heavy load it isn't unusual to see my log SSD become tens of GB, but most of the time it is a few hundred K at most. The cache drive is very frequently full though (again, right now the box is pretty idle).

logs                                     -      -      -      -      -      -
  sdb                                 544K   111G      0      2      0   149K
cache                                    -      -      -      -      -      -
  sdg                                 105G  7.10G      0      1  59.7K   219K

Not sure capturing the log info will be useful as the heavy loads usually process pretty quick and the spike in use would probably not be reflected in Zabbix fast enough. However, the use of the cache drive might be interesting as it fluctuates a lot.

Hopefully that's not too much of an info dump, but I thought it was worth asking you before I tried to hack away at the template to add new metrics. Any thoughts on the best way to capture this data and alert me faster should the SSD for log/cache blink out on me again?

Just in case it is of interest, I am running on the latest kernel for SL 7.7 with zfs-0.8.3-1.

Thanks!

Zabbix 6.0: No "Administration → General → Value mapping" anymore

Installed Zabbix 6.0 LTS today and there is no "Administration → General → Value mapping" anymore. But now I see a value mapping under each host's config. Should that work too if I set the same value mapping there?

Also, many of the attributes are now just called 'ZFS ARC stat "$1"' or 'ZFS parameter $1', so it's hard to tell what the values mean.
(screenshots)

Cannot obtain file /proc/spl/kstat/zfs/test/io

Hi.
I am using the latest Zabbix 5.4 with the template and it is continually showing the following in the logs:

2021/10/18 16:31:00.000611 check 'vfs.file.contents[/proc/spl/kstat/zfs/test/io]' is not supported: Cannot obtain file /proc/spl/kstat/zfs/test/io information: stat /proc/spl/kstat/zfs/test/io: no such file or directory

any thoughts?
thanks,
Geoff

Duplicated discovery items for {#POOLNAME} and {#FILESETNAME}

Zabbix throws an error for zpool discovery if a zfs pool and a zfs dataset have the same name.

As an example, the Proxmox ZFS layout:

# zpool list rpool
NAME   SIZE  ALLOC  FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
rpool  952G  3.29G  949G        -         -    2%   0%  1.00x  ONLINE  -

# zfs list rpool
NAME   USED   AVAIL  REFER  MOUNTPOINT
rpool  3.29G  919G   104K   /rpool

ZFS: Zfs Pool discovery
Cannot create item: item with the same key "zfs.get.fsinfo[rpool,available]" already exists.
Cannot create item: item with the same key "zfs.get.fsinfo[rpool,used]" already exists.

nan from numfmt with vdevs

Greetings,
I've been successfully working with the configs from 12eecbf.

Adding a new host with ZFS. Decided to grab the latest template and config file. I was getting errors in the vdev data so I cranked the logging.

The command on the old host looked something like this: /sbin/zpool status | grep "sda" | awk '{ print $3 }', which returned 0.

The command on the new host looked something like this: /sbin/zpool status | grep "sda" | awk '{ print $3 }' | numfmt --from=si, which returned 'nan'.

It looks like numfmt returns nan any time I pass it a 0, e.g. echo 0 | numfmt --from=si or even echo 0 | numfmt.

I think I get the point of converting the numbers when it returns something like 1M, but before I put something that catches nan to return 0 I thought I would ask if anyone else is seeing this issue. For the moment I'm just using the old config file and it seems to work.

Thoughts? Thanks!
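One possible guard (a sketch, not the shipped configuration) is to validate the extracted field before it reaches Zabbix, or to skip numfmt entirely by asking zpool status for exact numbers with -p (see the separate issue about the -p flag); the vdev name is just an example:

/sbin/zpool status -p | grep "sda" | awk '{ print $3 }'    # -p prints exact numbers, so no SI suffix to convert
/sbin/zpool status | grep "sda" | awk '{ print $3 }' | numfmt --from=si | awk '{ print ($1 ~ /^[0-9]+$/ ? $1 : 0) }'    # map "nan"/empty output back to 0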

Issue with the import of template on Zabbix 4.0.32

Environment:
Debian 9
Zabbix Server 4.0.32

Error faced when trying to import the template:
Details Import failed
Field "name" cannot be set to NULL.

Additional comments:
I have been trying to import the file for a while now on a Zabbix 4.0.32 instance and I am having issues doing this, as it seems like the Zabbix server does not like the format of the file.

The values within "Value mapping" have been created with the data mentioned in the readme.

arcstat parameters are not populated with data

Latest data in Zabbix shows a red flag on the arcstat metrics with the following error: Value "c_max 4 16636397568" of type "string" is not suitable for value type "Numeric (unsigned)".
I tried to change the userparameter zfs.arcstats[*] command to awk '/^$1/ {printf $3;}' /proc/spl/kstat/zfs/arcstats because it produces a nice number when run from a shell, but the problem did not go away.

ARC stats awk issue with zabbix-agent2

Debian default mawk doesn't support zfs.arcstats - "Value of type "string" is not suitable for value type "Numeric (unsigned)". Value "awk: program limit exceeded: maximum number of fields size=32767 FILENAME="/proc/spl/kstat/zfs/arcstats" FNR=48 NR=48"

ARC stats from /proc/spl/kstat/zfs/arcstats
UserParameter=zfs.arcstats[*],awk '/^$1/ {printf $$3;}' /proc/spl/kstat/zfs/arcstats

Gawk has no such limitation, but prints nothing while running "gawk '/^misses/ {printf $$3;}' /proc/spl/kstat/zfs/arcstats" (with two dollar signs)
"gawk '/^misses/ {printf $3;}' /proc/spl/kstat/zfs/arcstats" returns 1021618956 integer as expected (with one dollar sign)

Proxmox 5 with Debian 9 stretch, gawk 4.1.4+dfsg-1, mawk 1.3.3-17+b3
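For what it's worth, the doubled dollar sign is a UserParameter escaping rule rather than an awk one: inside the agent configuration, $1 is replaced by the item key argument and $$3 becomes a literal $3 passed to awk, which is why the same command needs a single $3 when typed directly in a shell (where $$ expands to the shell's PID). A sketch of the equivalent line using gawk, assuming it is installed as gawk:

UserParameter=zfs.arcstats[*],gawk '/^$1/ {printf $$3;}' /proc/spl/kstat/zfs/arcstats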

issue on installing?

Hi,
Currently trying to install on Debian stretch (Proxmox). I followed the procedure, but what I'm lost on is the userparameters: inside /etc/zabbix/zabbix_agentd.d/ I would add the file ZoL_without_sudo.conf and then restart zabbix?

Thank you

Pool health monitoring broke under Zabbix 6.0

This template used to work fine for me under Zabbix 4.0 but I discovered today that the pool health monitoring broke when I upgraded my Zabbix server to 6.0.

From looking at other recently reported issues and reading

https://www.zabbix.com/documentation/4.0/en/manual/installation/upgrade_notes_400#deprecated-macros-in-item-names

My guess was that I needed to replace

# pool health
UserParameter=zfs.zpool.health[*],/sbin/zpool list -H -o health $1

in ZoL_without_sudo.conf with:

# pool health
UserParameter=zfs.zpool.health[*],/sbin/zpool list -H -o health "{#POOLNAME}"

but that hasn't fixed it, after restarting zabbix-agent2.

I think this template provides more info than I need, but it's useless to me if the pool health check doesn't work.

Zabbix Agent2 - Discovery Error

Hello,
When I go to the host configuration in the web GUI of the Zabbix server to execute discovery, there is this error for the ZFS templates:

Invalid discovery rule value: cannot parse as a valid JSON object: invalid object format, expected opening character '{' or '[' at: 'sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper'

and then it says Status: unsupported
Ubuntu Server: 20.04 with zfs on linux 0.8.3-1ubuntu12.5
I used the newer template without the need of sudo.

"Unsupported item key" errors

Hi @AceSlash !

Thanks for writing and sharing this zabbix ZoL template, unfortunately it isn't working properly for me yet.

Before I get onto my actual problem, could you please update the description on https://share.zabbix.com/zfs-on-linux from:

"The documentation has moved to github: https://github.com/AceSlash/zabbix_zfs-on-linux" to something like:

"Please use the newer version of this template available from https://github.com/AceSlash/zabbix_zfs-on-linux"

I am running zabbix 4.0 under Ubuntu 18.04. The ZoL host I'm trying to monitor is proxmox VE / Debian 9. I removed the older version of your template and user parameters and replaced them with the updated ones from this repo, but after restarting the zabbix agent on my PVE server I see these errors in the zabbix-agent log:

18508:20190605:093052.533 Starting Zabbix Agent [demeter.cs.salford.ac.uk]. Zabbix 4.0.8 (revision 2b50c941de).
 18508:20190605:093052.534 **** Enabled features ****
 18508:20190605:093052.534 IPv6 support:          YES
 18508:20190605:093052.534 TLS support:           YES
 18508:20190605:093052.534 **************************
 18508:20190605:093052.534 using configuration file: /etc/zabbix/zabbix_agentd.conf
 18508:20190605:093052.534 agent #0 started [main process]
 18509:20190605:093052.535 agent #1 started [collector]
 18510:20190605:093052.535 agent #2 started [listener #1]
 18511:20190605:093052.536 agent #3 started [listener #2]
 18512:20190605:093052.536 agent #4 started [listener #3]
 18513:20190605:093052.537 agent #5 started [active checks #1]
 18513:20190605:093052.546 active check "zfs.arcstats[arc_dnode_limit]" is not supported: Unsupported item key.
 18513:20190605:093052.546 active check "zfs.arcstats[arc_meta_limit]" is not supported: Unsupported item key.
 18513:20190605:093052.546 active check "zfs.arcstats[arc_meta_used]" is not supported: Unsupported item key.
 18513:20190605:093052.546 active check "zfs.arcstats[bonus_size]" is not supported: Unsupported item key.
 18513:20190605:093052.546 active check "zfs.arcstats[c_max]" is not supported: Unsupported item key.
 18513:20190605:093052.546 active check "zfs.arcstats[c_min]" is not supported: Unsupported item key.
 18513:20190605:093052.546 active check "zfs.arcstats[data_size]" is not supported: Unsupported item key.
 18513:20190605:093052.546 active check "zfs.arcstats[dbuf_size]" is not supported: Unsupported item key.
 18513:20190605:093052.547 active check "zfs.arcstats[dnode_size]" is not supported: Unsupported item key.
 18513:20190605:093052.547 active check "zfs.arcstats[hdr_size]" is not supported: Unsupported item key.
 18513:20190605:093052.547 active check "zfs.arcstats[hits]" is not supported: Unsupported item key.
 18513:20190605:093052.547 active check "zfs.arcstats[metadata_size]" is not supported: Unsupported item key.
 18513:20190605:093052.547 active check "zfs.arcstats[mfu_hits]" is not supported: Unsupported item key.
 18513:20190605:093052.547 active check "zfs.arcstats[mfu_size]" is not supported: Unsupported item key.
 18513:20190605:093052.547 active check "zfs.arcstats[misses]" is not supported: Unsupported item key.
 18513:20190605:093052.547 active check "zfs.arcstats[mru_hits]" is not supported: Unsupported item key.
 18513:20190605:093052.547 active check "zfs.arcstats[mru_size]" is not supported: Unsupported item key.
 18513:20190605:093052.547 active check "zfs.arcstats[size]" is not supported: Unsupported item key.
 18513:20190605:093052.547 active check "zfs.fileset.discovery" is not supported: Unsupported item key.
 18513:20190605:093052.547 active check "zfs.get.param[zfs_arc_dnode_limit_percent]" is not supported: Unsupported item key.
 18513:20190605:093052.547 active check "zfs.get.param[zfs_arc_meta_limit_percent]" is not supported: Unsupported item key.
 18513:20190605:093052.547 active check "zfs.pool.discovery" is not supported: Unsupported item key.

Thanks for your help!

Can't import into Zabbix 7.0.0alpha1

Importing to 7.0.0alpha1 throws an error:
Incorrect trigger expression. Host "ZFS on Linux" does not exist or you have no access to this host.

Adding zpool/dataset IO statistics

Hi,

I'm currently adding IO statistics for zpools and datasets, but for datasets I'd introduce a dependency on iozstat. Before creating a PR, I'm wondering if that would be a big problem.
I think it's a nice tool to have around anyway :)

vdev error monitoring fails on big numbers because it doesn't use the -p flag

Currently, the vdev READ, WRITE and CKSUM error counts are taken from the output of the zpool status command.

This is fine until a counter gets high enough (>1000) that you get human-readable output like "2K", which breaks the item.

The solution is simply to modify the template to use the -p flag, which exists for this purpose.
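For reference, this is what the difference looks like on the command line (the vdev name is just an example):

/sbin/zpool status | grep "sda" | awk '{ print $3 }'      # may print a value like "2K" once the counter is large
/sbin/zpool status -p | grep "sda" | awk '{ print $3 }'   # -p prints the exact number instead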

a zpool is also a dataset

This template differentiates a zpool from a zfs dataset by the presence of "/" in the name; is this correct?

I happen to have some hosts where the zpool itself is the only dataset, like here:

# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
users   472G   147G   325G        -         -    13%    31%  1.00x    ONLINE  -
# zfs list
NAME    USED  AVAIL     REFER  MOUNTPOINT
users   147G   311G      147G  /users
# 

Thus I'm missing useful statistics (e.g. compression ratio) about the dataset "users" because the template does not recognize "users" as a rightful dataset.

Is there a way to circumvent this other than creating child datasets in the pool and moving all the data there (which I would hate)?

Item "Get iostats" in ZFS pool discovery fix

Hello,

the item "Zpool {#POOLNAME}: Get iostats" in ZFS discovery pool value should be changed from
vfs.file.contents[/proc/spl/kstat/zfs/{#POOLNAME}/io]
to
vfs.file.contents[/proc/spl/kstat/zfs/{#POOLNAME}/iostats]

Regards
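A quick way to check which kstat files a given ZFS version actually exposes for a pool (the pool name is an example):

ls /proc/spl/kstat/zfs/rpool/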
