
hanadb_exporter's Introduction

SAP HANA Database exporter


Prometheus exporter written in Python, to export SAP HANA database metrics. The project is based on the official Prometheus client library: prometheus_client.

The exporter can export metrics from more than one database/tenant if the multi_tenant option is enabled in the configuration file (it is enabled by default).

The labels sid (system identifier), insnr (instance number), database_name (database name) and host (machine hostname) are exported for all metrics.
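
For example, an exported sample carrying these labels might look like the following (the metric name and values are illustrative, not taken from the shipped metrics file):

hanadb_memory_used_mb{sid="PRD",insnr="00",database_name="SYSTEMDB",host="hana01"} 10907.01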

Prerequisites

  1. A running and reachable SAP HANA database (single or multi container). Running the exporter on the same machine where the HANA database is running is recommended. Ideally, each database should be monitored by one exporter.

  2. A SAP HANA connector. Two Python connectors are supported: the official hdbcli (dbapi) package shipped with the SAP HANA client, and the open source PyHDB package.

The installation of the connector is covered in the Installation section.

  3. Some metrics are collected from the HANA monitoring views by the SAP Host Agent. Make sure it is installed and running to have access to all the monitoring metrics.

Metrics file

The exporter uses an additional file that defines which metrics are going to be exported. More information is available in the metrics file documentation.
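
As a rough sketch (the exact schema is described in the metrics file documentation), each entry maps a SQL query to one or more metric definitions; the query and column names below are illustrative:

{
  "SELECT host, ROUND(SUM(total_memory_used_size/1024/1024), 2) used_memory_mb FROM sys.m_service_memory GROUP BY host;":
  {
    "enabled": true,
    "metrics": [
      {
        "name": "hanadb_memory_used",
        "description": "Used memory per host",
        "labels": ["HOST"],
        "value": "USED_MEMORY_MB",
        "unit": "mb",
        "type": "gauge"
      }
    ]
  }
}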

Installation

The project can be installed in many ways, including but not limited to:

  1. RPM
  2. Manual clone

RPM

On openSUSE or SUSE Linux Enterprise, use the zypper package manager:

zypper install prometheus-hanadb_exporter

Find the latest development repositories at SUSE's Open Build Service.

Manual clone

The exporter is developed for Python 3.
Using a virtual environment is recommended.

git clone https://github.com/SUSE/hanadb_exporter
cd hanadb_exporter # project root folder
virtualenv virt
source virt/bin/activate
# uncomment one of the next two options (to use hdbcli, you need the HANA client folder where this python package is available)
# pip install pyhdb
# pip install path-to-hdbcli-N.N.N.tar.gz
pip install .
# pip install -e . # To install in development mode
# deactivate # to exit from the virtualenv

If you prefer, you can install the PyHDB SAP HANA connector as an RPM package (example for Tumbleweed; repositories are available for other versions):

# All the commands must be executed as root user
zypper addrepo https://download.opensuse.org/repositories/network:/ha-clustering:/sap-deployments:/devel/openSUSE_Tumbleweed/network:ha-clustering:sap-deployments:devel.repo
zypper ref
zypper in python3-PyHDB

Configuring the exporter

Create the config.json configuration file. An example is available in config.json.example. The most important items in the configuration file are:

  • listen_address: Address where the prometheus exporter will be exposed (0.0.0.0 by default).
  • exposition_port: Port where the prometheus exporter will be exposed (9968 by default).
  • multi_tenant: Export the metrics from other tenants. To use this, the connection must be made to the System Database (port 30013).
  • timeout: Timeout to connect to the database. After this time the app will fail (even in daemon mode).
  • hana.host: Address of the SAP HANA database.
  • hana.port: Port where the SAP HANA database is exposed.
  • hana.userkey: Stored user key. This is the secure option if you don't want to keep the password in the configuration file. userkey and user/password are mutually exclusive; userkey takes precedence if both are set.
  • hana.user: An existing user with access right to the SAP HANA database.
  • hana.password: Password of an existing user.
  • hana.ssl: Enable SSL connection (False by default). Only available for the dbapi connector.
  • hana.ssl_validate_cert: Enable SSL certificate validation. This field is required by HANA Cloud. Only available for the dbapi connector.
  • hana.aws_secret_name: The name of the secret containing the username and password. This is a secure option using AWS Secrets Manager if the SAP HANA database runs on AWS. aws_secret_name and user/password are mutually exclusive; aws_secret_name takes precedence if both are set.
  • logging.config_file: Python logging system configuration file (by default WARN and ERROR level messages will be sent to the syslog)
  • logging.log_file: Logging file (/var/log/hanadb_exporter.log by default)
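
Putting these items together, a minimal config.json might look like this (all values are placeholders):

{
  "listen_address": "0.0.0.0",
  "exposition_port": 9968,
  "multi_tenant": true,
  "timeout": 600,
  "hana": {
    "host": "localhost",
    "port": 30013,
    "user": "HANADB_EXPORTER_USER",
    "password": "MyExporterPassword"
  },
  "logging": {
    "config_file": "/etc/hanadb_exporter/logging_config.ini",
    "log_file": "/var/log/hanadb_exporter.log"
  }
}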

The logging configuration file follows the standard Python logging configuration style: Python logging.

With the default configuration file, logs are redirected to the file assigned in the JSON configuration file and to syslog (syslog only receives messages of level WARNING and above).
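
For reference, such a file follows Python's standard fileConfig format; a minimal sketch could look like this (the shipped logging_config.ini may differ):

[loggers]
keys=root

[handlers]
keys=file

[formatters]
keys=simple

[logger_root]
level=INFO
handlers=file

[handler_file]
class=FileHandler
level=INFO
formatter=simple
args=('/var/log/hanadb_exporter.log',)

[formatter_simple]
format=%(asctime)s %(levelname)s %(name)s %(message)s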

Using the stored user key

This is the recommended option to keep the database secure (for development environments, user/password with the SYSTEM user can be used, as it is faster to set up). To use the userkey option, the dbapi connector must be installed (usually stored in /hana/shared/PRD/hdbclient/hdbcli-N.N.N.tar.gz and installable with pip3). The key is stored in the client itself, so it cannot be used from a different client; trying to do so raises the error hdbcli.dbapi.Error: (-10104, 'Invalid value for KEY'). Because of this, a new stored user key must be created with the same user that runs the exporter (note that the hdbclient is the same as the dbapi python package):

/hana/shared/PRD/hdbclient/hdbuserstore set yourkey host:30013@SYSTEMDB hanadb_exporter pass
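
The created key is then referenced from the configuration file instead of user/password; for example (a sketch, with the key name matching the hdbuserstore command above):

"hana": {
  "host": "localhost",
  "port": 30013,
  "userkey": "yourkey"
}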

Using AWS Secrets Manager

If the SAP HANA database runs on an AWS EC2 instance, this is a secure option to store the user/password without keeping them in the configuration file. To use this option:

  • Create a secret in key/value pairs format: specify the key username with the database user as its value, and a second key password with the password as its value. Enter a name for your secret and pass that name in the configuration file as the value of the aws_secret_name item. Secret JSON example:
{
  "username": "database_user",
  "password": "database_password"
}
  • Allow read-only access from the EC2 IAM role to the secret by attaching a resource-based policy to the secret. Policy example:
{
  "Version" : "2012-10-17",
  "Statement" : [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:role/EC2RoleToAccessSecrets"},
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "*",
    }
  ]
}
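
For reference, retrieving such a secret with boto3 looks roughly like this. This is a sketch of the mechanism, not the exporter's internal code; the secret name and region are placeholders:

import json

import boto3

# Create a Secrets Manager client (region is a placeholder)
client = boto3.client("secretsmanager", region_name="eu-central-1")

# Fetch the secret created above; SecretId is the aws_secret_name value
response = client.get_secret_value(SecretId="NAME_OF_THE_SECRET")

# The secret string holds the username/password JSON document
credentials = json.loads(response["SecretString"])
username = credentials["username"]
password = credentials["password"]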

Some tips:

  • Set SYSTEMDB as the default database; this way the exporter will know where to get the tenants' data.
  • Don't use the stored user key created for the backup, as it is created with the sidadm user.
  • Using a user with access only to the monitoring tables is recommended instead of using the SYSTEM user.
  • If a user with the monitoring role is used, the user must exist in all the databases (SYSTEMDB and tenants).

Create a new user with monitoring role

Run the following commands to create a user with the monitoring role (the commands must be executed in all the databases):

su - prdadm
hdbsql -u SYSTEM -p pass -d SYSTEMDB #(PRD for the tenant in this example)
CREATE USER HANADB_EXPORTER_USER PASSWORD MyExporterPassword NO FORCE_FIRST_PASSWORD_CHANGE;
CREATE ROLE HANADB_EXPORTER_ROLE;
GRANT MONITORING TO HANADB_EXPORTER_ROLE;
GRANT HANADB_EXPORTER_ROLE TO HANADB_EXPORTER_USER;
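
To verify that the new user can read the monitoring views, a quick check with hdbsql might look like this (a query similar to the one the exporter itself runs to fetch the instance ID):

hdbsql -u HANADB_EXPORTER_USER -p MyExporterPassword -d SYSTEMDB "SELECT value FROM M_SYSTEM_OVERVIEW WHERE section = 'System' AND name = 'Instance ID'"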

Running the exporter

Start the exporter by running the following command:

hanadb_exporter -c config.json -m metrics.json
# Or
python3 hanadb_exporter/main.py -c config.json -m metrics.json

If the config.json configuration file is stored in /etc/hanadb_exporter, the exporter can also be started with the following command:

hanadb_exporter --identifier config # Notice that the identifier matches the config file name without the extension
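
Once the exporter is running, you can verify that metrics are exposed by querying the endpoint (the default port is used here; adjust it to your exposition_port value):

curl http://localhost:9968/metrics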

Running as a daemon

The hanadb_exporter can be executed using systemd. For that, the best option is to install the project using the RPM package as described in Installation.

After that, create the configuration file as /etc/hanadb_exporter/my-exporter.json (the file name is relevant, as it is used to start the daemon). config.json.example can be used as a template (the example file is also stored in the /usr/etc/hanadb_exporter folder).

The default metrics file is stored in /usr/etc/hanadb_exporter/metrics.json. If a new metrics.json is stored in /etc/hanadb_exporter, it will be used instead.

The logging configuration file can be customized as well, by changing the logging.config_file entry in the new configuration file (the default one is available in /usr/etc/hanadb_exporter/logging_config.ini).

Now the exporter can be started as a daemon. As multiple hanadb_exporter instances can run on one machine, the service is created using a template file, so extra information must be given to systemd. This is done by adding @ after the service name, followed by the name of the configuration file previously created in /etc/hanadb_exporter/{name}.json:

# All the commands must be executed as root user
systemctl start prometheus-hanadb_exporter@my-exporter
# Check the status with
systemctl status prometheus-hanadb_exporter@my-exporter
# Enable the exporter to be started at boot time
systemctl enable prometheus-hanadb_exporter@my-exporter
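
If the service fails to start, the standard systemd tools can help with troubleshooting; for example:

# Inspect the exporter logs collected by journald
journalctl -u prometheus-hanadb_exporter@my-exporter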

License

See the LICENSE file for license rights and limitations.

Authors

Reviewers

Pull request preferred reviewers for this project:

References

https://prometheus.io/docs/instrumenting/writing_exporters/

https://prometheus.io/docs/practices/naming/

http://sap.optimieren.de/hana/hana/html/sys_statistics_views.html

https://help.sap.com/viewer/1efad1691c1f496b8b580064a6536c2d/Cloud/en-US/39eca89d94ca464ca52385ad50fc7dea.html

hanadb_exporter's People

Contributors

angelabriel, arbulu89, ayoub-belarbi, diegoakechi, elturkym, fjnalta, juadk, karolyczovek, krig, mallozup, ozlvpivk, pirat013, stefanotorresi, wombelix, yeoldegrove


hanadb_exporter's Issues

Running two instances of hanadb_exporter

Hello everyone,

Is it possible to run two instances of hanadb_exporter when it is cloned manually to the system and run inside a virtual environment? I have tried creating two virtual environments and running the second instance inside one of them, but it does not work. I created an additional config file for the second instance. Any input would be very helpful. Thanks in advance.

Best

Unable to Start with Multi-Tenant Enabled...

We have several HANA servers, all multi-tenant systems. When we try to start the hanadb_exporter application with the multi-tenant option set to true, it fails to start with an authentication issue. However, if we start with it set to false, then it starts up without issue.

We have confirmed that the same user exists in all of the tenants and the permissions match that of the DB commands that are in the docs.

Our config:

{
  "listen_address": "0.0.0.0",
  "exposition_port": 7825,
  "multi_tenant": true,
  "timeout": 30,
  "hana": {
    "host": "<fqdn>",
    "port": 32013,
    "user": "<db user>",
    "password": "<db pass>",
    "ssl": false,
    "ssl_validate_cert": false
  },
  "logging": {
    "config_file": "/opt/hanadb_exporter/logging_config.ini",
    "log_file": "/var/log/hanadb_exporter.log"
  }
}

Obviously we're using actual values for the host, user, and password fields. They've been removed for security.

The SAP instance is 20, so the 32013 port is correct. We've used the FQDN, hostname, and localhost as values for the host option, but there's no change.

When we try to start it, we get:

# hanadb_exporter -c /opt/hanadb_exporter/config.json -m /opt/hanadb_exporter/metrics.json 
2023-07-11 01:28:37,673 ERROR hanadb_exporter Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/shaptools/hdb_connector/connectors/dbapi_connector.py", line 55, in connect
    **self.__properties
hdbcli.dbapi.Error: (10, 'authentication failed')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/hanadb_exporter", line 9, in <module>
    main.run()
  File "/usr/lib/python3.6/site-packages/hanadb_exporter/main.py", line 134, in run
    timeout=config.get('timeout', 600))
  File "/usr/lib/python3.6/site-packages/hanadb_exporter/db_manager.py", line 130, in start
    self._connect_tenants(host, connection_data)
  File "/usr/lib/python3.6/site-packages/hanadb_exporter/db_manager.py", line 64, in _connect_tenants
    host, tenant_port, **connection_data)
  File "/usr/lib/python3.6/site-packages/shaptools/hdb_connector/connectors/dbapi_connector.py", line 58, in connect
    raise base_connector.ConnectionError('connection failed: {}'.format(err))
shaptools.hdb_connector.connectors.base_connector.ConnectionError: connection failed: (10, 'authentication failed')

We are using the dbapi method, not the pyhdb.

I feel like there's something we're missing, but it's unclear what that is. Any help would be appreciated.

Thanks!

password for HANA DB user

Hi,

I'd recommend using the HANA secure store to get access to the DB and run the queries. Therefore, we should add the option to use it instead of a clear-text user and password in a file.
The only thing you need is the HANA client package installed.
Do we really need the full SYSTEM user privileges to run the queries, or would it be possible to use a different HANA permission profile to do the job as well?
That would be a good combination of improvements.

Metric labels shuffled randomly

Hello. We have some strange problems with metric labels.
OS: SLES 12 SP4
Installed using zypper
hanadb-exporter 0.7.3
hdbcli 2.4.126

The resulting metric labels are definitely abnormal.
They are shuffled randomly between restarts. Sometimes they are correct, but mostly they are out of order.

# HELP hanadb_sql_service_elapsed_time_ms Total elapsed time of SQL statements executions by service and SQL type in miliseconds
# TYPE hanadb_sql_service_elapsed_time_ms gauge
hanadb_sql_service_elapsed_time_ms{database_name="SYSTEMDB",host="s4h-db-prod1",insnr="10",port="31001",service="SELECT",sid="YHP",sql_type="nameserver"} 5.8540495e+07
...
hanadb_sr_takeover_log_position_bigint{database_name="SYSTEMDB",end_time="2020-05-26 19:25:03.4641090",insnr="10",log_pos_time="Secondary",operation_mode="s4h-db-prod2",shipped_log_pos_time="2020-05-26 19:24:30.1893150",sid="YHP",src_host="2020-05-26 19:24:30.1893150",src_site_name="Primary",start_time="2020-05-26 19:24:49.6322830",tgt_host="s4h-db-prod1",tgt_site_name="logreplay"} 1.2111712e+08
...
# HELP hanadb_sql_service_elap_per_exec_avg_ms Average elapsed time per execution by service and SQL type in miliseconds
# TYPE hanadb_sql_service_elap_per_exec_avg_ms gauge
hanadb_sql_service_elap_per_exec_avg_ms{database_name="USP",host="s4h-db-prod1",insnr="10",port="31049",service="SELECT",sid="YHP",sql_type="docstore"} 3.54

Connect to System DB on HANA 2

Is it possible to connect to the system database of a HANA 2 instance? I cannot find the corresponding parameter in config.json to set the database.

Thanks,
Marcus

Exporter Container Image

Hello,

We are looking for an exporter that we can use to monitor our Hana and Hana cloud instances. I came across this repo and it looks promising for our use case. Would it be possible to provide a Docker Image for the exporter?

Regards,
Abhi

OSError: [Errno 5] Input/output error

Hello team,

I hope you're doing well,

The exporter was working well, but after restarting it we are no longer able to collect metrics, and we get the errors below in hanadb_exporter.log:

Traceback (most recent call last):
  File "/usr/lib64/python3.6/threading.py", line 884, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib64/python3.6/threading.py", line 926, in _bootstrap_inner
    (self.name, _format_exc()), file=_sys.stderr)
OSError: [Errno 5] Input/output error

Could you please help us? We are facing this issue in the PROD system.

Regards,
Redouane

custom HANA query does not report the expected result

Running specific, custom HANA queries and transforming them into Prometheus metric format with a dedicated hanadb_exporter instance reports an error.

The query which was used is:

"Select host, CPU, data_read_time, data_read_size, data_write_time, data_write_size, log_write_time, log_write_size from M_LOAD_HISTORY_SERVICE;"

The exporter was started manually with the new profile and the custom metrics file. After triggering the exporter via curl from an external host, this message was shown:

usr/etc/hanadb_exporter # ----------------------------------------
Exception happened during processing of request from ('192.168.144.1', 49712)
Traceback (most recent call last):
  File "/usr/lib64/python3.6/socketserver.py", line 654, in process_request_thread
    self.finish_request(request, client_address)
  File "/usr/lib64/python3.6/socketserver.py", line 364, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib64/python3.6/socketserver.py", line 724, in __init__
    self.handle()
  File "/usr/lib64/python3.6/http/server.py", line 418, in handle
    self.handle_one_request()
  File "/usr/lib64/python3.6/http/server.py", line 406, in handle_one_request
    method()
  File "/usr/lib/python3.6/site-packages/prometheus_client/exposition.py", line 152, in do_GET
    output = encoder(registry)
  File "/usr/lib/python3.6/site-packages/prometheus_client/exposition.py", line 121, in generate_latest
    output.append(sample_line(s))
  File "/usr/lib/python3.6/site-packages/prometheus_client/exposition.py", line 79, in sample_line
    for k, v in sorted(line.labels.items())]))
  File "/usr/lib/python3.6/site-packages/prometheus_client/exposition.py", line 79, in <listcomp>
    for k, v in sorted(line.labels.items())]))
AttributeError: ("'int' object has no attribute 'replace'", Metric(test_data_read_time_ms, Hana Data Read time, gauge, ms, [Sample(name='test_data_read_time_ms', labels={'sid': 'ETU', 'insnr': '00', 'database_name': 'SYSTEMDB', 'host': 'hana02', 'cpu': 0}, value=0, timestamp=None, exemplar=None), Sample(name='test_data_read_time_ms', labels={'sid': 'ETU'
....

The metric file looks like this:

# cat newmetric.json
{
 "Select host, CPU, data_read_time, data_read_size, data_write_time, data_write_size, log_write_time, log_write_size from M_LOAD_HISTORY_SERVICE;":
  {
    "enabled": true,
    "hana_version_range": ["1.0.0", "3.0.0"],
    "metrics": [
      {
        "name": "test_data_read_time",
        "description": "Hana Data Read time",
        "labels": ["HOST", "CPU"],
        "value": "DATA_READ_TIME",
        "unit": "ms",
        "type": "gauge"
      }
    ]
  }
}

The server which triggered the command gets this message:

# curl 192.168.144.11:9667/metrics
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
        "http://www.w3.org/TR/html4/strict.dtd">
<html>
    <head>
        <meta http-equiv="Content-Type" content="text/html;charset=utf-8">
        <title>Error response</title>
    </head>
    <body>
        <h1>Error response</h1>
        <p>Error code: 500</p>
        <p>Message: error generating metric output.</p>
        <p>Error code explanation: 500 - Server got itself in trouble.</p>
    </body>
</html>

System Replication Takeover History panel does not show anything

During some tests I noticed that the panel for the system replication history is empty.
The queries used by the panel are:

hanadb_sr_takeover_replication_status{host=~"$node_name"...
hanadb_sr_takeover_duration_time_seconds{host=~"$node_name",....
hanadb_sr_takeover_log_position_bigint{host=~"$node_name"...
hanadb_sr_takeover_shipped_log_position_bigint{host=~"$node_name"...

I checked the values on the exporter metrics page and found the difference.

E.g. the first query mentioned above reports this result:
hanadb_sr_takeover_replication_status{database_name="SYSTEMDB",end_time="2021-06-10 11:20:15.4702670",insnr="00",log_pos_time="2021-06-10 11:19:02.6610690",operation_mode="logreplay",shipped_log_pos_time="2021-06-10 11:19:02.6610690",sid="ETU",src_host="hana01",src_site_name="left",start_time="2021-06-10 11:19:56.2858300",tgt_host="hana02",tgt_site_name="right"} 4.0

The output does not use host=; it uses src_host.

retrieving the HANA DB Credentials from AWS Secrets Manager is not working if IMDSv2 is enabled on the instance

When trying to set up secure credentials retrieval using AWS Secrets Manager on an EC2 instance with IMDSv2 enabled, the secrets_manager.py script errors out as below:


2023-03-28 15:18:15,370 ERROR hanadb_exporter Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/hanadb_exporter/secrets_manager.py", line 33, in get_db_credentials
    ec2_info_response.raise_for_status()
  File "/usr/lib/python3.6/site-packages/requests/models.py", line 941, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: http://169.254.169.254/latest/dynamic/instance-identity/document

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/bin/hanadb_exporter", line 9, in <module>
    main.run()
  File "/usr/lib/python3.6/site-packages/hanadb_exporter/main.py", line 150, in run
    db_credentials = secrets_manager.get_db_credentials(aws_secret_name)
  File "/usr/lib/python3.6/site-packages/hanadb_exporter/secrets_manager.py", line 35, in get_db_credentials
    raise SecretsManagerError("EC2 information request failed") from e
hanadb_exporter.secrets_manager.SecretsManagerError: EC2 information request failed

Ideally, the script should be updated to support both cases: instances using IMDSv1 and instances using IMDSv2.

Thanks a lot !

Bests

Performance impact

We are implementing the hanadb_exporter on our infrastructure but noticed that the tool has quite a big impact on CPU usage on big HANA installations, mainly caused by some of the queries from the metrics.json (SELECT TOP 10 ct.host, LPAD(ct.port,5) port, ct.schema_name, ct.table_name... is the heaviest one).

We could of course remove the heavy queries from metrics.json, but we would prefer to collect them at a much lower frequency. Some data is quite stable, so there is no need to collect it so often. We would install multiple hanadb_exporter instances: one with a metrics file of "light" queries that are collected often, and one with "heavy" queries that we only collect a few times a day.

Right now there doesn't seem to be a configuration option to set a collection interval. Would you consider adding such a feature? I think other users can benefit from it, because I can hardly imagine the current CPU impact being acceptable for most companies.

Problem at startup of service

I have installed the exporter using zypper.
The installation was fine.
After creating the config file (config.json), I tried to start it with the sequence:
systemctl daemon-reload
systemctl start prometheus-hanadb_exporter@config
systemctl enable prometheus-hanadb_exporter@config
systemctl status prometheus-hanadb_exporter@config
systemctl stop prometheus-hanadb_exporter@config
systemctl restart prometheus-hanadb_exporter@config

But the service does not start. Below is the error:

Traceback (most recent call last):
  File "/usr/bin/hanadb_exporter", line 9, in <module>
    main.run()
  File "/usr/lib/python3.6/site-packages/hanadb_exporter/main.py", line 110, in run
    config = parse_config(config_file)
  File "/usr/lib/python3.6/site-packages/hanadb_exporter/main.py", line 42, in parse_config
    json_data = json.load(f_ptr)
  File "/usr/lib64/python3.6/json/__init__.py", line 299, in load
    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
  File "/usr/lib64/python3.6/json/__init__.py", line 354, in loads
    return _default_decoder.decode(s)
  File "/usr/lib64/python3.6/json/decoder.py", line 339, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib64/python3.6/json/decoder.py", line 355, in raw_decode
    obj, end = self.scan_once(s, idx)

My python version is 3.6.15
My OS version is: SUSE Linux Enterprise Server 15 SP3

Automatic login to grafana by token url from application angular

Hello,

I have an application developed with Angular, which displays some metrics from Prometheus.

I have Keycloak, which manages the authentication for my Angular application as well as for my Grafana; it is an SSO setup.

In my Angular application I have a link that redirects to Grafana.

I want the client to be authenticated directly, without entering the login and password again in Grafana.

N.B.: I have access to the token in my Angular application.

Thank you!

... has not returned any record

@arbulu89 I followed the steps to install hanadb_exporter. The PO DB can get monitoring data, but the S4 DB cannot. See the log:

2021-04-08 10:54:02,655 INFO shaptools.hdb_connector.connectors.base_connector executing sql query: SELECT host, LPAD(port,5) port, file_name, file_type, used_size/1024/1024 used_size_mb, total_size/1024/1024 
total_size_mb, (total_size - used_size)/1024/1024 available_size_mb, LPAD(TO_DECIMAL(MAP(total_size, 0, 0, ( 1 - used_size / total_size ) * 100), 10, 2), 8) frag_pct FROM sys.m_volume_files WHERE file_type = 'DATA';2021-04-08 10:54:02,659 INFO shaptools.hdb_connector.connectors.base_connector query records: [('txdevs4db01', '30003', '1 alert(s) occurred (without info alerts).', 'Investigate the alerts.', '2021-04-07 14:1
6:05.0640000', '1')]2021-04-08 10:54:02,666 INFO shaptools.hdb_connector.connectors.base_connector query records: [('txdevs4db01', '30003', '/hana/data/TXD/mnt00001/hdb00003.00003/datavolume_0000.dat', 'DATA', Decimal('136155.480
46875'), Decimal('141760'), Decimal('5604.51953125'), '    3.95'), ('txdevs4db01', '30011', '/hana/data/TXD/mnt00001/hdb00004.00003/datavolume_0000.dat', 'DATA', Decimal('66.51953125'), Decimal('192.03125'), Decimal('125.51171875'), '   65.36'), ('txdevs4db01', '30007', '/hana/data/TXD/mnt00001/hdb00002.00003/datavolume_0000.dat', 'DATA', Decimal('65.7734375'), Decimal('192.03125'), Decimal('126.2578125'), '   65.74'), ('txdevs4db01', '30040', '/hana/data/TXD/mnt00001/hdb00005.00003/datavolume_0000.dat', 'DATA', Decimal('66.01953125'), Decimal('192.03125'), Decimal('126.01171875'), '   65.62')]----------------------------------------
Exception happened during processing of request from ('10.240.228.1', 57522)
2021-04-08 10:54:02,666 INFO shaptools.hdb_connector.connectors.base_connector executing sql query: SELECT md.host, md.usage_type, md.path, md.filesystem_type, TO_DECIMAL(md.total_device_size / 1024 / 1024, 10
, 2) total_device_size_mb, TO_DECIMAL(md.total_size / 1024 / 1024, 10, 2) total_size_mb, TO_DECIMAL(md.used_size / 1024 / 1024, 10, 2) total_used_size_mb, TO_DECIMAL(du.used_size / 1024 / 1024, 10, 2) used_size_mb FROM sys.m_disk_usage du, sys.m_disks md WHERE du.host = md.host AND du.usage_type = md.usage_type;Traceback (most recent call last):
  File "/usr/lib64/python3.6/socketserver.py", line 639, in process_request_thread
    self.finish_request(request, client_address)
  File "/usr/lib64/python3.6/socketserver.py", line 361, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib64/python3.6/socketserver.py", line 696, in __init__
    self.handle()
  File "/usr/lib64/python3.6/http/server.py", line 418, in handle
    self.handle_one_request()
  File "/usr/lib64/python3.6/http/server.py", line 406, in handle_one_request
    method()
  File "/usr/lib/python3.6/site-packages/prometheus_client/exposition.py", line 159, in do_GET
    self.wfile.write(output)
  File "/usr/lib64/python3.6/socketserver.py", line 775, in write
    self._sock.sendall(b)
BrokenPipeError: [Errno 32] Broken pipe
----------------------------------------
2021-04-08 10:54:02,697 INFO shaptools.hdb_connector.connectors.base_connector query records: [(datetime.datetime(2021, 4, 8, 10, 54, 1, 291479), 'txdevs4db01', 'eth0', Decimal('0'), Decimal('34.72'), Decimal(
'245.27'), Decimal('93.2'), Decimal('85.13'), Decimal('0'), Decimal('0')), (datetime.datetime(2021, 4, 8, 10, 54, 1, 291428), 'txdevs4db01', 'lo', Decimal('0'), Decimal('5991.59'), Decimal('5991.59'), Decimal('391.22'), Decimal('391.22'), Decimal('0'), Decimal('0'))]

hanadb_exporter.log

API: add a `query` keyword to the metrics JSON first parameter

Current behaviour:

  "SELECT MAX(TIMESTAMP) TIMESTAMP, HOST, MEASURED_ELEMENT_NAME CORE, SUM(MAP(CAPTION, 'User Time', TO_NUMBER(VALUE), 0)) USER_PCT, SUM(MAP(CAPTION, 'System Time', TO_NUMBER(VALUE), 0)) SYSTEM_PCT, SUM(MAP(CAPTION, 'Wait Time', TO_NUMBER(VALUE), 0)) WAITIO_PCT, SUM(MAP(CAPTION, 'Idle Time', 0, TO_NUMBER(VALUE))) BUSY_PCT, SUM(MAP(CAPTION, 'Idle Time', TO_NUMBER(VALUE), 0)) IDLE_PCT FROM sys.M_HOST_AGENT_METRICS WHERE MEASURED_ELEMENT_TYPE = 'Processor' GROUP BY HOST, MEASURED_ELEMENT_NAME;":
  {
    "metrics": [
      {

we should have instead:

"query":  "SELECT MAX(TIMESTAMP) TIMESTAMP, HOST, MEASURED_ELEMENT_NAME CORE, SUM(MAP(CAPTION, 'User Time', TO_NUMBER(VALUE), 0)) USER_PCT, SUM(MAP(CAPTION, 'System Time', TO_NUMBER(VALUE), 0)) SYSTEM_PCT, SUM(MAP(CAPTION, 'Wait Time', TO_NUMBER(VALUE), 0)) WAITIO_PCT, SUM(MAP(CAPTION, 'Idle Time', 0, TO_NUMBER(VALUE))) BUSY_PCT, SUM(MAP(CAPTION, 'Idle Time', TO_NUMBER(VALUE), 0)) IDLE_PCT FROM sys.M_HOST_AGENT_METRICS WHERE MEASURED_ELEMENT_TYPE = 'Processor' GROUP BY HOST, MEASURED_ELEMENT_NAME;":
  {
    "metrics": [
      {

Disk I/O metrics should not report values for multiple partitions on the same device

I think I/O metrics should only be reported per device, not per partition.

e.g.

hanadb_disk_io_latency_ms{disk="vda",host="stefanotorresi-hana01"} 0.58
hanadb_disk_io_latency_ms{disk="vda1",host="stefanotorresi-hana01"} 0.0
hanadb_disk_io_latency_ms{disk="vda2",host="stefanotorresi-hana01"} 0.0
hanadb_disk_io_latency_ms{disk="vda3",host="stefanotorresi-hana01"} 0.6
hanadb_disk_io_latency_ms{disk="vdb",host="stefanotorresi-hana01"} 0.28
hanadb_disk_io_latency_ms{disk="vdb1",host="stefanotorresi-hana01"} 0.28
hanadb_disk_io_latency_ms{disk="vdc",host="stefanotorresi-hana01"} 0.35

only the vda, vdb and vdc lines should be reported, not the single partitions vda1, vda2, vda3 and vdb1, whose values are often duplicated, wrong or slightly skewed.

Problem with updated version of hanadb exporter

Hi,

we are using quite an old installation of the hanadb exporter:

- python = 3.6.9
- hanadb exporter = 0.5.0
- shaptools = 0.3.2
- pyhdb = 0.3.4

Now we want to update the exporter and all other components to newer versions. We tried the following combination:

- python = 3.8.9 and 3.6.13
- hanadb exporter = 0.7.3
- shaptools = 0.3.11 and 0.3.8
- pyhdb = 0.3.4 and hdbcli = 2.7.26

Unfortunately, none of the combinations above worked for us. The exporter can connect to the database, but the queries do not return any values.

(This log is captured while using python 3.6.13, hanadb exporter 0.7.3, shaptools 0.3.8, hdbcli 2.7.26)

2021-04-19 15:18:36,560 INFO shaptools.hdb_connector.connectors.base_connector dbapi package loaded
2021-04-19 15:18:36,561 INFO hanadb_exporter.db_manager user/password combination will be used to connect to the databse
2021-04-19 15:18:36,561 INFO shaptools.hdb_connector.connectors.base_connector connecting to SAP HANA database at XXX
2021-04-19 15:18:37,271 INFO shaptools.hdb_connector.connectors.base_connector connected successfully
2021-04-19 15:18:37,272 INFO hanadb_exporter.prometheus_exporter Querying database metadata...
2021-04-19 15:18:37,272 INFO shaptools.hdb_connector.connectors.base_connector executing sql query: SELECT
(SELECT value
FROM M_SYSTEM_OVERVIEW
WHERE section = 'System'
AND name = 'Instance ID') SID,
(SELECT value
FROM M_SYSTEM_OVERVIEW
WHERE section = 'System'
AND name = 'Instance Number') INSNR,
m.database_name,
m.version
FROM m_database m;
2021-04-19 15:18:37,419 INFO shaptools.hdb_connector.connectors.base_connector query records: [('XXX', 'XXX', 'XXX', '2.00.043.00.1569560581')]
2021-04-19 15:18:37,419 INFO hanadb_exporter.prometheus_exporter Metadata retrieved. version: 2.00.043.00.1569560581, sid: XXX, insnr: XXX, database: XXX
2021-04-19 15:18:37,420 INFO shaptools.hdb_connector.connectors.base_connector executing sql query: XXX;
2021-04-19 15:18:38,843 INFO shaptools.hdb_connector.connectors.base_connector query records: []

The metrics config did not change between the old running exporter and the new one, so I have no clue why it is not working. Our old exporter is still running and returns the expected results, so the query itself works fine.

Do you have any recommendations regarding compatible versions? Or any other hints?

Any help would be appreciated.

Thanks in advance
Lukas

prometheus-hanadb_exporter installation through zypper does not install latest upstream version

Hello All,

I am currently trying to set up the HANA DB Exporter on a SLES 15 for SAP SP4 Instance running on AWS.

As per the documentation, I'd like to use the configuration option to have the exporter read database credentials from AWS Secrets Manager.

Unfortunately, the service gives me the following error:

2023-03-28 14:33:20,458 ERROR hanadb_exporter Traceback (most recent call last):
  File "/usr/bin/hanadb_exporter", line 9, in <module>
    main.run()
  File "/usr/lib/python3.6/site-packages/hanadb_exporter/main.py", line 134, in run
    timeout=config.get('timeout', 600))
  File "/usr/lib/python3.6/site-packages/hanadb_exporter/db_manager.py", line 105, in start
    kwargs.get('userkey', None), kwargs.get('user', ''), kwargs.get('password', ''))
  File "/usr/lib/python3.6/site-packages/hanadb_exporter/db_manager.py", line 83, in _get_connection_data
    'Provided user data is not valid. userkey or user/password pair must be provided')
ValueError: Provided user data is not valid. userkey or user/password pair must be provided

I have checked the main.py file located at /usr/lib/python3.6/site-packages/hanadb_exporter/main.py and it looks like there's a discrepancy between what I have on the instance and the file here: https://github.com/SUSE/hanadb_exporter/blob/master/hanadb_exporter/main.py
=> It looks like the files are not up to date when installing the Prometheus HANA DB exporter from zypper.

Below is the content of the folder /usr/lib/python3.6/site-packages/hanadb_exporter:

total 36
-rw-r--r-- 1 root root  158 Nov  2  2020 __init__.py
drwxr-xr-x 2 root root  208 Mar 28 14:32 __pycache__
-rw-r--r-- 1 root root 5355 Nov  2  2020 db_manager.py
-rw-r--r-- 1 root root 4624 Nov  2  2020 main.py
-rw-r--r-- 1 root root 7273 Nov  2  2020 prometheus_exporter.py
-rw-r--r-- 1 root root 2811 Nov  2  2020 prometheus_metrics.py
-rw-r--r-- 1 root root 1807 Nov  2  2020 utils.py

=> It is missing the secrets_manager.py file, which reinforces my suspicion that the installation from zypper (RPM) does not include the latest available upstream version.

FYI, this is the output I am getting when running zypper info:

zypper info prometheus-hanadb_exporter
Refreshing service 'Basesystem_Module_x86_64'.
Refreshing service 'Containers_Module_x86_64'.
Refreshing service 'Desktop_Applications_Module_x86_64'.
Refreshing service 'Development_Tools_Module_x86_64'.
Refreshing service 'Legacy_Module_x86_64'.
Refreshing service 'Public_Cloud_Module_x86_64'.
Refreshing service 'Python_3_Module_x86_64'.
Refreshing service 'SAP_Applications_Module_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_High_Availability_Extension_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Live_Patching_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Server_for_SAP_Applications_x86_64'.
Refreshing service 'Server_Applications_Module_x86_64'.
Refreshing service 'Web_and_Scripting_Module_x86_64'.
Loading repository data...
Reading installed packages...
Information for package prometheus-hanadb_exporter:
---------------------------------------------------
Repository     : SLE-Module-SAP-Applications15-SP4-Pool
Name           : prometheus-hanadb_exporter
Version        : 0.7.3+git.1604318097.c2b074f-3.6.1
Arch           : noarch
Vendor         : SUSE LLC <https://www.suse.com/>
Support Level  : Level 3
Installed Size : 124.3 KiB
Installed      : Yes
Status         : up-to-date
Source package : prometheus-hanadb_exporter-0.7.3+git.1604318097.c2b074f-3.6.1.src
Upstream URL   : https://github.com/SUSE/hanadb_exporter
Summary        : SAP HANA database metrics exporter
Description    :
    SAP HANA database metrics exporter 

My config file is as follows:

{
    "listen_address": "0.0.0.0",
    "exposition_port": 9668,
    "multi_tenant": true,
    "timeout": 30,
    "hana": {
        "host": "localhost",
        "port": 32213,
        "aws_secret_name": "NAME_OF_THE_SECRET",
        "ssl": true,
        "ssl_validate_cert": false
    },
    "logging": {
        "config_file": "/etc/hanadb_exporter/logging_config.ini"
    }
}

Is there anything I am missing?

Is there a way to have zypper install the latest upstream version?

Thanks in advance.

Bests.

Does not start due to timeout type

Hi, have you encountered the same issue? If not, do you know how to solve mine?

The exporter doesn't start because of a type mismatch in the timeout value (string vs. float):

2022-04-13 12:18:05,000 ERROR hanadb_exporter Traceback (most recent call last):
  File "/usr/bin/hanadb_exporter", line 9, in <module>
    main.run()
  File "/usr/lib/python3.6/site-packages/hanadb_exporter/main.py", line 134, in run
    timeout=config.get('timeout', 600))
  File "/usr/lib/python3.6/site-packages/hanadb_exporter/db_manager.py", line 107, in start
    timeout = current_time + kwargs.get('timeout', 600)
TypeError: unsupported operand type(s) for +: 'float' and 'str'

HANA DB Exporter in Kubernetes cluster

Hi,
we have no SUSE Linux system; we are using RedHat Linux. Therefore, we want to install SUSE in a Kubernetes cluster. You recommend installing the exporter on the same machine where the HANA DB is running, but this is not possible.
We tried to install the exporter on RedHat, but we get some errors.
Is the installation of SUSE and the exporter in a Kubernetes cluster also a recommended way?
We want to monitor 3 SAP HANA DBs.
How do I have to set up the config.json file?

best regards

Conny

Not displaying some Metrics for a Single Container System

Hello everyone,
We have two HANA databases, HD1 and HD2.
The first one (HD1) is a single-container system, while HD2 is multi-tenant.
The hanadb_exporter runs fine for both systems, with "multi_tenant": false for HD1 and "multi_tenant": true for HD2, and the requisite addresses and ports in the configuration files.
However, the exported metrics are quite different in the two cases.
Some metrics may depend on multi-tenancy, but there are some that should generally be shown by both. These are mostly related to CPU, network and disk statistics.

The following metrics are shown by HD2 (which is multi-tenant) but not by HD1 (which is a single-container system):

hanadb_cpu_user_percent
hanadb_cpu_system_percent
hanadb_cpu_waitio_percent
hanadb_cpu_busy_percent
hanadb_cpu_idle_percent
hanadb_network_collisions_per_seconds
hanadb_network_receive_rate_kb_per_seconds
hanadb_network_transmission_rate_kb_per_seconds
hanadb_network_receive_requests_per_seconds
hanadb_network_transmission_rate_requests_per_seconds
hanadb_network_receive_rate_errors_per_seconds
hanadb_network_transmission_rate_errors_per_seconds
hanadb_disk_total_device_size_mb
hanadb_disk_total_size_mb
hanadb_disk_total_used_size_mb
hanadb_disk_used_size_mb
hanadb_disk_io_queue_length_requests
hanadb_disk_io_latency_ms
hanadb_disk_io_service_time_ms
hanadb_disk_io_wait_time_ms
hanadb_disk_io_requests_per_second
hanadb_disk_io_throughput_kb_second

These metrics seem very important to get a correct picture of system load and performance. Please provide some insight into the possible reasons why the single-container system HD1 is not able to extract these metrics. Any help and input is highly appreciated.
Thanks in advance

connection failed: (4321, 'only secure connections are allowed')

Hi,

we are facing an issue with our HANA databases where SSL connections are enforced for security reasons with the following configuration parameter:

global.ini / communication / sslenforce = true

hanadb_exporter throws the following error message:

hanadb_exporter.db_manager the connection to the system database failed. error message: connection failed: (4321, 'only secure connections are allowed')
hanadb_exporter Traceback (most recent call last):
  File "/usr/bin/hanadb_exporter", line 9, in <module>
    main.run()
  File "/usr/lib/python3.4/site-packages/hanadb_exporter/main.py", line 134, in run
    timeout=config.get('timeout', 600))
  File "/usr/lib/python3.4/site-packages/hanadb_exporter/db_manager.py", line 127, in start
    'timeout reached connecting the System database')
ConnectionError: timeout reached connecting the System database

Is there a way to connect to the database using SSL?

help

Check whether the packages are available and the correct versions are installed:
pip install -r requirements.txt
You could add this command to the documentation.

TypeError: QueryResult object does not support indexing

Is this also supported for HANA 2.0 databases? I get the following error:

INFO:shaptools.hdb_connector.connectors.base_connector:pyhdb package loaded
INFO:shaptools.hdb_connector.connectors.base_connector:connecting to SAP HANA database at XXXXXXXXX:30013
INFO:shaptools.hdb_connector.connectors.base_connector:connected successfully
INFO:hanadb_exporter:prometheus exporter selected
INFO:shaptools.hdb_connector.connectors.base_connector:executing sql query: SELECT ROUND(SUM(total_memory_used_size/1024/1024),2) used_memory_mb FROM m_service_memory;
INFO:shaptools.hdb_connector.connectors.base_connector:query records: [(Decimal('10907.01'),)]
Traceback (most recent call last):
  File "app.py", line 56, in <module>
    main()
  File "app.py", line 51, in main
    REGISTRY.register(collector)
  File "/srv/hanadb_exporter/virt/local/lib/python2.7/site-packages/prometheus_client/registry.py", line 24, in register
    names = self._get_names(collector)
  File "/srv/hanadb_exporter/virt/local/lib/python2.7/site-packages/prometheus_client/registry.py", line 64, in _get_names
    for metric in desc_func():
  File "/srv/hanadb_exporter/hanadb_exporter/exporters/prometheus_exporter.py", line 77, in collect
    metric_obj = self._execute(metric)
  File "/srv/hanadb_exporter/hanadb_exporter/exporters/prometheus_exporter.py", line 50, in _execute
    metric_obj = self._manage_gauge(metric, value)
  File "/srv/hanadb_exporter/hanadb_exporter/exporters/prometheus_exporter.py", line 65, in _manage_gauge
    metric_obj.add_metric([], str(value[0][0]))
TypeError: 'QueryResult' object does not support indexing

connection failed: (10, 'authentication failed')

Running the command produces the following error:
hdbcli.dbapi.Error: (10, 'authentication failed')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/hanadb_exporter", line 9, in <module>
    main.run()
  File "/usr/lib/python3.6/site-packages/hanadb_exporter/main.py", line 134, in run
    timeout=config.get('timeout', 600))
  File "/usr/lib/python3.6/site-packages/hanadb_exporter/db_manager.py", line 130, in start
    self._connect_tenants(host, connection_data)
  File "/usr/lib/python3.6/site-packages/hanadb_exporter/db_manager.py", line 64, in _connect_tenants
    host, tenant_port, **connection_data)
  File "/usr/lib/python3.6/site-packages/shaptools/hdb_connector/connectors/dbapi_connector.py", line 58, in connect
    raise base_connector.ConnectionError('connection failed: {}'.format(err))
shaptools.hdb_connector.connectors.base_connector.ConnectionError: connection failed: (10, 'authentication failed')

Different Paths in Single Tenant vs Multi-Tenant

I've seen this happen on multiple systems now, so it must be a bug.

When the system has multi_tenant set to false, the metrics path to be scraped is /metrics. However, as soon as multi_tenant is set to true, the path changes to just /.

This path change is unexpected, and so far I have not been able to find it documented anywhere.

So far, the HANA DB exporter has been installed from the SUSE Open Build Service, on systems ranging from SLES 12.5 to 15.3, all installed from mid-July to early August.

Add a query ID/name to manage queries

Right now there is no way to target a specific query by ID/name. This would be handy in a lot of cases, including showing the query ID instead of the whole query when logging, for example.

hanadb_exporter cannot start

#hanadb_exporter --identifier config
Traceback (most recent call last):
  File "/usr/bin/hanadb_exporter", line 9, in <module>
    main.run()
  File "/usr/lib/python3.4/site-packages/hanadb_exporter/main.py", line 110, in run
    config = parse_config(config_file)
  File "/usr/lib/python3.4/site-packages/hanadb_exporter/main.py", line 42, in parse_config
    json_data = json.load(f_ptr)
  File "/usr/lib64/python3.4/json/__init__.py", line 268, in load
    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
  File "/usr/lib64/python3.4/json/__init__.py", line 318, in loads
    return _default_decoder.decode(s)
  File "/usr/lib64/python3.4/json/decoder.py", line 343, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib64/python3.4/json/decoder.py", line 359, in raw_decode
    obj, end = self.scan_once(s, idx)
ValueError: Expecting property name enclosed in double quotes: line 13 column 3 (char 264)

No provider of 'prometheus-hanadb_exporter' found.

I am trying to install this exporter with zypper on an AWS EC2 machine running SLES. However, it fails with the message No provider of 'prometheus-hanadb_exporter' found. What is missing?

zypper install prometheus-hanadb_exporter
Refreshing service 'Basesystem_Module_x86_64'.
Refreshing service 'Containers_Module_x86_64'.
Refreshing service 'Desktop_Applications_Module_x86_64'.
Refreshing service 'Development_Tools_Module_x86_64'.
Refreshing service 'Legacy_Module_x86_64'.
Refreshing service 'Public_Cloud_Module_x86_64'.
Refreshing service 'SUSE_Cloud_Application_Platform_Tools_Module_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Server_x86_64'.
Refreshing service 'Server_Applications_Module_x86_64'.
Refreshing service 'Web_and_Scripting_Module_x86_64'.
Loading repository data...
Reading installed packages...
'prometheus-hanadb_exporter' not found in package names. Trying capabilities.

Add version to the API

We should add a version parameter to the JSON metrics API.

This will give us more safety. It might also be worth researching how we version the API going forward; since it isn't a package, IMHO we shouldn't use 1.1 etc.

Question: GRAFANA Dashboards for Metrics?

Hi,

has someone already built Grafana dashboards on the exposed metrics and is able to share them? Would be great to also make the visualisation/alerting part available.

Best,

Disk usage metric should not report lines for missing directories

hanadb_disk_*_size_mb metrics all report a line for a non-existing directory for the ROOTKEY_BACKUP usage type:

e.g.

hanadb_disk_total_size_mb{filesystem_type="<ERROR>",host="stefanotorresi-hana01",instance="192.168.123.15:8001",job="hanadb",path="/usr/sap/PRD/HDB00/backup/sec/",usage_type="ROOTKEY_BACKUP"}

Disk statistics appear to not handle multiple devices

the hanadb_disk_*_size_mb metrics group doesn't seem to report correct metrics in relation to multiple filesystems.

e.g. in a node with the following filesystems reported by df -BM -T:

Filesystem    Type     1M-blocks    Used Available Use% Mounted on
/dev/vda3     ext4        70021M   2120M    64302M   4% /
/dev/vdb1     xfs         65504M  23573M    41931M  36% /hana

the hanadb_disk_total_size_mb metric is reported as follows:

# HELP hanadb_disk_total_size_mb Specifies the volume size in MB. It will be repeated if the volume is shared between usages_types.
# TYPE hanadb_disk_total_size_mb gauge
hanadb_disk_total_size_mb{filesystem_type="xfs",host="stefanotorresi-hana01",path="/hana/data/PRD/",usage_type="DATA"} 65503.0
hanadb_disk_total_size_mb{filesystem_type="xfs",host="stefanotorresi-hana01",path="/usr/sap/PRD/HDB00/backup/data/",usage_type="DATA_BACKUP"} 65503.0
hanadb_disk_total_size_mb{filesystem_type="xfs",host="stefanotorresi-hana01",path="/hana/log/PRD/",usage_type="LOG"} 65503.0
hanadb_disk_total_size_mb{filesystem_type="<ERROR>",host="stefanotorresi-hana01",path="/usr/sap/PRD/HDB00/backup/sec/",usage_type="ROOTKEY_BACKUP"} 0.0
hanadb_disk_total_size_mb{filesystem_type="xfs",host="stefanotorresi-hana01",path="/usr/sap/PRD/HDB00/stefanotorresi-hana01/",usage_type="TRACE"} 65503.0

Note that the entries whose path starts with /usr should not report 65503 but about 70021, which is the correct size of the / filesystem.
Also, the filesystem_type label does not correspond.
