peimanja / artifactory_exporter

JFrog Artifactory Prometheus Exporter written in Go

License: Apache License 2.0

Go 98.39% Dockerfile 1.61%
artifactory artifactory-exporter go golang jfrog-artifactory metrics monitoring prometheus prometheus-exporter

artifactory_exporter's People

Contributors

alex-tsyganok, cortesem, davidshadix, den-patrakeev, dependabot[bot], inoahnothing, julioz, kacperperschke, lbpdt, martinm82, mjtrangoni, mstansberry, peimanja, rohitggarg, rohitggarg-qb, schoofsc, yonahd


artifactory_exporter's Issues

Docker image is not compatible with userns

Overview of the Issue

When the userns-remap feature is set in /etc/docker/daemon.json, the container cannot be run. CircleCI has a good write-up on what may be the underlying issue here: https://circleci.com/docs/2.0/high-uid-error/

Reproduction Steps

/etc/docker/daemon.json:

{
    "userns-remap": "artifactory"
}

/etc/subgid:

artifactory:100000:10000

/etc/subuid:

artifactory:100000:10000
# sysctl user.max_user_namespaces
user.max_user_namespaces = 10000

Operating system and Environment details

# cat /etc/*release
CentOS Linux release 7.6.1810
<TRUNCATED>
# uname -r
3.10.0-957.21.3.el7.x86_64

Logs

docker: failed to register layer: ApplyLayer exit status 1 stdout:  stderr: Container ID 65532 cannot be mapped to a host ID.
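The error indicates the image contains a file owned by UID 65532 (a common "nonroot" UID in distroless-style images), which falls outside the 10000-wide subordinate range configured above. A possible workaround, assuming you control the host's subordinate ID ranges, is to widen them to at least 65536:

```
# /etc/subuid and /etc/subgid (sketch: widen the range so UID 65532 can be mapped)
artifactory:100000:65536
```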

Support debian repository mismatch checking at the component level

Feature Description

Export metrics specific to mismatched files in the Debian repositories hosted by Artifactory.

Use Case(s)

We have seen the Debian repositories hosted on Artifactory break sporadically:

E: Failed to fetch https://redacted/repo/component/Packages.bz2  File has unexpected size (2672 != 2222). Mirror sync in progress? [IP: redacted 443]
   Hashes of expected file:
    - Filesize:2222 [weak]
    - SHA256:3e1939f3232ee25e9c7e95992fc6ba932a78559afae3cf720bce37d5fd5c983f
    - SHA1:75c359d9f9f9a76a0092ce28178eca52f6f4ae43 [weak]
    - MD5Sum:25fb127cf8d902db1be92995f107b8e3 [weak]
   Release file created at: Tue, 14 Jun 2022 20:20:59 +0000
E: Failed to fetch redacted/repo/component/Packages.bz2  
E: Failed to fetch redacted/repo/component/Packages.bz2  

Would you be open to a PR that does the following:

  • Iterates over every Debian repository on the target Artifactory server
  • Iterates through each component, architecture, and package format
  • Ensures that the InRelease file and the files it points at are not mismatched
  • Exports a series of metrics, something like:
# for mismatch
artifactory_debian_inrelease_mismatch{repo="$repo", component="$component"} 1
# for not-mismatched
artifactory_debian_inrelease_mismatch{repo="$repo", component="$component"} 0

# where component would be something like "main/binary-amd64/Packages"
# and repo would be something like "focal"

Let me know! If I have the time/energy, I could perhaps contribute a PR like this.
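For what it's worth, the core comparison could be a small pure function whose result maps directly onto the proposed gauge. This is only a sketch, assuming the exporter would fetch both the InRelease metadata and the referenced Packages file; the function name is hypothetical:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// packagesMismatch returns 1 (mismatch) when the size or SHA256 recorded in
// the InRelease file differs from the Packages file actually served, else 0.
// The return value is the proposed gauge value for
// artifactory_debian_inrelease_mismatch{repo=..., component=...}.
func packagesMismatch(body []byte, expectedSize int64, expectedSHA256 string) float64 {
	sum := sha256.Sum256(body)
	if int64(len(body)) != expectedSize ||
		hex.EncodeToString(sum[:]) != strings.ToLower(expectedSHA256) {
		return 1
	}
	return 0
}

func main() {
	body := []byte("Package: example\n")
	sum := sha256.Sum256(body)
	mismatch := packagesMismatch(body, int64(len(body)), hex.EncodeToString(sum[:]))
	fmt.Println(mismatch) // 0: metadata matches the file
}
```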

Add version output for artifactory_exporter binary

Feature Description

It would be very nice to have a way to request the version from the artifactory_exporter binary with a flag like

artifactory_exporter -v
artifactory_exporter --version

or at least add the version to the output of the existing help:

artifactory_exporter --help

Use Case(s)

When I want to check which version of artifactory_exporter I am running (for bug reporting, changelog reading, upgrading), I cannot do so.

What I've tried

$ /usr/bin/artifactory_exporter -v
artifactory_exporter: error: unknown short flag '-v', try --help

$ /usr/bin/artifactory_exporter -V
artifactory_exporter: error: unknown short flag '-V', try --help

$ /usr/bin/artifactory_exporter --version
artifactory_exporter: error: unknown long flag '--version', try --help

$ /usr/bin/artifactory_exporter version
artifactory_exporter: error: unexpected version, try --help
$ /usr/bin/artifactory_exporter --help
usage: artifactory_exporter [<flags>]

Flags:
  -h, --help                    Show context-sensitive help (also try --help-long and --help-man).
      --web.listen-address=":9531"
                                Address to listen on for web interface and telemetry.
      --web.telemetry-path="/metrics"
                                Path under which to expose metrics.
      --artifactory.scrape-uri="http://localhost:8081/artifactory"
                                URI on which to scrape JFrog Artifactory.
      --artifactory.ssl-verify  Flag that enables SSL certificate verification for the scrape URI
      --artifactory.timeout=5s  Timeout for trying to get stats from JFrog Artifactory.
      --log.level=info          Only log messages with the given severity or above. One of: [debug, info, warn, error]
      --log.format=logfmt       Output format of log messages. One of: [logfmt, json]
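Judging by the error messages above, flag parsing is done with kingpin, where the common Prometheus-exporter pattern is kingpin.Version(version.Print("artifactory_exporter")) from github.com/prometheus/common. A dependency-free sketch of the same idea using only the standard library:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// Version is typically injected at build time, e.g.
//   go build -ldflags "-X main.Version=v1.12.0"
var Version = "dev"

func versionString() string {
	return fmt.Sprintf("artifactory_exporter, version %s", Version)
}

func main() {
	showVersion := flag.Bool("version", false, "print version and exit")
	flag.Parse()
	if *showVersion {
		fmt.Println(versionString())
		os.Exit(0)
	}
	// ... start the exporter ...
}
```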

`/api/v1/metrics`

If the Artifactory system.yaml has metrics enabled, this endpoint provides additional metrics for Prometheus. One could scrape those metrics directly, but the endpoint is still behind basic auth, so I am thinking this exporter might be able to reverse-proxy it. WDYT? Thinking about creating a PR.

Artifact create/download per repo metrics

Would it be possible to include metrics for the number of artifacts created and downloaded per repo?
This would round out the metrics requirements for a project we are working on using your exporter.
For example (from https://github.com/petrjurasek/artifactory-prometheus-exporter)

# HELP artifactory_artifacts_downloaded Downloaded artifacts
# TYPE artifactory_artifacts_downloaded gauge
artifactory_artifacts_downloaded{key="example-repo-local",minutes_ago="1"} 0.0
artifactory_artifacts_downloaded{key="example-repo-local",minutes_ago="60"} 0.0
artifactory_artifacts_downloaded{key="example-repo-local",minutes_ago="5"} 0.0
# HELP artifactory_artifacts_created Created artifacts
# TYPE artifactory_artifacts_created gauge
artifactory_artifacts_created{key="example-repo-local",minutes_ago="1"} 0.0
artifactory_artifacts_created{key="example-repo-local",minutes_ago="60"} 0.0
artifactory_artifacts_created{key="example-repo-local",minutes_ago="5"} 0.0

Log level bug in v1.13.0

Hello, we're facing some issues with the logs since v1.13.0.
Even when log.level is set to info, it prints a large number of debug logs anyway.

Enable artifactory_artifacts* metrics

Hi
I'm using Artifactory Enterprise 7.21.12 and want to use your exporter for Artifactory monitoring.
I run the exporter from the binary and get all metrics except the artifactory_artifacts* metrics. How do I enable them?

/api/replications endpoint doesn't exist

Looks like you are calling /api/replications, which, according to the documentation and my testing on v6.4 and v6.8 (Enterprise license), doesn't actually exist in this form.

Error messages in the logs:
level=debug ts=2020-07-01T15:08:44.638Z caller=replication.go:29 msg="Fetching replications stats"
level=debug ts=2020-07-01T15:08:44.638Z caller=utils.go:24 msg="Fetching http" path=http://localhost:8081/artifactory/api/replications
level=error ts=2020-07-01T15:08:44.649Z caller=utils.go:53 msg="There was an error making API call" endpoint=http://localhost:8081/artifactory/api/replications err="Not Found" status=404
level=error ts=2020-07-01T15:08:44.649Z caller=replication.go:14 msg="Couldn't scrape Artifactory when fetching replications" err="Not Found"

and then, because of the `return 0` in the error catch in collector.go, none of the other metrics are scraped:

err = e.exportReplications(ch)

There are:

  1. /api/replications/{repoKey} - https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-GetRepositoryReplicationConfiguration
  2. /api/system/replications - https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-GetRemoteRepositoriesRegisteredforReplication

Based on the rest of your code I think you mean the first one, to get all the replication configs for each repo, but to do that you will need to fetch the list of repos and iterate over them.

As a temporary fix I've removed that block and it is now working without error.
barney-garrett@43de3e3

Edit: using the binary version.

Noisy warning message when license expiry is `"N/R"`

The code in

level.Warn(e.logger).Log("msg", "Couldn't parse Artifactory license ValidThrough", "err", err)
will issue a warning message every time it fails to parse the expiry date, example log message: level=warn ts=2023-04-18T10:27:38.129Z caller=system.go:41 msg="Couldn't parse Artifactory license ValidThrough" err="parsing time \"N/R\" as \"Jan 2, 2006\": cannot parse \"N/R\" as \"Jan\""

How can this be improved to avoid spamming the logs with a non-useful message?

This is how the metric looks in Prometheus: artifactory_system_license{expires="N/R", licensed_to="Artifactory Online Dedicated", node_id="foo", type="enterprise"}

Before parsing, it could be checked whether validThrough is "N/R" and, in that case, the current time could be used without emitting a warning. Should I raise a PR for this?

Helm existing secret failure

In the Helm values, if existingSecret: true is set, the rendered manifest is broken:

envFrom:
  - secretRef:
      name: %!s(bool=true)

Error: YAML parse error on prometheus-artifactory-exporter/templates/deployment.yaml: error converting YAML to JSON: yaml: line 59: found character that cannot start any token
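The rendered name: %!s(bool=true) suggests the chart pipes the value of existingSecret straight into secretRef.name, i.e. it expects the secret's name as a string rather than a boolean. A sketch of the corrected values (the exact key path depends on the chart version, so verify against its own values.yaml):

```yaml
# values.yaml (sketch)
existingSecret: artifactory-exporter-creds  # the Secret's name, not `true`
```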

Trouble connecting to artifactory container

I've got the following in my environment file:

ARTI_USERNAME=xxx
ARTI_PASSWORD=xxx
ARTI_SSL_VERIFY=false
ARTI_SCRAPE_URI="http://artifactory:8081/artifactory"
ARTI_TIMEOUT="15s"

and these two containers in my docker-compose file:

  artifactory:
    image: docker.bintray.io/jfrog/artifactory-pro:7.xx.xx
    restart: unless-stopped
    environment:
      EXTRA_JAVA_OPTIONS: "
        -Xms512m
        -Xmx2g
        -Xss256k
        -XX:+UseG1GC"
        #-Xmx2g
    ports:
      - "8081:8081"
      - "8082:8082"
    volumes:
      - artifactory_home:/var/opt/jfrog/artifactory
      - /docker/old_artifactory:/home/artifactory/old_artifactory
      - backups:/home/artifactory/backups

  artifactory_exporter:
    image: peimanja/artifactory_exporter:latest
    env_file:
      - VARS_ARTIFACTORY
    ports:
      - "9531:9531"
    command:
      - "--log.level=debug"
    volumes:
      - /etc/ca-certificates:/etc/ca-certificates

artifactory_exporter can connect to Prometheus but is unable to scrape JFrog Artifactory, with the following error message:

level=debug ts=2023-02-05T18:17:59.159Z caller=system.go:77 msg="Fetching license stats"
level=debug ts=2023-02-05T18:17:59.159Z caller=utils.go:44 msg="Fetching http" path=http://artifactory:8081/artifactory/api/system/license
level=error ts=2023-02-05T18:17:59.172Z caller=utils.go:81 msg="There was an error making API call" endpoint=http://artifactory:8081/artifactory/api/system/license err="[map[message:Forbidden status:403]]" status=(MISSING)

Can anyone give me a clue as to what is going on here? Is it an issue with the certs? Am I directing the exporter inappropriately?

Response time from metrics page takes forever

Overview of the Issue

Immediately after a restart, a curl localhost:9531/metrics response takes 25 seconds.
But with each passing minute the response time roughly doubles, ending up in an endless wait after 30 minutes or so.

[PROD] [22:10:49] root@zi-repo:~
# time curl localhost:9531/metrics
---
real	0m25.396s
user	0m0.005s
sys	0m0.010s


[PROD] [22:11:23] root@zi-repo:~
# time curl localhost:9531/metrics
---
real	0m45.626s
user	0m0.003s
sys	0m0.010s

[PROD] [22:12:44] root@zi-repo:~
# time curl localhost:9531/metrics
---
real	1m21.190s
user	0m0.005s
sys	0m0.010s

[PROD] [22:15:34] root@zi-repo:~
# time curl localhost:9531/metrics
---
real	2m28.466s
user	0m0.003s
sys	0m0.016s

[PROD] [22:18:29] root@zi-repo:~
# time curl localhost:9531/metrics
real	3m38.536s
user	0m0.009s
sys	0m0.015s

[PROD] [22:28:21] root@zi-repo:~
# time curl localhost:9531/metrics
real	6m58.838s
user	0m0.010s
sys	0m0.020s

Operating system and Environment details

Centos 7

Logs

[PROD] [22:10:41] root@zi-repo:~
# service artifactory_exporter.service status
Redirecting to /bin/systemctl status artifactory_exporter.service
โ— artifactory_exporter.service - Artifactory Exporter
   Loaded: loaded (/etc/systemd/system/artifactory_exporter.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2022-02-16 22:10:41 UTC; 3s ago
 Main PID: 26577 (artifactory_exp)
   CGroup: /system.slice/artifactory_exporter.service
           โ””โ”€26577 /usr/bin/artifactory_exporter --artifactory.scrape-uri=http://localhost:8081/artifactory --log.level=debug --artifactory.timeout=15s

Feb 16 22:10:41 zi-repo systemd[1]: Started Artifactory Exporter.
Feb 16 22:10:41 zi-repo artifactory_exporter[26577]: level=info ts=2022-02-16T22:10:41.653Z caller=artifactory_exporter.go:30 msg="Starting artifactory_exporter" version="(version=v1.9.1, branch=refs/tags/v1.9.1, revision=07d2a646a...c7e3724ebf3db87)"
Feb 16 22:10:41 zi-repo artifactory_exporter[26577]: level=info ts=2022-02-16T22:10:41.654Z caller=artifactory_exporter.go:31 msg="Build context" context="(go=go1.14, user=, date=2021-02-01T10:00:53Z)"
Feb 16 22:10:41 zi-repo artifactory_exporter[26577]: level=info ts=2022-02-16T22:10:41.654Z caller=artifactory_exporter.go:32 msg="Listening on address" address=:9531

No errors in logs

Feb 16 22:20:29 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:28.791Z caller=system.go:66 msg="Fetching license stats"
Feb 16 22:20:29 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:28.791Z caller=utils.go:21 msg="Fetching http" path=http://localhost:8081/artifactory/api/system/license
Feb 16 22:20:29 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:28.793Z caller=security.go:23 msg="Fetching users stats"
Feb 16 22:20:29 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:28.793Z caller=utils.go:21 msg="Fetching http" path=http://localhost:8081/artifactory/api/security/users
Feb 16 22:20:31 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:31.816Z caller=security.go:20 msg="Counting users"
Feb 16 22:20:31 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:31.817Z caller=security.go:50 msg="Registering metric" metric=users realm=ldap value=1188
Feb 16 22:20:31 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:31.817Z caller=security.go:50 msg="Registering metric" metric=users realm=internal value=27
Feb 16 22:20:31 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:31.817Z caller=security.go:50 msg="Registering metric" metric=users realm=saml value=2
Feb 16 22:20:31 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:31.817Z caller=security.go:47 msg="Fetching groups stats"
Feb 16 22:20:31 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:31.817Z caller=utils.go:21 msg="Fetching http" path=http://localhost:8081/artifactory/api/security/groups
Feb 16 22:20:31 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:31.818Z caller=security.go:69 msg="Registering metric" metric=groups value=25
Feb 16 22:20:31 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:31.818Z caller=replication.go:29 msg="Fetching replications stats"
Feb 16 22:20:31 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:31.818Z caller=utils.go:21 msg="Fetching http" path=http://localhost:8081/artifactory/api/replications
Feb 16 22:20:31 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:31.820Z caller=replication.go:31 msg="Registering metric" metric=enabled repo=libs-release-local-recovered type=pull url= cron="0 0 0 * * ?" value=1
Feb 16 22:20:31 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:31.820Z caller=replication.go:31 msg="Registering metric" metric=enabled repo=old-manual-import type=pull url= cron="1 * * * * ?" value=0
Feb 16 22:20:31 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:31.820Z caller=replication.go:31 msg="Registering metric" metric=enabled repo=old-repo type=pull url= cron="0 * * * * ?" value=0
Feb 16 22:20:31 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:31.820Z caller=system.go:17 msg="Fetching health stats"
Feb 16 22:20:31 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:31.820Z caller=utils.go:21 msg="Fetching http" path=http://localhost:8081/artifactory/api/system/ping
Feb 16 22:20:31 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:31.823Z caller=system.go:24 msg="System ping returned OK"
Feb 16 22:20:31 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:31.823Z caller=system.go:41 msg="Fetching build stats"
Feb 16 22:20:31 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:31.823Z caller=utils.go:21 msg="Fetching http" path=http://localhost:8081/artifactory/api/system/version
Feb 16 22:20:31 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:31.824Z caller=storageinfo.go:45 msg="Fetching storage info stats"
Feb 16 22:20:31 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:20:31.824Z caller=utils.go:21 msg="Fetching http" path=http://localhost:8081/artifactory/api/storageinfo

regular "Registering metric" and "Converting size to bytes" messages

Feb 16 22:26:12 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:26:12.469Z caller=utils.go:30 msg="Converting size to bytes"
Feb 16 22:26:12 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:26:12.469Z caller=utils.go:13 msg="Removing other characters to extract number from string"
Feb 16 22:26:12 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:26:12.469Z caller=utils.go:24 msg="Successfully converted string to number" string="12.38 KB" number=12.38
Feb 16 22:26:12 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:26:12.469Z caller=utils.go:49 msg="Successfully converted string to bytes" string="12.38 KB" value=12677.12
Feb 16 22:26:12 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:26:12.469Z caller=utils.go:13 msg="Removing other characters to extract number from string"
Feb 16 22:26:12 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:26:12.469Z caller=utils.go:24 msg="Successfully converted string to number" string=0% number=0
Feb 16 22:26:12 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:26:12.469Z caller=storage.go:116 msg="Registering metric" metric=repoUsed repo=rpm-test type=local package_type=npm value=9.21698304e+06
Feb 16 22:26:12 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:26:12.469Z caller=storage.go:119 msg="Registering metric" metric=repoFolders repo=rpm-test type=local package_type=npm value=0
Feb 16 22:26:12 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:26:12.469Z caller=storage.go:122 msg="Registering metric" metric=repoItems repo=rpm-test type=local package_type=npm value=1
Feb 16 22:26:12 zi-repo artifactory_exporter: level=debug ts=2022-02-16T22:26:12.469Z caller=storage.go:125 msg="Registering metric" metric=repoFiles repo=rpm-test type=local package_type=npm value=1

System is not busy

top - 21:52:57 up 19 days, 12:00,  2 users,  load average: 8.04, 7.57, 7.93
Tasks: 305 total,   1 running, 304 sleeping,   0 stopped,   0 zombie
%Cpu(s): 10.9 us,  3.7 sy,  0.0 ni, 83.9 id,  0.0 wa,  0.0 hi,  1.5 si,  0.0 st
KiB Mem : 36899112 total,   250124 free, 13836920 used, 22812068 buff/cache
KiB Swap:  1257468 total,  1246708 free,    10760 used. 22645980 avail Mem

After several days there are "too many open files" errors in the logs. At this point lsof -p shows about 1000 open files.

Feb 16 15:20:41 zi-repo artifactory_exporter: 2022/02/16 15:20:41 http: Accept error: accept tcp 0.0.0.0:9531: accept4: too many open files; retrying in 1s
Feb 16 15:20:42 zi-repo artifactory_exporter: 2022/02/16 15:20:42 http: Accept error: accept tcp 0.0.0.0:9531: accept4: too many open files; retrying in 5ms
Feb 16 15:20:42 zi-repo artifactory_exporter: 2022/02/16 15:20:42 http: Accept error: accept tcp 0.0.0.0:9531: accept4: too many open files; retrying in 10ms
Feb 16 15:20:42 zi-repo artifactory_exporter: 2022/02/16 15:20:42 http: Accept error: accept tcp 0.0.0.0:9531: accept4: too many open files; retrying in 20ms
Feb 16 15:20:42 zi-repo artifactory_exporter: 2022/02/16 15:20:42 http: Accept error: accept tcp 0.0.0.0:9531: accept4: too many open files; retrying in 40ms
Feb 16 15:20:42 zi-repo artifactory_exporter: 2022/02/16 15:20:42 http: Accept error: accept tcp 0.0.0.0:9531: accept4: too many open files; retrying in 80ms
Feb 16 15:20:43 zi-repo artifactory_exporter: 2022/02/16 15:20:43 http: Accept error: accept tcp 0.0.0.0:9531: accept4: too many open files; retrying in 160ms
Feb 16 15:20:43 zi-repo artifactory_exporter: 2022/02/16 15:20:43 http: Accept error: accept tcp 0.0.0.0:9531: accept4: too many open files; retrying in 320ms
Feb 16 15:20:43 zi-repo artifactory_exporter: 2022/02/16 15:20:43 http: Accept error: accept tcp 0.0.0.0:9531: accept4: too many open files; retrying in 640ms
Feb 16 15:20:44 zi-repo artifactory_exporter: 2022/02/16 15:20:44 http: Accept error: accept tcp 0.0.0.0:9531: accept4: too many open files; retrying in 1s
Feb 16 15:20:45 zi-repo artifactory_exporter: 2022/02/16 15:20:45 http: Accept error: accept tcp 0.0.0.0:9531: accept4: too many open files; retrying in 1s
Feb 16 15:20:46 zi-repo artifactory_exporter: 2022/02/16 15:20:46 http: Accept error: accept tcp 0.0.0.0:9531: accept4: too many open files; retrying in 1s
Feb 16 15:20:47 zi-repo artifactory_exporter: 2022/02/16 15:20:47 http: Accept error: accept tcp 0.0.0.0:9531: accept4: too many open files; retrying in 1s
Feb 16 15:20:48 zi-repo artifactory_exporter: 2022/02/16 15:20:48 http: Accept error: accept tcp 0.0.0.0:9531: accept4: too many open files; retrying in 1s
Feb 16 15:20:49 zi-repo artifactory_exporter: 2022/02/16 15:20:49 http: Accept error: accept tcp 0.0.0.0:9531: accept4: too many open files; retrying in 1s
Feb 16 15:20:50 zi-repo artifactory_exporter: 2022/02/16 15:20:50 http: Accept error: accept tcp 0.0.0.0:9531: accept4: too many open files; retrying in 1s
Feb 16 15:20:51 zi-repo artifactory_exporter: 2022/02/16 15:20:51 http: Accept error: accept tcp 0.0.0.0:9531: accept4: too many open files; retrying in 1s
Feb 16 15:20:52 zi-repo artifactory_exporter: 2022/02/16 15:20:52 http: Accept error: accept tcp 0.0.0.0:9531: accept4: too many open files; retrying in 1s
Feb 16 15:20:53 zi-repo artifactory_exporter: 2022/02/16 15:20:53 http: Accept error: accept tcp 0.0.0.0:9531: accept4: too many open files; retrying in 1s

After restart lsof is like:

[PROD] [22:37:02] root@zi-repo:~
# lsof -p 26577 |wc -l
61

Any thoughts?

Fetch Interval

Can I configure the scrape interval to something higher, let's say scraping every 5 minutes?
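The exporter collects on demand when /metrics is hit, so the interval is controlled on the Prometheus side; a per-job override could look like this (the target assumes the default --web.listen-address):

```yaml
scrape_configs:
  - job_name: artifactory
    scrape_interval: 5m   # overrides the global scrape_interval for this job
    scrape_timeout: 1m    # must not exceed scrape_interval
    static_configs:
      - targets: ["localhost:9531"]
```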

Some metrics cannot be queried

Problem statement

When I started the exporter, I found that some metrics were missing, as follows:
artifactory_artifacts_created_1m
artifactory_artifacts_downloaded_1m
So the Grafana dashboard I configured has no monitoring data.

In addition, I also failed to use --optional-metric; please help me look at these problems.

Other information

Prometheus exporter version: v1.11.0
Artifactory version: 6.17.0

Add work with type license 'Community Edition for C/C++'

Feature Description

Hello there,

At the moment, artifactory_exporter does not work with Artifactory Community Edition for C/C++.
https://docs.conan.io/en/latest/uploading_packages/artifactory/artifactory_ce.html

Error in debug log:
artifactory_exporter | level=info ts=2022-12-28T14:46:55.792Z caller=artifactory_exporter.go:30 msg="Starting artifactory_exporter" version="(version=v1.10.0, branch=refs/tags/v1.10.0, revision=e2fa8870c7d7eb739af40fdd725480c349026276)"
artifactory_exporter | level=info ts=2022-12-28T14:46:55.792Z caller=artifactory_exporter.go:31 msg="Build context" context="(go=go1.18.9, user=github-actions, date=2022-12-07T18:15:36Z)"
artifactory_exporter | level=info ts=2022-12-28T14:46:55.792Z caller=artifactory_exporter.go:32 msg="Listening on address" address=:9531
artifactory_exporter | level=debug ts=2022-12-28T14:47:06.593Z caller=system.go:77 msg="Fetching license stats"
artifactory_exporter | level=debug ts=2022-12-28T14:47:06.593Z caller=utils.go:44 msg="Fetching http" path=http://artifactory:8081/artifactory/api/system/license
artifactory_exporter | level=debug ts=2022-12-28T14:47:08.687Z caller=security.go:27 msg="Fetching users stats"
artifactory_exporter | level=debug ts=2022-12-28T14:47:08.687Z caller=utils.go:44 msg="Fetching http" path=http://artifactory:8081/artifactory/api/security/users
artifactory_exporter | level=error ts=2022-12-28T14:47:08.692Z caller=utils.go:81 msg="There was an error making API call" endpoint=http://artifactory:8081/artifactory/api/security/users err="[map[message:This REST API is available only in Artifactory Pro (see: jfrog.com/artifactory/features). If you are already running Artifactory Pro please make sure your server is activated with a valid license key.\n status:400]]" status=(MISSING)
artifactory_exporter | level=error ts=2022-12-28T14:47:08.693Z caller=security.go:32 msg="Couldn't scrape Artifactory when fetching security/users" err="[map[message:This REST API is available only in Artifactory Pro (see: jfrog.com/artifactory/features). If you are already running Artifactory Pro please make sure your server is activated with a valid license key.\n status:400]]"

Cause:

if licenseType != "oss" && licenseType != "jcr edition" {

JSON response server from:
http://artifactory:8081/artifactory/api/system/license

{ "type": "Community Edition for C/C++", "validThrough": "", "licensedTo": "" }
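The quoted check could be extended to treat the C/C++ Community Edition like the other non-Pro licenses; a sketch (function name hypothetical, logic based on the license strings shown above):

```go
package main

import (
	"fmt"
	"strings"
)

// hasProAPIs reports whether the license type exposes the Pro-only REST
// endpoints (security/users, replications, ...). Non-Pro editions should
// skip those collectors instead of logging errors on every scrape.
func hasProAPIs(licenseType string) bool {
	switch strings.ToLower(licenseType) {
	case "oss", "jcr edition", "community edition for c/c++":
		return false
	}
	return true
}

func main() {
	fmt.Println(hasProAPIs("Community Edition for C/C++")) // false
	fmt.Println(hasProAPIs("Enterprise"))                  // true
}
```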

Setup a proxy

Hi,

We have Artifactory running in the cloud, while the "Artifactory exporter" is installed in our LAN.
We need a proxy to reach the cloud Artifactory.

Is it possible to add new flags to set up a proxy and a no-proxy?

Thanks for the help.
H.

Argument to exclude AQL-related metrics

Feature Description

Argument to exclude AQL-related metrics

Use Case(s)

We do not necessarily need metrics like created1m and created5m and want a way to exclude them.
The problem is that these metrics use AQL, which can run for a long time. With a short timeout set, this causes endless AQL queries against the Artifactory instance, blocking other searches performed by users.

Is Xray in scope for this project?

We are piloting this repository, and have deployed this in our infrastructure and love the metrics that come out of the box. We would like to send some contributions upstream. I wanted to ask if Xray was explicitly out of scope for this project, or if you would welcome some Xray metric contributions?

Add replication status metrics

Feature Description

One can already monitor which repos have replication enabled using artifactory_replication_enabled; however, it would be awesome if the exporter could additionally monitor replication statuses.

That information is already available via the Artifactory API:
https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-ScheduledReplicationStatus

This already captures which repos have replication enabled:
https://github.com/peimanja/artifactory_exporter/blob/master/artifactory/replication.go

So the above could potentially feed that repo list into the new feature, which would then check just those repos' replication statuses, similar to this bash command:

$ for repo in $(jf rt curl /api/replications -s | jq -r '.[] | select(.enabled == true) | .repoKey' | sort -h); do jf rt curl /api/replication/$repo -s | jq -r '.targets[] | select(.url | test (".*standby.*")) | select(.status != null) | { (.repoKey): .status }'; done
{
  "repo1": "ok"
}
{
  "repo2": "ok"
}
{
  "repo3": "ok"
}
...
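If this were implemented, the gauge would need a numeric encoding of the status string; a minimal sketch (the metric naming and the treatment of non-"ok" states are suggestions, not existing exporter behavior):

```go
package main

import "fmt"

// replicationHealthy encodes a replication status string as a gauge value
// for a hypothetical metric such as
// artifactory_replication_status{repo="repo1",status="ok"}.
// "ok" maps to 1; any other state ("error", "inconsistent", ...) maps to 0,
// with the raw string preserved in the status label for debugging.
func replicationHealthy(status string) float64 {
	if status == "ok" {
		return 1
	}
	return 0
}

func main() {
	fmt.Println(replicationHealthy("ok"))    // 1
	fmt.Println(replicationHealthy("error")) // 0
}
```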

Use Case(s)

This would be great for monitoring replication statuses to track any issues there.

msg="Can't scrape Artifactory when fetching replications" err="HTTP status 405"

Overview of the Issue

Getting a "Can't scrape Artifactory when fetching replications" error in the logs when running this exporter.

Reproduction Steps

Run this exporter against Artifactory with a Trial license.

Logs

level=error ts=2020-01-16T07:38:43.447Z caller=collector.go:266 msg="Can't scrape Artifactory when fetching replications" err="HTTP status 405"
level=error ts=2020-01-16T07:38:48.527Z caller=collector.go:266 msg="Can't scrape Artifactory when fetching replications" err="HTTP status 405"
level=error ts=2020-01-16T07:40:28.472Z caller=collector.go:266 msg="Can't scrape Artifactory when fetching replications" err="HTTP status 405"
level=error ts=2020-01-16T07:40:39.185Z caller=collector.go:266 msg="Can't scrape Artifactory when fetching replications" err="HTTP status 405"

Please refer to the comments on this commit:

2a2bcd6

No metric named 'artifactory_artifacts_created_1m' and so on..

Hello there,
when I ran this exporter, sent the data to Prometheus, and viewed the dashboard in Grafana, I ran into a problem: artifactory_artifacts_created_1m, artifactory_artifacts_downloaded_1m, and so on are missing.

The dashboard file is also created by peimanja from here

The exporter version is v1.9.5.

Do you know why the metrics are missing, and is there a solution?

Unable to parse license result with latest OSS version

Artifactory OSS (7.3.2) causes a parse error in the license validation metric.
Using artifactory_exporter v0.5.1 on linux_amd64

level=warn ts=2020-04-09T05:04:35.429Z caller=collector.go:212 msg="Can't parse Artifactory license ValidThrough" err="parsing time "" as "Jan 2, 2006": cannot parse "" as "Jan""

Debug log:
root@ip-10-100-2-28[~] # /usr/local/bin/artifactory_exporter --log.level=debug
level=info ts=2020-04-09T05:17:27.906Z caller=main.go:30 msg="Listening on address" address=:9531
level=debug ts=2020-04-09T05:17:31.695Z caller=system.go:51 msg="Fetching license stats"
level=debug ts=2020-04-09T05:17:31.695Z caller=collector.go:151 msg="Fetching http" path=http://localhost:8081/artifactory/api/system/license
level=debug ts=2020-04-09T05:17:31.727Z caller=system.go:10 msg="Fetching health stats"
level=debug ts=2020-04-09T05:17:31.727Z caller=collector.go:151 msg="Fetching http" path=http://localhost:8081/artifactory/api/system/ping
level=debug ts=2020-04-09T05:17:31.731Z caller=system.go:17 msg="System ping returned OK"
level=debug ts=2020-04-09T05:17:31.731Z caller=system.go:30 msg="Fetching build stats"
level=debug ts=2020-04-09T05:17:31.731Z caller=collector.go:151 msg="Fetching http" path=http://localhost:8081/artifactory/api/system/version
level=warn ts=2020-04-09T05:17:31.732Z caller=collector.go:212 msg="Can't parse Artifactory license ValidThrough" err="parsing time "" as "Jan 2, 2006": cannot parse "" as "Jan""
level=debug ts=2020-04-09T05:17:31.732Z caller=storage.go:71 msg="Fetching storage info stats"
level=debug ts=2020-04-09T05:17:31.732Z caller=collector.go:151 msg="Fetching http" path=http://localhost:8081/artifactory/api/storageinfo
level=debug ts=2020-04-09T05:17:31.736Z caller=storage.go:187 msg="Extracting repo summeriest"

Log format doesn't change anything

Dear Developer,

I run this software on our server, but the output at servername:9531/metrics doesn't change at all.

I tried to run it in these ways:

./artifactory_exporter
./artifactory_exporter --log.format=json
./artifactory_exporter --log.format=logfmt

We got the same output format with all three invocations. The goal is to read this information with Grafana, but at the moment, while trying to add the Prometheus data source, we get the error below:
Error reading Prometheus: bad_response: readObjectStart: expect { or n, but found <, error found in #1 byte of ...|<html> |..., bigger context ...|<html> <head><title>JFrog Artifactory |...

What am I doing wrong?

Thanks!
Feriman

There was an error making API call

Hello,

Does the exporter have to run on every Artifactory host, or can the external web URL (e.g. https://artifactory.my-domain.com/artifactory) also be used as ARTI_SCRAPE_URI? In any case, I get this error message when using the web URL:

level=info ts=2020-06-30T08:31:09.741Z caller=artifactory_exporter.go:30 msg="Starting artifactory_exporter" version="(version=master, branch=master, revision=48827804e60076fc770845154044b29395f098e7)"
level=info ts=2020-06-30T08:31:09.741Z caller=artifactory_exporter.go:31 msg="Build context" context="(go=go1.13.12, user=, date=2020-06-24T18:30:32Z)"
level=info ts=2020-06-30T08:31:09.741Z caller=artifactory_exporter.go:32 msg="Listening on address" address=:9531
level=debug ts=2020-06-30T08:31:13.372Z caller=system.go:66 msg="Fetching license stats"
level=debug ts=2020-06-30T08:31:13.373Z caller=utils.go:21 msg="Fetching http" path=https://artifactory.my-domain.com/artifactory/api/system/license
level=debug ts=2020-06-30T08:31:13.924Z caller=security.go:23 msg="Fetching users stats"
level=debug ts=2020-06-30T08:31:13.925Z caller=utils.go:21 msg="Fetching http" path=https://artifactory.my-domain.com/artifactory/api/security/users
level=error ts=2020-06-30T08:31:14.223Z caller=utils.go:50 msg="There was an error making API call" endpoint=https://artifactory.my-domain.com/artifactory/api/security/users err="[map[message:Error while trying to authenticate user 'techuser-monitoring'. status:401]]" status=(MISSING)
level=error ts=2020-06-30T08:31:14.224Z caller=security.go:43 msg="Couldn't scrape Artifactory when fetching security/users" err="[map[message:Error while trying to authenticate user 'techuser-monitoring'. status:401]]"

Health check endpoint needs to be updated

Summary:
The http://localhost:8081/artifactory/api/system/ping endpoint used to check Artifactory's health status is insufficient. Requesting an option to use http://localhost:8082/router/api/v1/system/health for the artifactory_system_healthy metric.

Note: the new endpoint is on port 8082, not 8081

With v7.x, Artifactory has switched to a micro-service architecture. We had a bad node, yet /artifactory/api/system/ping returned OK and the artifactory_system_healthy metric reported healthy. http://localhost:8082/router/api/v1/system/health, however, returned status code 503:

{
  "router": {
    "node_id": "node2",
    "state": "HEALTHY",
    "message": "OK"
  },
  "services": [
    {
      "service_id": "jf-access@cf62cb13-312c-46c5-991d-a48344080da9",
      "node_id": "node2",
      "state": "UNHEALTHY_PEER",
      "message": "Service is healthy; there are missing services: jffe"
    },
    {
      "service_id": "jf-artifactory@077da22b-c412-4df0-92f4-c4141b813080",
      "node_id": "node2",
      "state": "UNHEALTHY_PEER",
      "message": "Service is healthy; there are missing services: jffe"
    },
    {
      "service_id": "jfevt@cf62cb13-312c-46c5-991d-a48344080da9",
      "node_id": "node2",
      "state": "UNHEALTHY_PEER",
      "message": "Service is healthy; there are missing services: jffe"
    },
    {
      "service_id": "jfmd@01f10z6zy2aqwrk5twc5kdc7hp",
      "node_id": "node2",
      "state": "UNHEALTHY_PEER",
      "message": "Service is healthy; there are missing services: jffe"
    }
  ]
}

Empty metrics for artifactory_storage_filestore_free_bytes and artifactory_storage_filestore_used_bytes

When trying to calculate file storage usage as a percentage, I found that the metrics
artifactory_storage_filestore_free_bytes and artifactory_storage_filestore_used_bytes are empty.


Operating system and Environment details

I'm running peimanja/artifactory_exporter:v0.4.1
against Artifactory version 6.15.0.

Logs

In the artifactory_exporter log I see only the start message:
level=info ts=2020-01-27T14:32:04.457Z caller=main.go:30 msg="Listening on address" address=:9531

Package release

Up until release v1.9.4, several binary packages were released along with the source code,
for example packages for 32- and 64-bit Linux.

Release v1.9.5 from a couple of days ago "only" shows the source code.
Was this a one-off, or are all future releases only going to be source code?

Thanks for any feedback.

Feasibility for Artifactory Version 5.11.6

The documentation mentions that this exporter is tested on version 6.16.0.
When I try it with my Artifactory version 5.11.6, I get an error saying "Can't scrape Artifactory when fetching replications" err="HTTP status 405".
My Artifactory is licensed with type: Commercial.
Can you please clarify whether the Artifactory exporter will work with version 5.11.6 or not?

Kubernetes Pod should recycle after config/secret changes

Hi Team,
I am running the exporter as a Kubernetes pod, with a Secret resource mapped into the Deployment. How can I annotate the Kubernetes Deployment so that the pod is recycled whenever the secret/config file changes?

artifactory_up 0 with no error

Overview of the Issue

I cannot scrape an Artifactory instance over the internet.
I am using Docker in a corporate environment, behind a proxy (the Docker proxy is set up accordingly).
Not even an error shows up.

Logs

$ docker run --env-file=artifactory/artifactory.env -p 9531:9531 peimanja/artifactory_exporter:latest --log.level=debug
level=info ts=2021-11-22T08:54:16.766Z caller=artifactory_exporter.go:30 msg="Starting artifactory_exporter" version="(version=, branch=refs/heads/master, revision=cae4dbf6f22a924747feb4712f991266520f2a69)"
level=info ts=2021-11-22T08:54:16.766Z caller=artifactory_exporter.go:31 msg="Build context" context="(go=go1.17.2, user=, date=2021-10-20T06:05:06Z)"
level=info ts=2021-11-22T08:54:16.766Z caller=artifactory_exporter.go:32 msg="Listening on address" address=:9531
level=debug ts=2021-11-22T08:54:30.790Z caller=system.go:66 msg="Fetching license stats"
level=debug ts=2021-11-22T08:54:30.790Z caller=utils.go:21 msg="Fetching http" path=https://MYADRESS/artifactory/api/system/license

.env

WEB_TELEMETRY_PATH=/metrics
ARTI_SCRAPE_URI=https://MYADRESS/artifactory
ARTI_ACCESS_TOKEN=***
ARTI_TIMEOUT=5s
ARTI_SSL_VERIFY=false

localhost:9531/metrics

# HELP artifactory_exporter_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which artifactory_exporter was built.
# TYPE artifactory_exporter_build_info gauge
artifactory_exporter_build_info{branch="refs/heads/master",goversion="go1.17.2",revision="cae4dbf6f22a924747feb4712f991266520f2a69",version=""} 1
# HELP artifactory_exporter_json_parse_failures Number of errors while parsing Json.
# TYPE artifactory_exporter_json_parse_failures counter
artifactory_exporter_json_parse_failures 0
# HELP artifactory_exporter_total_api_errors Current total API errors.
# TYPE artifactory_exporter_total_api_errors counter
artifactory_exporter_total_api_errors 1
# HELP artifactory_exporter_total_scrapes Current total artifactory scrapes.
# TYPE artifactory_exporter_total_scrapes counter
artifactory_exporter_total_scrapes 1
# HELP artifactory_up Was the last scrape of artifactory successful.
# TYPE artifactory_up gauge
artifactory_up 0
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0
go_gc_duration_seconds_sum 0
go_gc_duration_seconds_count 0
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 10
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.17.2"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 647920
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 647920
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 4062
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 144
# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE go_memstats_gc_cpu_fraction gauge
go_memstats_gc_cpu_fraction 0
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 3.918968e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 647920
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 2.080768e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 1.687552e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 2556
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 2.048e+06
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 3.76832e+06
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 0
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 2700
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 7200
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 37264
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 49152
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 4.473924e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 1.077178e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 425984
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 425984
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 9.260048e+06
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 8
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.03
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 11
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 7.643136e+06
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.63757125571e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.107226624e+09
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes -1
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 0
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0

How to fetch information for each artifactory node?

I feel this is a silly question, sorry about it 😅

We have Artifactory running with 3 nodes. When I set up the artifactory_exporter, the dashboard does not show all 3 nodes. When I look at the time series artifactory_system_license I only get one value (I was expecting 3, one for each Artifactory node).

We have configured the artifactory_exporter to fetch metrics from the artifactory service object.
Something like this:

  - name: ARTI_SCRAPE_URI
    value: http://artifactory-ha.artifactory:8081/artifactory

Where artifactory-ha.artifactory points to the service object of artifactory - which can then hit any of the available 3 nodes.

After looking around a bit, I concluded that to get metrics about the 3 nodes, I need to set up 3 instances of the artifactory_exporter, each one pointing directly to a different Artifactory node.

Maybe this is too obvious, but can you confirm that to get info about each node, the artifactory_exporter has to be deployed multiple times, each instance pointing to a different Artifactory node?
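If one exporter per node is indeed required (the exporter takes a single ARTI_SCRAPE_URI), the Prometheus side could scrape all three instances under one job. A sketch, with hypothetical target names:

```yaml
scrape_configs:
  - job_name: artifactory
    static_configs:
      - targets:
          # one artifactory_exporter per Artifactory node (names are hypothetical)
          - artifactory-exporter-node0:9531
          - artifactory-exporter-node1:9531
          - artifactory-exporter-node2:9531
```

Each target then shows up as its own `instance` label in the time series.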

Thanks for the help.

Kubernetes artifactory_exporter pod not able to reach artifactory pod - metrics showing artifactory_up 0

Overview of the Issue

  • Deployed artifactory_exporter on an AWS EKS cluster; the artifactory_exporter pod is not able to reach the Artifactory pod. Both run behind the same Service, as multiple containers with different ports:
  • artifactory 8082
  • artifactory_exporter 9531
  • Prometheus is able to reach the artifactory_exporter pod by targeting the exporter service IP on port 9531 at the /metrics path.
    But the artifactory_exporter is not able to get the Artifactory application pod metrics using this:
Environment Variables from: 
      artifactory-exporter Secret Optional: false
      Environment:
      ARTI_SCRAPE_URI: http://$ARTIFACTORY_POD_IP:8081/artifactory
      ARTI_USERNAME: admin
      ARTI_PASSWORD: test123
      ARTIFACTORY_POD_IP: (v1:status.podIP)

Based on https://github.com/peimanja/artifactory_exporter#flags I tried many times and in different ways to fix this issue, but I still have the same problem.

Exact issue:
When I check the metrics at the exporter URL, artifactory_up is 0. Why is the artifactory_exporter not able to reach Artifactory or get all the metrics?
- I even opened the port on the EKS node (EC2 security group)
- Ports are exposed appropriately on the Kubernetes deployment / containers
- Values are passed to the artifactory_exporter as per the documentation https://github.com/peimanja/artifactory_exporter#flags

But I don't know why artifactory_up is 0.

localhost:9531/metrics

# HELP artifactory_exporter_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which artifactory_exporter was built.
# TYPE artifactory_exporter_build_info gauge
artifactory_exporter_build_info{branch="refs/heads/master",goversion="go1.17.2",revision="cae4dbf6f22a924747feb4712f991266520f2a69",version=""} 1
# HELP artifactory_exporter_json_parse_failures Number of errors while parsing Json.
# TYPE artifactory_exporter_json_parse_failures counter
artifactory_exporter_json_parse_failures 0
# HELP artifactory_exporter_total_api_errors Current total API errors.
# TYPE artifactory_exporter_total_api_errors counter
artifactory_exporter_total_api_errors 1
# HELP artifactory_exporter_total_scrapes Current total artifactory scrapes.
# TYPE artifactory_exporter_total_scrapes counter
artifactory_exporter_total_scrapes 1
# HELP artifactory_up Was the last scrape of artifactory successful.
# TYPE artifactory_up gauge
artifactory_up 0
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0

Reproduction Steps

Code:

resource "kubernetes_deployment" "artifactory" {

  metadata {
    name      = "artifactory"
    labels    = local.labels
    namespace = local.namespace
  }

  spec {
    replicas = 1

    strategy {
      rolling_update {
        max_surge       = 1
        max_unavailable = 0
      }
    }
    selector {
      match_labels = local.labels
    }

    template {
      metadata {
        labels = local.labels
      }

      spec {
        affinity {
          node_affinity {
            required_during_scheduling_ignored_during_execution {
              node_selector_term {
                match_expressions {
                  key      = "topology.ebs.csi.aws.com/zone"
                  operator = "In"
                  values   = [var.az]
                }
              }
            }
          }
        }

        container {
          name              = "artifactory"
          image             = var.artifactory_image
          image_pull_policy = "Always"

          env {
            name = "artifactory_pod_ip"
            value_from {
              field_ref {
                field_path = "status.podIP"
              }
            }
          }

          port {
            name           = "http-port"
            container_port = "8082"
          }

          port {
            name           = "http-port1"
            container_port = "8081"
          }

          volume_mount {
            name       = "artifactory-home"
            mount_path = "/var/opt/jfrog/artifactory"
          }
        }

        container {
          name              = "artifactory-exporter"
          image             = var.artifactory_exporter
          image_pull_policy = "Always"
#          args              = ["--artifactory.scrape-uri=http://$ARTIFACTORY_POD_IP:8081/artifactory"]

          env {
            name  = "ARTI_SCRAPE_URI"
            value = "http://$ARTIFACTORY_POD_IP:8081/artifactory"
          }

          env {
            name  = "ARTI_USERNAME"
            value = "admin"
          }

          env {
            name  = "ARTI_PASSWORD"
            value = "test123"
          }

          port {
            name           = "http-port"
            container_port = "9531"
          }

          env {
            name = "ARTIFACTORY_POD_IP"
            value_from {
              field_ref {
                field_path = "status.podIP"
              }
            }
          }

          env_from {
            secret_ref {
              name = kubernetes_secret.artifactory_exporter.metadata.0.name
            }
          }
        }

        init_container {
          name              = "grant-permissions"
          image             = var.busybox_artifactory_image
          image_pull_policy = "Always"
          command           = ["sh"]
          args              = ["-c", "chown -R 1030 /var/opt/jfrog/artifactory"]

          volume_mount {
            name       = "artifactory-home"
            mount_path = "/var/opt/jfrog/artifactory"
          }
        }

        restart_policy                   = "Always"
        termination_grace_period_seconds = 30

        volume {
          name = "artifactory-home"
          persistent_volume_claim {
            claim_name = kubernetes_persistent_volume_claim.artifactory.metadata.0.name
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "artifactory" {

  metadata {
    name      = "artifactory"
    labels    = local.labels
    namespace = local.namespace
  }
  spec {
    selector = local.labels

    port {
      name        = "artifactory"
      port        = 80
      target_port = 8082
    }

    port {
      name        = "artifactory-exporter"
      port        = 9531
      target_port = 9531
    }

    type = "ClusterIP"
  }
}
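One detail worth double-checking (an observation, not a confirmed fix for this issue): Kubernetes only expands dependent environment variables written as `$(VAR)`, and the referenced variable must be declared earlier in the env list; a literal `$ARTIFACTORY_POD_IP` is passed to the container unexpanded. A sketch of the exporter container's env in that form:

```hcl
# Declare the pod IP first so the dependent variable below can reference it.
env {
  name = "ARTIFACTORY_POD_IP"
  value_from {
    field_ref {
      field_path = "status.podIP"
    }
  }
}

env {
  name  = "ARTI_SCRAPE_URI"
  # Kubernetes $(VAR) syntax, expanded at container start
  value = "http://$(ARTIFACTORY_POD_IP):8081/artifactory"
}
```

Since both containers run in the same pod, `http://localhost:8081/artifactory` may also be worth trying.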

Logs

Name:               artifactory-8f5c78975-mspkr
Namespace:          cloud-env-dev-artifactory
Priority:           0
PriorityClassName:  <none>
Node:               ip-xxxxxx.ec2.internal/24.12.9.165
Start Time:         Mon, 22 Nov 2021 13:05:52 +0000
Labels:             app=artifactory
                    pod-template-hash=8f5c78975
Annotations:        kubernetes.io/psp: eks.privileged
Status:             Running
IP:                 24.12.9.169
Controlled By:      ReplicaSet/artifactory-8f5c78975
Init Containers:
  grant-permissions:
    Container ID:  docker://0fb8d73bf63a860a68a47563088c9ee6083c3d0282177c5d4f1864c00ca2b39c
    Image:         busybox:1.26.2
    Image ID:      docker-pullable://busybox@sha256:be3c11fdba7cfe299214e46edc642e09514dbb9bbefcd0d3836c05a1e0cd0642
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
    Args:
      -c
      chown -R 1030 /var/opt/jfrog/artifactory
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 22 Nov 2021 13:05:53 +0000
      Finished:     Mon, 22 Nov 2021 13:05:53 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/opt/jfrog/artifactory from artifactory-home (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kl69h (ro)
Containers:
  artifactory:
    Container ID:   docker://ca5f04929c7e2a543504b5dd8ef8f77fc5a01f07edd8a0c67c1d66a49295c7df
    Image:          661072482170.dkr.ecr.us-east-1.amazonaws.com/prjm/jfrog-artifactory:7.10.6
    Image ID:       docker-pullable://xxxxxxx.dkr.ecr.us-east-1.amazonaws.com/prjm/jfrog-artifactory@sha256:6ff8ed78b0c1a66fd9d93bb6d000d4ee3feb2a21c9adda5bd116d36e0845221c
    Ports:          8082/TCP, 8081/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Mon, 22 Nov 2021 13:05:54 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      artifactory_pod_ip:   (v1:status.podIP)
    Mounts:
      /var/opt/jfrog/artifactory from artifactory-home (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kl69h (ro)
  artifactory-exporter:
    Container ID:   docker://77f3bfa9701ce82a157cbaf8a29d15290b084357a7174489ed3c7c256159931a
    Image:          peimanja/artifactory_exporter:v1.9.2
    Image ID:       docker-pullable://peimanja/artifactory_exporter@sha256:aeeb229ea7cd180f8118f0b113b18108ff5e2c6ab9733980662e81be3e9e43b6
    Port:           9531/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 22 Nov 2021 13:05:54 +0000
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      artifactory-exporter  Secret  Optional: false
    Environment:
      ARTI_SCRAPE_URI:     http://$ARTIFACTORY_POD_IP:8081/artifactory
      ARTI_USERNAME:       admin
      ARTI_PASSWORD:       test123
      ARTIFACTORY_POD_IP:   (v1:status.podIP)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kl69h (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  artifactory-home:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  artifactory-pv-claim
    ReadOnly:   false
  default-token-kl69h:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kl69h
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From                                  Message
  ----    ------     ----   ----                                  -------
  Normal  Scheduled  3m17s  default-scheduler                     Successfully assigned cloud-env-dev-artifactory/artifactory-8f5c78975-mspkr to ip-24-12-9-165.ec2.internal
  Normal  Pulling    3m16s  kubelet, ip-24-12-9-165.ec2.internal  Pulling image "busybox:1.26.2"
  Normal  Pulled     3m16s  kubelet, ip-24-12-9-165.ec2.internal  Successfully pulled image "busybox:1.26.2" in 118.889964ms
  Normal  Created    3m16s  kubelet, ip-24-12-9-165.ec2.internal  Created container grant-permissions
  Normal  Started    3m16s  kubelet, ip-24-12-9-165.ec2.internal  Started container grant-permissions
  Normal  Pulled     3m15s  kubelet, ip-24-12-9-165.ec2.internal  Successfully pulled image "xxxxxx.dkr.ecr.us-east-1.amazonaws.com/prjm/jfrog-artifactory:7.10.6" in 125.332057ms
  Normal  Pulling    3m15s  kubelet, ip-24-12-9-165.ec2.internal  Pulling image "661072482170.dkr.ecr.us-east-1.amazonaws.com/prjm/jfrog-artifactory:7.10.6"
  Normal  Created    3m15s  kubelet, ip-24-12-9-165.ec2.internal  Created container artifactory
  Normal  Started    3m15s  kubelet, ip-24-12-9-165.ec2.internal  Started container artifactory
  Normal  Pulling    3m15s  kubelet, ip-24-12-9-165.ec2.internal  Pulling image "peimanja/artifactory_exporter:v1.9.2"
  Normal  Pulled     3m15s  kubelet, ip-24-12-9-165.ec2.internal  Successfully pulled image "peimanja/artifactory_exporter:v1.9.2" in 103.482978ms
  Normal  Created    3m15s  kubelet, ip-24-12-9-165.ec2.internal  Created container artifactory-exporter
  Normal  Started    3m15s  kubelet, ip-24-12-9-165.ec2.internal  Started container artifactory-exporter


# kubectl describe service/artifactory -n cloud-env-dev-artifactory
Name:              artifactory
Namespace:         cloud-env-dev-artifactory
Labels:            app=artifactory
Annotations:       <none>
Selector:          app=artifactory
Type:              ClusterIP
IP:                10.100.128.56
Port:              artifactory  80/TCP
TargetPort:        8082/TCP
Endpoints:         24.12.9.169:8082
Port:              artifactory-exporter  9531/TCP
TargetPort:        9531/TCP
Endpoints:         24.12.9.169:9531
Session Affinity:  None
Events:            <none>

No metrics with exporter

Hi,
I've configured this exporter via Docker. The container starts correctly, but I have no metrics on the Grafana dashboard; the Prometheus endpoint exposes only "system" metrics, not application ones.
In the logs I systematically get this error:

level=error ts=2022-11-16T16:27:02.762Z caller=utils.go:57 msg="There was an error when trying to unmarshal the API Error" err="invalid character 'p' after top-level value"

The container is started like this:

docker run -d --env-file arti.file -p 9531:9531 peimanja/artifactory_exporter:latest --web.listen-address=":9531" --web.telemetry-path="/metrics" --log.level=debug --artifactory.scrape-uri="https://XXX.XXX"

Can you help me?
Regards
Nicola

Release binary for linux_amd64 does not run on CentOS/RHEL7/8 systems

I downloaded the v0.5.1 release bundle for linux_amd64 to use on a CentOS 8 system; the binary won't execute due to the missing library libc.musl-x86_64.so.1:

root@ip-10-100-2-9[/usr/local/bin] # ldd artifactory_exporter
linux-vdso.so.1 (0x00007ffd327fb000)
libc.musl-x86_64.so.1 => not found

Apparently this is a 'common' problem with images built on Alpine Linux; something is still being dynamically linked? This library is not available on any CentOS/RHEL system.

Compiling locally on the CentOS 8 system fixed the issue for me, but could the released binaries be fixed to work as well?

There was an error when trying to unmarshal the API Error

Overview of the Issue

Related to issue 84

Getting error:

level=debug ts=2023-03-03T12:36:15.040Z caller=utils.go:44 msg="Fetching http" path=https://artifactory.<domain>/api/system/license
level=error ts=2023-03-03T12:36:15.043Z caller=utils.go:57 msg="There was an error when trying to unmarshal the API Error" err="invalid character 'p' after top-level value"

Reproduction Steps

Helm install with following config:

  • ARTI_ACCESS_TOKEN as existing Secret
  • External artifactory URL with HTTPS
  • ServiceMonitor enabled
  • logLevel: debug
  • RBAC & PSPs disabled

Operating system and Environment details

Kubernetes GKE: v1.24.8
Helm: v3
Artifactory version: 7.49.5
Exporter image version: v1.12.0
Exporter helm chart version: 0.5.0

Logs

level=debug ts=2023-03-03T12:36:15.040Z caller=utils.go:44 msg="Fetching http" path=https://artifactory.<domain>/api/system/license
level=error ts=2023-03-03T12:36:15.043Z caller=utils.go:57 msg="There was an error when trying to unmarshal the API Error" err="invalid character 'p' after top-level value"

Unmarshall error

I am getting an unmarshal error in utils.go while fetching metrics:

level=error ts=2020-06-23T19:46:22.727Z caller=utils.go:47 msg="There was an error when trying to unmarshal the API Error" err="invalid character '<' looking for beginning of value"
level=error ts=2020-06-23T19:46:33.165Z caller=utils.go:47 msg="There was an error when trying to unmarshal the API Error" err="invalid character '<' looking for beginning of value"
level=error ts=2020-06-23T19:54:26.917Z caller=utils.go:47 msg="There was an error when trying to unmarshal the API Error" err="invalid character '<' looking for beginning of value"
level=error ts=2020-06-23T19:54:30.882Z caller=utils.go:47 msg="There was an error when trying to unmarshal the API Error" err="invalid character '<' looking for beginning of value"
level=error ts=2020-06-23T22:18:50.157Z caller=utils.go:47 msg="There was an error when trying to unmarshal the API Error" err="invalid character '<' looking for beginning of value"

fetch limited data from artifactory

Hi team,
Thanks for creating this exporter. I have recently used it and am able to connect and fetch data, but I don't know why it is limited to only a few metrics (Prometheus query screenshots attached). Any idea?

(screenshots: Artifactory Acpt instance; Prometheus time series query)

Wrong license endpoint

Overview of the Issue

Based on https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-LicenseInformation the license endpoint is called licenses and not license as defined here: https://github.com/peimanja/artifactory_exporter/blob/master/artifactory/system.go#L12

At least this is the case for Artifactory 7.x; maybe the old name applied in 6.x.


Add Federated repository Status Metrics

Feature Description

It would be awesome if the exporter could additionally monitor Federated repo statuses.
https://www.jfrog.com/confluence/display/JFROG/Working+with+Federated+Repositories#WorkingwithFederatedRepositories-monitor_fed_repos

That information is already available via the Artifactory API:

GET api/federation/status/repo/<example-repo-local>
GET api/federation/status/mirrorsLag
GET api/federation/status/unavailableMirrors

API Doc:
https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-GetFederatedRepositoryStatus

Use Case(s)

This would be great for monitoring federated repository status and tracking any replication issues.

Support for Artifactory 7.9.1?

From the logs of the exporter it seems to work:

level=debug ts=2020-10-30T10:57:36.117Z caller=utils.go:21 msg="Fetching http" path=http://localhost:8081/artifactory/api/system/license
level=debug ts=2020-10-30T10:58:36.117Z caller=system.go:66 msg="Fetching license stats"
level=debug ts=2020-10-30T10:58:36.117Z caller=utils.go:21 msg="Fetching http" path=http://localhost:8081/artifactory/api/system/license

In the exporter's /metrics output:

# HELP artifactory_exporter_total_api_errors Current total API errors.
# TYPE artifactory_exporter_total_api_errors counter
artifactory_exporter_total_api_errors 21
# HELP artifactory_exporter_total_scrapes Current total artifactory scrapes.
# TYPE artifactory_exporter_total_scrapes counter
artifactory_exporter_total_scrapes 21
# HELP artifactory_up Was the last scrape of artifactory successful.
# TYPE artifactory_up gauge
artifactory_up 0

And in JFrog Artifactory the access log shows success.

So has the API changed?
