yunohost-apps / grafana_ynh

Grafana package for YunoHost

Home Page: https://grafana.com/

License: GNU General Public License v3.0

Languages: Shell 100.00%
Topics: yunohost, grafana, influxdb, netdata, yunohost-apps, dashboard, metrics

grafana_ynh's Introduction

Grafana for YunoHost


Install Grafana with YunoHost

Read this README in other languages.

This package allows you to install Grafana quickly and simply on a YunoHost server.
If you don't have YunoHost, please consult the guide to learn how to install it.

Overview

Grafana is a multi-platform open source analytics and interactive visualization web application. It provides charts, graphs, and alerts for the web when connected to supported data sources.

YunoHost specific features

  • installs InfluxDB as time series database
  • if the NetData package is installed, configures NetData to feed InfluxDB every minute (see the quick check after this list)
  • installs Grafana as dashboard server
  • creates a Grafana Data Source to fetch data from InfluxDB (and hence NetData!)
  • creates a default dashboard to plot some data from NetData (doesn't cover every metric, can be greatly enhanced!)
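
A minimal sketch of how to check that the NetData -> InfluxDB pipeline is alive, assuming InfluxDB 1.x on localhost and the "opentsdb" database referenced elsewhere in this repository (adapt names to your setup):

influx -execute 'SHOW DATABASES'
influx -database 'opentsdb' -execute 'SHOW MEASUREMENTS LIMIT 5'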

Shipped version: 10.2.3~ynh2

Demo: https://play.grafana.org

Screenshots

Screenshot of Grafana

Documentation and resources

Developer info

Please send your pull request to the testing branch.

To try the testing branch, please proceed as follows:

sudo yunohost app install https://github.com/YunoHost-Apps/grafana_ynh/tree/testing --debug
or
sudo yunohost app upgrade grafana -u https://github.com/YunoHost-Apps/grafana_ynh/tree/testing --debug

More info regarding app packaging: https://yunohost.org/packaging_apps

grafana_ynh's People

Contributors

alexaubin, brunospy, ericgaspar, jimbojoe, nemsia, rosano, salamandar, tituspijean, tsgeek, yalh76, yunohost-bot, zamentur


grafana_ynh's Issues

Graphs show "no data"

After a fresh install of YunoHost, I installed the NetData app and then the Grafana app, which shows "No data" on its graphs.

Versions

  • Hardware: Raspberry Pi 4
  • YunoHost version: 4.0.4 (stable)
  • I have access to my server: Through SSH & webadmin
  • Are you in a special context or did you perform some particular tweaking on your YunoHost instance?: no

To Reproduce
Steps to reproduce the behavior.

  1. Install netdata app
  2. Go to netdata app --> graphs ok
  3. Install the Grafana app
  4. Menu dashboard --> manage --> netdata
  5. "No data" message

In the data source menu, if I press "Save and test connection", it shows: InfluxDB Error: database not found: opentsdb

Expected behavior
Graphs with data

Logs

admin@quiwy:~ $ systemctl status netdata   
netdata.service - Real time performance monitoring
Loaded: loaded (/lib/systemd/system/netdata.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2020-08-24 00:22:00 CEST; 10min ago
Process: 1244 ExecStartPre=/bin/mkdir -p /opt/netdata/var/cache/netdata (code=exited, status=0/SUCCESS)
Process: 1258 ExecStartPre=/bin/chown -R netdata:netdata /opt/netdata/var/cache/netdata (code=exited, status=0/SUCCESS)
Process: 1260 ExecStartPre=/bin/mkdir -p /opt/netdata/var/run/netdata (code=exited, status=0/SUCCESS)
Process: 1261 ExecStartPre=/bin/chown -R netdata:netdata /opt/netdata/var/run/netdata (code=exited, status=0/SUCCESS)
Main PID: 1264 (netdata)
Tasks: 51 (limit: 4915)
CGroup: /system.slice/netdata.service
1264 /opt/netdata/usr/sbin/netdata -P /opt/netdata/var/run/netdata/netdata.pid -D
1540 bash /opt/netdata/usr/libexec/netdata/plugins.d/tc-qos-helper.sh 1
1549 /opt/netdata/usr/libexec/netdata/plugins.d/go.d.plugin 1
1557 /usr/bin/python /opt/netdata/usr/libexec/netdata/plugins.d/python.d.plugin 1
1559 /opt/netdata/usr/libexec/netdata/plugins.d/apps.plugin 1
admin@quiwy:~ $  systemctl status influxdb
influxdb.service - InfluxDB is an open-source, distributed, time series database
Loaded: loaded (/lib/systemd/system/influxdb.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2020-08-24 00:21:52 CEST; 11min ago
Docs: man:influxd(1)
Main PID: 704 (influxd)
Tasks: 16 (limit: 4915)
CGroup: /system.slice/influxdb.service
704 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
admin@quiwy:~ $  cat /etc/influxdb/influxdb.conf
### Welcome to the InfluxDB configuration file.

# The values in this file override the default values used by the system if
# a config option is not specified. The commented out lines are the configuration
# field and the default value used. Uncommenting a line and changing the value
# will change the value used at runtime when the process is restarted.

# Once every 24 hours InfluxDB will report usage data to usage.influxdata.com
# The data includes a random ID, os, arch, version, the number of series and other
# usage data. No data from user databases is ever transmitted.
# Change this option to true to enable reporting.
reporting-enabled = false

# Bind address to use for the RPC service for backup and restore.
# bind-address = "127.0.0.1:8088"

###
### [meta]
###
### Controls the parameters for the Raft consensus group that stores metadata
### about the InfluxDB cluster.
###

[meta]
  # Where the metadata/raft database is stored
  dir = "/var/lib/influxdb/meta"

  # Automatically create a default retention policy when creating a database.
  # retention-autocreate = true

  # If log messages are printed for the meta service
  # logging-enabled = true

###
### [data]
###
### Controls where the actual shard data for InfluxDB lives and how it is
### flushed from the WAL. "dir" may need to be changed to a suitable place
### for your system, but the WAL settings are an advanced configuration. The
### defaults should work for most systems.
###

[data]
  # The directory where the TSM storage engine stores TSM files.
  dir = "/var/lib/influxdb/data"

  # The directory where the TSM storage engine stores WAL files.
  wal-dir = "/var/lib/influxdb/wal"

  # The amount of time that a write will wait before fsyncing.  A duration
  # greater than 0 can be used to batch up multiple fsync calls.  This is useful for slower
  # disks or when WAL write contention is seen.  A value of 0s fsyncs every write to the WAL.
  # Values in the range of 0-100ms are recommended for non-SSD disks.
  # wal-fsync-delay = "0s"


  # The type of shard index to use for new shards.  The default is an in-memory index that is
  # recreated at startup.  A value of "tsi1" will use a disk based index that supports higher
  # cardinality datasets.
  # index-version = "inmem"

  # Trace logging provides more verbose output around the tsm engine. Turning
  # this on can provide more useful output for debugging tsm engine issues.
  # trace-logging-enabled = false

  # Whether queries should be logged before execution. Very useful for troubleshooting, but will
  # log any sensitive data contained within a query.
  # query-log-enabled = true

  # Settings for the TSM engine

  # CacheMaxMemorySize is the maximum size a shard's cache can
  # reach before it starts rejecting writes.
  # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
  # Values without a size suffix are in bytes.
  # cache-max-memory-size = "1g"

  # CacheSnapshotMemorySize is the size at which the engine will
  # snapshot the cache and write it to a TSM file, freeing up memory
  # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
  # Values without a size suffix are in bytes.
  # cache-snapshot-memory-size = "25m"

  # CacheSnapshotWriteColdDuration is the length of time at
  # which the engine will snapshot the cache and write it to
  # a new TSM file if the shard hasn't received writes or deletes
  # cache-snapshot-write-cold-duration = "10m"

  # CompactFullWriteColdDuration is the duration at which the engine
  # will compact all TSM files in a shard if it hasn't received a
  # write or delete
  # compact-full-write-cold-duration = "4h"

  # The maximum number of concurrent full and level compactions that can run at one time.  A
  # value of 0 results in 50% of runtime.GOMAXPROCS(0) used at runtime.  Any number greater
  # than 0 limits compactions to that value.  This setting does not apply
  # to cache snapshotting.
  # max-concurrent-compactions = 0

  # The threshold, in bytes, when an index write-ahead log file will compact
  # into an index file. Lower sizes will cause log files to be compacted more
  # quickly and result in lower heap usage at the expense of write throughput.
  # Higher sizes will be compacted less frequently, store more series in-memory,
  # and provide higher write throughput.
  # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
  # Values without a size suffix are in bytes.
  # max-index-log-file-size = "1m"

  # The maximum series allowed per database before writes are dropped.  This limit can prevent
  # high cardinality issues at the database level.  This limit can be disabled by setting it to
  # 0.
  # max-series-per-database = 1000000

  # The maximum number of tag values per tag that are allowed before writes are dropped.  This limit
  # can prevent high cardinality tag values from being written to a measurement.  This limit can be
  # disabled by setting it to 0.
  # max-values-per-tag = 100000

  # If true, then the mmap advise value MADV_WILLNEED will be provided to the kernel with respect to
  # TSM files. This setting has been found to be problematic on some kernels, and defaults to off.
  # It might help users who have slow disks in some cases.
  # tsm-use-madv-willneed = false

###
### [coordinator]
###
### Controls the clustering service configuration.
###

[coordinator]
  # The default time a write request will wait until a "timeout" error is returned to the caller.
  # write-timeout = "10s"

  # The maximum number of concurrent queries allowed to be executing at one time.  If a query is
  # executed and exceeds this limit, an error is returned to the caller.  This limit can be disabled
  # by setting it to 0.
  # max-concurrent-queries = 0

  # The maximum time a query is allowed to execute before being killed by the system.  This limit
  # can help prevent run away queries.  Setting the value to 0 disables the limit.
  # query-timeout = "0s"

  # The time threshold when a query will be logged as a slow query.  This limit can be set to help
  # discover slow or resource intensive queries.  Setting the value to 0 disables the slow query logging.
  # log-queries-after = "0s"

  # The maximum number of points a SELECT can process.  A value of 0 will make
  # the maximum point count unlimited.  This will only be checked every second so queries will not
  # be aborted immediately when hitting the limit.
  # max-select-point = 0

  # The maximum number of series a SELECT can run.  A value of 0 will make the maximum series
  # count unlimited.
  # max-select-series = 0

  # The maximum number of group by time buckets a SELECT can create.  A value of zero will make the maximum
  # number of buckets unlimited.
  # max-select-buckets = 0

###
### [retention]
###
### Controls the enforcement of retention policies for evicting old data.
###

[retention]
  # Determines whether retention policy enforcement enabled.
  # enabled = true

  # The interval of time when retention policy enforcement checks run.
  # check-interval = "30m"

###
### [shard-precreation]
###
### Controls the precreation of shards, so they are available before data arrives.
### Only shards that, after creation, will have both a start- and end-time in the
### future, will ever be created. Shards are never precreated that would be wholly
### or partially in the past.

[shard-precreation]
  # Determines whether shard pre-creation service is enabled.
  # enabled = true

  # The interval of time when the check to pre-create new shards runs.
  # check-interval = "10m"

  # The default period ahead of the endtime of a shard group that its successor
  # group is created.
  # advance-period = "30m"

###
### Controls the system self-monitoring, statistics and diagnostics.
###
### The internal database for monitoring data is created automatically
### if it does not already exist. The target retention within this database
### is called 'monitor' and is also created with a retention period of 7 days
### and a replication factor of 1, if it does not exist. In all cases
### this retention policy is configured as the default for the database.

[monitor]
  # Whether to record statistics internally.
  # store-enabled = true

  # The destination database for recorded statistics
  # store-database = "_internal"

  # The interval at which to record statistics
  # store-interval = "10s"

###
### [http]
###
### Controls how the HTTP endpoints are configured. These are the primary
### mechanism for getting data into and out of InfluxDB.
###

[http]
  # Determines whether HTTP endpoint is enabled.
  # enabled = true

  # The bind address used by the HTTP service.
  # bind-address = ":8086"

  # Determines whether user authentication is enabled over HTTP/HTTPS.
  # auth-enabled = false

  # The default realm sent back when issuing a basic auth challenge.
  # realm = "InfluxDB"

  # Determines whether HTTP request logging is enabled.
  # log-enabled = true

  # Determines whether the HTTP write request logs should be suppressed when the log is enabled.
  # suppress-write-log = false

  # When HTTP request logging is enabled, this option specifies the path where
  # log entries should be written. If unspecified, the default is to write to stderr, which
  # intermingles HTTP logs with internal InfluxDB logging.
  #
  # If influxd is unable to access the specified path, it will log an error and fall back to writing
  # the request log to stderr.
  # access-log-path = ""

  # Determines whether detailed write logging is enabled.
  # write-tracing = false

  # Determines whether the pprof endpoint is enabled.  This endpoint is used for
  # troubleshooting and monitoring.
  # pprof-enabled = true

  # Enables a pprof endpoint that binds to localhost:6060 immediately on startup.
  # This is only needed to debug startup issues.
  # debug-pprof-enabled = false

  # Determines whether HTTPS is enabled.
  # https-enabled = false

  # The SSL certificate to use when HTTPS is enabled.
  # https-certificate = "/etc/ssl/influxdb.pem"

  # Use a separate private key location.
  # https-private-key = ""

  # The JWT auth shared secret to validate requests using JSON web tokens.
  # shared-secret = ""

  # The default chunk size for result sets that should be chunked.
  # max-row-limit = 0

  # The maximum number of HTTP connections that may be open at once.  New connections that
  # would exceed this limit are dropped.  Setting this value to 0 disables the limit.
  # max-connection-limit = 0

  # Enable http service over unix domain socket
  # unix-socket-enabled = false

  # The path of the unix domain socket.
  # bind-socket = "/var/run/influxdb.sock"

  # The maximum size of a client request body, in bytes. Setting this value to 0 disables the limit.
  # max-body-size = 25000000

  # The maximum number of writes processed concurrently.
  # Setting this to 0 disables the limit.
  # max-concurrent-write-limit = 0

  # The maximum number of writes queued for processing.
  # Setting this to 0 disables the limit.
  # max-enqueued-write-limit = 0

  # The maximum duration for a write to wait in the queue to be processed.
  # Setting this to 0 or setting max-concurrent-write-limit to 0 disables the limit.
  # enqueued-write-timeout = 0


###
### [ifql]
###
### Configures the ifql RPC API.
###

[ifql]
  # Determines whether the RPC service is enabled.
  # enabled = true

  # Determines whether additional logging is enabled.
  # log-enabled = true

  # The bind address used by the ifql RPC service.
  # bind-address = ":8082"


###
### [logging]
###
### Controls how the logger emits logs to the output.
###

[logging]
  # Determines which log encoder to use for logs. Available options
  # are auto, logfmt, and json. auto will use a more user-friendly
  # output format if the output terminal is a TTY, but the format is not as
  # easily machine-readable. When the output is a non-TTY, auto will use
  # logfmt.
  # format = "auto"

  # Determines which level of logs will be emitted. The available levels
  # are error, warn, info, and debug. Logs that are equal to or above the
  # specified level will be emitted.
  # level = "info"

  # Suppresses the logo output that is printed when the program is started.
  # The logo is always suppressed if STDOUT is not a TTY.
  # suppress-logo = false

###
### [subscriber]
###
### Controls the subscriptions, which can be used to fork a copy of all data
### received by the InfluxDB host.
###

[subscriber]
  # Determines whether the subscriber service is enabled.
  # enabled = true

  # The default timeout for HTTP writes to subscribers.
  # http-timeout = "30s"

  # Allows insecure HTTPS connections to subscribers.  This is useful when testing with self-
  # signed certificates.
  # insecure-skip-verify = false

  # The path to the PEM encoded CA certs file. If the empty string, the default system certs will be used
  # ca-certs = ""

  # The number of writer goroutines processing the write channel.
  # write-concurrency = 40

  # The number of in-flight writes buffered in the write channel.
  # write-buffer-size = 1000


###
### [[graphite]]
###
### Controls one or many listeners for Graphite data.
###

[[graphite]]
  # Determines whether the graphite endpoint is enabled.
  # enabled = false
  # database = "graphite"
  # retention-policy = ""
  # bind-address = ":2003"
  # protocol = "tcp"
  # consistency-level = "one"

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Batching
  # will buffer points in memory if you have many coming in.

  # Flush if this many points get buffered
  # batch-size = 5000

  # number of batches that may be pending in memory
  # batch-pending = 10

  # Flush at least this often even if we haven't hit buffer limit
  # batch-timeout = "1s"

  # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
  # udp-read-buffer = 0

  ### This string joins multiple matching 'measurement' values providing more control over the final measurement name.
  # separator = "."

  ### Default tags that will be added to all metrics.  These can be overridden at the template level
  ### or by tags extracted from metric
  # tags = ["region=us-east", "zone=1c"]

  ### Each template line requires a template pattern.  It can have an optional
  ### filter before the template and separated by spaces.  It can also have optional extra
  ### tags following the template.  Multiple tags should be separated by commas and no spaces
  ### similar to the line protocol format.  There can be only one default template.
  # templates = [
  #   "*.app env.service.resource.measurement",
  #   # Default template
  #   "server.*",
  # ]

###
### [collectd]
###
### Controls one or many listeners for collectd data.
###

[[collectd]]
  # enabled = false
  # bind-address = ":25826"
  # database = "collectd"
  # retention-policy = ""
  #
  # The collectd service supports either scanning a directory for multiple types
  # db files, or specifying a single db file.
  # typesdb = "/usr/local/share/collectd"
  #
  # security-level = "none"
  # auth-file = "/etc/collectd/auth_file"

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Batching
  # will buffer points in memory if you have many coming in.

  # Flush if this many points get buffered
  # batch-size = 5000

  # Number of batches that may be pending in memory
  # batch-pending = 10

  # Flush at least this often even if we haven't hit buffer limit
  # batch-timeout = "10s"

  # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
  # read-buffer = 0

  # Multi-value plugins can be handled two ways.
  # "split" will parse and store the multi-value plugin data into separate measurements
  # "join" will parse and store the multi-value plugin as a single multi-value measurement.
  # "split" is the default behavior for backward compatability with previous versions of influxdb.
  # parse-multivalue-plugin = "split"
###
### [opentsdb]
###
### Controls one or many listeners for OpenTSDB data.
###

[[opentsdb]]
  # enabled = true
  # bind-address = ":4242"
  # database = "opentsdb"
  # retention-policy = ""
  # consistency-level = "one"
  # tls-enabled = true
  # certificate= "/etc/ssl/influxdb.pem"

  # Log an error for every malformed point.
  # log-point-errors = true

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Only points
  # metrics received over the telnet protocol undergo batching.

  # Flush if this many points get buffered
  # batch-size = 1000

  # Number of batches that may be pending in memory
  # batch-pending = 5

  # Flush at least this often even if we haven't hit buffer limit
  # batch-timeout = "1s"

###
### [[udp]]
###
### Controls the listeners for InfluxDB line protocol data via UDP.
###

[[udp]]
  # enabled = false
  # bind-address = ":8089"
  # database = "udp"
  # retention-policy = ""

  # InfluxDB precision for timestamps on received points ("" or "n", "u", "ms", "s", "m", "h")
  # precision = ""

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Batching
  # will buffer points in memory if you have many coming in.

  # Flush if this many points get buffered
  # batch-size = 5000

  # Number of batches that may be pending in memory
  # batch-pending = 10

  # Will flush at least this often even if we haven't hit buffer limit
  # batch-timeout = "1s"

  # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
  # read-buffer = 0

###
### [continuous_queries]
###
### Controls how continuous queries are run within InfluxDB.
###

[continuous_queries]
  # Determines whether the continuous query service is enabled.
  # enabled = true

  # Controls whether queries are logged when executed by the CQ service.
  # log-enabled = true

  # Controls whether queries are logged to the self-monitoring data store.
  # query-stats-enabled = false

  # interval for how often continuous queries will be checked if they need to run
  # run-interval = "1s"

###
### [tls]
###
### Global configuration settings for TLS in InfluxDB.
###

[tls]
  # Determines the available set of cipher suites. See https://golang.org/pkg/crypto/tls/#pkg-constants
  # for a list of available ciphers, which depends on the version of Go (use the query
  # SHOW DIAGNOSTICS to see the version of Go used to build InfluxDB). If not specified, uses
  # the default settings from Go's crypto/tls package.
  # ciphers = [
  #   "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
  #   "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
  # ]

  # Minimum version of the tls protocol that will be negotiated. If not specified, uses the
  # default settings from Go's crypto/tls package.
  # min-version = "tls1.2"

  # Maximum version of the tls protocol that will be negotiated. If not specified, uses the
  # default settings from Go's crypto/tls package.
  # max-version = "tls1.2"

As mentioned in this comment of issue #14 , I tried uncommenting the line enabled = true. Now, when trying to connect to the database in Grafana, it says "Data source is working", but I still have "No data" on all charts.

Do you have any clue on how to fix this?
Thank you for your help!
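
A hedged debugging sketch for this "database not found: opentsdb" symptom, assuming InfluxDB 1.x with its default ports: after uncommenting enabled = true under [[opentsdb]], the service has to be restarted, and NetData (which only exports once per minute) has to push at least one batch before the database appears.

sudo systemctl restart influxdb
ss -tlnp | grep 4242                      # the OpenTSDB listener should be bound on :4242
sudo systemctl restart netdata
sleep 90                                  # give NetData time to push a first batch of metrics
influx -execute 'SHOW DATABASES'          # "opentsdb" should now be listed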

Include all InfluxDB databases in backup

Describe the bug

InfluxDB backup does not include all databases.

Context

  • Hardware: AMD 4600G
  • YunoHost version: latest 11.1.7
  • I have access to my server: Through SSH | through the webadmin | direct access via keyboard / screen | ...
  • Are you in a special context or did you perform some particular tweaking on your YunoHost instance?: no

Steps to reproduce

  • create a new DB in InfluxDB, populate it with some GBs of data
  • launch a backup from the webadmin
  • the size of the backup is only 30MB, it should be several GBs
  • open the backup script and see that only one DB is backed up

Expected behavior

  • Backup every DB
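
A minimal sketch of a loop that would back up every database rather than a single one, assuming InfluxDB 1.x and its influx/influxd CLIs (the target directory below is illustrative):

backup_dir="/home/yunohost.backup/influxdb"         # hypothetical target path
mkdir -p "$backup_dir"
for db in $(influx -execute 'SHOW DATABASES' -format csv | tail -n +2 | cut -d, -f2); do
    [ "$db" = "_internal" ] && continue             # skip InfluxDB's internal stats database
    influxd backup -portable -database "$db" "$backup_dir/$db"
done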

Problem with LDAP: LOG15_ERROR

Hi,
Grafana was working very well, but now I have this error:

Grafana / Server Error
Sadly something went wrong
Failed to sync user

Check the Grafana server logs for the detailed error message.

t=2019-02-26T23:38:56+0100 lvl=info msg="LDAP initial bind failed, %v" logger=ldap LOG15_ERROR= LOG15_ERROR="Normalized odd number of arguments by adding nil"
t=2019-02-26T23:38:56+0100 lvl=eror msg="Failed to sync user" logger=context error="LDAP Result Code 206 \"Empty password not allowed by the client\": ldap: empty password not allowed by the client"

There is no problem with other apps using LDAP;
NetData is OK.
I tried to remove and reinstall, but I have the same issue.

 # yunohost -v
yunohost: 
  repo: stable
  version: 3.4.2.4
yunohost-admin: 
  repo: stable
  version: 3.4.2
moulinette: 
  repo: stable
  version: 3.4.2
ssowat: 
  repo: stable
  version: 3.4.2

# yunohost app info grafana
description: Tableaux de bords de supervision
license: Apache-2.0
name: Grafana
version: 1.2.0~ynh1

thx for your help ;)
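
A hedged diagnostic sketch, assuming the stock Debian paths: the error suggests Grafana attempts an LDAP bind with an empty password, so inspecting the packaged LDAP configuration and the recent LDAP log lines may help.

sudo grep -E 'bind_dn|bind_password' /etc/grafana/ldap.toml
sudo journalctl -u grafana-server | grep -i ldap | tail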

Upgrade doesn't work

Upgrade is broken due to new custom port setting at installation.
@nemsia I think we forgot to test that part :-)

A possible plan (sketched below):

  • set the port in the application settings during installation
  • retrieve that port setting during upgrade; if not set, set to 3000
  • make the string substitution in nginx.conf as you did in install script

Could you possibly make a PR to fix that part?
(Don't forget you can now create a branch in the official repository ;-)
Thanks!
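
A minimal sketch of that plan for the upgrade script; the helper syntax, the template path, and the __PORT__ placeholder are illustrative, not the actual package code:

port=$(ynh_app_setting_get "$app" port)
if [ -z "$port" ]; then
    port=3000                                        # default for installs made before the setting existed
    ynh_app_setting_set "$app" port "$port"
fi
sudo cp ../conf/nginx.conf /etc/nginx/conf.d/$domain.d/$app.conf           # re-copy the nginx template
sudo sed -i "s@__PORT__@$port@g" /etc/nginx/conf.d/$domain.d/$app.conf     # then substitute, as in the install script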

Cannot login after logout

After installation and first login, I tried to log out of Grafana from the top-right-hand corner menu.

It brought me to a page saying:

Server side error :(
Failed to sync user

Logging out of YunoHost and logging back in does not solve the problem. There is no way left to log in to Grafana. :(

Ask the user before installing InfluxDB

First of all, thank you for your work on this package.

Users who want to install Grafana are not made aware that InfluxDB is installed at the same time as the package (there's no mention of InfluxDB when the package is installed from YunoHost). Would it be possible to ask them before doing so (with a boolean, for example), or at least to mention it?
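
A hypothetical sketch of what that could look like in the install script; the install_influxdb question does not exist in the manifest today and its name is invented here for illustration:

install_influxdb=${YNH_APP_ARG_INSTALL_INFLUXDB:-1}   # hypothetical boolean manifest argument
if [ "$install_influxdb" -eq 1 ]; then
    echo "Installing and configuring InfluxDB..."     # the steps the script currently runs unconditionally
fi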

Validate quality level 5 conformance

This level needs to be formally assessed as there is one residual warning stated by package_linter:

>>>> RESTORE SCRIPT <<<<
✔ All commands are prefix with "sudo".
✔ Only helpers are used
✔ no 'exit' command found: 'ynh_die' helper is possibly used
✘ At line 27 'ynh_die' or 'exit' command is executed with system modification before.
 This system modification is an issue if a verification exit the script.
 You should move this verification before any system modification.
✔ set -eu is present at beginning of file
✔ Argument retrieval from manifest seems to be done with environement variables

The detected offending system modification is this one:

# Fix permissions
sudo chmod a+rx ./conf_grafana/_common.sh

which only changes permissions on a temporary file of the archive restoration. By the way, this is a workaround that won't be needed anymore once this change is merged into the YunoHost stable version.
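
An illustrative sketch of the reordering the linter asks for: run every verification first, so a failing check can call ynh_die before anything has been modified, and only then touch the system (the checkurl verification below is an era-appropriate example, not the actual script content).

# 1. Verifications first, so a failure exits before any system modification
sudo yunohost app checkurl "${domain}${path}" -a "$app" \
    || ynh_die "Path not available: ${domain}${path}"
# 2. System modifications only afterwards
sudo chmod a+rx ./conf_grafana/_common.sh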

No new data - even when restarting the services

Hello,

For the past few hours, Grafana has indicated that there are no data points. But if I take a bigger time range, I can see older data.

I tried to restart the different services (netdata, influxdb and grafana-server) one by one, but the result is the same.
Removing and reinstalling Grafana doesn't solve the issue.
It seems like InfluxDB is correctly sending the data to Grafana, but it's just empty or something like that.

Please tell me if you need any additional information.

Thanks :)

Network error: Bad gateway (502)

Hello,

All my NetData graphs are empty because of the error mentioned in the title, and I don't understand what I did wrong.

Unclean uninstall

Hi,
when I uninstall Grafana and install it back, the metrics are still stored and there. How can I do a clean uninstall of Grafana?

Incorrect Redirect

Describe the bug

Website redirects through links do not take into consideration the subfolder (path) that the application is installed under (possible upstream issue).

Context

  • Hardware: Raspberry Pi 4 B at home
  • YunoHost version: 11.2.11.3
  • I have access to my server: Through SSH | through the webadmin
  • Are you in a special context or did you perform some particular tweaking on your YunoHost instance?: no
  • Using, or trying to install package version/branch: 10.2.3~ynh1

Steps to reproduce

  • Install Grafana to the subdirectory host.example.com/grafana
  • Navigate to "import dashboard"
  • Navigate to "configure a new data source"
  • It redirects to the SSO page due to a malformed path

Expected behavior

  • Opens the "Configure a new data source" page

Logs

Installation logs (likely unrelated): https://paste.yunohost.org/raw/yicabohute

(should direct to host.example.com/grafana/connections/datasources/new)
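
A hedged pointer, assuming the stock Debian paths: when Grafana is served from a sub-path it only generates correct links if it knows about that sub-path itself, so checking the relevant grafana.ini settings is a reasonable first step.

grep -E 'root_url|serve_from_sub_path' /etc/grafana/grafana.ini
# expected to look roughly like:
#   root_url = %(protocol)s://%(domain)s/grafana/
#   serve_from_sub_path = true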

Netdata conf

Hello,

I have a problem with the data: NetData sends metrics to InfluxDB split per service rather than summarized, e.g. cpu.mysld, cpu.httpd.

I want the data in InfluxDB to contain only an overall CPU value (a summary of all CPU usage of the services on the server).

Please help me figure out what I must do. Thank you.
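
A hedged sketch of how to inspect and aggregate those per-service series, assuming InfluxDB 1.x, the "opentsdb" database and the "value" field written by the OpenTSDB listener (the measurement regexes are illustrative):

influx -database 'opentsdb' -execute 'SHOW MEASUREMENTS WITH MEASUREMENT =~ /cpu/'
# a Grafana panel (or a query like the one below) can then sum the per-service series instead of plotting each one:
influx -database 'opentsdb' -execute 'SELECT sum("value") FROM /cpu/ WHERE time > now() - 1h GROUP BY time(1m)'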

Update to InfluxDB 2

The latest version of InfluxDB (v2) is a major upgrade. For the reasons shared in #45, I'm trying to update the YunoHost package. Unfortunately, my lack of skills is blocking me.

Would anybody here be able to support me? I have created 2 PRs:

Thanks for your great work and your support!

Can't install grafana_ynh when influx2 already installed (by scrutiny_ynh)

Describe the bug

The Grafana YunoHost package install fails when scrutiny_ynh (https://github.com/YunoHost-Apps/scrutiny_ynh) is already installed (it installs the influx2 Debian package).

Context

  • Hardware: Old computer
  • YunoHost version: 11.2.5 (stable).
  • I have access to my server: Through SSH | through the webadmin | direct access via keyboard / screen
  • Are you in a special context or did you perform some particular tweaking on your YunoHost instance?: yes
    • If yes, please explain: various install tweaks
  • Using, or trying to install package version/branch:

Steps to reproduce

Expected behavior

For both applications to be installable side by side. I can see there are a number of issues about upgrading to influx2; maybe that is a solution to this problem. See #52.

Logs

https://paste.yunohost.org/raw/zevajafupu

Error when trying to restore grafana from a backup

Describe the bug

Error message when restoring influxdb & grafana from a backup

Context

  • Hardware: VPS @ OVH and local virtualbox server
  • YunoHost version: 4.3.2.2
  • I have access to my server: Through SSH & through the webadmin
  • Are you in a special context or did you perform some particular tweaking on your YunoHost instance?: no

Steps to reproduce

  • on a fresh install
  • create one user
  • install the Grafana app with the admin being the only user
  • install app netdata
  • edit netdata exporting.conf to enable export to influxdb (this config is used)
  • as root, run: systemctl restart netdata.service
  • edit influxdb.conf to enable the opentsdb database this config is used
  • confirm the netdata data is visible in grafana https://XXXX/grafana/d/yunohost/NetData?refresh=5s&orgId=1
  • perform a full backup
  • on a fresh install with one user (same user name as before)
  • upload the backup archive
  • restore the backup

Expected behavior

  • Grafana, InfluxDB, and the data are restored

Current behavior

The Grafana restore fails with an error message; see the logs below:
https://paste.yunohost.org/raw/uhibanebug

Won't install

Context

  • Hardware: x64
  • YunoHost version: 11.2.10.3
  • I have access to my server: yes
  • Using, or trying to install package version/branch: grafana

Steps to reproduce

Perform a fresh install.
Log:

The page isn't redirecting properly ("La page n’est pas redirigée correctement")

Describe the bug

I can't connect to / use Grafana. The page says:

The page isn't redirecting properly

An error occurred during a connection to #####

The cause of this problem may be that cookies are disabled or refused.

(Of course I've also tried to navigate in private browsing, to clear cookies, etc.)

Context

  • Hardware: VPS bought online
  • YunoHost version: latest 11.1.21.4
  • I have access to my server: Through SSH | through the webadmin
  • Are you in a special context or did you perform some particular tweaking on your YunoHost instance?: yes
    • If yes, please explain: I've imported a backup from another yunohost instance
  • Using, or trying to install package version/branch:
  • If upgrading, current package version: 9.5.3~ynh1

Steps to reproduce

  • I backed up a working Grafana instance from another server
  • I restored it through the webadmin
  • I changed the DNS (yesterday) to match the new server. Every other app is fine. Grafana keeps complaining about a wrong redirection.
  • On the old server, I changed the DNS and the URL. On that one, Grafana is working fine (even after the modification).
  • I moved Grafana to another URL and another DNS (on the same YunoHost server) on the new server
  • I uninstalled Grafana and installed it from scratch: same error
  • I uninstalled it again, located all the old remaining Grafana conf files and removed them as well, then installed Grafana again on another domain (same server): same error

Logs

Install additional Grafana-Addon

Describe the bug

Trying to install a Grafana plugin manually.

Context

  • Hardware: Raspberry Pi at home
  • YunoHost version: 4.3.4.1
  • I have access to my server: Through SSH & through the webadmin
  • Are you in a special context or did you perform some particular tweaking on your YunoHost instance?: no
  • Using, or trying to install package version/branch:

Steps to reproduce

I am trying to install an add-on manually, but I'm a bit lost between YunoHost and the Grafana installation. I have no clue how to connect to the Grafana instance or how to get the correct read/write access.

Any tips or suggestions?

https://grafana.com/docs/grafana/latest/plugins/installation/

https://github.com/akenza-io/grafana-connector-v3/releases
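
A short sketch of installing a plugin from the command line; the plugin ID is only an example, and the paths assume the stock Debian grafana package this app installs:

sudo grafana-cli plugins install grafana-clock-panel
sudo systemctl restart grafana-server
# plugins normally end up under /var/lib/grafana/plugins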

Use binaries instead of deb?

Just an idea: do we want to use the binaries instead of the debs?

advantages:

disadvantages:

  • only amd64 and arm64 are available as binaries

IPv4 graph empty for eth0

Hi,
I have a static IPv4 address on eth0. NetData shows live data for it, but Grafana shows an empty graph for it.
