
nri-postgresql's Introduction

New Relic Open Source community plus project banner.

New Relic integration for PostgreSQL

The New Relic integration for PostgreSQL captures critical performance metrics and inventory reported by PostgreSQL instances. Data is collected at the instance, database, and collection level. Additionally, the integration can be configured to collect metrics from PgBouncer.

Inventory data for the configuration of the instance is collected from the pg_statistics database.

Requirements

A user with the necessary permissions must be present on the database for all metrics to be collected.

Installation and usage

For installation and usage instructions, see our documentation web site.

Building

Golang is required to build the integration. We recommend Golang 1.11 or higher.

After cloning this repository, go to the directory of the PostgreSQL integration and build it:

$ make

The command above executes the tests for the PostgreSQL integration and builds an executable file called nri-postgresql under the bin directory.

To start the integration, run nri-postgresql:

$ ./bin/nri-postgresql

If you want to know more about the usage of ./bin/nri-postgresql, pass the -help parameter:

$ ./bin/nri-postgresql -help

External dependencies are managed through the govendor tool. All external dependencies must be locked to a specific version (whenever possible) in the vendor directory.

Testing

To run the tests, execute:

$ make test

Support

Should you need assistance with New Relic products, you are in good hands with several support diagnostic tools and support channels.

New Relic offers NRDiag, a client-side diagnostic utility that automatically detects common problems with New Relic agents. If NRDiag detects a problem, it suggests troubleshooting steps. NRDiag can also automatically attach troubleshooting data to a New Relic Support ticket.

If the issue has been confirmed as a bug or is a feature request, please file a GitHub issue.

Support Channels

Privacy

At New Relic we take your privacy and the security of your information seriously, and are committed to protecting your information. We must emphasize the importance of not sharing personal data in public forums, and ask all users to scrub logs and diagnostic information for sensitive information, whether personal, proprietary, or otherwise.

We define “Personal Data” as any information relating to an identified or identifiable individual, including, for example, your name, phone number, post code or zip code, Device ID, IP address, and email address.

For more information, review New Relic’s General Data Privacy Notice.

Contribute

We encourage your contributions to improve this project! Keep in mind that when you submit your pull request, you'll need to sign the CLA via the click-through using CLA-Assistant. You only have to sign the CLA one time per project.

If you have any questions, or to execute our corporate CLA (which is required if your contribution is on behalf of a company), drop us an email at [email protected].

A note about vulnerabilities

As noted in our security policy, New Relic is committed to the privacy and security of our customers and their data. We believe that providing coordinated disclosure by security researchers and engaging with the security community are important means to achieve our security goals.

If you believe you have found a security vulnerability in this project or any of New Relic's products or websites, we welcome and greatly appreciate you reporting it to New Relic through our bug bounty program.

If you would like to contribute to this project, review these guidelines.

To all contributors, we thank you! Without your contribution, this project would not be what it is today.

License

nri-postgresql is licensed under the MIT License.


nri-postgresql's Issues

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

docker-compose
tests/docker-compose.yml
  • postgres 9.6
  • postgres 16
  • golang 1.22.6-bookworm
dockerfile
build/Dockerfile
  • golang 1.22.6-bookworm
github-actions
.github/workflows/automated_release.yaml
  • newrelic/coreint-automation v3
.github/workflows/on_prerelease.yaml
  • newrelic/coreint-automation v3
.github/workflows/on_push_pr.yaml
  • newrelic/coreint-automation v3
.github/workflows/on_release.yaml
  • newrelic/coreint-automation v3
.github/workflows/repolinter.yml
  • newrelic/coreint-automation v3
.github/workflows/security.yaml
  • newrelic/coreint-automation v3
gomod
go.mod
  • go 1.22.6
  • github.com/blang/semver/v4 v4.0.0
  • github.com/jmoiron/sqlx v1.4.0
  • github.com/lib/pq v1.10.9
  • github.com/newrelic/infra-integrations-sdk v3.8.2+incompatible
  • github.com/stretchr/testify v1.9.0
  • github.com/xeipuuv/gojsonschema v1.2.0
  • gopkg.in/DATA-DOG/go-sqlmock.v1 v1.3.0
  • gopkg.in/yaml.v3 v3.0.1

  • Check this box to trigger a request for Renovate to run again on this repository

Support alternative database names

Description of the problem

The current implementation does not support collecting table statistics for any database other than the default postgres database.

I first detected this issue when the agent was unable to connect to the local database after granting CONNECT and the other permissions listed in the documentation, and after verifying that the SELECT statements for the required statistics tables worked as expected.

The bug can be traced to a few problems in the implementation: the database name is hardcoded in at least two places, in the defaults used when establishing the connection in pgsql_connection.go.

Even though postgres is not specified as a database in the collection_list, the agent attempts to connect to it and throws this error:

[ERR] Metrics collection failed: error creating connection to SQL Server: pq: permission denied for database "postgres"

This only affects table and index statistics, as database-level statistics can be gathered from any database on the same system; however, the query used to collect table data only returns tables within the database you connected to. You can verify this by running SELECT * FROM pg_statio_user_tables on a multi-database system.

Since the configuration allows multiple databases to be specified in the collection_list JSON blob, it may be simpler to connect once for each individual database in the configuration. This avoids the need for a separate database parameter in the main YML file, allows a single agent to collect stats from multiple databases on the same PostgreSQL instance, and removes the implicit dependency on CONNECT privileges on the postgres database. A sketch of this approach is shown below.
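
A minimal sketch of that approach, assuming sqlx and lib/pq (which the integration already uses); the function name and connection parameters are illustrative, not the integration's actual code:

package main

import (
	"fmt"

	"github.com/jmoiron/sqlx"
	_ "github.com/lib/pq" // PostgreSQL driver
)

type tableRow struct {
	Schema string `db:"schemaname"`
	Table  string `db:"relname"`
}

// collectTableStats connects once per database from collection_list instead of always
// connecting to the hardcoded "postgres" database, so pg_statio_user_tables is queried
// in the database that actually owns the tables.
func collectTableStats(host, port, user, password string, databases []string) error {
	for _, dbName := range databases {
		dsn := fmt.Sprintf("host=%s port=%s user=%s password=%s dbname=%s sslmode=disable",
			host, port, user, password, dbName)
		conn, err := sqlx.Connect("postgres", dsn)
		if err != nil {
			return fmt.Errorf("connecting to database %q: %w", dbName, err)
		}

		var tables []tableRow
		// Only tables of the connected database are returned here.
		if err := conn.Select(&tables, `SELECT schemaname, relname FROM pg_statio_user_tables`); err != nil {
			conn.Close()
			return fmt.Errorf("querying %q: %w", dbName, err)
		}
		for _, t := range tables {
			fmt.Printf("%s: %s.%s\n", dbName, t.Schema, t.Table)
		}
		conn.Close()
	}
	return nil
}

func main() {
	_ = collectTableStats("localhost", "5432", "newrelic", "secret", []string{"app_db", "reporting_db"})
}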

OS

This issue was first identified on a system running Debian Stretch (9.7) with PostgreSQL 11.1, nri-postgresql 1.0.1-1 (from official New Relic repository), and newrelic-infra 1.2.6.

  • All of them
  • Amazon Linux, all versions
  • CentOS, version 6 or higher
  • Debian, version 7 ("Wheezy") or higher
  • Red Hat Enterprise Linux (RHEL), version 6 or higher
  • Ubuntu, versions 12.04, 14.04, and 16.04 (LTS versions)
  • Windows Server, 2008 and 2012 and their service packs

Sample interval greater than 60s yields invalid results

A sample interval greater than 60s yields invalid results: all 'Per Second' metrics are calculated as 0.

Description

Setting the interval in postgresql-config.yml to a value greater than 60s leads to invalid metrics. If you do so, all 'Per Second' metrics are calculated as 0 and are no longer usable, for example in the PostgreSQL dashboard.

Expected Behavior

The interval can be set to a value greater than 60s, such as 120s or 300s, and still produce valid and usable metrics.

Troubleshooting:

  • Metrics like 'Buffers Allocated Per Second' in the PostgresqlInstanceSample rely on the storer in the persist package of the infra-integrations-sdk (Storer.go) to store the last retrieved value and calculate the current rate.
  • Problem: the storer only uses the DefaultTTL for data retention, which is set to 1 minute (storer.go:20). A simplified sketch of this behavior is shown below.
  • There is no error in the logs because an empty retrieval from the storer is interpreted as the first collection after starting the agent (metrics.go:161).
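
A simplified, self-contained sketch of the rate logic described above (not the SDK's actual code); it shows why a collection interval longer than the 60-second TTL makes every per-second rate come out as 0:

package main

import (
	"fmt"
	"time"
)

const defaultTTL = 1 * time.Minute // stand-in for DefaultTTL in storer.go:20

type sample struct {
	value     float64
	timestamp time.Time
}

var store = map[string]sample{} // in-memory stand-in for the persisted storer

// ratePerSecond returns (current - previous) / elapsed, or 0 when there is no usable
// previous sample (first collection, or the stored sample is older than the TTL).
func ratePerSecond(key string, current float64, now time.Time) float64 {
	prev, ok := store[key]
	store[key] = sample{value: current, timestamp: now}
	if !ok || now.Sub(prev.timestamp) > defaultTTL {
		return 0
	}
	return (current - prev.value) / now.Sub(prev.timestamp).Seconds()
}

func main() {
	t0 := time.Now()
	fmt.Println(ratePerSecond("buffers_allocated", 100, t0))                     // 0: first sample
	fmt.Println(ratePerSecond("buffers_allocated", 160, t0.Add(60*time.Second))) // 1: a 60s interval still works
	// With a 120s interval the stored sample is older than the TTL, so the rate drops back to 0.
	fmt.Println(ratePerSecond("buffers_allocated", 400, t0.Add(180*time.Second)))
}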

Steps to Reproduce

  1. Set interval in postgresql-config.yml to 120s.
  2. Wait 5 minutes.
  3. Look at the reported PostgresqlInstanceSample events in the query builder on the New Relic website: SELECT * FROM PostgresqlInstanceSample
  4. Confirm that the 'Per Second' values are all 0 and do not represent the actual conditions of the monitored database.

Version parsing error on Ubuntu

Description of the problem

The following log is printed when the integration tries to parse the PostgreSQL version on Ubuntu:

stderr="[ERR] Metrics collection failed: error collecting version number: Invalid character(s) found in minor number \"4 (Ubuntu 10\"\n"

It looks as though the integration expects semantic versioning, but Ubuntu version strings include the packaging suffix in the last position, as shown in the log above. A hedged parsing sketch is shown below.
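
A possible direction (not the integration's actual code): extract only the leading numeric part of the reported version before handing it to the semver parser, so packaging suffixes are ignored. The example version string is illustrative.

package main

import (
	"fmt"
	"regexp"

	"github.com/blang/semver/v4"
)

// versionRe captures the leading "major[.minor[.patch]]" part of server_version.
var versionRe = regexp.MustCompile(`^\s*(\d+(?:\.\d+){0,2})`)

func parsePostgresVersion(raw string) (semver.Version, error) {
	m := versionRe.FindStringSubmatch(raw)
	if m == nil {
		return semver.Version{}, fmt.Errorf("unrecognized version string %q", raw)
	}
	// ParseTolerant fills in missing minor/patch components (e.g. "10" -> 10.0.0).
	return semver.ParseTolerant(m[1])
}

func main() {
	// Illustrative Ubuntu-style version string with a packaging suffix.
	v, err := parsePostgresVersion("10.4 (Ubuntu 10.4-2)")
	fmt.Println(v, err) // 10.4.0 <nil>
}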

OS
  • All of them
  • Amazon Linux, all versions
  • CentOS, version 6 or higher
  • Debian, version 7 ("Wheezy") or higher
  • Red Hat Enterprise Linux (RHEL), version 6 or higher
  • Ubuntu, versions 12.04, 14.04, and 16.04 (LTS versions)
  • Windows Server, 2008 and 2012 and their service packs

[Repolinter] Open Source Policy Issues

Repolinter Report

🤖This issue was automatically generated by repolinter-action, developed by the Open Source and Developer Advocacy team at New Relic. This issue will be automatically updated or closed when changes are pushed. If you have any problems with this tool, please feel free to open a GitHub issue or give us a ping in #help-opensource.

This Repolinter run generated the following results:

❗ Error: 0 · ❌ Fail: 0 · ⚠️ Warn: 0 · ✅ Pass: 7 · Ignored: 0 · Total: 7

Passed


license-file-exists

Found file (LICENSE). New Relic requires that all open source projects have an associated license contained within the project. This license must be permissive (e.g. non-viral or copyleft), and we recommend Apache 2.0 for most use cases. For more information please visit https://docs.google.com/document/d/1vML4aY_czsY0URu2yiP3xLAKYufNrKsc7o4kjuegpDw/edit.

readme-file-exists

Found file (README.md). New Relic requires a README file in all projects. This README should give a general overview of the project, and should point to additional resources (security, contributing, etc.) where developers and users can learn further. For more information please visit https://github.com/newrelic/open-by-default.

readme-starts-with-community-plus-header

The first 5 lines contain all of the requested patterns. (README.md). The README of a community plus project should have a community plus header at the start of the README. If you already have a community plus header and this rule is failing, your header may be out of date, and you should update your header with the suggested one below. For more information please visit https://opensource.newrelic.com/oss-category/.

readme-contains-link-to-security-policy

Contains a link to the security policy for this repository (README.md). New Relic recommends putting a link to the open source security policy for your project (https://github.com/newrelic/<repo-name>/security/policy or ../../security/policy) in the README. For an example of this, please see the "a note about vulnerabilities" section of the Open By Default repository. For more information please visit https://nerdlife.datanerd.us/new-relic/security-guidelines-for-publishing-source-code.

readme-contains-forum-topic

Contains a link to the appropriate forum.newrelic.com topic (README.md). New Relic recommends directly linking the your appropriate forum.newrelic.com topic in the README, allowing developer an alternate method of getting support. For more information please visit https://nerdlife.datanerd.us/new-relic/security-guidelines-for-publishing-source-code.

code-of-conduct-should-not-exist-here

New Relic has moved the CODE_OF_CONDUCT file to a centralized location where it is referenced automatically by every repository in the New Relic organization. Because of this change, any other CODE_OF_CONDUCT file in a repository is now redundant and should be removed. Note that you will need to adjust any links to the local CODE_OF_CONDUCT file in your documentation to point to the central file (README and CONTRIBUTING will probably have links that need updating). For more information please visit https://docs.google.com/document/d/1y644Pwi82kasNP5VPVjDV8rsmkBKclQVHFkz8pwRUtE/view. Did not find a file matching the specified patterns. All files passed this test.

third-party-notices-file-exists

Found file (THIRD_PARTY_NOTICES.md). A THIRD_PARTY_NOTICES.md file can be present in your repository to grant attribution to all dependencies being used by this project. This document is necessary if you are using third-party source code in your project, with the exception of code referenced outside the project's compiled/bundled binary (ex. some Java projects require modules to be pre-installed in the classpath, outside the project binary and therefore outside the scope of the THIRD_PARTY_NOTICES). Please review your project's dependencies and create a THIRD_PARTY_NOTICES.md file if necessary. For JavaScript projects, you can generate this file using the oss-cli. For more information please visit https://docs.google.com/document/d/1y644Pwi82kasNP5VPVjDV8rsmkBKclQVHFkz8pwRUtE/view.

[Repolinter] Open Source Policy Issues

Repolinter Report

🤖This issue was automatically generated by repolinter-action, developed by the Open Source and Developer Advocacy team at New Relic. This issue will be automatically updated or closed when changes are pushed. If you have any problems with this tool, please feel free to open a GitHub issue or give us a ping in #help-opensource.

This Repolinter run generated the following results:

❗ Error: 0 · ❌ Fail: 2 · ⚠️ Warn: 0 · ✅ Pass: 5 · Ignored: 0 · Total: 7

Fail

readme-starts-with-community-plus-header

The README of a community plus project should have a community plus header at the start of the README. If you already have a community plus header and this rule is failing, your header may be out of date, and you should update your header with the suggested one below. For more information please visit https://opensource.newrelic.com/oss-category/. Below is a list of files or patterns that failed:

  • README.md: The first 5 lines do not contain the pattern(s): Open source Community Plus header (see https://opensource.newrelic.com/oss-category).
    • 🔨 Suggested Fix: prepend [![Community Plus header](https://github.com/newrelic/opensource-website/raw/master/src/images/categories/Community_Plus.png)](https://opensource.newrelic.com/oss-category/#community-plus) to file

code-of-conduct-file-does-not-exist

New Relic has moved the CODE_OF_CONDUCT file to a centralized location where it is referenced automatically by every repository in the New Relic organization. Because of this change, any other CODE_OF_CONDUCT file in a repository is now redundant and should be removed. For more information please visit https://docs.google.com/document/d/1y644Pwi82kasNP5VPVjDV8rsmkBKclQVHFkz8pwRUtE/view. Found files. Below is a list of files or patterns that failed:

  • CODE_OF_CONDUCT.md
    • 🔨 Suggested Fix: Remove file

Passed


license-file-exists

Found file (LICENSE). New Relic requires that all open source projects have an associated license contained within the project. This license must be permissive (e.g. non-viral or copyleft), and we recommend Apache 2.0 for most use cases. For more information please visit https://docs.google.com/document/d/1vML4aY_czsY0URu2yiP3xLAKYufNrKsc7o4kjuegpDw/edit.

readme-file-exists

Found file (README.md). New Relic requires a README file in all projects. This README should give a general overview of the project, and should point to additional resources (security, contributing, etc.) where developers and users can learn further. For more information please visit https://github.com/newrelic/open-by-default.

readme-contains-link-to-security-policy

Contains a link to the security policy for this repository (README.md). New Relic recommends putting a link to the open source security policy for your project (https://github.com/newrelic/<repo-name>/security/policy or ../../security/policy) in the README. For an example of this, please see the "a note about vulnerabilities" section of the Open By Default repository. For more information please visit https://nerdlife.datanerd.us/new-relic/security-guidelines-for-publishing-source-code.

readme-contains-discuss-topic

Contains a link to the appropriate discuss.newrelic.com topic (README.md). New Relic recommends directly linking the your appropriate discuss.newrelic.com topic in the README, allowing developer an alternate method of getting support. For more information please visit https://nerdlife.datanerd.us/new-relic/security-guidelines-for-publishing-source-code.

third-party-notices-file-exists

Found file (THIRD_PARTY_NOTICES.md). A THIRD_PARTY_NOTICES.md file can be present in your repository to grant attribution to all dependencies being used by this project. This document is necessary if you are using third-party source code in your project, with the exception of code referenced outside the project's compiled/bundled binary (ex. some Java projects require modules to be pre-installed in the classpath, outside the project binary and therefore outside the scope of the THIRD_PARTY_NOTICES). Please review your project's dependencies and create a THIRD_PARTY_NOTICES.md file if necessary. For JavaScript projects, you can generate this file using the oss-cli. For more information please visit https://docs.google.com/document/d/1y644Pwi82kasNP5VPVjDV8rsmkBKclQVHFkz8pwRUtE/view.

Add important/missing postgres metrics

Description

Missing important postgres metrics.

Acceptance Criteria

Metric parity with the Datadog integration for Postgres:

  • All metric data related to vacuum (including all tuple counts - dead tuples per table)
  • transaction data (including xid wraparound monitoring per database and per instance)
  • wal/replication stats (for the instance)

Design Consideration/Limitations

At present it is not possible to monitor the conditions that a DBA would normally monitor on a Postgres database.

As Postgres grows, you want to pay much closer attention to things like the following (see the query sketch after this list):

  • Quantity of dead tuples on a table, so that you can adjust table-specific settings and keep autovacuum from taking down your DB unexpectedly in the middle of the day.
  • How frequently autovacuums/vacuums and autoanalyses/analyses are happening.
  • Whether or not you're about to hit XID wraparound and have the postgres process hard shutdown.
  • Whether or not you have significant, unexpected WAL lag on the replicas.
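
For reference, here is a hedged sketch of the standard PostgreSQL catalog queries (PostgreSQL 10+ column names) that the requested metrics would come from; these are not queries the integration currently runs:

package main

import "fmt"

const (
	// Dead tuples and vacuum/analyze activity per table.
	deadTuplesQuery = `SELECT schemaname, relname, n_dead_tup, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables;`

	// Transaction ID age per database, for XID wraparound monitoring.
	xidWraparoundQuery = `SELECT datname, age(datfrozenxid) AS xid_age FROM pg_database;`

	// Replication lag in bytes per connected replica, measured on the primary.
	walLagQuery = `SELECT client_addr,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;`
)

func main() {
	fmt.Println(deadTuplesQuery)
	fmt.Println(xidWraparoundQuery)
	fmt.Println(walLagQuery)
}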

Dependencies

  • It would be wise to improve the out-of-the-box dashboard UI as well.

Additional context

  • NR case 00073909 is open requesting work on this

Estimates

L

[FEATURE] Add 'collection_exclude_list' option to allow specifying databases to be ignored by nri-postgresql

Is your feature request related to a problem? Please describe.

I'm always frustrated when NRI tries to connect to Azure 'internal use only' databases (to which no customer has access anyway, so it could never work), and because of this my pgBadger reports keep displaying the NRI user's failed PostgreSQL connections as the number 1 error message in terms of number of attempts.

Feature Description

We are using Azure Database for PostgreSQL, and this Azure service offering has some additional databases (azure_maintenance and azure_sys) to which no customer user has access (only the azure_superuser can interact with these databases, and only Azure itself can use the azure_superuser).

Because of this, NRI always fails when trying to connect to these databases to collect data from them. We would like a config option (say, 'collection_ignore_list') that we could use to exclude these internal-use-only Azure databases from being queried by NRI; a rough sketch of the idea is shown below.
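
A rough sketch of the requested behavior (the option and function names here are illustrative, not an existing configuration option): databases named in the ignore list are dropped before any connection is attempted.

package main

import "fmt"

// filterDatabases removes every database named in ignore from the list of databases
// the integration would otherwise connect to.
func filterDatabases(all, ignore []string) []string {
	ignored := make(map[string]bool, len(ignore))
	for _, db := range ignore {
		ignored[db] = true
	}
	keep := make([]string, 0, len(all))
	for _, db := range all {
		if !ignored[db] {
			keep = append(keep, db)
		}
	}
	return keep
}

func main() {
	all := []string{"postgres", "appdb", "azure_maintenance", "azure_sys"}
	fmt.Println(filterDatabases(all, []string{"azure_maintenance", "azure_sys"}))
	// Output: [postgres appdb]
}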

Describe Alternatives

We could use the 'collection_list' config option to list all of the databases except these Azure internal-use-only ones, but then we would need to keep updating the list every time we create a new database.

Additional context

NRI keeps trying to connect to and query these internal-use-only Azure databases every minute, and our PostgreSQL logs are full of these errors. Also, we collect these logs and process them with pgBadger, so the reports generated by pgBadger always show this as our number 1 error.

Priority

Really Want

Not able to connect to postgres-open-source-integration

Hi Guys,

I tried connecting PostgreSQL with New Relic using the command below, and I also tried manual installation:

curl -Ls https://download.newrelic.com/install/newrelic-cli/scripts/install.sh | bash && sudo NEW_RELIC_API_KEY=
NEW_RELIC_ACCOUNT_ID=
NEW_RELIC_REGION=EU /usr/local/bin/newrelic install -n postgres-open-source-integration

error logs:

ERROR Could not find recipe with name postgres-open-source-integration.

We encountered an issue during the installation: no recipes were installed.
For more information about what happened, rerun the command using the --debug flag.

newrelic install --debug

If this problem persists, visit the documentation and support page for additional help here at https://one.newrelic.com/-/06vjAeZLKjP.

[root@tools-database integrations.d]# newrelic install --debug
DEBUG could not load string value for key region, returning zero value: no value found at path *.region
DEBUG could not load string value for key insightsInsertKey and profile , returning zero value: no value found at path insightsInsertKey
DEBUG performing request method=POST url="https://api.newrelic.com/graphql"
DEBUG performing request method=POST url="https://api.newrelic.com/graphql"
DEBUG performing request method=POST url="https://api.newrelic.com/graphql"
DEBUG performing request method=POST url="https://api.newrelic.com/graphql"
DEBUG performing request method=POST url="https://api.newrelic.com/graphql"
DEBUG performing request method=POST url="https://api.newrelic.com/graphql"
DEBUG performing request method=POST url="https://api.newrelic.com/graphql"
DEBUG performing request method=POST url="https://api.newrelic.com/graphql"
DEBUG performing request method=POST url="https://api.newrelic.com/graphql"
DEBUG performing request method=POST url="https://api.newrelic.com/graphql"
DEBUG performing request method=POST url="https://api.newrelic.com/graphql"
DEBUG performing request method=POST url="https://api.newrelic.com/graphql"
DEBUG performing request method=POST url="https://api.newrelic.com/graphql"
DEBUG performing request method=POST url="https://api.newrelic.com/graphql"
DEBUG performing request method=POST url="https://api.newrelic.com/graphql"
DEBUG performing request method=POST url="https://api.newrelic.com/graphql"
DEBUG performing request method=POST url="https://api.newrelic.com/graphql"
DEBUG performing request method=POST url="https://api.newrelic.com/graphql"
DEBUG performing request method=POST url="https://api.newrelic.com/graphql"
DEBUG performing request method=POST url="https://api.newrelic.com/graphql"
FATAL could not fetch license key for account 1234: 403 response returned

I also tried the manual installation described here:

https://docs.newrelic.com/docs/integrations/host-integrations/host-integrations-list/postgresql-monitoring-integration/#config

Can you please help me with this?

[Repolinter] Open Source Policy Issues

Repolinter Report

🤖This issue was automatically generated by repolinter-action, developed by the Open Source and Developer Advocacy team at New Relic. This issue will be automatically updated or closed when changes are pushed. If you have any problems with this tool, please feel free to open a GitHub issue or give us a ping in #help-opensource.

This Repolinter run generated the following results:

❗ Error: 0 · ❌ Fail: 1 · ⚠️ Warn: 0 · ✅ Pass: 6 · Ignored: 0 · Total: 7

Fail

readme-starts-with-community-header

The README of a community project should have a community project header at the start of the README. If you already have a community project header and this rule is failing, your header may be out of date, and you should update your header with the suggested one below. For more information please visit https://opensource.newrelic.com/oss-category/. Below is a list of files or patterns that failed:

  • README.md: The first 5 lines do not contain the pattern(s): Open source Community header (see https://opensource.newrelic.com/oss-category).
    • 🔨 Suggested Fix: prepend the latest code snippet found at https://github.com/newrelic/opensource-website/wiki/Open-Source-Category-Snippets#code-snippet-1 to file

Passed


license-file-exists

Found file (LICENSE). New Relic requires that all open source projects have an associated license contained within the project. This license must be permissive (e.g. non-viral or copyleft), and we recommend Apache 2.0 for most use cases. For more information please visit https://docs.google.com/document/d/1vML4aY_czsY0URu2yiP3xLAKYufNrKsc7o4kjuegpDw/edit.

readme-file-exists

Found file (README.md). New Relic requires a README file in all projects. This README should give a general overview of the project, and should point to additional resources (security, contributing, etc.) where developers and users can learn further. For more information please visit https://github.com/newrelic/open-by-default.

readme-contains-link-to-security-policy

Contains a link to the security policy for this repository (README.md). New Relic recommends putting a link to the open source security policy for your project (https://github.com/newrelic/<repo-name>/security/policy or ../../security/policy) in the README. For an example of this, please see the "a note about vulnerabilities" section of the Open By Default repository. For more information please visit https://nerdlife.datanerd.us/new-relic/security-guidelines-for-publishing-source-code.

readme-contains-forum-topic

Contains a link to the appropriate forum.newrelic.com topic (README.md). New Relic recommends directly linking the your appropriate forum.newrelic.com topic in the README, allowing developer an alternate method of getting support. For more information please visit https://nerdlife.datanerd.us/new-relic/security-guidelines-for-publishing-source-code.

code-of-conduct-should-not-exist-here

New Relic has moved the CODE_OF_CONDUCT file to a centralized location where it is referenced automatically by every repository in the New Relic organization. Because of this change, any other CODE_OF_CONDUCT file in a repository is now redundant and should be removed. Note that you will need to adjust any links to the local CODE_OF_CONDUCT file in your documentation to point to the central file (README and CONTRIBUTING will probably have links that need updating). For more information please visit https://docs.google.com/document/d/1y644Pwi82kasNP5VPVjDV8rsmkBKclQVHFkz8pwRUtE/view. Did not find a file matching the specified patterns. All files passed this test.

third-party-notices-file-exists

Found file (THIRD_PARTY_NOTICES.md). A THIRD_PARTY_NOTICES.md file can be present in your repository to grant attribution to all dependencies being used by this project. This document is necessary if you are using third-party source code in your project, with the exception of code referenced outside the project's compiled/bundled binary (ex. some Java projects require modules to be pre-installed in the classpath, outside the project binary and therefore outside the scope of the THIRD_PARTY_NOTICES). Please review your project's dependencies and create a THIRD_PARTY_NOTICES.md file if necessary. For JavaScript projects, you can generate this file using the oss-cli. For more information please visit https://docs.google.com/document/d/1y644Pwi82kasNP5VPVjDV8rsmkBKclQVHFkz8pwRUtE/view.

[Repolinter] Open Source Policy Issues

Repolinter Report

🤖This issue was automatically generated by repolinter-action, developed by the Open Source and Developer Advocacy team at New Relic. This issue will be automatically updated or closed when changes are pushed. If you have any problems with this tool, please feel free to open a GitHub issue or give us a ping in #help-opensource.

This Repolinter run generated the following results:

❗ Error: 0 · ❌ Fail: 1 · ⚠️ Warn: 0 · ✅ Pass: 6 · Ignored: 0 · Total: 7

Fail

readme-starts-with-community-plus-header

The README of a community plus project should have a community plus header at the start of the README. If you already have a community plus header and this rule is failing, your header may be out of date, and you should update your header with the suggested one below. For more information please visit https://opensource.newrelic.com/oss-category/. Below is a list of files or patterns that failed:

  • README.md: The first 5 lines do not contain the pattern(s): Open source Community Plus header (see https://opensource.newrelic.com/oss-category).
    • 🔨 Suggested Fix: prepend the latest code snippet found at https://github.com/newrelic/opensource-website/wiki/Open-Source-Category-Snippets#code-snippet-2 to file

Passed


license-file-exists

Found file (LICENSE). New Relic requires that all open source projects have an associated license contained within the project. This license must be permissive (e.g. non-viral or copyleft), and we recommend Apache 2.0 for most use cases. For more information please visit https://docs.google.com/document/d/1vML4aY_czsY0URu2yiP3xLAKYufNrKsc7o4kjuegpDw/edit.

readme-file-exists

Found file (README.md). New Relic requires a README file in all projects. This README should give a general overview of the project, and should point to additional resources (security, contributing, etc.) where developers and users can learn further. For more information please visit https://github.com/newrelic/open-by-default.

readme-contains-link-to-security-policy

Contains a link to the security policy for this repository (README.md). New Relic recommends putting a link to the open source security policy for your project (https://github.com/newrelic/<repo-name>/security/policy or ../../security/policy) in the README. For an example of this, please see the "a note about vulnerabilities" section of the Open By Default repository. For more information please visit https://nerdlife.datanerd.us/new-relic/security-guidelines-for-publishing-source-code.

readme-contains-discuss-topic

Contains a link to the appropriate discuss.newrelic.com topic (README.md). New Relic recommends directly linking the your appropriate discuss.newrelic.com topic in the README, allowing developer an alternate method of getting support. For more information please visit https://nerdlife.datanerd.us/new-relic/security-guidelines-for-publishing-source-code.

code-of-conduct-should-not-exist-here

New Relic has moved the CODE_OF_CONDUCT file to a centralized location where it is referenced automatically by every repository in the New Relic organization. Because of this change, any other CODE_OF_CONDUCT file in a repository is now redundant and should be removed. Note that you will need to adjust any links to the local CODE_OF_CONDUCT file in your documentation to point to the central file (README and CONTRIBUTING will probably have links that need updating). For more information please visit https://docs.google.com/document/d/1y644Pwi82kasNP5VPVjDV8rsmkBKclQVHFkz8pwRUtE/view. Did not find a file matching the specified patterns. All files passed this test.

third-party-notices-file-exists

Found file (THIRD_PARTY_NOTICES.md). A THIRD_PARTY_NOTICES.md file can be present in your repository to grant attribution to all dependencies being used by this project. This document is necessary if you are using third-party source code in your project, with the exception of code referenced outside the project's compiled/bundled binary (ex. some Java projects require modules to be pre-installed in the classpath, outside the project binary and therefore outside the scope of the THIRD_PARTY_NOTICES). Please review your project's dependencies and create a THIRD_PARTY_NOTICES.md file if necessary. For JavaScript projects, you can generate this file using the oss-cli. For more information please visit https://docs.google.com/document/d/1y644Pwi82kasNP5VPVjDV8rsmkBKclQVHFkz8pwRUtE/view.

Execute Custom Metrics Query independent of metrics to allow running it at a different interval

The custom metrics query is too costly to run at the same polling interval as the other metrics.

Feature Description

I would like to run the custom metrics query at a different polling interval than the standard metrics.

Describe Alternatives

I looked into setting up a separate configuration file to execute just the custom query, but from the code it looks like the custom metrics query only runs when metrics are present, and it always runs as part of collecting all metrics.

Additional context

The customer runs a complex query to fetch some custom metrics, and it is too costly to run at the same interval as the other metrics.

Priority

Must Have

[BUG] Materialized views not showing up

Description of the problem

The query in buildSchemaListForDatabase does not include materialized views, but monitoring stats for materialized views can be found in pg_statio_user_tables.
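
A hedged sketch of the fix direction; the actual query in buildSchemaListForDatabase may differ, but the point is to include materialized views (relkind 'm') alongside ordinary tables (relkind 'r') when building the schema/table list, since pg_statio_user_tables reports statistics for both:

package main

import "fmt"

const schemaListQuery = `
SELECT n.nspname AS schema_name,
       c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'm')  -- 'r' = ordinary table, 'm' = materialized view
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');`

func main() { fmt.Println(schemaListQuery) }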

OS
  • All of them
  • Amazon Linux, all versions
  • CentOS, version 6 or higher
  • Debian, version 7 ("Wheezy") or higher
  • Red Hat Enterprise Linux (RHEL), version 6 or higher
  • Ubuntu, versions 12.04, 14.04, and 16.04 (LTS versions)
  • Windows Server, 2008 and 2012 and their service packs

[FEATURE] postgres remote monitor

Description of the problem

The Apache, NGINX, MySQL, and other New Relic integrations allow users to run the agent as a remote monitor so that the hostname reflects the Postgres host rather than the infrastructure host. Right now I have a leader and a follower Postgres instance, and I want to discriminate between the two in New Relic Insights while keeping the sample data for each of these databases completely separate.

OS
  • All of them
  • Amazon Linux, all versions
  • CentOS, version 6 or higher
  • Debian, version 7 ("Wheezy") or higher
  • Red Hat Enterprise Linux (RHEL), version 6 or higher
  • Ubuntu, versions 12.04, 14.04, and 16.04 (LTS versions)
  • Windows Server, 2008 and 2012 and their service packs

Monitoring query fails on PG 12 - "ERROR: column tbl.relhasoids does not exist"

Description of the problem

When the integration monitors a PostgreSQL 12.2 installation, the monitoring query fails.

My understanding is that this bug impacts monitoring of any PostgreSQL 12 installation, due to the OID changes in PG 12.

The error below is from the PostgreSQL logs; a sketch of a possible fix follows the quoted query.

2020-04-24 17:23:29.786 UTC [8288] ERROR:  column tbl.relhasoids does not exist at character 1941
2020-04-24 17:23:29.786 UTC [8288] STATEMENT:  SELECT -- BLOATQUERY
				current_database() as database, 
				schemaname as schema_name, tblname as table_name, bs*tblpages AS real_size,
				(tblpages-est_tblpages_ff)*bs AS bloat_size,
				CASE WHEN tblpages - est_tblpages_ff > 0
					THEN 100 * (tblpages - est_tblpages_ff)/tblpages::float
					ELSE 0
				END AS bloat_ratio
				-- , (pst).free_percent + (pst).dead_tuple_percent AS real_frag
			FROM (
				SELECT ceil( reltuples / ( (bs-page_hdr)/tpl_size ) ) + ceil( toasttuples / 4 ) AS est_tblpages,
					ceil( reltuples / ( (bs-page_hdr)*fillfactor/(tpl_size*100) ) ) + ceil( toasttuples / 4 ) AS est_tblpages_ff,
					tblpages, fillfactor, bs, tblid, schemaname, tblname, heappages, toastpages, is_na
					-- , stattuple.pgstattuple(tblid) AS pst
				FROM (
					SELECT
						( 4 + tpl_hdr_size + tpl_data_size + (2*ma)
							- CASE WHEN tpl_hdr_size%ma = 0 THEN ma ELSE tpl_hdr_size%ma END
							- CASE WHEN ceil(tpl_data_size)::int%ma = 0 THEN ma ELSE ceil(tpl_data_size)::int%ma END
						) AS tpl_size, bs - page_hdr AS size_per_block, (heappages + toastpages) AS tblpages, heappages,
						toastpages, reltuples, toasttuples, bs, page_hdr, tblid, schemaname, tblname, fillfactor, is_na
					FROM (
						SELECT
							tbl.oid AS tblid, ns.nspname AS schemaname, tbl.relname AS tblname, tbl.reltuples,
							tbl.relpages AS heappages, coalesce(toast.relpages, 0) AS toastpages,
							coalesce(toast.reltuples, 0) AS toasttuples,
							coalesce(substring(
								array_to_string(tbl.reloptions, ' ')
								FROM 'fillfactor=([0-9]+)')::smallint, 100) AS fillfactor,
							current_setting('block_size')::numeric AS bs,
							CASE WHEN version()~'mingw32' OR version()~'64-bit|x86_64|ppc64|ia64|amd64' THEN 8 ELSE 4 END AS ma,
							24 AS page_hdr,
							CASE WHEN current_setting('server_version_num')::integer < 80300 THEN 27 ELSE 23 END
								+ CASE WHEN MAX(coalesce(null_frac,0)) > 0 THEN ( 7 + count(*) ) / 8 ELSE 0::int END
								+ CASE WHEN tbl.relhasoids THEN 4 ELSE 0 END AS tpl_hdr_size,
							sum( (1-coalesce(s.null_frac, 0)) * coalesce(s.avg_width, 1024) ) AS tpl_data_size,
							bool_or(att.atttypid = 'pg_catalog.name'::regtype)
								OR count(att.attname) <> count(s.attname) AS is_na
						FROM pg_attribute AS att
							JOIN pg_class AS tbl ON att.attrelid = tbl.oid
							JOIN pg_namespace AS ns ON ns.oid = tbl.relnamespace
							LEFT JOIN pg_stats AS s ON s.schemaname=ns.nspname
								AND s.tablename = tbl.relname AND s.attname=att.attname
							LEFT JOIN pg_class AS toast ON tbl.reltoastrelid = toast.oid
						WHERE att.attnum > 0 AND NOT att.attisdropped
							AND tbl.relkind = 'r'
						GROUP BY 1,2,3,4,5,6,7,8,9,10, tbl.relhasoids
						ORDER BY 2,3
					) AS s
				) AS s2
			) AS s3
			where not is_na
			and schemaname || '.' || tblname in ('pg_catalog.pg_ts_dict','pg_catalog.pg_transform','pg_catalog.pg_stat_replication','pg_catalog.pg_stat_progress_vacuum','pg_catalog.pg_depend','pg_catalog.pg_publication','pg_catalog.pg_indexes','pg_catalog.pg_statio_all_tables','pg_catalog.pg_attribute','pg_catalog.pg_seclabels','pg_catalog.pg_statio_sys_indexes','pg_catalog.pg_stat_progress_create_index','pg_catalog.pg_partitioned_table','pg_catalog.pg_stat_wal_receiver','pg_catalog.pg_tablespace','pg_catalog.pg_rules','pg_catalog.pg_locks','pg_catalog.pg_available_extension_versions','pg_catalog.pg_stat_user_indexes','pg_catalog.pg_statio_user_indexes','pg_catalog.pg_stat_archiver','pg_catalog.pg_cursors','pg_catalog.pg_auth_members','pg_catalog.pg_replication_origin','pg_catalog.pg_init_privs','pg_catalog.pg_shseclabel','pg_catalog.pg_policies','pg_catalog.pg_stat_user_tables','pg_catalog.pg_opfamily','pg_catalog.pg_stat_sys_tables','pg_catalog.pg_attrdef','pg_catalog.pg_seclabel','pg_catalog.pg_user_mappings','pg_catalog.pg_views','pg_catalog.pg_class','pg_catalog.pg_publication_rel','pg_catalog.pg_stats','pg_catalog.pg_am','pg_catalog.pg_amproc','pg_catalog.pg_matviews','pg_catalog.pg_event_trigger','pg_catalog.pg_ts_config','pg_catalog.pg_subscription_rel','pg_catalog.pg_stat_all_indexes','pg_catalog.pg_operator','pg_catalog.pg_default_acl','pg_catalog.pg_statio_user_tables','pg_catalog.pg_stat_bgwriter','pg_catalog.pg_database','pg_catalog.pg_timezone_names','pg_catalog.pg_stat_subscription','pg_catalog.pg_stat_database_conflicts','pg_catalog.pg_statio_sys_tables','pg_catalog.pg_stat_xact_all_tables','pg_catalog.pg_statio_sys_sequences','pg_catalog.pg_stat_sys_indexes','pg_catalog.pg_amop','pg_catalog.pg_prepared_xacts','pg_catalog.pg_largeobject_metadata','pg_catalog.pg_namespace','pg_catalog.pg_collation','pg_catalog.pg_proc','pg_catalog.pg_stat_user_functions','pg_catalog.pg_stat_database','pg_catalog.pg_ts_config_map','pg_catalog.pg_enum','pg_catalog.pg_policy','pg_catalog.pg_type','pg_catalog.pg_roles','pg_catalog.pg_shdescription','pg_catalog.pg_tables','pg_catalog.pg_stat_progress_cluster','pg_catalog.pg_range','pg_catalog.pg_stat_xact_user_functions','pg_catalog.pg_shdepend','pg_catalog.pg_foreign_table','pg_catalog.pg_sequence','pg_catalog.pg_group','pg_catalog.pg_stats_ext','pg_catalog.pg_stat_xact_sys_tables','pg_catalog.pg_statio_all_indexes','pg_catalog.pg_stat_ssl','pg_catalog.pg_inherits','pg_catalog.pg_opclass','pg_catalog.pg_aggregate','pg_catalog.pg_statistic_ext','pg_catalog.pg_trigger','pg_catalog.pg_stat_xact_user_tables','pg_catalog.pg_index','pg_catalog.pg_language','pg_catalog.pg_extension','pg_catalog.pg_available_extensions','pg_catalog.pg_timezone_abbrevs','pg_catalog.pg_stat_all_tables','pg_catalog.pg_settings','pg_catalog.pg_constraint','pg_catalog.pg_rewrite','pg_catalog.pg_cast','pg_catalog.pg_ts_template','pg_catalog.pg_description','pg_catalog.pg_db_role_setting','pg_catalog.pg_ts_parser','pg_catalog.pg_statio_all_sequences','pg_catalog.pg_foreign_server','pg_catalog.pg_conversion','pg_catalog.pg_pltemplate','pg_catalog.pg_stat_activity','pg_catalog.pg_user','pg_catalog.pg_publication_tables','pg_catalog.pg_subscription','pg_catalog.pg_foreign_data_wrapper','pg_catalog.pg_sequences','pg_catalog.pg_prepared_statements','pg_catalog.pg_statio_user_sequences','pg_catalog.pg_stat_gssapi','pg_catalog.pg_replication_slots','information_schema.column_privileges','information_schema.sequences','information_schema.sql_features','information_schema.routine_privileges','in
formation_schema.view_routine_usage','information_schema.character_sets','information_schema.collations','information_schema.foreign_tables','information_schema.table_privileges','information_schema.role_usage_grants','information_schema.role_udt_grants','information_schema.view_column_usage','information_schema.column_options','information_schema.columns','information_schema.views','information_schema.check_constraint_routine_usage','information_schema.foreign_table_options','information_schema.sql_packages','information_schema.sql_implementation_info','information_schema.usage_privileges','information_schema.user_defined_types','information_schema.attributes','information_schema.column_udt_usage','information_schema.column_column_usage','information_schema.constraint_column_usage','information_schema.constraint_table_usage','information_schema.domains','information_schema.foreign_server_options','information_schema.information_schema_catalog_name','information_schema.collation_character_set_applicability','information_schema.sql_sizing_profiles','information_schema.foreign_servers','information_schema.domain_udt_usage','information_schema.parameters','information_schema.role_table_grants','information_schema.element_types','information_schema.user_mappings','information_schema.check_constraints','information_schema.domain_constraints','information_schema.foreign_data_wrappers','information_schema.triggered_update_columns','information_schema.view_table_usage','information_schema.sql_languages','information_schema.table_constraints','information_schema.udt_privileges','information_schema.foreign_data_wrapper_options','information_schema.applicable_roles','information_schema.enabled_roles','information_schema.routines','information_schema.user_mapping_options','information_schema.referential_constraints','information_schema.role_routine_grants','information_schema.triggers','information_schema.schemata','information_schema.key_column_usage','information_schema.tables','information_schema.data_type_privileges','information_schema.administrable_role_authorizations','information_schema.role_column_grants','information_schema.column_domain_usage','information_schema.sql_sizing')
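
A hedged sketch of one possible fix (not the integration's actual code): choose the OID tuple-header term when the bloat query is built, based on the server version, since pg_class.relhasoids was removed in PostgreSQL 12. The GROUP BY clause that references tbl.relhasoids would need the same treatment.

package main

import "fmt"

// oidHeaderTerm returns the SQL fragment that accounts for per-row OID header bytes.
func oidHeaderTerm(serverVersionNum int) string {
	if serverVersionNum >= 120000 {
		return "0" // PG 12+ tables can no longer have OIDs, so no extra header bytes
	}
	return "CASE WHEN tbl.relhasoids THEN 4 ELSE 0 END"
}

func main() {
	fmt.Println(oidHeaderTerm(110005)) // PG 11: keeps the relhasoids check
	fmt.Println(oidHeaderTerm(120002)) // PG 12: constant 0
}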
OS

My understanding is that this breakage affects PostgreSQL 12 on all operating systems.

  • All of them
  • Amazon Linux, all versions
  • CentOS, version 6 or higher
  • Debian, version 7 ("Wheezy") or higher
  • Red Hat Enterprise Linux (RHEL), version 6 or higher
  • Ubuntu, versions 12.04, 14.04, and 16.04 (LTS versions)
  • Windows Server, 2008 and 2012 and their service packs

Integer out of range prevents metrics from being sent

We are running into issues where values in the pg_stat_* tables that are too big give errors like the one shown below.

Description

If some of the counter values in the stat tables exceed integer limits, the metrics stop being sent.

'Jun 25 03:50:17 ip-172-31-35-173 newrelic-infra-service[14552]: time="2021-06-25T03:50:17Z" level=debug msg="Integration command wrote to stderr." instance=postgres integration=com.newrelic.postgresql prefix=config/postgresql stderr="[ERR] Could not execute instance query: sql: Scan error on column index 6, name \"buffers_allocated\": converting driver.Value type int64 (\"15855543149\") to a int: value out of range\n[ERR] Could not execute database query: sql: Scan error on column index 5, name \"block_reads\": converting driver.Value type int64 (\"16289158795\") to a int: value out of range\n[ERR] Could not execute database query: sql: Scan error on column index 2, name \"temporary_bytes_written\": converting driver.Value type int64 (\"45811045239996\") to a int: value out of range\n" working-dir=/var/db/newrelic-infra/newrelic-integrations

This can be temporarily worked around by running commands like:

SELECT pg_stat_reset();
SELECT pg_stat_reset_shared('bgwriter');

but it would be better for the collector to handle these values itself, so metrics don't suddenly stop being sent. A sketch of the fix direction is shown after the reproduction steps.

Expected Behavior

I would expect the collector to be able to handle all postgres data types

Steps to Reproduce

Update a counter in the stats tables to a value such as 45811045239996.
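
A hedged sketch of the fix direction (assumed struct and connection details, not the integration's actual code): declaring counter destinations as int64 lets values such as 15855543149 scan without the "value out of range" error that a plain int can hit.

package main

import (
	"fmt"

	"github.com/jmoiron/sqlx"
	_ "github.com/lib/pq" // PostgreSQL driver
)

type bgwriterRow struct {
	BuffersAllocated int64 `db:"buffers_allocated"` // int64, not int, to hold large counters
}

func main() {
	db, err := sqlx.Connect("postgres", "host=localhost dbname=postgres sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	var row bgwriterRow
	if err := db.Get(&row, `SELECT buffers_alloc AS buffers_allocated FROM pg_stat_bgwriter`); err != nil {
		panic(err)
	}
	fmt.Println(row.BuffersAllocated)
}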

Service Check

Hi all,

One of our clients is asking if the pgbouncer.can_connect metric is available in New Relic like it is in Datadog. I know this is a service check in Datadog that verifies whether PgBouncer is sending metrics. We could probably set up an alert on the New Relic side that does a count on PgBouncerSample and alerts if it is 0 over a certain amount of time to mirror that, correct? Is that the correct approach, or is there another approach we should use?

Thanks

Misleading attribute names in PgBouncer metrics

I cannot imagine a backwards-compatible fix for this, but based on the pgBouncer documentation the maxwait field in pool stats contains values in seconds; in nri-postgresql, however, the field is called maxwaitInMilliseconds.

MaxWait *int64 `db:"maxwait" metric_name:"pgbouncer.pools.maxwaitInMilliseconds" source_type:"gauge"`

From pgBouncer documentation:

maxwait:
How long the first (oldest) client in the queue has waited, in seconds. If this starts increasing, then the current pool of servers does not handle requests quickly enough. The reason may be either an overloaded server or just too small of a pool_size setting.

As the maxwait_us attribute is not used by New Relic at the moment, there is no way to monitor sub-second wait times for pgBouncer, although there is definitely value in monitoring the trends.

In addition, other attributes are incorrectly named: everything else pgBouncer-related that points to milliseconds actually contains microseconds, for example:

pgbouncer.stats.avgQueryDurationInMilliseconds
From pgBouncer documentation:

avg_query_time:
Average query duration, in microseconds.

These issues can be worked around at the moment by dividing the values by 1000. If there is no good way of fixing this, it should at least be pointed out in the documentation. A sketch of better-aligned field names is shown below.
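
A hedged sketch of field definitions that would match pgBouncer's documented units; the metric names here are suggestions only, not the integration's current attributes:

package main

// PgBouncerPoolStats is illustrative: maxwait is reported by pgBouncer in whole seconds,
// and maxwait_us is the sub-second (microsecond) part of the same waiting time.
type PgBouncerPoolStats struct {
	MaxWait   *int64 `db:"maxwait" metric_name:"pgbouncer.pools.maxwaitInSeconds" source_type:"gauge"`
	MaxWaitUs *int64 `db:"maxwait_us" metric_name:"pgbouncer.pools.maxwaitMicroseconds" source_type:"gauge"`
}

func main() { _ = PgBouncerPoolStats{} }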

PS: While at it, it would be useful if the avg_wait_time stat were also sent as a metric; it is useful for detecting connection pooling problems.

BuildCollectionList only returns one index in the TableList for tables with multiple indexes

Description

For tables with multiple indexes, only one index appears in the TableList.

Expected Behavior

For tables with multiple indexes, all of the indexes should be present in the TableList.

Troubleshooting or NR Diag results

Provide any other relevant log data.
TIP: Scrub logs and diagnostic information for sensitive information

Steps to Reproduce

In src/collection/collection.go, line 170
current code:

170		if _, ok := schemaList[row.TableName.String]; !ok {
171			schemaList[row.SchemaName.String][row.TableName.String] = make([]string, 0)
172     }

New code

170		if _, ok := schemaList[row.SchemaName.String][row.TableName.String]; !ok {
171			schemaList[row.SchemaName.String][row.TableName.String] = make([]string, 0)
172     }

Using the current code, only one index appears; with the new code, the full list of indexes is present.

Your Environment

Additional context

For Maintainers Only or Hero Triaging this bug

Suggested Priority (P1,P2,P3,P4,P5):
Suggested T-Shirt size (S, M, L, XL, Unknown):

Improve sqlx query marshaling

With the current solution, when a new version of mongodb/pgbouncer adds a metric, sqlx returns it, tries to marshal it into the defined struct, and throws an error like "missing destination name three in *[]struct". Updating the code with the new struct field is the only solution.

  • It can be reproduced by adding extra returned rows in Test_PGSQLConnection_Query.

Study whether this could be solved with a better, more flexible approach (reflection?). One possible direction is sketched below.
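
One possible direction, sketched under the assumption that silently ignoring unknown columns is acceptable: sqlx's Unsafe() mode skips result columns that have no matching struct field, so a metric added by a newer pgbouncer version no longer breaks scanning. The connection details below are illustrative.

package main

import (
	"fmt"

	"github.com/jmoiron/sqlx"
	_ "github.com/lib/pq" // PostgreSQL driver, also used for the pgbouncer admin console
)

type poolRow struct {
	Database string `db:"database"`
	ClActive int64  `db:"cl_active"`
	// Columns added by newer pgbouncer versions are simply ignored in unsafe mode.
}

func main() {
	db, err := sqlx.Connect("postgres", "host=127.0.0.1 port=6432 dbname=pgbouncer sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	udb := db.Unsafe() // copy of the handle that tolerates unknown destination columns
	var pools []poolRow
	if err := udb.Select(&pools, "SHOW POOLS"); err != nil {
		panic(err)
	}
	fmt.Println(pools)
}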
