
Use AWS Timestream as a remote storage database for Prometheus

License: Apache License 2.0


prometheus-timestream-adapter's Introduction

prometheus-timestream-adapter

Prometheus-timestream-adapter is a service that receives Prometheus metrics through remote_write and sends them to AWS Timestream.

Building

go build

Testing

go test

Command line options

Usage of ./prometheus-timestream-adapter:
      --awsRegion string        (default "eu-central-1")
      --databaseName string     (default "prometheus-database")
      --help                   
      --listenAddr string       (default ":9201")
      --logLevel string         (default "error")
      --tableName string        (default "prometheus-table")
      --telemetryPath string    (default "/metric")
      --tls                    
      --tlsCert string          (default "tls.cert")
      --tlsKey string           (default "tls.key")
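
An example invocation that overrides some of the defaults (the region, database, and table names below are placeholders, not values shipped with this repository):

./prometheus-timestream-adapter --awsRegion us-east-1 --databaseName my-prometheus-database --tableName my-prometheus-table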

Configuring Prometheus

To configure Prometheus to send samples to this binary, add the following to your prometheus.yml:

remote_write:
  - url: "http://prometheus-timestream-adapter:9201/write"

Note: AWS Timestream has a very powerful query language, and there is a Grafana plugin that supports Timestream as a data source. However, the adapter also provides a basic reader implementation for remote_read:

remote_read:
  - url: "http://prometheus-timestream-adapter:9201/read"

Access Prometheus Timestream Database

The AWS session will attempt to load configuration and credentials from the environment, shared configuration files, and other standard credential sources.
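
For example, credentials can be provided through the standard AWS SDK environment variables (the region itself is set with the --awsRegion flag above; the values here are placeholders):

export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...

On EC2 or EKS, an attached IAM role (or IAM role for the service account) carrying the policy below works as well.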

AWS Policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadWriteToTable",
            "Effect": "Allow",
            "Action": [
                "timestream:WriteRecords",
                "timestream:Select"
            ],
            "Resource": "arn:aws:timestream:region:AccoundId:database/DatabaseName/table/TableName"
        },
        {
            "Sid": "AllowDescribeEndpoints",
            "Effect": "Allow",
            "Action": "timestream:DescribeEndpoints",
            "Resource": "*"
        },
        {
            "Sid": "AllowValueRead",
            "Effect": "Allow",
            "Action": "timestream:SelectValues",
            "Resource": "*"
        }
    ]
}

FAQ

What does the warning Measure name exceeds the maximum supported length mean?

The maximum length of a measure name in AWS Timestream is 256 bytes. If a metric name is longer than that, it can't be written to AWS Timestream.

Timestream Quotas

prometheus-timestream-adapter's People

Contributors

bshelton, dependabot[bot], dpattmann, garnachod, tlinton, wildefires


prometheus-timestream-adapter's Issues

error: No measure found

Hi,
Having configured the timestream adapter and connected our staging Prometheus to it using the remote_read and remote_write config, I see consistent ingest requests and data being put into Prometheus, but in the timestream-adapter log I keep seeing the "No measure found" error for multiple metrics, e.g.:

{"level":"error","ts":1653042262.707845,"caller":"prometheus-timestream-adapter/handler.go:88","msg":"Error executing query","query":{"queries":[{"start_timestamp_ms":1652350762490,"end_timestamp_ms":1652962198836,"matchers":[{"name":"__name__","value":"elastic_beanstalk_upgrades_pending"},{"name":"monitor","value":"master"}],"hints":{"start_ms":1653041962490,"end_ms":1653042262490}}]},"err":"No measure found"}

Any idea what the cause of this might be? Is Prometheus somehow querying for metrics it hasn't put into the Timestream DB yet? (I did see 'Done replaying WAL' happen on the remote write endpoint.)

Prometheus remote_READ

Hi Dennis,

First of all, thank you for the great work on this adapter implementation, awesome work!!!

I have a couple of questions.

I've set up the Docker image and have it running on our EKS. I set up an AWS IAM role and policy, created an assume-role policy so EKS can access Timestream, and configured remoteWrite: in prometheus.yaml (I'm using the Sumo Logic Prometheus stack). Metrics are visible in the AWS Timestream table, so everything is fine.

I already have Prometheus as a data source in Grafana (it was providing metrics before I installed and connected Timestream), and now I want to set up remoteRead:. Basically, I want Prometheus to send data to Timestream and at the same time read those same metrics back and visualize them in Grafana, because we want to use PromQL and don't want to use the AWS Timestream plugin.

What can be done here?

Kind regards
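
For reference, a minimal prometheus.yml sketch that both writes to and reads from the adapter, using the default hostname and port from the README above (how the Sumo Logic chart exposes these settings is not covered here):

remote_write:
  - url: "http://prometheus-timestream-adapter:9201/write"

remote_read:
  - url: "http://prometheus-timestream-adapter:9201/read"
    # keep serving recent data from the local TSDB
    read_recent: false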

hard limit of 128 dimensions

There is currently a hard limit of 128 dimensions per table on AWS Timestream. Therefore it is not possible to remote write any entries whose series have more dimensions than this, and there doesn't seem to be anything implemented to work around it. It would be useful to be able to filter and select only certain dimensions, or to have some other alternative solution.
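
A possible mitigation on the Prometheus side (not a feature of this adapter) is to drop labels that are not needed as Timestream dimensions before they are sent, using write_relabel_configs; the label names in this sketch are only examples:

remote_write:
  - url: "http://prometheus-timestream-adapter:9201/write"
    write_relabel_configs:
      # drop high-cardinality or unneeded labels (example names)
      - action: labeldrop
        regex: "(pod_template_hash|controller_revision_hash)"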

Early remote_read?

Hi,

Can you explain what you mean by:

There is a very early remote_reader version! AWS Timestream has a very powerful query language and there is a Grafana Plugin supporting Timestream as a datasource. However, there is a very basic reader implementation.

Do you mean that some Prometheus queries may not work?

Thanks in advance.

Fail on Test

Using Ubuntu 20.04.3 LTS
go version go1.17.6 linux/amd64

I just cloned the repo, ran go build and, after that, go test.
I get the following error:

--- FAIL: TestTimeSteamAdapter_Read (0.00s)
    --- FAIL: TestTimeSteamAdapter_Read/Read_Timestream_Request (0.00s)
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x38 pc=0xa5630f]

goroutine 37 [running]:
testing.tRunner.func1.2({0xaeee60, 0x12afc60})
	/usr/local/go/src/testing/testing.go:1209 +0x24e
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1212 +0x218
panic({0xaeee60, 0x12afc60})
	/usr/local/go/src/runtime/panic.go:1038 +0x215
github.com/dpattmann/prometheus-timestream-adapter.TimeStreamAdapter.readDimension({{0xbb1606, 0xc}, 0xc00011c678, {0xbaf266, 0x9}, {0xddbf10, 0x12c4f90}, {0xddc090, 0x12c4fa0}}, {0xbaa641, ...})
	/home/ubuntu/prometheus-timestream-adapter/timestream.go:310 +0x1cf
github.com/dpattmann/prometheus-timestream-adapter.TimeStreamAdapter.buildTimeStreamQueryString({{0xbb1606, 0xc}, 0xc00011c678, {0xbaf266, 0x9}, {0xddbf10, 0x12c4f90}, {0xddc090, 0x12c4fa0}}, 0xc000413c50)
	/home/ubuntu/prometheus-timestream-adapter/timestream.go:288 +0x98d
github.com/dpattmann/prometheus-timestream-adapter.TimeStreamAdapter.runReadRequestQuery({{0xbb1606, 0xc}, 0xc00011c678, {0xbaf266, 0x9}, {0xddbf10, 0x12c4f90}, {0xddc090, 0x12c4fa0}}, 0xc000413c50)
	/home/ubuntu/prometheus-timestream-adapter/timestream.go:224 +0xd0
github.com/dpattmann/prometheus-timestream-adapter.TimeStreamAdapter.Read({{0xbb1606, 0xc}, 0xc00011c678, {0xbaf266, 0x9}, {0xddbf10, 0x12c4f90}, {0xddc090, 0x12c4fa0}}, 0xc000442210)
	/home/ubuntu/prometheus-timestream-adapter/timestream.go:207 +0x158
github.com/dpattmann/prometheus-timestream-adapter.TestTimeSteamAdapter_Read.func1(0xc000441040)
	/home/ubuntu/prometheus-timestream-adapter/timestream_test.go:582 +0xde
testing.tRunner(0xc000441040, 0xc0003e4f70)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
exit status 2
FAIL	github.com/dpattmann/prometheus-timestream-adapter	0.013s

A record already exists with the same time

After configuring remote write, I see the initial entries in the Timestream DB. After a while, I don't see new entries. I checked the logs with k logs prometheus-prometheus-0 -n prometheus -c timestream-adapter
and I get:

{
  "level":"warn",
  "ts":1643716808.3651953,
  "caller":"prometheus-timestream-adapter/timestream.go:134",
  "msg":"Error sending samples to remote storage",
  "err":"RejectedRecordsException: One or more records have been rejected. See RejectedRecords for details.
{
  RespMetadata: {
    StatusCode: 419,
    RequestID: "FCP7ZH7MFKNMBE2FQWRVVVPAAI"
  },
  Message_: "One or more records have been rejected. See RejectedRecords for details.",
  RejectedRecords: [
    {
      Reason: "A record already exists with the same time, dimensions, measure name, and record version. A higher record version must be specified in order to update the measure value. Specifying record version is supported by the latest SDKs.",
      RecordIndex: 85
    },
    ...
  ]
}",
  "storage":"prometheus-timestream-adapter","num_samples":100
}

Am I configuring something wrong on the prometheus side?

explain how to configure and send data to Amazon Timestream

@dpattmann, can you please explain in the README the steps for configuring Amazon Timestream and sending data to it?

The documentation is lacking a bit. I successfully built the Docker image and ran it on port 9201.
I added remote_write and remote_read to prometheus.yaml, and I have also created a database and table in AWS Timestream.

But AWS Timestream is not showing any data. Can you please provide more instructions?

Published Image

Hi,
Where is this image published? I couldn't find it on Docker Hub.
Thanks

Logs to debug remote write issues

I created an AWS Timestream database named prometheus-database and a table named prometheus-table.
I modified the region in main.go to match my region.
I built and pushed the Docker image to ECR.

I have Prometheus running in a Kubernetes cluster. I patched the Prometheus pod with the adapter container and remote_write config:

  ...
  containers: # container for your adapter
    - name: timestream-adapter
      image: xxx.dkr.ecr.us-east-1.amazonaws.com/timestream-adapter
  remote_write:
    url: "http://localhost:9201/write"

The pod is named prometheus-prometheus-0, and I can see the timestream-adapter container running in the pod:
k describe pod prometheus-prometheus-0 -n prometheus

Name:         prometheus-prometheus-0
...
Containers:
...
  timestream-adapter:
    Container ID:   docker://a7ec874984a973bf588acb8810fc9bc20357a8014ba0aa9ca0d3eeae7ca50296
    Image:          285651481985.dkr.ecr.us-east-1.amazonaws.com/timestream-adapter
    Image ID:       docker-pullable://285651481985.dkr.ecr.us-east-1.amazonaws.com/timestream-adapter@sha256:4f73e6a3b127685924056497b6650f76043353a5637a1883dfcfea9d27a9b2cc
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 27 Jan 2022 14:14:36 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zm6mb (ro)
...

I can't see any data in the database. Is there a way to print some logs to check for connectivity issues?

k logs prometheus-prometheus-0 -n prometheus -c timestream-adapter
does not print anything
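
One thing to try, as a sketch: raise the adapter's documented --logLevel flag from its default of error, assuming the image's entrypoint passes container args through to the binary and that the logger accepts a debug level:

  containers:
    - name: timestream-adapter
      image: xxx.dkr.ecr.us-east-1.amazonaws.com/timestream-adapter
      args: ["--logLevel", "debug"]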

expanded README?

I would like to know a bit more about how to get this running. From main.go I only see variables for us-east-1, but no AWS account ID or credentials; how does it know how to forward to Timestream?

The documentation is a bit lacking; is there any way to get this working? I was successful in building the Docker image (and also building via go) and running it listening on 9201. I created the Prometheus database and table in Timestream, but no data shows up.

Remote read

While you are correct in saying that Timestream has a Grafana plugin that allows you to create dashboards, it would still be useful to be able to read the data back into Prometheus.

An example would be to allow for alerts via Prometheus and Terraform.

With Timestream only available for remote write, there is no way to have alerts that query data older than the local Prometheus retention time, even if Timestream has the data.

Is this something you might consider adding?

Thanks
