grafana / xk6-output-timescaledb

k6 extension to output real-time test metrics to TimescaleDB.

License: Apache License 2.0

Go 88.83% Dockerfile 2.85% Shell 3.16% Makefile 5.16%
grafana-dashboard k6 k6-output timescaledb xk6

xk6-output-timescaledb's Issues

Improve Grafana dashboard in samples

Improve upon the currently provided dashboards (samples/).

Another goal for this change would be to align the look-and-feel of the TimescaleDB-backed dashboard (this extension) with those of xk6-output-influxdb and xk6-output-prometheus-remote, and possibly with the output provided by the k6 Cloud App. This would give a consistent user experience across the varied data sources.

Ideally, this dashboard would be promoted to the Grafana site as the "official" dashboard for displaying k6 metrics backed by TimescaleDB datasources.

This issue may supersede the need for #5.

Can't find module when building image

Hello,

I've been attempting to build a Docker image using your Dockerfile from the xk6-output-timescaledb repository (link: https://github.com/grafana/xk6-output-timescaledb/blob/main/Dockerfile), but I've run into an issue.

During the image build process, the build fails on the step RUN CGO_ENABLED=0 xk6 build --with github.com/grafana/xk6-output-timescaledb=. --output /tmp/k6, with an error indicating the github.com/grafana/xk6-output-timescaledb module cannot be found.

(screenshot of the build error omitted)

Could you assist in understanding how I can successfully build this Docker image with this extension? Is there anything I may have overlooked in the documentation or the structure of your project?

Thank you for your help.

Out of memory for high req rate tests

We have a test running at ~1200 req/s with 1K VUs. It runs on an EC2 instance of some sort with 16 GB of memory and 8 cores, on Ubuntu Linux.

The test itself is a rather simple two-request use case.

It ramps up fine to the max rate over 20 minutes, so a pretty slow ramp-up.

Once at max it just gobbles up memory until it runs out.

Running the same test with no output except the console summary consumes about 7% of total memory.

Running the same test setting the K6_TIMESCALEDB_PUSH_INTERVAL to 500ms or even shorter makes the test work. It still uses a lot of memory but not as much.

We would like memory usage to be optimized so the output does not consume such large amounts of memory.

A self-sizing reporting window could be one possibility.

With a well-scaled db, re-enabling the connection pool and writing over parallel connections to the db would also work very well. The db has very high parallel throughput, but shortening the push interval to something like 200 ms will max out the one core tied to the single connection.

This is of course only valid for a monolith running the entire test. Scaling out over several smaller load generators will not hit this issue unless they, too, are pushed to high request rates.
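The self-sizing idea above could be sketched as a buffer that flushes on whichever comes first: the push interval or a cap on buffered samples, so memory stays bounded at any request rate. Everything below is a hypothetical illustration, not the extension's actual code; the `Buffer` type, `maxBuffered` cap, and flush callback are assumed names.

```go
package main

import "fmt"

// Sample stands in for one k6 metric sample; the real extension buffers
// richer structs, this is just for illustration.
type Sample struct{ Value float64 }

// Buffer flushes when the buffered-sample cap is hit; the existing
// K6_TIMESCALEDB_PUSH_INTERVAL timer would call Flush on its tick as well,
// so a flush happens on whichever trigger fires first.
type Buffer struct {
	maxBuffered int
	samples     []Sample
	flush       func([]Sample)
}

// Add appends a sample and flushes eagerly once the cap is reached,
// so the buffer never grows beyond maxBuffered between pushes.
func (b *Buffer) Add(s Sample) {
	b.samples = append(b.samples, s)
	if len(b.samples) >= b.maxBuffered {
		b.Flush()
	}
}

// Flush hands the buffered samples to the writer and resets the buffer.
func (b *Buffer) Flush() {
	if len(b.samples) == 0 {
		return
	}
	b.flush(b.samples)
	b.samples = nil
}

func main() {
	flushes := 0
	b := &Buffer{maxBuffered: 1000, flush: func([]Sample) { flushes++ }}
	for i := 0; i < 2500; i++ {
		b.Add(Sample{Value: 1})
	}
	b.Flush() // final drain, as on test end
	fmt.Println("flushes:", flushes)
}
```

With 2,500 samples and a cap of 1,000, the size trigger fires twice and the final drain once, bounding peak memory to roughly one cap's worth of samples regardless of request rate.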

Tagged thresholds improperly saved in TimescaleDB's threshold table

As mentioned in the title, when passing tagged metrics in thresholds like so:
'http_req_duration{name:login}': ['p(95)<55000'],

they are not saved properly in the database: the metric names are not separated from their tags, and the 'tags' column in the threshold table is populated with nulls.
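A fix presumably has to split the threshold's metric selector into the bare metric name and its tag set before writing the row. The parser below is a hypothetical sketch; the function name and behavior are assumptions for illustration, not the extension's actual code.

```go
package main

import (
	"fmt"
	"strings"
)

// splitThreshold separates a k6 threshold selector such as
// "http_req_duration{name:login}" into the metric name and a tag map.
// For an untagged selector it returns the name as-is and a nil map.
func splitThreshold(sel string) (string, map[string]string) {
	open := strings.Index(sel, "{")
	if open < 0 || !strings.HasSuffix(sel, "}") {
		return sel, nil
	}
	name := sel[:open]
	tags := map[string]string{}
	// Tags inside the braces are comma-separated key:value pairs.
	for _, pair := range strings.Split(sel[open+1:len(sel)-1], ",") {
		kv := strings.SplitN(strings.TrimSpace(pair), ":", 2)
		if len(kv) == 2 {
			tags[kv[0]] = kv[1]
		}
	}
	return name, tags
}

func main() {
	name, tags := splitThreshold("http_req_duration{name:login}")
	fmt.Println(name, tags["name"]) // http_req_duration login
}
```

With the selector split this way, the metric name and the tag map can be written to their own columns instead of the full selector landing in the name column and nulls in 'tags'.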

Docker build is broken

The Dockerfile pulls golang:1.17-alpine as the builder for k6.

It references <...>net/netip, which was only introduced in Go 1.18.

Updating the Dockerfile to use a later image (e.g. golang:1.19-alpine) fixes the build issue.

Needed to maintain listing in k6 Extensions Registry

We've recently updated the requirements for maintaining an extension within the listing on our site. As such, please address the following items to maintain your listing within the registry:

  • needs to build with a recent version of k6; minimum v0.43 but v0.45 would be great...

For more information on these and other listing requirements, please take a look at the registry requirements.

Batch writing to PostgreSQL (Timescale) with pgx isn't really a batch write to the db

Using Batch from pgx doesn't write the data to the db as a true batch write.

In effect it just piles all the inserts into one big package (the "batch") and sends that package in one go.

The db then performs all the inserts in one transaction, but sequentially. If the sequence of inserts takes too long, the connection will be severed and the failure ignored by pgx.

Batches are very limited in capacity for high-volume use cases and need a better implementation.
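One mitigation is to cap how many rows go into any single batch (or into a single `CopyFrom` call, pgx's route to a true bulk write over the PostgreSQL COPY protocol), so that no one transaction runs long enough to hit the connection deadline. The chunking helper below is an illustrative sketch under that assumption, not the extension's code.

```go
package main

import "fmt"

// Row stands in for one metric sample destined for the samples table.
type Row struct{ Value float64 }

// chunkRows splits rows into slices of at most size elements, so each
// pgx batch or CopyFrom call stays small enough to complete before the
// connection is severed.
func chunkRows(rows []Row, size int) [][]Row {
	var chunks [][]Row
	for len(rows) > size {
		chunks = append(chunks, rows[:size])
		rows = rows[size:]
	}
	if len(rows) > 0 {
		chunks = append(chunks, rows)
	}
	return chunks
}

func main() {
	rows := make([]Row, 2500)
	chunks := chunkRows(rows, 1000)
	fmt.Println(len(chunks)) // 3
}
```

Each chunk would then be sent as its own batch (or CopyFrom call), trading one huge sequential transaction for several bounded ones that a well-scaled database can absorb in parallel.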
