
go-heavy

A K6 alternative for load testing, written in Golang


Why

There is a need for load-testing tools designed for everyday use inside tech organisations.

  • The tool must be easy to set up as part of the infrastructure and easy for developers to use.
  • It should support a directory structure that is oriented around how teams organise their services.
  • It should be easy to write tests and share them across teams.
  • It should provide a way to track service specific performance over time.
  • It should be open source and free to use. Allowing organisations to add to and edit the tool as they see fit.

What

The goal of Go-Heavy is to provide a set of packages for writing load tests in Golang. These packages will make tests easy to read and write, and will support running tests in parallel and reporting the results in an understandable way.

  • A set of packages to be used to write load tests in Golang.
  • A CLI tool to run the tests and report the results.

How

This is how we expect the tool to be used; that said, the tool will be made as flexible as possible so that organisations can adapt it to their needs.

There is an example directory structure in the repo that shows how the tool can be used.

-- example
|-- utils
|-- common
|-- services
|   |-- service1
|   |   |-- workflow1.go
|   |   |-- workflow2.go
|   |   |-- .heavy-config.go
|   |   |-- .env
|   |   |-- env.default
  • The utils and common directories contain helper functions that can be used across services.
  • The services directory contains a directory for each service.
  • Each service directory contains the tests for that service.
  • Each service directory contains a .heavy-config.go file that contains the configuration for the service.
  • Each service directory contains a .env file that contains the environment variables for the service.
  • Each service directory contains an env.default file that contains the default environment variables for the service.
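As an illustration, an env.default file could hold shared defaults that a local .env overrides. The keys below are purely hypothetical, not a fixed schema:

```shell
# env.default -- illustrative defaults only; key names are hypothetical
BASE_URL=http://localhost:8080
REQUEST_TIMEOUT_MS=5000
AUTH_TOKEN=
```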

Once tests have been written, the CLI tool can be used to run individual tests or all tests in a service. The CLI tool will also provide a way to run tests in parallel and report the results in a way that is easy to understand.

There are four important concepts about how the tests are organised:

  • Test
  • Workflow
  • Step
  • Request

Test

A single test is a single request by the user to perform an action. This action could be running one workflow, or running all the workflows in a service.

Workflow

A workflow is a set of steps that are performed in sequence. Workflows are set up as individual files in the service directory.

Step

A step is a block of code that achieves a single task. A step can be as simple as a single request, or as complex as a sequence of requests.

Request

A request is a single HTTP request that is performed as a part of a workflow.


Features

CLI

  • Run an individual workflow.
  • Run all workflows in a service.
  • Choose the number of unique users. (Optional)
  • Choose one of the following:
    • The number of users running in parallel.
    • A target WPS (workflows per second).
  • Choose how long to run the workflow for.
  • After a test is run, the CLI tool should provide a summary of the results.
  • After a test is run, the CLI tool should allow the user to export the results to a file.

Packages

  • A custom HTTP client that tracks metrics and logs automatically.
  • An interface to define each test.
    • Init Function
    • Main Body
    • Cleanup Function
    • Path to the environment variables file.

Reports

CLI Report

  • Summary
    • Test ID
    • Total run time.
    • Number of Successful Workflow runs.
    • Number of Failed Workflow runs.
  • Workflow Name
    • Slowest run time.
    • Fastest run time.
    • Average run time.
    • Total number of runs.
    • Average number of runs per second.
    • Status Code Distribution
      • 200
      • 400
      • 500
      • 300
      • 100
    • Latency Histogram
      • P10
      • P25
      • P50
      • P75
      • P90
      • P95
      • P99
      • P100
    • Latency Distribution
      • Auto-generated buckets - Count of requests in each bucket.

Complete Report

  • Summary
    • Test ID
    • Total run time.
    • Number of Successful Workflow runs.
    • Number of Failed Workflow runs.
  • Workflow Name
    • Slowest run time.
    • Fastest run time.
    • Average run time.
    • Total number of runs.
    • Average number of runs per second.
    • Status Code Distribution
      • 200
      • 400
      • 500
      • 300
      • 100
    • Latency Histogram
      • P10
      • P25
      • P50
      • P75
      • P90
      • P95
      • P99
      • P100
    • Latency Distribution
      • Auto-generated buckets - Count of requests in each bucket.
    • Steps
      • Step Name
        • Slowest run time.
        • Fastest run time.
        • Average run time.
        • Total number of runs.
        • Average number of runs per second.
        • Status Code Distribution
          • 200
          • 400
          • 500
          • 300
          • 100
        • Latency Histogram
          • P10
          • P25
          • P50
          • P75
          • P90
          • P95
          • P99
          • P100
        • Latency Distribution
          • Auto-generated buckets - Count of requests in each bucket.
        • Requests
          • Request Endpoint and Method
            • Maximum latency.
            • Minimum latency.
            • Average latency.
            • Total number of requests.
            • Average number of requests per second.
            • Status Code Distribution
              • 200
              • 400
              • 500
              • 300
              • 100
            • Latency Histogram
              • P10
              • P25
              • P50
              • P75
              • P90
              • P95
              • P99
              • P100
            • Latency Distribution
              • Auto-generated buckets - Count of requests in each bucket.

Configuration

Environment Variables

Parallelism

Docker

CI/CD


Installation

CLI Tool

Packages


Contributing

Code of Conduct

Contributing Guide

License


Roadmap

Version 1.0.0

  • A CLI tool that supports the following:
    • Run an individual workflow.
    • Run all workflows in a service.
    • Choose the number of unique users. (Optional)
    • Choose one of the following:
      • The number of users running in parallel.
      • A target WPS (workflows per second).
    • Choose how long to run the workflow for.
  • Packages:
    • A custom HTTP client that tracks metrics and logs automatically.
    • An interface to define each test.
      • Init Function
      • Main Body
      • Cleanup Function
      • Path to the environment variables file.

Version 2.0.0

  • CLI:
    • Import an OpenAPI spec and generate tests.
    • Auto-generate the directory structure.
    • A ramp-up function. (Optional)
    • Export logs to a file.
      • Unique Request ID - which is also set in the headers
      • Timestamp
      • Request
      • Response
      • Status Code
    • Stored test-run configurations.
    • Diff two tests' performance.
      • A summary of the differences in metrics.
      • Every response that differs between the two tests.
    • After a test is run, the CLI tool should allow the user to dig through the results via a mongosh-like interface.
  • WebUI


go-heavy's Issues

Documentation for Configuration

We must add documentation regarding how to configure the test setup. This includes the CLI configurations, heavy-config.yaml and the configurations possible during the test run itself.

Assertion

We need to add a way to add custom assertions in the tests. The test should fail if any of the assertions are not met.

Gather Issues

The project's scope is vast. We need to document as many tasks and issues as we can find.

When adding new issues, please ensure the following:

  • Do not assign anyone
  • Add appropriate labels
  • Add it to the Go-Heavy: v1.0.0 project if the task is deemed important to the first production-ready version of this project. If not, do not add a project.
  • Add the milestone if the above condition is true.
  • Add an appropriate description to the task

Simple HTTP GET Example Test

Complete writing one example test that uses a custom HTTP router, and be able to execute the test using the CLI.

Refine Test Reports

We need to decide on a test report format that shows all the relevant information. We need to study the requirements of load tests in the industry and come up with a format that contains the right information, and enough of that information to know what went wrong.
