mbogus / kube-amqp-autoscale

Dynamically scale Kubernetes resources using the length of an AMQP queue (the number of messages available for retrieval from the queue) to determine the load.

License: Apache License 2.0

Languages: Go 97.48%, Makefile 2.52%
Topics: autoscale, autoscaler, autoscaling, kubernetes

kube-amqp-autoscale's People

Contributors

gabrielpjordao, gdvalle, joskfg, leogamas, mbogus, otherpirate


kube-amqp-autoscale's Issues

SIGSEGV

Hello. I ran kube-amqp-autoscale on k8s. Here is the log:

Starting Kubernetes AMQP Autoscaler 0.1-SNAPSHOT (f3086f4)
System with 12 CPUs and environment with 12 max processes
Not enough metrics to calculate new size, required at least 0.75 was 0.42 metrics ratio
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1378a8b]
goroutine 53 [running]:
main.scaleDeployments(0xc4201220c0, 0x16c808d, 0x7, 0x7fff025ca7ec, 0x15, 0x7fff00000000, 0xc4202a1ac0, 0x7fff025ca92a, 0x34)
/go/src/autoscaler/kube.go:61 +0xfb
main.scaleKind(0xc4201220c0, 0x16cb0dd, 0xa, 0x16c808d, 0x7, 0x7fff025ca7ec, 0x15, 0x0, 0xc4202a1ac0, 0xc420433101, ...)
/go/src/autoscaler/kube.go:54 +0x30b
main.scale(0x16cb0dd, 0xa, 0x16c808d, 0x7, 0x7fff025ca7ec, 0x15, 0xc400000000, 0xc4204a5e58, 0xc4202a1a20, 0x14dd540)
/go/src/autoscaler/kube.go:44 +0xca
main.main.func3(0xc400000000, 0x0, 0x3fed555555555555)
/go/src/autoscaler/main.go:168 +0x192
main.autoscale(0xc4203d3a60, 0xc4203d3a80, 0xc420169140)
/go/src/autoscaler/actuator.go:46 +0x1f4
created by main.main
/go/src/autoscaler/main.go:176 +0x520
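
From the trace, scaleDeployments at kube.go:61 appears to dereference a nil pointer, plausibly a deployment object that came back nil from the Kubernetes API without an accompanying error. Below is a minimal, hypothetical sketch of the kind of guard that would turn the panic into a logged error; the types and function names are illustrative, not the actual kube-amqp-autoscale source.

package main

import (
	"errors"
	"fmt"
	"log"
)

// Deployment stands in for the Kubernetes deployment type; the real
// autoscaler talks to the Kubernetes API.
type Deployment struct {
	Name     string
	Replicas int32
}

// getDeployment simulates a lookup that can return (nil, nil) when the
// target does not exist -- the pattern that would explain the nil
// dereference in the trace if left unchecked.
func getDeployment(ns, name string) (*Deployment, error) {
	return nil, nil
}

func scaleDeployment(ns, name string, newSize int32) error {
	d, err := getDeployment(ns, name)
	if err != nil {
		return fmt.Errorf("fetching deployment %s/%s: %v", ns, name, err)
	}
	if d == nil {
		// Without this guard, d.Replicas below would panic with
		// exactly the SIGSEGV reported above.
		return errors.New("deployment " + ns + "/" + name + " not found")
	}
	d.Replicas = newSize
	log.Printf("scaled %s/%s to %d replicas", ns, name, newSize)
	return nil
}

func main() {
	if err := scaleDeployment("default", "worker", 3); err != nil {
		log.Println("scale skipped:", err)
	}
}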

Feature request: downscale stabilization

Hello, I'm trying to minimize frequent fluctuations in the replica count in our setup. The rate of incoming messages fluctuates quite a lot. The usual scenario: a sudden peak in incoming messages leads to a peak in queue length, the autoscaler kicks in and scales up every interval, the messages are quickly processed, the scale-up repeats up to eval-intervals, and after that the autoscaler starts scaling down... until another peak arrives and the cycle repeats.

The size of the fluctuations can be somewhat mitigated by increase-limit and decrease-limit, but they do not help with the rate at which the replica count keeps changing.

Increasing eval-intervals does not help either, because then a very short-lived but big peak in queue length leads to scaling up long after the messages have been processed.

I think a feature similar to --horizontal-pod-autoscaler-downscale-stabilization from standard HPA could help.

https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/

Finally, just before HPA scales the target, the scale recommendation is recorded. The controller considers all recommendations within a configurable window choosing the highest recommendation from within that window. This value can be configured using the --horizontal-pod-autoscaler-downscale-stabilization flag, which defaults to 5 minutes. This means that scaledowns will occur gradually, smoothing out the impact of rapidly fluctuating metric values.

What do you think? Does it make sense to add something like this to kube-amqp-autoscale?
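
For what it's worth, here is a minimal sketch of that stabilization window in Go; the names are illustrative only, not part of kube-amqp-autoscale. Each raw recommendation is recorded with a timestamp, and the applied size is the highest recommendation still inside the window, so scale-downs lag behind short-lived dips (mirroring --horizontal-pod-autoscaler-downscale-stabilization).

package main

import (
	"fmt"
	"time"
)

type recommendation struct {
	at   time.Time
	size int
}

// stabilizer smooths scale-downs by remembering recent recommendations
// and never dropping below the highest one inside the window.
type stabilizer struct {
	window  time.Duration
	history []recommendation
}

// Recommend records the raw desired size and returns the stabilized
// size: scale-ups pass through immediately, while scale-downs are
// capped at the maximum recommendation still inside the window.
func (s *stabilizer) Recommend(now time.Time, desired int) int {
	s.history = append(s.history, recommendation{at: now, size: desired})

	// Drop entries older than the window (filter in place).
	cutoff := now.Add(-s.window)
	kept := s.history[:0]
	for _, r := range s.history {
		if r.at.After(cutoff) {
			kept = append(kept, r)
		}
	}
	s.history = kept

	// The stabilized size is the max recommendation in the window.
	out := desired
	for _, r := range s.history {
		if r.size > out {
			out = r.size
		}
	}
	return out
}

func main() {
	s := &stabilizer{window: 5 * time.Minute}
	t := time.Now()
	fmt.Println(s.Recommend(t, 8))                    // peak: 8
	fmt.Println(s.Recommend(t.Add(1*time.Minute), 2)) // still 8
	fmt.Println(s.Recommend(t.Add(7*time.Minute), 2)) // peak expired: 2
}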

Autoscaling killing pods

I managed to implement the autoscaler so that it scales based on the number of messages in the queue. Initially it scales up appropriately and consumes the messages on the queue. But once these messages are consumed and no longer on the queue, the autoscaler scales the pods down and the in-flight messages are put back on the queue. This is more or less an endless cycle.

My process works:

  1. n jobs are put on queue
  2. n workers are spun up
  3. n jobs consumed by n workers.

However, if the queue drops to n-1, the pods also scale down to n-1, killing a worker mid-job, so my jobs never really get processed.

Any suggestions on how I can work around this? Because currently it seems like this may not work for me.
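
One possible mitigation (an assumption, not a built-in kube-amqp-autoscale feature): size the worker pool on total outstanding work, counting unacknowledged messages as well as ready ones, and keep a replica floor. A worker that has already picked up a job still "owns" load via its unacked message, so the pool is not shrunk while jobs are in flight. A sketch of that sizing rule, with illustrative parameter names:

package main

import (
	"fmt"
	"math"
)

// desiredReplicas sizes the pool on ready + unacknowledged messages
// and clamps the result between a floor and a ceiling, so consumers
// that are mid-job are not scaled away when the ready backlog hits 0.
func desiredReplicas(ready, unacked int, msgsPerPod float64, minReplicas, maxReplicas int) int {
	outstanding := float64(ready + unacked)
	n := int(math.Ceil(outstanding / msgsPerPod))
	if n < minReplicas {
		n = minReplicas
	}
	if n > maxReplicas {
		n = maxReplicas
	}
	return n
}

func main() {
	// 0 ready but 5 unacked: keep 5 workers instead of dropping to the floor.
	fmt.Println(desiredReplicas(0, 5, 1, 1, 10)) // 5
	fmt.Println(desiredReplicas(0, 0, 1, 1, 10)) // 1 (floor)
}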

Error saving metrics

Could you please advise what could be wrong:

Error saving metrics: parse 'amqp://guest:[email protected]:5673//': first path segment in URL cannot contain colon
Not enough metrics to calculate new size, required at least 0.75 was 0.00 metrics ratio
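
That parse error is what Go's net/url reports when the string it receives has no recognizable scheme. A likely cause here (note the quote characters inside the error message) is that the URI value was wrapped in literal single quotes, e.g. in an environment variable or config file; the leading ' breaks scheme detection, so 'amqp: is read as a path segment containing a colon. A small self-contained sketch reproducing and fixing that; broker.example.com and the Trim call are illustrative:

package main

import (
	"fmt"
	"net/url"
	"strings"
)

func main() {
	// A URI wrapped in literal quote characters has no valid scheme,
	// so url.Parse treats 'amqp: as the first path segment -- hence
	// "first path segment in URL cannot contain colon".
	raw := "'amqp://guest:guest@broker.example.com:5673//'"
	_, err := url.Parse(raw)
	fmt.Println(err)

	// Stripping the stray quotes (or never adding them) fixes it.
	u, err := url.Parse(strings.Trim(raw, "'"))
	fmt.Println(u.Scheme, u.Host, err) // amqp broker.example.com:5673 <nil>
}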
