Kubernetes autoscaler using webhooks.
It works in two pieces:
- The Kubernetes autoscaler part interacts with the cluster's API and sends webhooks
- The 'client' part receives the hooks and triggers the scaling. You can find an example of this part inside the `examples` folder.
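A minimal sketch of the 'client' side, assuming the autoscaler POSTs the JSON payloads shown below to the configured webhook URIs (the port and the way scaling is actually carried out are placeholders for your own infrastructure):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def plan_scaling(payload):
    """Return (pool_name, delta) pairs; a positive delta means add agents."""
    return [(p["name"], p["desired_agent_count"] - p["current_agent_count"])
            for p in payload]

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the JSON body sent by the autoscaler.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        pools = json.loads(body)
        for name, delta in plan_scaling(pools):
            # Hook your infrastructure call in here (cloud API, Terraform, ...).
            print(f"{self.path}: pool {name} needs {delta:+d} agents")
        self.send_response(204)
        self.end_headers()

# To run the receiver:
# HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

`plan_scaling` is a hypothetical helper shown for illustration; only the payload field names come from the examples below.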
```shell
$ python main.py [options]
```
Option | Description | Default |
---|---|---|
`--kubeconfig` | Path to a kubeconfig YAML file. Leave blank when running in Kubernetes to use the service account | |
`--scale-out-webhook` | URI called when the autoscaler detects a need to scale out | |
`--scale-in-webhook` | URI called when the autoscaler detects a need to scale in | |
`--pool-name-regex` | Regex used to identify agents in the pool(s); it should not match masters. Run `kubectl get nodes` to find your worker node names | agent |
`--drain` | Whether the autoscaler should drain and cordon nodes before passing them to the scale-in webhook | |
`--sleep` | Time (in seconds) to sleep between scaling loops | 60 |
`-v` | Sets the verbosity. Specify multiple times for more log output, e.g. `-vvv` | |
`--debug` | Do not catch errors; crash explicitly instead | |
`--ignore-pools` | Comma-separated names of pools the autoscaler should ignore | |
`--spare-agents` | Number of agents per pool that should always stay up | 1 |
`--idle-threshold` | Maximum duration (in seconds) an agent can stay idle before being deleted | |
`--over-provision` | Number of extra agents to create when scaling up | 0 |
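An illustrative sketch (not the project's actual logic) of how `--spare-agents` and `--idle-threshold` could interact when selecting idle agents for scale-in; the agent dictionary shape and helper name are assumptions:

```python
import time

def pick_idle_agents(agents, spare_agents=1, idle_threshold=300, now=None):
    """agents: list of dicts with 'name', 'busy', and 'idle_since' (epoch seconds).

    Returns the names of agents that are safe to remove.
    """
    now = time.time() if now is None else now
    # Only agents idle for at least idle_threshold seconds are candidates.
    idle = [a for a in agents if not a["busy"]
            and now - a["idle_since"] >= idle_threshold]
    # Never shrink the pool below spare_agents.
    removable = max(0, len(agents) - spare_agents)
    return [a["name"] for a in idle[:removable]]
```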
Payload sent to the scale-in webhook (`target_nodes` lists the nodes selected for removal):

```json
[
  {
    "name": "pool1",
    "target_nodes": ["node1", "node2"],
    "current_agent_count": 5,
    "desired_agent_count": 3
  }
]
```
Payload sent to the scale-out webhook:

```json
[
  {
    "name": "pool1",
    "current_agent_count": 3,
    "desired_agent_count": 5
  }
]
```
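The payloads above can be consumed with a few lines of Python; this sketch assumes `target_nodes` in a scale-in payload lists the nodes to remove (the helper name is illustrative):

```python
def nodes_to_remove(payload):
    """Collect the node names a scale-in payload asks the client to remove."""
    return [node for pool in payload for node in pool.get("target_nodes", [])]

scale_in = [{"name": "pool1",
             "target_nodes": ["node1", "node2"],
             "current_agent_count": 5,
             "desired_agent_count": 3}]
print(nodes_to_remove(scale_in))  # ['node1', 'node2']
```

With `--drain` set, these nodes have already been cordoned and drained by the autoscaler, so the client only needs to remove the underlying machines.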
```shell
helm install helm-chart --name k8s-wh-as \
  --set options.scaleoutwebhook=<SCALE-OUT-URL>,options.scaleinwebhook=<SCALE-IN-URL>,options.poolnameregex=<REGEX-NODES-TO-WATCH>
```
```shell
$ make build-run
# in the container
$ python main.py [arguments]
```