
2024-03-meetup

Install operators:

helmfile sync
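The helmfile itself isn't reproduced in this README. Assuming it installs the operators used later in the walkthrough (CloudNativePG, the RabbitMQ cluster operator, KEDA), it could look roughly like this — repository names, chart choices, and namespaces are illustrative, not the repo's actual values:

```yaml
# Hypothetical helmfile.yaml — a sketch, not the repo's actual file.
repositories:
  - name: cnpg
    url: https://cloudnative-pg.github.io/charts
  - name: bitnami
    url: https://charts.bitnami.com/bitnami
  - name: kedacore
    url: https://kedacore.github.io/charts

releases:
  - name: cnpg                  # CloudNativePG operator (backs `kubectl cnpg` below)
    namespace: cnpg-system
    chart: cnpg/cloudnative-pg
  - name: rabbitmq-operator     # manages the RabbitMQ cluster (the mq-server-0 pod)
    namespace: rabbitmq-system
    chart: bitnami/rabbitmq-cluster-operator
  - name: keda                  # KEDA (backs the ScaledObject used later)
    namespace: keda
    chart: kedacore/keda
```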

Create namespace:

NAMESPACE=app
kubectl create namespace $NAMESPACE
kns $NAMESPACE  # kns switches the current namespace (e.g. kubens or a similar helper)

Deploy database, message queue, and textsynth:

kubectl apply -f postgres
kubectl apply -f rabbitmq
kubectl apply -f textsynth
# Ignore the warning about the Compose file :)

Run Benthos processors:

helmfile sync -f benthos/helmfile.yaml -n $NAMESPACE
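The Benthos configs are templated by the chart under benthos/. The consumer's pipeline presumably has this general shape — service names, queue name, and DSN below are assumptions, not the repo's actual config:

```yaml
# Sketch of a Benthos consumer pipeline: read from RabbitMQ, call
# textsynth over HTTP, insert the result into Postgres.
# All names (mq, textsynth, messages) are assumptions.
input:
  amqp_0_9:
    urls: [ "amqp://mq:5672/" ]
    queue: messages

pipeline:
  processors:
    - http:
        url: http://textsynth/        # one call to textsynth per message
        verb: POST

output:
  sql_insert:
    driver: postgres
    dsn: postgres://app:...@db-rw/messages   # CNPG read-write service (password elided)
    table: messages
    columns: [ message ]
    args_mapping: root = [ content().string() ]
```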

Check queue:

kubectl exec mq-server-0 -- rabbitmqctl list_queues

Check database:

kubectl cnpg psql db -- messages -c "select * from messages;"

Scale up textsynth and consumer:

kubectl scale deployment consumer textsynth --replicas=100

This should trigger node autoscaling. It will take a few minutes for the new nodes to come up. Wait a bit. Eventually, the queue should start to come down. Yay!

But if we look with kubectl top pods, many of these pods are idle. We have overprovisioned textsynth. Let's try to do better with autoscaling.

Enable autoscaling on textsynth:

kubectl autoscale deployment textsynth --max=100
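With only --max given, kubectl autoscale defaults to a minimum of 1 replica and a target CPU utilization of (typically) 80%. For reference, the equivalent HPA written out as a manifest — note that CPU-based scaling only works if the Deployment declares CPU requests:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: textsynth
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: textsynth
  minReplicas: 1
  maxReplicas: 100
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # kubectl's usual default when --cpu-percent is omitted
```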

Now we wait a bit. After a few minutes, textsynth should be scaled down until we reach a kind of "cruise speed" where we have "just the right amount" of textsynth pods to handle the load.

But... if we look with kubectl top pods again, we'll see that some pods are still idle. This is because of unfair load balancing. We're going to change that by running exactly one textsynth instance per Benthos consumer, and having each consumer talk to its "own" textsynth. We'll achieve that by running textsynth as a sidecar container right next to the Benthos consumer.
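In Kubernetes terms, a "sidecar" is just a second container in the same pod spec, reached over localhost. A sketch of the layout — image names and port are placeholders, not the repo's actual values:

```yaml
# Sketch: each consumer pod carries its own textsynth container.
# The Benthos pipeline then targets http://localhost:8080 instead of
# the textsynth Service, so unfair load balancing is no longer possible.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consumer
spec:
  selector:
    matchLabels:
      app: consumer
  template:
    metadata:
      labels:
        app: consumer
    spec:
      containers:
        - name: consumer
          image: ghcr.io/benthosdev/benthos   # placeholder image
        - name: textsynth
          image: textsynth:latest             # placeholder image
          ports:
            - containerPort: 8080             # placeholder port
```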

Switch to sidecar architecture and KEDA autoscaler:

helmfile sync -f benthos/helmfile.yaml -n $NAMESPACE -e sidecar

This will scale according to the queue depth, and it should also stabilize after a while.
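KEDA drives this through a ScaledObject. A minimal sketch of what the sidecar environment presumably deploys — queue name, threshold, and connection settings are assumptions:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: consumer
spec:
  scaleTargetRef:
    name: consumer            # the consumer Deployment
  minReplicaCount: 0          # allows scale-to-zero once the queue is empty
  maxReplicaCount: 100
  triggers:
    - type: rabbitmq
      metadata:
        protocol: amqp
        queueName: messages   # assumed queue name
        mode: QueueLength     # target number of messages per replica
        value: "10"           # assumed threshold
        hostFromEnv: AMQP_URL # AMQP connection string taken from the pod env
```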

Check the results:

kubectl get so,hpa,deploy

After a while the number of nodes should also go down on its own.

Scale to zero:

kubectl scale deployment benthos-generator --replicas=0

If we shut down the generator, the queue will eventually drain, and the autoscaler should then scale the consumer down to zero as well.

Contributors

jpetazzo, papihack, priximmo
