Hi @zergscut2017. Let me try to answer the questions - please correct me if I have understood them wrong:
Before answering, I would like to clarify that litmus has an (optional) 2-level filter for applications subjected to chaos. First, they are filtered by label, and these apps are then further scanned for the chaos annotation in order to arrive at the final deployment. This is done to exercise granular control over the blast radius (i.e., isolate chaos to a specific app deployment/workload) and also to enable GitOps-based control to turn chaos ON/OFF more easily.
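For concreteness, here is a minimal sketch of a deployment that would pass both filter levels. All names, the image, and the namespace are illustrative; only the label/annotation mechanics reflect the filtering described above:

```yaml
# Hypothetical deployment targeted for chaos: it carries both the
# label (level-1 filter) and the chaos annotation (level-2 filter).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app                    # illustrative name
  labels:
    app: test-app                   # matched against .spec.appinfo.applabel
  annotations:
    litmuschaos.io/chaos: "true"    # scanned when annotationCheck is enabled
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app               # label must also be on the pod template
    spec:
      containers:
      - name: app
        image: nginx                # placeholder image
```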
Q1. What is meant by the error: "annotate only desired app for chaos"
(A1) At this point, as of the 1.12.0 release of litmus, if `.spec.annotationCheck` is set to `true` in the ChaosEngine, the operator expects only one deployment, out of multiple deployments carrying the same/common label (specified within `.spec.appinfo.applabel`), to be annotated for chaos (with `litmuschaos.io/chaos: "true"`). That is, if you have deployments A & B with the label `app: test-app` that you have provided in the applabel section, the operator expects that just one of them is annotated and recognizes it as the chaos candidate.

Fix: This limitation will be removed in the 1.12.2 patch release of the operator, releasing on 22/01. After this, you can have multiple deployments (A & B) sharing both labels and annotations; the experiment will pick random pods belonging to these (parent) deployments during chaos injection.
Current workarounds:

(i) If you need to annotate multiple deployments for chaos today, provide unique labels to those deployments & create separate chaosengines for each of them.

(ii) An alternative, and probably the simpler solution, is to disable the annotationCheck (`.spec.annotationCheck: false`) in the ChaosEngine & provide the common label you have used (from the above example, `app: test-app`) across the different deployments in the `.spec.appinfo.applabel` section. Please ensure that this label is propagated to the pod template spec as well, and is not only at the deployment level. When annotationCheck is set to false, we don't scan the deployment/parent resources and directly filter pods.
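As a sketch of workaround (ii), a ChaosEngine along these lines would skip the annotation scan and filter pods directly by label. Values other than `annotationCheck` and `applabel` (names, namespace, service account) are illustrative assumptions for your setup:

```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: test-app-chaos        # illustrative name
  namespace: default          # illustrative namespace
spec:
  annotationCheck: "false"    # disable annotation scan; filter pods by label only
  appinfo:
    appns: default
    applabel: "app=test-app"  # common label shared by the target deployments
    appkind: deployment
  chaosServiceAccount: pod-delete-sa   # illustrative service account
  experiments:
  - name: pod-delete
```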
Q2. "I would like to trigger delete-pod in part of the deployment and control with annotation or label. How should I do that? Will litmus support such a scenario?"
A2. When you say "part of the deployment" in the above sentence, I assume it can be either of the following:
- (a) Some specific deployments in the cluster: in this case, you can either (i) create a dedicated chaosengine for each app deployment with annotationCheck set to true, or (ii) provide a common label to them and disable the annotationCheck so that pods are picked randomly from these deployments.
- (b) Some replicas within a deployment: in this case, you can make use of the PODS_AFFECTED_PERC or TARGET_PODS environment variables with either (i) or (ii) above, to target a portion of the filtered pod targets/replicas.
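For example, to restrict pod-delete to roughly half of the filtered replicas, the experiment section of the ChaosEngine could carry the env override below. This is a sketch: the percentage and the commented-out pod names are illustrative placeholders to adjust for your setup:

```yaml
  experiments:
  - name: pod-delete
    spec:
      components:
        env:
        - name: PODS_AFFECTED_PERC
          value: "50"          # act on ~50% of the filtered pods
        # Alternatively, name explicit pods instead of a percentage:
        # - name: TARGET_PODS
        #   value: "test-app-7d4b9,test-app-9f2c1"   # illustrative pod names
```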
Q3. "And if I would like to mix different testing in part of k8s resource in the same namespace, how should I do that?"
A3. By different testing, I assume you are referring to different experiments being executed on different "resources" (deployments/statefulsets, etc.) within the same namespace. This is possible. You just need to create dedicated chaosengines for these by mapping the relevant app resource with the experiment.
There is the possibility of running them in parallel or some sequence/order using another abstraction called chaos workflows. But I wouldn't want to delve into that right now without knowing if my assumptions are correct and the answers make sense to you.
from chaos-operator.
Hi @ksatchit ,
Thanks for the feedback.
So for now, I can disable the annotation check until 1.12.2, which will come in a couple of days. Sounds great.
And for Q3, my SUT (system under test) contains quite a few microservices. So I would like to have combinations of different chaos tests, for example:
test A, B, C towards Service 1,3,5
test B, C, F towards service 2,4,6
test D, E, H toward service 1, 4,6,7
So basically your understanding is correct. And I guess chaos workflow might fit my situation. Where can I get more detail about it?
@zergscut2017 - the litmus team as a group has been busy/caught up with the chaos carnival event that was held last week. Sincere apologies for the delayed reply. The initial multi-deploy annotation support was included in 1.12.2. Now, you can use 1.13.0, which has just been released and fully enables you to run that way.
For chaos workflows, the portal is a good way to run them. Some discussions around this - how you can use them et al is explained in this slack thread: https://kubernetes.slack.com/archives/CNXNB0ZTN/p1612222659028900
Closing this issue as the main requirement for multi-deployment annotation support is fulfilled. Feel free to re-open the issue if you find a problem with this.