
gopaddle-io / configurator


Synchronize and Version Control ConfigMaps & Secrets across Deployment Rollouts.

License: Other

Go 91.51% Shell 3.39% Dockerfile 0.71% Makefile 2.80% Mustache 1.59%
containers crd deployment docker go golang hacktoberfest hacktoberfest2021 helm k8s kubernetes secrets

configurator's People

Contributors

adeesh-devanand, arijitwork, bharathappali, gopaddle-io, logeshkrish, renugadevi-2613, surendar-b


configurator's Issues

Validate Configurator with Flux Controller for GitOps workflows

Is your feature request related to a problem? Please describe.
None

Describe the solution you'd like
Flux is a tool for automating GitOps-based CI/CD. Flux watches for changes in a Git branch and pulls them into the Kubernetes environment. We need to validate whether Configurator works seamlessly with Flux in a GitOps workflow.
Scenarios to validate:

  1. Install the Flux controller in the cluster
  2. Create YAML specifications for two ConfigMaps and reference/mount the ConfigMaps in a deployment
  3. Check in the deployment and ConfigMap YAMLs to Git and automate the GitOps workflow (see the Flux sketch after this list)
  4. Install Configurator in the cluster
  5. As soon as the Configurator controller is installed, it should initialise all the ConfigMaps and create CustomConfigMap (CCM) revisions for both ConfigMaps. Execute the command kubectl get ccm to list all the CCM revisions
  6. Modify the contents of both ConfigMap YAML files and check the files in to Git
  7. Flux pulls the new ConfigMap changes into the cluster
  8. Configurator must have created a new CCM revision for each ConfigMap, reflecting the updates. Execute the command kubectl get ccm to list the new CCM revisions
  9. Remove the first ConfigMap mount from the deployment YAML and check the YAML in to Git
  10. Flux recognises the change in the mount and updates the cluster
  11. Wait for 15 minutes and check the CCM revisions in the cluster using kubectl get ccm. Except for the latest version, the CCM versions for the first ConfigMap should be removed. There should be two revisions for the second ConfigMap
  12. Remove the deployment YAML file from the Git repo
  13. Flux recognises the removal of the deployment and updates the cluster
  14. Wait for 15 minutes and check the CCM revisions in the cluster using kubectl get ccm. Except for the latest version, the CCM versions for both ConfigMaps should be removed
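
For step 3, a minimal Flux sketch, assuming Flux v2 (the repository URL and path are placeholders, not from this issue):

apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: configurator-demo
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/configurator-demo   # placeholder repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: configurator-demo
  namespace: flux-system
spec:
  interval: 1m
  sourceRef:
    kind: GitRepository
    name: configurator-demo
  path: ./manifests   # placeholder path holding the deployment and ConfigMap YAMLs
  prune: true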

Describe alternatives you've considered
None

Additional context
None

Modify Makefile so that we don't push the Docker image to Docker Hub every time we build

Is your feature request related to a problem? Please describe.
No

Describe the solution you'd like
Take the Docker registry as an input parameter to the Makefile and push the image to the user-defined registry rather than the Bluemeric Docker registry. We need a qualification process before images are pushed to the Bluemeric registry. A sketch of the parameterised Makefile follows.
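
A minimal sketch of what this could look like (the variable and target names are assumptions, not the repo's actual Makefile targets):

DOCKER_REGISTRY ?= docker.io/example   # override: make push DOCKER_REGISTRY=registry.example.com/team
IMAGE_TAG       ?= latest
IMAGE           := $(DOCKER_REGISTRY)/configurator:$(IMAGE_TAG)

build:
	docker build -t $(IMAGE) .

push: build
	docker push $(IMAGE)

With this split, make build alone no longer pushes; pushing to the Bluemeric registry becomes an explicit, qualified step.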

Describe alternatives you've considered
None

Additional context
None

Website: Include Contributor list on Configurator home page

I tried including the list of contributors on the Configurator home page but got stuck. If anyone has experience using the GitHub APIs or JavaScript, the help would be much appreciated.

This is what I've got so far:

<script type="module">
import { Octokit } from "https://cdn.skypack.dev/@octokit/core";
const octokit = new Octokit();
const resp = await octokit.request('GET /repos/gopaddle-io/configurator/stats/contributors', { owner: 'gopaddle-io', repo: 'configurator' });
// Render each contributor login; assumes a container like <ul id="contributors"> exists on the page.
const list = document.getElementById('contributors');
resp.data.forEach((r) => list.insertAdjacentHTML('beforeend', `<li>${r.author.login}</li>`));
</script>

I'm guessing that the GET call is working all right, but how do I take the data and display it?

Here's the link to the code if you can fix it.

Here's the link to the Home page

I wanted to display the contributor list under "Thanks to all contributors for their effort"


Thank you for your help!!

PS: I'm not really a web developer or JS practitioner.

Configurator Support for Daemon Set

Is your feature request related to a problem? Please describe.
No

Describe the solution you'd like
Currently, Configurator supports Deployments and StatefulSets. We need to extend support to DaemonSets as well, as in the sketch below.
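
For illustration, the kind of resource this would cover: a DaemonSet mounting a ConfigMap (names are placeholders), which Configurator would then annotate and roll just like a Deployment.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-daemon
spec:
  selector:
    matchLabels:
      app: demo-daemon
  template:
    metadata:
      labels:
        app: demo-daemon
    spec:
      containers:
      - name: agent
        image: nginx:1.14
        volumeMounts:
        - name: config
          mountPath: /config
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: demo-config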

Describe alternatives you've considered
None

Additional context
None

Creating a secret creates multiple secret revisions

Describe the bug
When a secret is deployed in the cluster, multiple revisions of the customSecret get created. Because of this, multiple rolling updates are triggered on a deployment that references the secret.

To Reproduce

  1. Deploy Configurator in the k8s cluster
  2. Create a secret
  3. Get the revisions for that secret. You can note that multiple revisions of the secret are created
  4. Use the secret name in the deployment under spec.template.spec.volumes[].secret.secretName (see the sketch after these steps)
  5. Apply the deployment YAML file
  6. The Configurator annotation is added at the spec.template.metadata.annotations level
  7. The rolling update gets triggered multiple times because of the multiple secret revisions
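
For step 4, a minimal sketch of the referencing deployment (names are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.14
        volumeMounts:
        - name: creds
          mountPath: /secret
          readOnly: true
      volumes:
      - name: creds
        secret:
          secretName: demo-secret   # the secret created in step 2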

Propagate Role/RoleBinding and ClusterRole/ClusterRoleBinding from ConfigMap to CustomConfigMap

Is your feature request related to a problem? Please describe.
No

Describe the solution you'd like
Provide a capability to set cluster-wide (ClusterRole and ClusterRoleBinding) and namespace-specific (Role and RoleBinding) permissions on ConfigMaps and propagate those permissions to the newly created CustomConfigMaps and CustomSecrets.
Follow the blog for more information on creating Roles and RoleBindings.

Steps to validate:

  1. Create a namespace
  2. Create a ConfigMap in that namespace
  3. Create a Secret in that namespace
  4. Create a Service Account
  5. Create a ClusterRole with verbs 'get' and 'list' on resources 'configmap' and 'secret' (a sketch follows this list)
  6. Create a ClusterRoleBinding to reference the ClusterRole created in step 5
  7. Create a Role with verbs 'get' and 'list' on resources 'configmap' and 'secret'
  8. Create a RoleBinding to reference the Role created in step 7 along with the namespace created in step 1
  9. Install Configurator
  10. Get the ClusterRole and Role created in steps 5 and 7. The CustomConfigMap and CustomSecret resources should have the permissions for 'get' and 'list'.
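
A minimal sketch of step 5 ('customconfigmaps' matches the CRD name used elsewhere in this repo; the 'customsecrets' plural is an assumption):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: config-reader
rules:
- apiGroups: [""]
  resources: ["configmaps", "secrets"]
  verbs: ["get", "list"]
- apiGroups: ["configurator.gopaddle.io"]
  resources: ["customconfigmaps", "customsecrets"]
  verbs: ["get", "list"]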

Describe alternatives you've considered
None

Additional context
None

Installing Configurator from the GitHub local helm repo fails with YAML error

Describe the bug
Installing Configurator from the GitHub-hosted helm repo fails with a YAML parse error.

To Reproduce
Steps to reproduce the behavior:
1. Add the helm repo:

$ helm repo add gopaddle_configurator https://gopaddle-io.github.io/configurator/helm/

2. Install Configurator from the gopaddle_configurator helm repo using the command below:

$ helm install configurator gopaddle_configurator/configurator --version 0.4.0-alpha

3. The above command fails with the error:

Error: INSTALLATION FAILED: YAML parse error on configurator/templates/admission-deployment.yaml: error converting YAML to JSON: yaml: line 38: found unexpected end of stream

Current implementation will break independent application lifecycle

Configurator's current implementation relies on keeping tabs on which ConfigMap (CM) holds the latest data from its owner CustomConfigMap (CCM).

In order to use CMs managed by Configurator, an independent application needs to wait for a managed CM to be created, which includes a random suffix in its name. This CM with a strangely suffixed name needs to be used by the application in its initial setup. After it is deployed, Configurator will patch the Deploy/Sts of the independent application, changing the name of the CM that was initially used.

This behavior can certainly break the lifecycle of the application using the CM. In fact, the application needs to be aware of the first CM name, and that name needs to become part of its initial setup. Most applications, especially ones managed by a GitOps tool like FluxCD or ArgoCD, can interfere with patched resources and bring the CM reference back to its original name (the one the app was originally deployed with).

I think the current implementation, which relies on random CM names being patched into Deploy/Sts objects, will not prevail, as it interferes with the broader application lifecycle. There must be a presumption that CM names are controlled by an independent application, and those names need to be kept as they were initially deployed, so that applications do not get entangled with the behavior that Configurator provides.

Unable to run a local debug session. No command-line argument to specify a local kubeconfig file.

The current implementation does not allow developers to easily run a local instance to debug in IDEs such as VSCode. The controller is only aware of the in-cluster configuration.

Ideally, a command line argument could be provided to the controller so that a valid KUBECONFIG file could be used locally for development and debugging purposes.

./controller --kubeconfig=/path/to/my/kube.config
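
A minimal sketch of the idea using client-go (not the project's actual startup code): honour a --kubeconfig flag when given, and fall back to the in-cluster config otherwise.

package main

import (
	"flag"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := flag.String("kubeconfig", "", "path to a kubeconfig file; uses in-cluster config when empty")
	flag.Parse()

	var cfg *rest.Config
	var err error
	if *kubeconfig != "" {
		// Local development and debugging: build the config from the given file.
		cfg, err = clientcmd.BuildConfigFromFlags("", *kubeconfig)
	} else {
		// Production: running inside the cluster as before.
		cfg, err = rest.InClusterConfig()
	}
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)
	_ = clientset // hand the clientset to the controller from here
}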

Support for CRD creation in K8s v1.22

Describe the bug
Configurator's current CRD creation is not supported on k8s v1.22. The apiextensions.k8s.io/v1beta1 API version of CRDs has been removed; we need to support apiextensions.k8s.io/v1.

Additional context
In k8s v1.22, a structural schema is mandatory, so we cannot directly define the customConfigMap CRD schema's data as a plain object; we need to specify its type. Since we don't have control over the exact type of the object, we must include x-kubernetes-preserve-unknown-fields: true. The schema looks like:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: customconfigmaps.configurator.gopaddle.io
. . .
schema:
  openAPIV3Schema:
    type: object
    properties:
      apiVersion:
        type: string
      kind:
        type: string
      metadata:
        type: object
      spec:
        type: object
        properties:
          configMapName:
            type: string
          data:
            x-kubernetes-preserve-unknown-fields: true
            type: object
          binaryData:
            x-kubernetes-preserve-unknown-fields: true
            type: object

reference: https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#specifying-a-structural-schema

Document the usage of CustomConfigMaps and Secrets while deploying through Helm

Is your feature request related to a problem? Please describe.
None

Describe the solution you'd like
With Helm 3, it is easier to deploy CRDs. We need to investigate and suggest best practices around deploying CustomConfigMaps and Secrets via Helm charts; one candidate is sketched below.
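
One Helm 3 behavior worth covering in the docs: manifests placed under a chart's crds/ directory are installed before the templated resources, but Helm does not template, upgrade, or delete them. A hypothetical chart layout:

configurator/
  crds/
    customconfigmap-crd.yaml   # installed first by Helm 3; not templated or upgraded
    customsecret-crd.yaml
  templates/
    deployment.yaml            # controller and webhook resources

The alternative of keeping CRDs in templates/ makes them upgradable but risks deletion on uninstall, which is the kind of trade-off the suggested best practices should spell out.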

Describe alternatives you've considered
None

Additional context
None

Verify if a CCM is created for a ConfigMap which is not used in any deployment

Describe the bug
If the cluster has a ConfigMap that is not used by any deployment, Configurator is not creating a version (CCM) for it. This needs to be verified.
To Reproduce

1. Create a ConfigMap to be used in a deployment

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config-1
data:
  # file-like keys
  user-interface.properties: |
    color.good=purple
    color.bad=yellow

$ kubectl apply -f demo-config-1.yaml

2. Create a ConfigMap that will not be used in a deployment

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config-2
data:
  # file-like keys
  game.properties: |
    android.apk=free fire,bgmi
    computer=stambled guys

$ kubectl apply -f demo-config-2.yaml
  3. Create a deployment with the ConfigMap mounted. (Note: the deployment uses demo-config-1; demo-config-2 stays unused.)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-nginx
  template:
    metadata:
      labels:
        app: service-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14
          ports:
            - containerPort: 80
          volumeMounts:
           - name: nginx
             mountPath: "/config"
             readOnly: true
      volumes:
      - name: nginx
        configMap:
          name: demo-config-1
$ kubectl apply -f deployment.yaml
  4. Install Configurator
helm install configurator gopaddle_configurator/configurator --version 0.4.0-alpha
  5. Verify the annotation added at the 'spec.template.metadata.annotations' level in the deployment:
$ kubectl get deployment demo-deployment -o yaml
template:
    metadata:
      annotations:
        ccm-demo-config-1: qds24
  6. Verify the new version created for 'demo-config-1' and 'demo-config-2':
       $ kubectl get customconfigmap
  7. Delete demo-config-2:
kubectl delete configmap demo-config-2
  8. Create demo-config-2 again:
kubectl apply -f demo-config-2.yaml
  9. List the custom configmaps:
kubectl get customconfigmap

Expected behavior
In steps 6 and 9, Configurator must create a new version for the demo-config-2 ConfigMap.

Create prometheus events as and when a new configMap version is created or purged

Is your feature request related to a problem? Please describe.
Enhancement

Describe the solution you'd like
It will be useful to visualise the changes to the custom resources. These events/metrics can be utilised in a Grafana dashboard.

List of expected events and the attributes in the events:
1. CCM creation {namespace, CM name, CCM name, version, created time}
2. CCM purge {namespace, CM name, CCM name, version, updated time, reason for creation}
3. CM Create {namespace, CM name, created time}
4. CM update {namespace, CM name, updated time}
5. CM delete {namespace, CM name, deleted time}
6. Initiating Configurator {initiated time, initiation params, exit status, error message}

Configurator v0.0.2 uses kubebuilder. A reference for creating and testing kubebuilder metrics events can be found here: https://book.kubebuilder.io/reference/metrics.html. A sketch of the first event as a Prometheus counter follows.
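
A minimal sketch of the CCM-creation event using client_golang (an assumption, not the project's code; a kubebuilder-based controller would typically register with controller-runtime's metrics registry instead of the default registry that promauto uses here):

package metrics

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// ccmCreated counts CustomConfigMap revisions as they are created.
var ccmCreated = promauto.NewCounterVec(prometheus.CounterOpts{
	Name: "configurator_ccm_created_total",
	Help: "Number of CustomConfigMap revisions created.",
}, []string{"namespace", "cm_name", "ccm_name", "version"})

// RecordCCMCreation would be called by the controller after a CCM is written.
func RecordCCMCreation(ns, cm, ccm, version string) {
	ccmCreated.WithLabelValues(ns, cm, ccm, version).Inc()
}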

CustomSecret breaks the data format while creating Secrets

Describe the bug
If we provide stringData in the customSecret spec, it creates the customSecret with the given fields and values. But while creating the Secret from the customSecret, Configurator adds the stringData values into the 'data' field instead of the 'stringData' field, and the content is base64-encoded instead of plain text.

To Reproduce
Steps to reproduce the behavior:

  1. Create a customSecret with a stringData field
  2. Apply the YAML
  3. It creates a Secret. Get the YAML of the Secret: you can see the stringData keys added under the data field. (Note: the stringData values are added to data in base64 format.)
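
Worth noting while triaging: for native Secrets, stringData is a write-only convenience field that the API server itself merges into data as base64 without persisting the plain text, so the question is whether customSecret should mirror that behavior or preserve stringData as written. For example, with a native Secret:

apiVersion: v1
kind: Secret
metadata:
  name: demo-secret
type: Opaque
stringData:
  username: admin   # read back as data.username: YWRtaW4= (base64)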

Move the purge functionality to a kube job

Describe the bug
Purge functionality cleans up unused customConfigMaps/customSecrets from the cluster every 15 mins. Currently, the purge process is part of the Configurator code.
Expected behavior
Move the purge process out into a separate component and create a Kubernetes Job resource that cleans the CCM/CS every 15 mins, as sketched below.
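
A minimal sketch of the proposed job (the schedule matches the 15-minute cadence; the image name and flag are assumptions):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: configurator-purge
spec:
  schedule: "*/15 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: configurator        # needs RBAC to list/delete CCM/CS
          containers:
          - name: purge
            image: gopaddle/configurator:latest   # hypothetical image with a purge entrypoint
            args: ["--purge-once"]                # hypothetical flag: run one purge pass and exit
          restartPolicy: OnFailure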

Support for envFrom configMapRef and secretRef

Describe the bug
If a ConfigMap or Secret is referenced via envFrom, Configurator-specific annotations are not added. Because of this, the deployment rolling update breaks.

To Reproduce
Steps to reproduce the behavior:

  1. Deploy Configurator in the k8s cluster
  2. Create a ConfigMap
  3. Use the ConfigMap name in the deployment under spec.template.spec.containers[].envFrom.configMapRef.name (see the snippet after these steps)
  4. Apply the deployment YAML file
  5. The Configurator annotation is not added at the spec.template.metadata.annotations level
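
For step 3, the reference in question looks like this in the pod template (names are placeholders):

spec:
  template:
    spec:
      containers:
      - name: app
        image: nginx:1.14
        envFrom:
        - configMapRef:
            name: demo-config
        - secretRef:
            name: demo-secret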

Failed on creating pods for resources other than Deployments and StatefulSets

Describe the bug
Configurator's pod validation breaks pod creation if the annotation 'ccm-<configmap-name>:' is missing at the metadata.annotations level. This happens when the deployed resource does not maintain a revision history. Configurator does not add annotations to these resources even if a ConfigMap is used by the resource.

To Reproduce
Steps to reproduce the behavior:

  1. Create a Job resource that uses a ConfigMap
  2. Apply it to the cluster; it creates the Job
  3. kubectl get pod -n <namespace> fails with a "not found" error
  4. Describe the Job resource:
    kubectl describe job <job-name> -n <namespace>
  Events:
  Type     Reason        Age                  From            Message
  ----     ------        ----                 ----            -------
  Warning  FailedCreate  77s (x5 over 3m47s)  job-controller  Error creating: admission webhook "podcontroller.configurator.gopaddle.io" denied the request: customconfigmaps.configurator.gopaddle.io "testconf-" not found

It fails with the above warning.

Rollback breaks the ConfigMap labels in deployments

Describe the bug
If we roll back the changes in a deployment, the deployment labels and ConfigMaps still point to the old labels. This breaks the rollback functionality.

To Reproduce
Steps to reproduce the behavior:

  1. Deploy Configurator
  2. Create a customConfigMap
  3. Copy the newly created configMap name and add it to the deployment label and volume
  4. Change the customConfigMap; Configurator creates a new configMap and updates the deployment
  5. Roll back to the previous revision of the deployment; it now points to the first version of the deployment
  6. Change the customConfigMap; it creates a new configMap but the rolling update does not work

Screenshots
None

Create Helm Chart for easy deployment of Configurator

Is your feature request related to a problem? Please describe.
No

Describe the solution you'd like
Create a Helm chart for easy deployment of Configurator.

Describe alternatives you've considered
Currently, Configurator is installed through a sequence of YAML files. Instead, create a simple helm chart that does a single-command install of Configurator.

Additional context
None

updateMethod: ignoreWhenShared - support for additional options.

Describe the bug
Configurator adds the annotation 'updateMethod: ignoreWhenShared' to a ConfigMap as the default updateMethod. ignoreWhenShared does not perform a rolling update on a deployment when a ConfigMap it uses is shared by multiple deployments. Add support for additional options like 'updateWhenShared' to update the deployments even when the ConfigMap they use is shared (see the sketch after the steps below).

To Reproduce
Steps to reproduce the behavior:
1. Create a ConfigMap:

$ kubectl apply -f demo-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
    name: demo-config
data:
  # property-like keys; each key maps to a simple value
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"
  # file-like keys
  game.properties: |
    android.apk=free fire,bgmi
    computer=stambled guys    
  user-interface.properties: |
    color.good=purple
    color.bad=yellow

2. Get the ConfigMap. updateMethod is added at the ConfigMap annotation level:

$ kubectl get cm demo-config -o yaml
annotations:
    currentCustomConfigMapVersion: s1vw7
    customConfigMap-name: demo-config-s1vw7
    updateMethod: ignoreWhenShared

3. Create a deployment with 'demo-config' mounted:

$ kubectl apply -f deployment1.yaml

4. Get the ConfigMap. (Note: the annotations now include 'deployments'.)

$ kubectl get cm demo-config -o yaml 
annotations:
   currentCustomConfigMapVersion: s1vw7
   customConfigMap-name: demo-config-s1vw7
   deployments: demo-deployment
   updateMethod: ignoreWhenShared

5. Update the ConfigMap; it will trigger a rolling update of that deployment:

$ kubectl get replicaset
NAME                              DESIRED   CURRENT   READY   AGE
demo-deployment-5fbd8ccf54        1         1         1       2m58s
$ kubectl edit cm demo-config
configmap/demo-config edited
$ kubectl get replicaset
NAME                              DESIRED   CURRENT   READY   AGE
demo-deployment-5fbd8ccf54        0         0         0       4m18s
demo-deployment-dcb75689b         1         1         1       25s

6. Create a second deployment with the same ConfigMap:

$ kubectl apply -f deployment2.yaml

7. Get the ConfigMap. Check the annotation: it now lists the 2 deployments, comma-separated:

$ kubectl get cm demo-config -o yaml
annotations:
   currentCustomConfigMapVersion: oa06p
   customConfigMap-name: demo-config-oa06p
   deployments: demo-deployment,demo2-deployment
   updateMethod: ignoreWhenShared

8. Edit the ConfigMap. Now the rolling update will not happen:

$ kubectl get replicaset
NAME                              DESIRED   CURRENT   READY   AGE
demo-deployment-5fbd8ccf54        0         0         0       13m
demo-deployment-dcb75689b         1         1         1       9m43s
demo2-deployment-dcb75689b        1         1         1       2m37s
$ kubectl edit cm demo-config
configmap/demo-config edited
$ kubectl get replicaset
NAME                              DESIRED   CURRENT   READY   AGE
demo-deployment-5fbd8ccf54        0         0         0       14m
demo-deployment-dcb75689b         1         1         1       10m
demo2-deployment-dcb75689b        1         1         1       3m22s

Expected behavior
Add support for the 'updateWhenShared' update method, as sketched below.
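
The requested option, sketched as the annotation a user would set on the ConfigMap (the value name comes from this report; Configurator currently writes ignoreWhenShared here):

metadata:
  annotations:
    updateMethod: updateWhenShared   # proposed: roll every deployment that shares this ConfigMap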

Unused CustomConfigMap revisions are not purged

Describe the bug
Configurator purges all the unused CustomConfigMap (CCM) revisions that were once referenced by a deployment or statefulset revision. Based on the revision history configured for a deployment/statefulset, Kubernetes automatically removes unused deployment/statefulset revisions. If a CCM version was referenced in one of those removed revisions, Configurator purges it periodically (once every 15 mins). However, if a CCM has never been referenced in any deployment/statefulset revision since its creation, Configurator does not purge those CCM revisions.

To Reproduce
Steps to reproduce the behavior:
1. Create a file named ConfigMap.yaml with the below contents and create the ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
    name: demo-config
data:
  # property-like keys; each key maps to a simple value
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"

  # file-like keys
  game.properties: |
    android.apk=free fire,bgmi
    computer=stambled guys    
  user-interface.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=tru
$ kubectl apply -f ConfigMap.yaml

2. Edit the contents of the ConfigMap and save it. This will create a new CCM revision:

$ kubectl edit cm demo-config

3. List the CCMs. It will show 2 CCM versions:

$ kubectl get ccm 
NAME                     AGE
demo-config-ifoiu        28s
demo-config-sg1cl        3m30s

4. Wait for 15 minutes. After 15 minutes, the automatic purge functionality is invoked, but it doesn't delete the unused CCM versions.
5. List the CCMs again. Note that there are still 2 CCM revisions:

$ kubectl get ccm 
NAME                     AGE
demo-config-ifoiu        14m
demo-config-sg1cl        17m

6. Create a new ConfigMap with the below contents:

$ vi ConfigMap-2.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo2-config
data:
  names: |
    paul
    peter
$ kubectl apply -f ConfigMap-2.yaml

7. Create a deployment using the new ConfigMap, and set the maximum revisions to be maintained to 1 via 'spec.revisionHistoryLimit':

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: service-nginx
  template:
    metadata:
      labels:
        app: service-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14
          ports:
            - containerPort: 80
          volumeMounts:
           - name: nginx
             mountPath: "/config"
             readOnly: true
      volumes:
      - name: nginx
        configMap:
          name: demo2-config
$ kubectl apply -f deployment.yaml

8. List the CCM versions:

$kubectl get ccm 
NAME                     AGE
demo-config-ifoiu        16m
demo-config-sg1cl        19m
demo2-config-4c897       52s

$ kubectl get replicaset
NAME                                              DESIRED   CURRENT   READY   AGE
demo-deployment-57d8c679b7                        1         1         1       109s

9. Get the new ConfigMap demo2-config as YAML. At the annotation level, the deployment reference is available:

$ kubectl get cm demo2-config -o yaml
apiVersion: v1
data:
  names: |
    paul
    peter
kind: ConfigMap
metadata:
  annotations:
    currentCustomConfigMapVersion: 21iu3
    customConfigMap-name: demo2-config-21iu3
    deployments: demo-deployment
    updateMethod: ignoreWhenShared
  creationTimestamp: "2022-02-25T08:33:29Z"
  name: demo2-config
  namespace: default
  resourceVersion: "59171"
  uid: 371d2f53-8d8a-4c9b-999f-21482502655c

10. Edit the new ConfigMap using kubectl edit cm demo2-config
11. Check the replicasets:

$ kubectl get replicaset
NAME                            DESIRED   CURRENT   READY   AGE
demo-deployment-57d8c679b7        0         0         0    4m19s
demo-deployment-5b6c7b5f57        1         1         1     40s

12. Wait for 15 mins and check that the older version is removed. The latest (second) CCM is retained; the first CCM version is purged:

$ kubectl get replicaset demo-deployment-57d8c679b7  -o yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    deployment.kubernetes.io/desired-replicas: "1"
    deployment.kubernetes.io/max-replicas: "2"
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2022-02-25T08:36:08Z"
  generation: 2
  labels:
    app: service-nginx
    pod-template-hash: 57d8c679b7
  name: demo-deployment-57d8c679b7
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: Deployment
    name: demo-deployment
    uid: 97589500-6569-4c61-a9c1-803769c0a864
  resourceVersion: "56630"
  uid: 52de3436-53a0-451c-aaf4-a36632ee1799
spec:
  replicas: 0
  selector:
    matchLabels:
      app: service-nginx
      pod-template-hash: 57d8c679b7
  template:
    metadata:
      annotations:
        ccm-demo2-config: 4c897
        config-sync-controller: configurator
$ kubectl get replicaset demo-deployment-5b6c7b5f57 -o yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    deployment.kubernetes.io/desired-replicas: "1"
    deployment.kubernetes.io/max-replicas: "2"
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: "2022-02-25T08:39:47Z"
  generation: 1
  labels:
    app: service-nginx
    pod-template-hash: 5b6c7b5f57
  name: demo-deployment-5b6c7b5f57
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: Deployment
    name: demo-deployment
    uid: 97589500-6569-4c61-a9c1-803769c0a864
  resourceVersion: "56625"
  uid: 09796e75-e414-4342-a522-cec9c05b2f41
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-nginx
      pod-template-hash: 5b6c7b5f57
  template:
    metadata:
      annotations:
        ccm-demo2-config: we90z
        config-sync-controller: configurator
$  kubectl get ccm 
NAME                     AGE
demo-config-ifoiu        27m
demo-config-sg1cl        30m
demo2-config-4c897       11m
demo2-config-we90z       5m29s
$ kubectl edit cm demo2-config
configmap/demo2-config edited
$ kubectl get ccm 
NAME                     AGE
demo-config-ifoiu        29m
demo-config-sg1cl        32m
demo2-config-21iu3       33s
demo2-config-4c897       13m
demo2-config-we90z       7m8s
$ kubectl get replicaset
NAME                           DESIRED   CURRENT   READY   AGE
demo-deployment-5b6c7b5f57        0         0        0    7m35s
demo-deployment-8767c6b9b         1         1        1     60s
$ kubectl get replicaset demo-deployment-8767c6b9b -o yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    deployment.kubernetes.io/desired-replicas: "1"
    deployment.kubernetes.io/max-replicas: "2"
    deployment.kubernetes.io/revision: "3"
  creationTimestamp: "2022-02-25T08:46:22Z"
  generation: 1
  labels:
    app: service-nginx
    pod-template-hash: 8767c6b9b
  name: demo-deployment-8767c6b9b
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: Deployment
    name: demo-deployment
    uid: 97589500-6569-4c61-a9c1-803769c0a864
  resourceVersion: "59196"
  uid: 6d7052e6-34be-4363-8970-e13843d914df
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-nginx
      pod-template-hash: 8767c6b9b
  template:
    metadata:
      annotations:
        ccm-demo2-config: 21iu3
        config-sync-controller: configurator
$ kubectl get ccm 
NAME                     AGE
demo-config-ifoiu        41m
demo-config-sg1cl        44m
demo2-config-21iu3       12m
demo2-config-we90z       19m

Expected behavior
In step 5: if a ConfigMap is not used in any deployment, we must retain a minimum of 1 revision (i.e., the latest revision) and purge the rest.

Failed on creating deployment with annotations - with '/' in the annotation key

Describe the bug
Configurator's deployment mutation breaks deployment creation if the deployment has an annotation at the spec.template.metadata.annotations level like 'test/abc.io: abcdef'. See the note on JSON Pointer escaping after the steps below.
To Reproduce
Steps to reproduce the behavior:

  1. Create a deployment with a template-level annotation like 'test/abc.io: abcdef'
  2. Apply the deployment to the cluster using kubectl apply -f filename.yaml
  3. It fails with the below error:
    Error from server (InternalError): error when creating "sampledeploy.yaml": Internal error occurred: add operation does not apply: doc is missing path: "/spec/template/metadata/annotations/test/abc.io": missing value
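
The failing path in the error suggests the annotation key is interpolated into a JSON Patch path unescaped. Per RFC 6901, '/' inside a key must be encoded as '~1' (and '~' as '~0'), so the path should read "/spec/template/metadata/annotations/test~1abc.io". A sketch of the fix (a hypothetical helper, not the project's actual code):

package patchutil

import "strings"

// EscapeJSONPointer escapes a map key per RFC 6901 so it can be used as a
// JSON Patch path segment: '~' becomes "~0" first, then '/' becomes "~1".
func EscapeJSONPointer(key string) string {
	key = strings.ReplaceAll(key, "~", "~0")
	return strings.ReplaceAll(key, "/", "~1")
}

// Example: EscapeJSONPointer("test/abc.io") returns "test~1abc.io".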

Reflected XSS (CVE-2021-24316)

Describe the bug
Hi Team,
We just found a Mediumish WordPress Theme <= 1.0.47 unauthenticated Reflected XSS & XFS vulnerability on https://blog.gopaddle.io/

To Reproduce
Steps to reproduce the behavior:
1. Open a browser and go to the site: https://blog.gopaddle.io/?post_type=post&s=
2. Inject XSS into the param (s) using the payload: "><script>alert(document.domain)</script>
3. Submit, and the XSS will trigger.

POC
https://blog.gopaddle.io/?post_type=post&s=%22%3E%3Cscript%3Ealert(document.domain)%3C/script%3E


Impact
As you know, with a reflected XSS, a malicious user could trick a user into browsing to a URL which would trigger the XSS and steal the user's cookie, capture keystrokes, etc., and eventually take over the user's account.

Regards
pikpikcu

Couldn't patch a Deployment with a template-level annotation

Describe the bug
If a deployment has a template level annotation key like 'prometheus.io/path', then patching that deployment fails.
To Reproduce
Steps to reproduce the behavior:

  1. Deploy Configurator in the k8s cluster
  2. Add an annotation to a deployment at the spec.template.metadata.annotations level, say key 'prometheus.io/path' with some value
  3. Apply the deployment specification to the cluster.
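
This is likely the same root cause as the '/'-in-annotation-key issue above: 'prometheus.io/path' contains a '/', which must be escaped as '~1' when the key is used as a JSON Patch path segment (RFC 6901).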

API call to get the purge jobs list

Describe the bug
Create an API call that returns the purge job execution details as an array of the below information (a sketch of the response shape follows).
1. executed time
2. exit status
3. error message
This issue has a dependency on issue #87
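
A hypothetical sketch of the response shape (field names are assumptions):

package api

import "time"

// PurgeJobRecord describes one execution of the purge job.
type PurgeJobRecord struct {
	ExecutedAt time.Time `json:"executedTime"`
	ExitStatus int       `json:"exitStatus"`
	Error      string    `json:"errorMessage,omitempty"`
}

// PurgeJobList is the array returned by the proposed API call.
type PurgeJobList struct {
	Jobs []PurgeJobRecord `json:"jobs"`
}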

Failed on creating pod when configurator-controllerwebhook was down

Describe the bug
When the node restarted, configurator-controllerwebhook went down. After some time, the cluster brought the node back up, but the controller did not restart. The validating admission webhook call failed with the below error in the deployment event.

Events:
  Type     Reason        Age                  From            Message
  ----     ------        ----                 ----            -------
  Warning  FailedCreate  75s (x5 over 3m45s)  job-controller  Error creating: Internal error occurred: failed calling webhook "podcontroller.configurator.gopaddle.io": Post "https://configurator-controllerwebhook.gopaddle-servers.svc:443/podcontroller?timeout=10s": no endpoints available for service "configurator-controllerwebhook"
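
One mitigation to evaluate, sketched below (not the project's current manifest): set failurePolicy on the webhook so pod creation is not hard-blocked while the webhook is unavailable, at the cost of skipping validation during that window. The service details come from the error message above; the configuration name is assumed.

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: configurator-podcontroller          # name assumed
webhooks:
- name: podcontroller.configurator.gopaddle.io
  failurePolicy: Ignore                     # the default, Fail, blocks creation while the webhook is down
  sideEffects: None
  admissionReviewVersions: ["v1"]
  clientConfig:
    service:
      name: configurator-controllerwebhook
      namespace: gopaddle-servers
      path: /podcontroller
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]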

Invalid helm repo URL

Describe the bug
At the time of installing Configurator using the helm chart, the helm repo URL mentioned in https://blog.gopaddle.io/2021/09/08/using-helm-charts-with-configurator-a-versioning-sync-service-for-kubernetes-ConfigMaps/ is invalid.

To Reproduce
Steps to reproduce the behavior:

  1. Add the configurator helm repo using the below command:
$ helm repo add gopaddle_configurator https://gopaddle-io.github.io/configurator/helm/ 

The above command fails with the error:

Error: looks like "https://gopaddle-io.github.io/configurator/helm/" is not a valid 	chart repository or cannot be reached: failed to fetch https://gopaddle-io.github.io/configurator/helm/index.yaml : 404 Not Found 

Automate weekly updates

Require automation for uploading weekly meetings to the Configurator website:

Anyone with experience automating file updates on GitHub could help out.

Problem statement:
Look for changes in a YouTube playlist on request and send a pull request after adding or changing a specific file.

Issue is open to any newcomers who are willing to explore the GitHub API and try out some JavaScript.

Join the Discord Server for a detailed explanation.

During the init process, configurator does not add the annotation 'config-sync-controller' to an existing deployment

Describe the bug
Currently, whenever a new resource is created, say a Deployment or a StatefulSet, Configurator checks whether the resource type is version-controlled. If so, Configurator adds the annotation 'config-sync-controller' to the resource. For example, Deployments and StatefulSets are version-controlled, whereas Jobs are not, so Configurator does not add the annotation 'config-sync-controller' to a Job resource. Similarly, Configurator is expected to add this annotation to all the versioned resources at the time of the init process as well.
To Reproduce
Steps to reproduce the behavior:

  1. Create a ConfigMap
$ vi ConfigMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
    name: demo-config
data:
  # property-like keys; each key maps to a simple value
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"

  # file-like keys
  game.properties: |
    android.apk=free fire,bgmi
    computer=stambled guys    
  user-interface.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=tru
$ kubectl apply -f ConfigMap.yaml
  2. Create a deployment referencing the ConfigMap and apply it:
$ vi deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-nginx
  template:
    metadata:
      labels:
        app: service-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14
          ports:
            - containerPort: 80
          volumeMounts:
           - name: nginx
             mountPath: "/config"
             readOnly: true
      volumes:
      - name: nginx
        configMap:
          name: demo-config
$ kubectl apply -f deployment.yml
  3. Check the annotations in the deployment. (Note: there are no additional annotations.)
$ kubectl get deploy demo-deployment -o yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: "2022-02-25T07:31:24Z"
  generation: 2
  name: demo-deployment
  namespace: default
  resourceVersion: "29830"
  uid: f9130536-e869-4e97-ae64-3d0dd6df8981
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: service-nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        ccm-demo2-config: ejxsw
        config-sync-controller: configurator
      creationTimestamp: null
      labels:
        app: service-nginx
    spec:
      containers:
      - image: nginx:1.14
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /config
          name: nginx
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: demo2-config
        name: nginx
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2022-02-25T07:31:29Z"
    lastUpdateTime: "2022-02-25T07:31:29Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2022-02-25T07:31:24Z"
    lastUpdateTime: "2022-02-25T07:32:51Z"
    message: ReplicaSet "demo-deployment-76bb6dbb75" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 2
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
  4. Install Configurator
  5. Check the annotations in the existing deployment. (Note: Configurator has added the annotation 'ccm-demo-config' but hasn't added 'config-sync-controller'.)
$ kubectl get deploy demo-deployment -o yaml
annotations:
   ccm-demo-config: xsszs
  6. Create a new deployment referencing the ConfigMap and apply it:
$ vi deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-nginx
  template:
    metadata:
      labels:
        app: service-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14
          ports:
            - containerPort: 80
          volumeMounts:
           - name: nginx
             mountPath: "/config"
             readOnly: true
      volumes:
      - name: nginx
        configMap:
          name: demo-config
$ kubectl apply -f deployment.yaml
  7. Check the annotations in the newly created deployment. (Note: Configurator has added 2 annotations, 'ccm-demo-config' and 'config-sync-controller', at the 'spec.template.metadata.annotations' level.)
$ kubectl get deploy demo-deployment -o yaml
annotations:
   ccm-demo-config: xsszs
   config-sync-controller: configurator

Expected behavior
In step 5, Configurator must add the annotation 'config-sync-controller' to the existing deployment at the time of the init process.
