gruntwork-io / helm-kubernetes-services

Helm charts that can be used to package your applications into production ready deployments for Kubernetes. https://www.gruntwork.io

License: Apache License 2.0

Languages: Go 86.96%, Smarty 11.46%, Mustache 1.58%

helm-kubernetes-services's Introduction

Kubernetes Service

Features

  • Deploy your application containers onto Kubernetes

  • Zero-downtime rolling deployments

  • Auto scaling and auto healing

  • Configuration management and Secrets management

    • Secrets as Environment/Volumes/Secret Store CSI

  • Ingress and Service endpoints

Learn

ℹ️
This repo is a part of the Gruntwork Infrastructure as Code Library, a collection of reusable, battle-tested, production-ready infrastructure code. If you’ve never used the Infrastructure as Code Library before, make sure to read How to use the Gruntwork Infrastructure as Code Library!

Core concepts

Repo organization

  • charts: The main implementation code for this repo, broken down into multiple standalone, orthogonal Helm charts.

  • examples: Working examples of how to use the charts.

  • test: Automated tests for the charts and examples.

Deploy

Non-production deployment (quick start for learning)

If you just want to try this repo out for experimenting and learning, check out the following resources:

  • examples folder: The examples folder contains sample code optimized for learning, experimenting, and testing (but not production usage).

Production deployment

If you want to deploy this repo in production, check out the following resources:

Support

If you need help with this repo or anything else related to infrastructure or DevOps, Gruntwork offers Commercial Support via Slack, email, and phone/video. If you’re already a Gruntwork customer, hop on Slack and ask away! If not, subscribe now. If you’re not sure, feel free to email us at [email protected].

Contributions

Contributions to this repo are very welcome and appreciated! If you find a bug or want to add a new feature or even contribute an entirely new module, we are very happy to accept pull requests, provide feedback, and run your changes through our automated test suite.

License

Please see LICENSE for details on how the code in this repo is licensed.

helm-kubernetes-services's People

Contributors

aechgg, autero1, bobalong79, bouge, dataviruset, denis256, dependabot[bot], eak12913, finchr, greysond, gruntwork-ci, hngerebara, ina-stoyanova, jjneely, jonnymatts-global, martinhutton, mateimicu, n-h-n, omar-devolute, paul-pop, ralucas, reynoldsme, rhoboat, robmorgan, ryehowell, ryucaelum, tdharris, thiagosalvatore, yorinasub17, zackproser


helm-kubernetes-services's Issues

Cannot mount ConfigMap and Secret as volumes if they have the same name

Describe the bug
If you have a ConfigMap and a Secret with the same name, you can't mount both as volumes due to a volume name collision. This is because the template names both volumes NAME-volume.

The proposed fix is to support specifying a custom volume name to avoid the collision. E.g.,

configMaps:
  myconfig:
    as: volume
    mountPath: /etc/myconfig/config
    volumeName: myconfig-configmap-volume
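For completeness, the secrets entry would then pick a distinct name the same way (a sketch that assumes the proposed volumeName key would also be honored for secrets):

secrets:
  myconfig:
    as: volume
    mountPath: /etc/myconfig/secrets
    volumeName: myconfig-secret-volume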

To Reproduce

  • Create a ConfigMap and a Secret with the same name (e.g., myconfig).
  • Try to mount both as volumes:
configMaps:
  myconfig:
    as: volume
    mountPath: /etc/myconfig/config

secrets:
  myconfig:
    as: volume
    mountPath: /etc/myconfig/secrets

Expected behavior
Mount both the ConfigMap and Secret as a volume.

Actual behavior
Error:

Helm install failed: Deployment.apps "myapp" is invalid: spec.template.spec.volumes[1].name: Duplicate value: "myconfig-volume"

Support Vertical Pod Autoscalers

Describe the solution you'd like
It would be great to have support for Vertical Pod Autoscalers (VPAs) in the k8s-service module just like horizontal pod autoscalers are supported today.

Describe alternatives you've considered
The only alternative would be to manage the VPA as a separate object, which, while possible, feels like it deviates from the Gruntwork model.

Additional context
https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler
https://docs.aws.amazon.com/eks/latest/userguide/vertical-pod-autoscaler.html
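For reference, a minimal VerticalPodAutoscaler manifest targeting a Deployment looks like the following (plain Kubernetes resource, not something the chart renders today; names are illustrative):

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: myapp-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  updatePolicy:
    updateMode: "Auto"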

Latest releases are not present in the repository

Describe the bug
The latest release in this repository is v0.2.21, but Helm cannot find it. The helm search repo gruntwork --versions command shows v0.2.19 as the latest available chart version.

To Reproduce

$ helm repo add gruntwork https://helmcharts.gruntwork.io
$ helm repo update
$ helm search repo gruntwork --versions
NAME                    CHART VERSION   APP VERSION     DESCRIPTION                                       
gruntwork/k8s-service   v0.2.19                         A Helm chart to package your application contai...

helm upgrade fails with:

Error: chart "k8s-service" matching v0.2.21 not found in gruntwork index. (try 'helm repo update'): no chart version found for k8s-service-v0.2.21

Expected behavior

$ helm search repo gruntwork --versions
NAME                    CHART VERSION   APP VERSION     DESCRIPTION                                       
gruntwork/k8s-service   v0.2.21                         A Helm chart to package your application contai...


Add sessionAffinity ClientIP to k8s-service

Describe the solution you'd like
We need to ensure that connections from a particular client are passed to the same Pod each time for some period. We would like to approach this with service.spec.sessionAffinity = ClientIP (the default is None).

I believe the service.yaml spec definition would need to be enhanced to optionally include sessionAffinity:

apiVersion: v1
kind: Service
metadata:
  ...
spec:
  selector:
    ...
  ports:
    ...
  sessionAffinity: ClientIP

And to specify a timeout:

  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800

Describe alternatives you've considered
We've considered manually adjusting as needed, but would like to be able to set this within the Helm chart values.

Additional context

  • See Kubernetes Service documentation of sessionAffinity:

    "If you want to make sure that connections from a particular client are passed to the same Pod each time, you can select the session affinity based on the client's IP addresses by setting service.spec.sessionAffinity to "ClientIP" (the default is "None"). You can also set the maximum session sticky time by setting service.spec.sessionAffinityConfig.clientIP.timeoutSeconds appropriately. (the default value is 10800, which works out to be 3 hours)."

  • See aws-cloud-map-mcs-controller-for-k8s spec definition for more details on sessionAffinity and sessionAffinityConfig.

Add support for flexible environment variable settings

e.g. being able to specify:

    spec:
      containers:
      - image: foobar:latest
        name: go-test-monitor-container
        imagePullPolicy: Always
        env:
          - name: DD_AGENT_HOST
            valueFrom:
              fieldRef:
                fieldPath: status.hostIP
          - name: DD_ENTITY_ID
            valueFrom:
              fieldRef:
                fieldPath: metadata.uid 

Missing HPA capability for Kubernetes 1.23

Describe the solution you'd like
When the chart's HPA is enabled, it does not deploy on Kubernetes versions >= 1.23 because the autoscaling/v2beta2 API is deprecated.

Describe alternatives you've considered
Use the autoscaling/v2 API for Kubernetes versions >= 1.23.

Additional context
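One way to address this is to select the apiVersion via Helm capabilities in the HPA template, e.g. (a sketch, not the chart's current code):

{{- if .Capabilities.APIVersions.Has "autoscaling/v2" }}
apiVersion: autoscaling/v2
{{- else }}
apiVersion: autoscaling/v2beta2
{{- end }}
kind: HorizontalPodAutoscaler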

[k8s-service chart] Image pull secrets not mapped correctly for service account

imagePullSecrets is supposed to be a simple list of strings (the names of the secrets) but in the service account template it expects a map.

Once we add an item to the imagePullSecrets value we get:

Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(ServiceAccount.imagePullSecrets[0]): invalid type for io.k8s.api.core.v1.LocalObjectReference: got "string", expected "map"

Service account:
https://github.com/gruntwork-io/helm-kubernetes-services/blob/master/charts/k8s-service/templates/serviceaccount.yaml#L16-L17

Deployment:
https://github.com/gruntwork-io/helm-kubernetes-services/blob/master/charts/k8s-service/templates/deployment.yaml#L259-L262
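A common way to render a plain list of secret names into LocalObjectReference maps is something like the following (a sketch of what the service account template could do instead):

{{- if .Values.imagePullSecrets }}
imagePullSecrets:
  {{- range .Values.imagePullSecrets }}
  - name: {{ . }}
  {{- end }}
{{- end }}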

k8s-service should support canary deployments

Right now the k8s-service Helm chart does not include support for a canary version. This feature request is to enhance the existing chart to deploy an additional canary Pod running the n+1 version of the container, hooked up to the same Service and Ingress resources.

Needs some thinking about the design, in particular the input variables.

Add support for rolling deployment strategies

Currently, the k8s-service helm chart does not provide the ability to configure deployment strategies on the Deployment resource. This is a feature request to add support for that.
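For reference, this is the kind of strategy block on the Deployment that such a setting would need to render (plain Kubernetes spec; the values shown are examples):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0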

k8s-job

This is the Job version of k8s-service.

Support Startup Probes

Describe the solution you'd like
Support Startup Probes for the deployment spec, just like there's already support for Liveness and Readiness Probes.

Describe alternatives you've considered
Modifying the chart locally or using something like Kustomize to add the field into the deployment spec afterwards, but this feels unnecessarily complicated as I feel startup probe support could benefit many users of this chart.

Additional context
Official Kubernetes documentation of the feature
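For reference, a startupProbe sits alongside the existing liveness and readiness probes in the container spec (plain Kubernetes; path, port, and thresholds are examples):

startupProbe:
  httpGet:
    path: /healthz
    port: http
  failureThreshold: 30
  periodSeconds: 10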

ServiceAccount annotations block should be in metadata block

Based on the ServiceAccount v1 spec, the annotations block needs to be within metadata: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/#serviceaccount-v1-core

Currently it's outside and failing with the following:

> helm upgrade --values values.yml --namespace ns --atomic --timeout 5m --set-string containerImage.tag=1.0.0 service gruntwork/k8s-service

Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(ServiceAccount): unknown field "annotations" in io.k8s.api.core.v1.ServiceAccount
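For reference, the rendered ServiceAccount should place the annotations under metadata, e.g. (the annotation shown is illustrative):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp
  annotations:
    example.com/some-annotation: some-value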

Surfacing Additional Pod Labels

I would like to propose adding a field that lets the user add additional labels to the Pods deployed by the Helm chart.

The use case: I am working with EKS Fargate, which lets you mark what should be scheduled on a Fargate-managed node using the namespace, or a combination of namespace and labels. We'd like to be able to mix EC2- and Fargate-deployed Pods inside the same namespace, using labels to denote that a k8s-service should be deployed to Fargate, while still using the Gruntwork Helm chart to lower operational overhead.

I was imagining this could take the form of additionalPodLabels: {} and additionalDeploymentLabels: {} arguments in values.yaml, with the accompanying maps passed to the appropriate sections of deployment.yaml. With respect to my use case, this would mean we could tell Fargate "deploy anything with label pair x to Fargate" and then add that label in values.yaml in our Gruntwork chart.

Any thoughts would be appreciated. If I'm missing an obvious way that everyone else adds additional labels to deployments using the chart, making this redundant, that would be useful to know.
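A sketch of what the proposed values could look like (these keys are the ones suggested above, not an existing chart input):

additionalDeploymentLabels:
  team: platform
additionalPodLabels:
  compute-type: fargate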

Add support for job workers alongside deployment

Describe the solution you'd like
#24 is for tracking a new helm chart for packaging applications into a Job or CronJob, assuming that is the main deployment model of the app (e.g., a batch processing job, or periodic scanner app that scans for vulnerabilities).

However, there are use cases for a Job or CronJob to augment a service deployment, such as running a one-time database migration on upgrades. For this use case, it is better to embed the Job/CronJob resource in the k8s-deployment helm chart.

This ticket is for tracking the integration between the k8s-job and k8s-service charts so that k8s-service can invoke k8s-job to deploy one-off tasks for the app.

PodDisruptionBudget created even if minPodsAvailable=0

TL;DR: I'm having a problem with PDBs being re-created, seemingly after every update to my pods, despite setting minPodsAvailable=0. I also posted my story as part of this question to the K8s forum: How to find what event creates PodDisruptionBudget?

Problem

The current default minPodsAvailable, which I also use in my copy of the chart, is zero:

# NOTE: setting this to 0 will skip creating the PodDisruptionBudget resource.
minPodsAvailable: 0

Then, no PDB should be created according to:

{{- /*
If there is a specification for minimum number of Pods that should be available, create a PodDisruptionBudget
*/ -}}
{{- if .Values.minPodsAvailable -}}

That is because, according to the Helm docs (rookie here :)), the condition is:

evaluated as false if the value is numeric zero

Now, if I grep all my code, YAML, templates, etc. for minPodsAvailable and minAvailable, I see no occurrences other than in the pdb.yaml and values.yaml referenced above.

Why does Kubernetes keep re-creating PodDisruptionBudgets for my pods deployed with the chart?

Workaround

I found a workaround for this problem by patching pdb.yaml:

- {{- if .Values.minPodsAvailable -}}
+ {{- if kindIs "float64" .Values.minPodsAvailable -}}

This lets the chart create a PodDisruptionBudget for minPodsAvailable = 0, but it creates it with minAvailable=0.

It then 'works' as if there were no PDB, so pods can be evicted without problems during node draining.

k8s-service: Add ability to use subPath to mount an individual file to a pod via configMaps

Currently, the k8s-service helm chart does not provide the ability to define a file in a ConfigMap and mount it into an existing directory containing other files in the pod, e.g.

ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  app-config.json: |
    {
      "environment": "uat"
    }

Chart configMaps

configMaps:
  my-configmap:
    as: volume
    mountPath: /usr/share/nginx/html/assets/json/app-config.json
    subPath: app-config.json
    items:
      app-config.json:
        filePath: app-config.json

This is a feature request to add support for that by utilizing volumeMount.subPath.
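For reference, this is the volumeMount the chart would need to render on the container for such a configuration (plain Kubernetes; the volume name follows the NAME-volume convention mentioned above):

volumeMounts:
  - name: my-configmap-volume
    mountPath: /usr/share/nginx/html/assets/json/app-config.json
    subPath: app-config.json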

Support Scaling policies for HPAs

Describe the solution you'd like
The k8s-service module supports creating Horizontal Pod Autoscalers for Deployments created by the module. It would be helpful to be able to configure scaling policies that control the speed of scale-up and scale-down events.

Describe alternatives you've considered
The only other option would be to stop using the built-in HPA support and manage HPAs via a different module.

Additional context
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#scaling-policies
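For reference, scaling policies are configured via the behavior block of an autoscaling/v2 HorizontalPodAutoscaler (plain Kubernetes; the values are examples):

spec:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Pods
          value: 1
          periodSeconds: 60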

Adding a sideCarContainer gives an error about mapping values

I'm trying to deploy an app that is built from two containers. The main app I can deploy just fine, but when I add a "sideCarContainers:" section I get an error like this:

error: error converting YAML to JSON: yaml: line 47: mapping values are not allowed in this context

This is the section I added, commenting this out makes the error disappear:

sideCarContainers:
  appengine:
    image: eu.gcr.io/our-gce-project/container-name:0.1.0

Deploying this container using a separate helm chart works just fine.

We use this Helm chart via the Terraform helm provider. I've been trying to get more debugging information, but without success so far.

As a workaround/test I manually edited the resulting Deployment YAML in GKE and added the second container, and that also worked fine.

Let me know what I can do to further debug this or what more information you require.

Support priorityClassName

Describe the solution you'd like
Support for priorityClassName (stable as of k8s v1.14) to assign a PriorityClass to Deployments created using the k8s-service helm chart.

Describe alternatives you've considered
I am not aware of a viable alternative method of adding priorityClassName to k8s-service based deployments other than adding support to the helm chart.

Additional context
priorityClassName documentation
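For reference, the field would end up on the Pod template of the Deployment (plain Kubernetes; the class name is an example):

spec:
  template:
    spec:
      priorityClassName: high-priority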

Add support for Statefulset deployments

I would like to add support for Statefulset deployments to this helm chart. I believe this can be done with relatively few (and non-breaking?) changes:

  • Add an optional field called workloadType to values.yaml, where the default value is deployment but statefulset is also an option.
  • Add a _statefulset_spec.tpl template which gets used when workloadType is set to statefulset. In turn, _deployment_spec.tpl only gets used when workloadType is set to deployment.
  • Add an optional updateStrategy field to values.yaml which gets injected into updateStrategy on statefulset workloads (workloadType: statefulset). deploymentStrategy continues getting injected into strategy on deployment workloads.
  • Add an optional name field under service due to Statefulset workload requirements. service could still have enabled: false if the service resource is created outside of the chart.

I consider these changes possibly non-breaking because as long as the default value of workloadType is deployment, the chart should behave exactly as it does currently. Also, I don't expect any downtime implications from these changes.

An alternative would be creating different helm charts for statefulset deployments, but I think time will be better spent upgrading this Chart instead.

That being said, is there anything that I'm not seeing that could become a problem with this approach? I'm planning on coding this up soon, but I'd appreciate any feedback 👍🏼.

Validation error on io.k8s.api.networking.v1.IngressBackend

Describe the bug
With the introduction of capabilities introspection (#114), the spec for the ingress backend was not updated to match the new API versions (in my case networking.k8s.io/v1), so an error is thrown for an invalid spec:

Error: UPGRADE FAILED: error validating "": error validating data: 
[ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "serviceName" in io.k8s.api.networking.v1.IngressBackend, 
ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "servicePort" in io.k8s.api.networking.v1.IngressBackend]

The correct spec for this version can be found at https://kubernetes.io/docs/reference/kubernetes-api/service-resources/ingress-v1/#IngressBackend

To Reproduce
Enable an ingress (see example below of values.yml) and run on Kubernetes >= 1.19.

containerPorts:
  http:
    port: 80

service:
  ports:
    app:
      port: 80

ingress:
  enabled: true
  path: /*
  servicePort: app
  hosts:
    - test.foo.com

Expected behavior
The ingress should be managed correctly by the chart for the given API version and be created in the cluster.


Additional context
There might be other breaking changes that were introduced in networking.k8s.io/v1, the only one I stumbled upon is the backend.

The change we need to make here is to move from:

          - path: ...
            backend:
              serviceName: ...
              servicePort: ... # this accepts both a number and a string

to:

          - path: ...
            backend:
              service:
                name: ...  # previous serviceName
                port:      # previous servicePort - the new values are mutually exclusive so the chart probably needs to be updated to accept either port name or port number
                  name: ...
                  number: ...

Add support for CSI volume mounts

Describe the solution you'd like
Many service providers are starting to integrate with the Secrets Store CSI driver to provide a mechanism for injecting external secrets into the Pods. For example, AWS has an official provider for using this mechanism to inject Secrets Manager Secrets into Pods as volumes.

We should update the k8s-service chart to allow one to inject such a volume into the Pod.
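For reference, this is the kind of volume and mount the chart would need to render for the Secrets Store CSI driver (plain Kubernetes; the SecretProviderClass name is an example):

# On the Pod spec:
volumes:
  - name: secrets-store
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: my-aws-secrets

# On the container:
volumeMounts:
  - name: secrets-store
    mountPath: /mnt/secrets-store
    readOnly: true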

Add support for disk-based emptyDir volumes

We would like to create and use emptyDir volumes that are backed by disk. #91 added the ability to create emptyDir volumes as scratchPaths; however, the implementation is hard-coded to use tmpfs via the medium: "Memory" option in the emptyDir spec.

What would be the best way to approach adding support for disk-based emptyDirs?
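For reference, a disk-backed emptyDir is simply one with the medium field omitted (plain Kubernetes):

volumes:
  - name: scratch
    emptyDir: {}          # backed by node disk
  - name: scratch-tmpfs
    emptyDir:
      medium: Memory      # tmpfs, the current hard-coded behavior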

Prometheus Operator CRD

It would be useful if the chart could support the optional creation of a ServiceMonitor resource to allow easy integration with the Prometheus Operator.

Would this be something you'd consider including if a PR was created?
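For reference, a minimal ServiceMonitor that the chart could optionally render looks like this (monitoring.coreos.com/v1; names, labels, and port are examples):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: myapp
  endpoints:
    - port: http
      interval: 30s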

k8s-service

Provide a module that allows users to package their apps as Docker containers and deploy them on Kubernetes following best practices set by Gruntwork. This should be THE way to deploy your apps into production-grade Kubernetes deployments.

Blog post series on running infra on Kubernetes based on experience with these modules

  • How to fire up K8S (using IaC)
  • How to define one service as code (presumably using helm?)
  • How to deploy multiple services
  • How to define reusable service “modules”
  • How to deploy multiple environments
  • How to set up CI / CD for K8S (immutable versioned artifacts promoted across envs)
  • How to handle cross-cutting concerns like logging, monitoring, anti virus
  • Security concerns: TLS, SSH, VPN, firewalls, network ACLs, multiple layers of defense, secrets mgmt, etc

Add support for Chart Hooks

Is there a way to define CronJobs in this setup? We also have some pre-install and pre-upgrade jobs that we would like to run; is there a mechanism for that?

Are these features in the pipeline, or are they already available and I'm just not able to find them?
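For reference, Helm chart hooks are ordinary resources carrying helm.sh/hook annotations; a pre-install/pre-upgrade Job would include something like (example annotations only):

metadata:
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded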

Allow setting securityContext.fsGroup to workaround IRSA

Describe the bug

IAM Roles for Service Accounts (IRSA) has a bug where non-root Docker containers are not able to read the projected Kubernetes token due to file permissions. To work around this, you need to be able to configure the fsGroup property. See aws/amazon-eks-pod-identity-webhook#8 for more information.

To Reproduce
Use the k8s-service Helm chart with IAM Roles for Service Accounts on Kubernetes version <1.19 and a Docker container that does not run as root.

Expected behavior
The container can assume the bound IAM Role.

Actual behavior
The container is not able to assume the bound IAM Role.
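For reference, the workaround is to set fsGroup on the Pod-level securityContext so the projected token is readable by the container's group (the group ID is an example):

spec:
  template:
    spec:
      securityContext:
        fsGroup: 1000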

[k8s-services chart] Add support for custom Kubernetes objects

A lot of the time you need to work with custom Kubernetes objects or other objects/extensions of the Kubernetes ecosystem that are not yet supported in this chart.

It would be good to allow injecting an entire Kubernetes object (or several) into the chart.

My use case is probably pretty standard. I use an external secret management store and I want to map the secrets found there into K8s Secrets. For that, I use External Secrets, which defines its own object called ExternalSecret. Currently, I need to apply this YAML file before installing the chart, which can be problematic.

Here's an example of how I would see it working:

# Contents of templates/customobjects.yaml
{{- /*
If the operator configures the customObjects input variable, then create custom resources based on the given
definitions. If a list of definitions is provided, separate them using the YAML separator so they can all be executed
from the same template file.
*/ -}}

{{- if .Values.customObjects.enabled -}}
{{- range $name, $value := .Values.customObjects.definitions }}
---
{{ $value }}
{{- end }}
{{- end }}
# Sample values.yml passed in
customObjects:
  enabled: true
  definitions:
    custom_configmap: |
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: example
      data:
        flag: "true"
    custom_secret: |
      apiVersion: v1
      kind: Secret
      metadata:
        name: example
      type: Opaque
      data:
        username: dXNlcm5hbWU=

I raised a draft PR to show how this works (I've tested it and it's all good). If you're happy with the approach I can write some tests and get it merged.

k8s-service examples

  • Deploying nginx
  • Packaging and deploying a sample node.js app with a custom port

Ingress TLS block does not match the spec

Ingress TLS block does not match the k8s Ingress spec and fails on deploy

Example values:

ingress:
  enabled: true
  path: /
  hosts:
    - test-app.com
  servicePort: 80
  tls:
    - secretName: test-app-tls
      hosts:
      - test-app.com

Incorrectly rendered template

# Source: k8s-service/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: release-name-test-app
  labels:
    gruntwork.io/app-name: test-app
    # These labels are required by helm. You can read more about required labels in the chart best
practices guide:
    # https://docs.helm.sh/chart_best_practices/#standard-labels
    app.kubernetes.io/name: test-app
    helm.sh/chart: k8s-service-0.0.1-replace
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Tiller
spec:
  tls:
    hosts:
    - test-app.com
    secretName: test-app-tls

  rules:
    - host: "test-app.com"
      http:
        paths:
          - path: /
            backend:
              serviceName: release-name-test-app
              servicePort: 80
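The tls block should instead render as a list to match the Ingress spec, e.g.:

spec:
  tls:
    - hosts:
        - test-app.com
      secretName: test-app-tls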

Allow setting securityContext for a container

We are using the k8s-service chart with great success, but we recently stumbled on an issue: one of the container images we need to deploy has to run in privileged mode.

This means that we need to be able to set:

securityContext:
  privileged: true

in the Deployment spec.

But currently the chart does not allow you to specify this or any other securityContext settings. Could this be added to the chart?

Thanks, Tim

Add support for configuring Persistent Volumes

We recently began a project in which we planned to configure an EFS instance as a persistent volume using the AWS EFS CSI Driver.

However, we quickly found that although we can Terraform the PersistentVolume and its PersistentVolumeClaim, we have no way to use them with the Gruntwork Helm chart.

It seems that volumes are supported in the existing Helm chart, but only for mounting ConfigMaps and Secrets. We would like to be able to use PersistentVolumes with our K8s pods via our values.yaml, since as far as we can tell, there's currently no way to do so with the Helm chart provided by this repo.
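For reference, this is the kind of Pod spec volume and mount we would need the chart to render (plain Kubernetes; the claim name is an example):

volumes:
  - name: efs-data
    persistentVolumeClaim:
      claimName: my-efs-claim

volumeMounts:
  - name: efs-data
    mountPath: /data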

Thanks,

k8s-daemonset

This should be the DaemonSet version of k8s-service.

Add support for hostAliases

Describe the solution you'd like
We are looking at adding entries to /etc/hosts of our pods. We'd like to take advantage of the hostAliases configuration we can add to a Deployment (Pod spec).

Describe alternatives you've considered
We have considered not using Kubernetes configuration and adding those entries to our Docker images directly, but that is much less flexible.

Additional context
Example of Kubernetes configuration

spec:
  [...]
  template:
    [...]
    spec:
      hostAliases:
        - ip: 10.1.2.3
          hostnames:
            - domain.name1.com
            - domain.name2.com

How it could look in Terraform:

   hostAliases = {
      "10.1.2.3" = ["domain1.com", "domain2.com"]
      "10.1.2.4" = ["domain3.com"]
      "10.1.2.5" = ["domain4.com"]
  }

Secrets management

Find a way to seamlessly integrate SealedSecrets.

Alternatively, consider gruntkms or sops so that secrets are decrypted as they are mounted into pods. We need some way to ensure the IAM role scope is inherited from the Pod config.

feature: Inject Secret Store CSI driver to deployment_spec

Related

Add support for CSI volume mounts - #118

Description

When we want to use a Secrets Store CSI driver, we need to include a 'csi' block in the 'secrets' section of values.yaml. This injects content into the Volumes and Environment Variables sections of the deployment.

I created a template test to check that the YAML is correctly built ✅

We now accept 'secret.as' to have the type 'csi'. This will add an environment variable to the container for each secret and also include the 'csi' block in the volume.

My issue (and the reason, in my humble opinion, why nobody picked up this ticket) arises when creating the integration test.

In our organization's use case, we are using AWS Secrets Manager, but the implementation of this injection should be generic.

For that reason, within the test, we would need to provision Minikube with the Consul, Vault, and Secrets Store CSI driver Helm charts before we can add a custom resource that includes the SecretProviderClass.

It feels like it's beyond the scope to add custom resources, but it's necessary in order to test that the Pod was created correctly and the secrets were mounted as env vars.

The integration test hasn't been added ❌

Waiting for confirmation that provisioning Consul + Vault on Minikube and adding the secret store CRD is the correct approach for the integration test. Also, is it necessary to provide an integration test for this case?

Support for custom Args on deployment spec

Describe the solution you'd like

This Helm chart already offers the possibility of overriding the ENTRYPOINT from the Dockerfile by using the containerCommand input variable. It is super nice, but there are situations in which I don't want to override the entrypoint; instead, I want to pass custom arguments to my container using the args field provided by Kubernetes (the CMD in the Dockerfile). For reference: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#use-environment-variables-to-define-arguments

Is there a reason why it is not implemented? It should be something simple to do and I'm happy to open a PR with that.
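For reference, this is the args field on the container spec that such an input would need to populate (plain Kubernetes; the values are examples):

containers:
  - name: myapp
    image: myapp:1.0.0
    args:
      - "--config=/etc/myapp/config.yaml"
      - "--verbose"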
