vault-csi-provider's Introduction

HashiCorp Vault Provider for Secrets Store CSI Driver

โš ๏ธ Please note: We take Vault's security and our users' trust very seriously. If you believe you have found a security issue in Vault CSI Provider, please responsibly disclose by contacting us at [email protected].

The HashiCorp Vault provider for the Secrets Store CSI driver allows you to fetch secrets stored in Vault and mount them into Kubernetes pods via the Secrets Store CSI driver interface.

Installation

Prerequisites

A Kubernetes cluster with the Secrets Store CSI driver installed.

Using helm

The recommended installation method is via helm 3:

helm repo add hashicorp https://helm.releases.hashicorp.com
# Just installs Vault CSI provider. Adjust `server.enabled` and `injector.enabled`
# if you also want helm to install Vault and the Vault Agent injector.
helm install vault hashicorp/vault \
  --set "server.enabled=false" \
  --set "injector.enabled=false" \
  --set "csi.enabled=true"
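To verify the provider pods are running after install (the label selector here is an assumption based on standard helm chart conventions; adjust for your release):

kubectl get pods -l app.kubernetes.io/name=vault-csi-provider
# expected output shape:
# NAME                       READY   STATUS    RESTARTS   AGE
# vault-csi-provider-xxxxx   1/1     Running   0          1m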

Using yaml

You can also install using the deployment config in the deployment folder:

kubectl apply -f deployment/vault-csi-provider.yaml

Usage

See the Learn tutorial and documentation pages for full details of deploying, configuring, and using the Vault CSI provider. The integration tests in test/bats/provider.bats also provide a good set of fully worked and tested examples to build on.
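As a minimal sketch of the moving parts (all names below are illustrative, not taken from the tutorial): you define a SecretProviderClass naming the Vault address, role, and secrets, then reference it from a CSI volume in the pod spec.

cat <<EOF | kubectl apply -f -
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: vault-example
spec:
  provider: vault
  parameters:
    vaultAddress: "https://vault.vault:8200"   # address of your Vault server (illustrative)
    roleName: "example-role"                   # Vault Kubernetes auth role (illustrative)
    objects: |
      - objectName: "db-password"
        secretPath: "secret/data/db-pass"
        secretKey: "password"
EOF

The pod then mounts it through the driver:

  volumes:
    - name: secrets-store-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "vault-example"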

Troubleshooting

To troubleshoot issues with Vault CSI provider, look at logs from the Vault CSI provider pod running on the same node as your application pod:

kubectl get pods -o wide
# find the Vault CSI provider pod running on the same node as your application pod

kubectl logs vault-csi-provider-7x44t

Pass -debug=true to the provider to get more detailed logs. When installing via helm, you can use --set "csi.debug=true".
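For example, if the provider was installed with the helm command above, debug logging can be enabled in place (a sketch):

helm upgrade vault hashicorp/vault \
  --set "server.enabled=false" \
  --set "injector.enabled=false" \
  --set "csi.enabled=true" \
  --set "csi.debug=true"
kubectl logs -f vault-csi-provider-7x44t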

Developing

The Makefile has targets to automate building and testing:

make build test

The project also uses some linting and formatting tools. To install the tools:

make bootstrap

You can then run the additional checks:

make fmt lint mod

To run a full set of integration tests on a local kind cluster, ensure you have the following additional dependencies installed: kind, docker, and bats (used by the test suite in test/bats).

You can then run:

make setup-kind e2e-image e2e-setup e2e-test

Finally, tidy up the resources created in the kind cluster with:

make e2e-teardown

vault-csi-provider's People

Contributors

99, a-riva, anubhavmishra, aramase, benashz, brocando, carnei-ro, dependabot[bot], developer-guy, hashicorp-copywrite[bot], hashicorp-tsccr[bot], hc-github-team-es-release-engineering, isugimpy, jasonodonnell, jeanneryan, malnick, manedurphy, mdeggies, modrake, pacoxu, ritazh, sarahethompson, sarahhenkens, swenson, tam7t, thyton, tomhjp, tvoran


vault-csi-provider's Issues

hashicorp/secrets-store-csi-driver-provider-vault:0.0.5 contains Mach-O binary

I tried to use the latest version of the vault provider using https://github.com/hashicorp/secrets-store-csi-driver-provider-vault/blob/master/deployment/provider-vault-installer.yaml. However, 0.0.5 fails with an error:

time="2020-05-06T04:26:15Z" level=error msg="error invoking provider, err: fork/exec /etc/kubernetes/secrets-store-csi-providers/vault/provider-vault: exec format error, output:  for pod: d6a36b10-7155-46cb-aba6-fb40b9f4cfd8, ns: default"

0.0.5 includes a Mach-O binary instead of an ELF binary.

$ docker run --rm -v /tmp:/tmp -e TARGET_DIR=/tmp hashicorp/secrets-store-csi-driver-provider-vault:0.0.5
install done, daemonset sleeping
...

$ file /tmp/vault/provider-vault
/tmp/vault/provider-vault: Mach-O 64-bit x86_64 executable

Since 0.0.4 was pushed with an ELF binary, 0.0.5 should use the same format.

Error with full path to secret

When I create secrets in the path /secret (full path to the secret is /secret/database1.conf) and mount it, all is OK.
But when I create secrets in the path /secret/database (full path to the secret is /secret/database/test-db.conf), I get this error:
Warning FailedMount 3m55s kubelet, k8-2 MountVolume.SetUp failed for volume "secrets-store" : rpc error: code = Unknown desc = error mounting secret time="2020-05-14T09:43:54Z" level=fatal msg="[error] : secrets-store csi driver failed to write /database/test-db.conf at /var/lib/kubelet/pods/14a6336a-4dc0-4d28-8cc1-23ffbca3aac0/volumes/kubernetes.io~csi/secrets-store/mount: open /var/lib/kubelet/pods/14a6336a-4dc0-4d28-8cc1-23ffbca3aac0/volumes/kubernetes.io~csi/secrets-store/mount/database/test-db.conf: no such file or directory"

In my Vault policy I have added:
path "secret/data/database/test-db.conf" {
  capabilities = ["read", "list"]
}

How can I use secrets with longer directory paths?

Support for paths with and without the forward-slash

The use of the forward-slash before the object path caused me some confusion; it feels different from most other places paths are used. I would like the provider to support paths both with and without a leading forward-slash.

...
    objects:  |
      array:
        - |
          objectPath: "/foo"                    # secret path in the Vault Key-Value store e.g. vault kv put secret/foo bar=hello
          objectName: "bar"
          objectVersion: ""
...
    objects:  |
      array:
        - |
          objectPath: "foo"                    # secret path in the Vault Key-Value store e.g. vault kv put secret/foo bar=hello
          objectName: "bar"
          objectVersion: ""

claim "iss" is invalid (GKE)

Hey folks,

I wanted to try out Vault's CSI plugin on a GKE cluster.
I deployed a brand new Vault instance and the driver, and enabled the CSI provider:

NAME                                    READY   STATUS              RESTARTS   AGE
csi-secrets-store-csi-driver-bxnw9      3/3     Running             0          44m
csi-secrets-store-csi-driver-h2jvm      3/3     Running             0          44m
csi-secrets-store-csi-driver-hrcbk      3/3     Running             0          44m
vault-0                                 1/1     Running             0          66m
vault-1                                 1/1     Running             0          66m
vault-2                                 1/1     Running             0          66m
vault-agent-injector-85df65c9b7-ddhw9   1/1     Running             0          66m
vault-csi-provider-69ttj                1/1     Running             0          66m
vault-csi-provider-8bsxq                1/1     Running             0          66m
vault-csi-provider-gqwlx                1/1     Running             0          66m

I followed the procedure and created a SecretProviderClass:

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: vault-database
spec:
  provider: vault
  parameters:
    vaultAddress: "https://vault.vault:8200"
    vaultSkipTLSVerify: "true"
    roleName: "database"
    objects: |
      - objectName: "db-password"
        secretPath: "secret/data/db-pass"
        secretKey: "password"

Then the pod creation fails with the error: claim "iss" is invalid.
I found issue #87, so I tried to change the issuer as suggested:

vault write auth/kubernetes/config token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt issuer=https://container.googleapis.com/v1/projects/<project>/zones/<zone>/clusters/<cluster_name>

Unfortunately I still got the same error. Could you please give me a hand? :)
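One way to see which issuer the cluster's tokens actually carry is to decode the JWT payload (a diagnostic sketch; base64url padding can make base64 -d complain, and jq is assumed to be available):

kubectl exec vault-0 -- cat /var/run/secrets/kubernetes.io/serviceaccount/token \
  | cut -d. -f2 | base64 -d 2>/dev/null | jq -r .iss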

Use manifest_staging dir for deployment manifest updates

This PR adds support for a health handler. As part of this, the livenessProbe and readinessProbe are added to the deployment manifests. However, the 0.0.7 image doesn't support the health handler yet.

If users try to install the vault provider by running

kubectl apply -f https://raw.githubusercontent.com/hashicorp/secrets-store-csi-driver-provider-vault/master/deployment/provider-vault-installer.yaml

the vault provider will fail to start with livenessprobe errors.

We use a manifest_staging dir in the Secrets Store CSI Driver repo to host manifest changes prior to release. The manifests are then promoted to the official deployment dir at release time. We should follow a similar approach for the vault provider so the master manifests are always valid.

K8s secrets are not refreshed from vault

Hello!

I was following this tutorial https://www.vaultproject.io/docs/platform/k8s/csi/examples as I wanted to create k8s Secrets from Vault secrets. I then used the secretObjects property and it works fine.
The issue is: when the secret values change in Vault, the new values are not refreshed in the k8s Secret. I tried restarting the pod and re-applying the SecretProviderClass; neither works.
On the other hand, this works properly with volumeMounts: /mnt/secrets-store/my-secret has the up-to-date value when the pod restarts.

I know this is still a beta version, so maybe this is known, but as I didn't find a roadmap or similar I thought it was worth creating an issue.

Thanks for the project, it's really nice!

Permission denied when trying to create a pod

What steps did you take and what happened:
Installed csi-secrets-store-csi-driver and vault-csi-provider using helm; they are running in the csi namespace.
Also set up Kubernetes authentication.
Created the SecretProviderClass and a pod to read the secret from Vault in the app namespace. The pod stays in "ContainerCreating" because it fails to create the volume and returns "permission denied".
(screenshot of the error omitted)

Do I need to add more permissions to the service account?

What did you expect to happen:
Pod running and able to read the secret.

Anything else you would like to add:
Kubernetes cluster in AWS EKS.
Vault cluster hosted on AWS EC2.

Which provider are you using:
HashiCorp Vault

Environment:

Secrets Store CSI Driver version: (use the image tag): secrets-store-csi-driver-0.2.0
Kubernetes version: (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.17-dispatcher", GitCommit:"a39a896b5018d0c800124a36757433c660fd0880", GitTreeState:"clean", BuildDate:"2021-01-28T22:06:27Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.7-eks-d88609", GitCommit:"d886092805d5cc3a47ed5cf0c43de38ce442dfcb", GitTreeState:"clean", BuildDate:"2021-07-31T00:29:12Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}

nodePublishSecretRef support for authentication as requesting pod

Is your feature request related to a problem? Please describe.
The current implementation requires a ClusterRole to request ServiceAccount tokens from the API server. The CSI driver already has an implementation that obtains the ServiceAccount token of nodePublishSecretRef through MountRequest. In some environments it may not be preferred, as a design, for providers to require cluster-wide privileges. It also arguably makes more sense for API server access to live within secrets-store-csi-driver, keeping the provider simple. We already understand that each method has pros and cons, per the original discussion.

Describe the solution you'd like
Add an option to read the secret from the MountRequest gRPC call and use the token as the Kubernetes auth method JWT.

Describe alternatives you've considered
Kubernetes v1.20+ supports the CSIServiceAccountToken feature gate, which requests a token through the kubelet, and this is considered the standard method for secrets-store-csi-driver. However, it only exists since v1.20, and here we are discussing a method for users of v1.19 or earlier.

Additional context
Original discussion thread #64 (comment)

Support for external vault

Hey Team,

I have a use case where I want to manage a single Vault server for multiple Kubernetes clusters running on different networks, each with its own CSI provider. I am wondering how the Vault CSI provider is going to authenticate to the external Vault server.

Do we need to provide a Vault auth token in the CRD resource itself, or configure the Vault CSI provider daemonset itself?
There are other parameters as well, as shown here:

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: vault-database
spec:
  provider: vault
  parameters:
    vaultAddress: "http://vault:8200"
    roleName: ""
    objects: |
      - objectName: ""
        secretPath: ""
        secretKey: ""

Unable to build cross-platform binary

The make build step doesn't export the variables specified in the Makefile as environment variables for the build command itself. I worked around it by redefining the build target:

build: setup
	GOOS=$(GOOS) GOARCH=$(GOARCH) CGO_ENABLED=0 go build -a -ldflags $(LDFLAGS) -o _output/secrets-store-csi-driver-provider-vault_$(GOOS)_$(GOARCH)_$(IMAGE_VERSION) .

I don't spend a lot of time building Go code in Makefiles, so I wanted to share this information, but I'm not sure about opening a PR.
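With the target redefined as above, a cross-platform build is then just a matter of overriding the variables on the command line, e.g.:

make build GOOS=linux GOARCH=amd64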

Vault token (and credentials) active renew of lease during lifetime of Pod to avoid TTL expiration?

The vault-agent-in-sidecar setup does active renewal of the token (and/or credentials) lease for the lifetime of the Pod.
cf https://www.vaultproject.io/docs/platform/k8s/injector#renewals-and-updating-secrets and https://www.vaultproject.io/docs/agent/template#renewals-and-updating-secrets

That's especially useful as it allows using workloads without any modification: they can read secrets as env vars at startup as usual and never be bothered to dynamically rotate them.

I could not find anything like that in this secrets CSI provider.
(The closest was #64, which talks about tokens bound to the pod lifetime, but as I understand it that's about the Kubernetes token, not the Vault token.)

Am I missing something? Is it something that is planned to be added in the future?

Thanks!

Support for HTTPS_PROXY with Proxy certificates

Let's assume you are in a restricted network which runs Kubernetes in a public cloud, and you have to talk to the on-prem Vault via an HTTPS interception proxy.

I haven't seen a call to "ProxyFromEnvironment", so I assume it's not yet part of the provider.

Is there any plan to add this feature to the provider?
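For reference, Go's ProxyFromEnvironment honors the standard proxy variables, so if the provider wired it up, configuration could be as simple as setting env vars on the daemonset (hypothetical; per this issue the provider does not read these today):

kubectl set env daemonset/vault-csi-provider \
  HTTPS_PROXY=http://proxy.internal.example:3128 \
  NO_PROXY=10.0.0.0/8,.cluster.local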

Support for Azure auth method for integration with AKS

We are using AKS, and currently evaluating CSI driver approach for integration with Hashicorp Vault and Azure Key Vault.

Does the Vault provider support the Azure auth method, so that we can use an Azure AD service principal / JWT token for authenticating to Vault?

multitenant support

If I understand correctly, this CSI driver uses the service account under which the csi-secrets-store controller is running to authenticate to Vault using the Kubernetes authentication method.
This means that all of the secrets will be retrieved with the same account regardless of which namespace the CSI secret volume is defined in.
This does not play well in a multi-tenant environment in which each namespace is assigned to a different tenant and each tenant needs to be strictly isolated.
Can you confirm that this is the case? If so, are there workarounds to this problem?
Ideally the controller should use a service account from the namespace in which the CSI volume is defined. The service account to use should be passed as a parameter, defaulting to the default service account if not passed.

Project status

Is this project still alive?

I would love to use it in production but the lack of activity of this repo makes me wonder what the roadmap looks like.

Remove hardcoded /secret path

You should be able to use secrets from kv engines that are mounted at different paths than /secret.

Code reference

Currently the documentation has a comment describing an example objectPath:

      objectPath: "/foo"                    # secret path in the Vault Key-Value store e.g. vault kv put secret/foo bar=hello

But as this diverges from the path you would normally use with the Vault API, it could be confusing.
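For context, mounting a KV engine somewhere other than /secret is a one-liner in Vault, which is why the hardcoded path is limiting:

vault secrets enable -path=team1 kv-v2
vault kv put team1/foo bar=hello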

/etc/kubernetes/secrets-store-csi-providers/vault/provider-vault: no such file or directory

Following the steps in the README leaves me in the following state:

Warning  FailedMount  1s (x5 over 10s)  kubelet, minikube  MountVolume.SetUp failed for volume "secrets-store-inline" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = Unknown desc = stat /etc/kubernetes/secrets-store-csi-providers/vault/provider-vault: no such file or directory

I verified that this file is missing by shelling into the secrets-store container of the csi-secrets-store-secrets-store-csi-driver-* pod:

$ kubectl exec -it csi-secrets-store-secrets-store-csi-driver-npdwf -c secrets-store -- /bin/sh
/ # cd /etc/kubernetes/secrets-store-csi-providers
/ # ls
/ #

The vault/provider-vault executable cannot be found.

To fix this I applied the provider-vault-installer.yaml found within this repo:

$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: csi-secrets-store-provider-vault
  name: csi-secrets-store-provider-vault
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: csi-secrets-store-provider-vault
  template:
    metadata:
      labels:
        app: csi-secrets-store-provider-vault
    spec:
      tolerations:
      containers:
        - name: provider-vault-installer
          image: hashicorp/secrets-store-csi-driver-provider-vault:0.0.4
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 50m
              memory: 100Mi
            limits:
              cpu: 50m
              memory: 100Mi
          env:
            # set TARGET_DIR env var and mount the same directory to the container
            - name: TARGET_DIR
              value: "/etc/kubernetes/secrets-store-csi-providers"
          volumeMounts:
            - mountPath: "/etc/kubernetes/secrets-store-csi-providers"
              name: providervol
      volumes:
        - name: providervol
          hostPath:
              path: "/etc/kubernetes/secrets-store-csi-providers"
      nodeSelector:
        beta.kubernetes.io/os: linux
EOF
daemonset.apps/csi-secrets-store-provider-vault created

When I return to the secrets-store container of the csi-secrets-store-secrets-store-csi-driver-* pod, the executable is present:

$ kubectl exec -it csi-secrets-store-secrets-store-csi-driver-npdwf -c secrets-store -- /bin/sh
/ # cd /etc/kubernetes/secrets-store-csi-providers
/ # ls
vault
/ # cd vault
/# ls
provider-vault

The previous secret will be overwritten by same ObjectName

Tested revision: 34a978c51db6b048873167c9e2003c145d2f5f23

If the same ObjectName is included in a SecretProviderClass, the previous secret will be overwritten.

    objects: |
      array:
        - |
          objectPath: "/secret/foo"
          objectName: "bar"
          objectVersion: ""
        - |
          objectPath: "/secret/foo1"
          objectName: "bar"
          objectVersion: ""

Source: https://github.com/kubernetes-sigs/secrets-store-csi-driver/blob/master/test/bats/tests/vault_v1alpha1_secretproviderclass.yaml

I confirmed it by running e2e-vault of secrets-store-csi-driver.

$ k exec -it nginx-secrets-store-inline -- ls -l /mnt/secrets-store
total 4
-rw-r--r-- 1 root root 6 Jun 17 05:56 bar

$ k exec -it nginx-secrets-store-inline -- cat /mnt/secrets-store/bar
hello1

Ideally, it should support consul-template to output in a flexible format, like vault.hashicorp.com/agent-inject-template of the Agent Sidecar Injector. But as a short-term fix, I suggest converting slashes in the ObjectPath to hyphens and combining them with the ObjectName to generate a unique file name, for instance secret-foo-bar. Alternatively, a directory could be created based on the ObjectPath (#39). I will make a PR if this is accepted.

Feature Request: use namespace from requester

I have scoped my Vault so the secrets path matches the namespace in Kubernetes.
That is, I have a KV version 2 secret like /sandbox/k8s_sres/my-app/db, where my namespace in Kubernetes is named "my-app". I forged a templated policy like this:

path "+/data/+/{{identity.entity.aliases.${ACCESSOR}.metadata.service_account_namespace}}*" {
  capabilities = ["read", "list"]
}
path "+/metadata/+/{{identity.entity.aliases.${ACCESSOR}.metadata.service_account_namespace}}*" {
  capabilities = ["read", "list"]
}

Then I created a k8s auth backend role bound to a specific service account name, but allowing all namespaces (I call it k8s_read_own_namespace_role).

But all login requests to Vault come from the namespace where csi-secrets-store is running, so I cannot use my templated policy to match the namespace metadata of the requester.
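For reference, the auth role described above would be created along these lines (a sketch; the service account and policy names are illustrative):

vault write auth/kubernetes/role/k8s_read_own_namespace_role \
    bound_service_account_names=my-app-sa \
    bound_service_account_namespaces='*' \
    policies=k8s_read_own_namespace \
    ttl=20m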

EKS: Kubernetes 1.21 404 error

Hi, I'm currently running EKS 1.21 with the Secrets Store CSI driver, and Vault deployed with CSI and injector enabled. I have updated the Kubernetes auth login to be compatible with the changes in 1.21 by setting the issuer, which on EKS is something like: https://oidc.eks.eu-west-1.amazonaws.com/id/REDACTED123456.

I have a vault cluster deployed in the vault namespace which is reachable at http://vault.vault:8200.

I have a KV store v2 with the following key:

ssv/operator; inside it I have two K/V pairs, PK=<data> and SK=<data>

I have a policy giving access to ssv/* with read and list.

I have a vault role ssv-node binding service account node in namespace ssv giving access to the ssv/ K/V store.

I have tested with an injector on a pod and it works fine.

I have the following secret provider class:

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: ssv-node
  namespace: ssv
spec:
  provider: vault
  parameters:
    vaultAddress: "http://vault.vault:8200"
    roleName: "ssv-node"
    objects: |
      - objectName: "operator-key"
        secretPath: "ssv/operator"
        secretKey: "SK"
  secretObjects:
  - data:
    - key: "SK"
      objectName: "operator-key"
    secretName: ssv-node
    type: Opaque

and then the following inside my pods:

    - csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: ssv-node

...
      - mountPath: secrets                                                                                                                                                                                      
        name: secrets-store-inline  

In the pod description I get the following error:

MountVolume.SetUp failed for volume "secrets-store-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod ssv/node-0, err: rpc error: code = Unknown desc = error making mount request: couldn't read secret "operator-key": Error making API request.

URL: GET http://vault.vault:8200/v1/ssv/operator
Code: 404. Errors:
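One thing worth checking, given the GET http://vault.vault:8200/v1/ssv/operator URL in the error (an observation, not a confirmed fix): for a KV v2 mount, the raw API path includes /data/, so a direct read would target ssv/data/operator. The CLI inserts this segment automatically:

vault secrets list -detailed | grep ssv   # confirm the engine is KV version 2
vault kv get ssv/operator                 # CLI read; /data/ is added for you
vault read ssv/data/operator              # raw API-style read; /data/ is required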

CSI secrets driver does not exist

Following the instructions to install the CSI secrets driver creates a CSIDriver named secrets-store.csi.k8s.io, not secrets-store.csi.k8s.com.

Yet the CSI driver mentioned throughout a lot of the examples is secrets-store.csi.k8s.com.

For example, the pod defined in the README:

kind: Pod
apiVersion: v1
metadata:
  name: nginx-secrets-store-inline
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - name: secrets-store-inline
      mountPath: "/mnt/secrets-store"
      readOnly: true
  volumes:
    - name: secrets-store-inline
      csi:
        driver: secrets-store.csi.k8s.com
        readOnly: true
        volumeAttributes:
          secretProviderClass: "vault-foo"
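You can check which driver name is actually registered in the cluster (a sketch; output columns will vary by driver version):

kubectl get csidriver
# NAME                       ATTACHREQUIRED   PODINFOONMOUNT   ...
# secrets-store.csi.k8s.io   false            true             ...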

Generic support for other secrets engines

Currently, this Vault CSI provider only supports static secrets from the KV secrets engine, and it assumes that secrets are versioned and that /data/ should be automatically added to paths. Because Vault has many other secrets engines (which provide dynamic credentials that are much better in terms of security), the Vault CSI provider should provide a generic interface to support them.

Basically, one would want something like a vault-agent that could update a Secret (without using hacks such as annotations and webhooks that inject sidecar containers).

Missing vault path for kubernetes authentication

Hi,

We're currently testing external Kubernetes access to our Vault. Six Kubernetes clusters should connect to one HashiCorp Vault. Each cluster has its own auth method enabled in Vault with the -path option, as below, each with its own config:

vault auth enable -path=kubernetes-test-euwest kubernetes
vault auth enable -path=kubernetes-test-eunorth kubernetes

Unfortunately there is no option to set this path in the vault-csi-provider, so all requests from the CSI provider to our Vault go to the default "kubernetes" path.

Error message from our pod:

Warning FailedMount 118s (x14 over 15m) kubelet MountVolume.SetUp failed for volume "testapp-secret-store" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod playground/testapp-5bdd4f5f6b-zgw5j, err: rpc error: code = Unknown desc = error making mount request: failed to login: Error making API request.

URL: POST https://testtest.testtest.testtest.org/v1/auth/kubernetes/login
Code: 500. Errors:

  • claim "iss" is invalid

Note, the URL should be:
https://testtest.testtest.testtest.org/v1/auth/kubernetes-test-euwest/login
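For what it's worth, logs elsewhere on this page show a vaultKubernetesMountPath mount attribute, so a per-class auth mount path might look like this (an unverified sketch using the names from this report):

cat <<EOF | kubectl apply -f -
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: testapp-secret-store
spec:
  provider: vault
  parameters:
    vaultAddress: "https://testtest.testtest.testtest.org"
    vaultKubernetesMountPath: "kubernetes-test-euwest"   # assumed parameter name
    roleName: "testapp"
    objects: |
      - objectName: "app-secret"
        secretPath: "secret/data/testapp"
        secretKey: "password"
EOF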

Switch provider-vault to use logrus instead of glog

With the extraction of providers and the move out of tree, each provider has been configured to write its log to a file, which can then be streamed using a sidecar provider-log container.

This can be difficult for users when debugging multiple providers. In an effort to make it easier, logs will be written to stdout and stderr by the providers and then surfaced by the driver. The following changes have already gone in to achieve this UX:

Please use the azure-provider PR as a reference to switch the logger to logrus and write to stdout/stderr for provider-vault.

Once we have a new provider-vault image including these changes, we can cut a new release of secrets-store-csi-driver with the latest providers.

cc @malnick @ritazh

not a general issue. Couldn't find any docs.

Apologies for raising this here (it may well be nothing wrong with the provider), but I couldn't find any documentation on how to set up TLS support properly for vault-csi-provider.

Inside a Kubernetes cluster I have Vault set up via the helm chart with the following config:


# Vault Helm Chart Value Overrides
global:
  enabled: true
  tlsDisable: false

csi:
  enabled: true
  image:
    repository: hashicorp/vault-csi-provider
    tag: latest
  volumes:
     - name: tls
       secret:
         secretName: vault-csi-tls

  volumeMounts:
    - name: tls
      mountPath: /vault/tls
      readOnly: true
      
  resources:
    requests:
      cpu: 50m
      memory: 128Mi
    limits:
      cpu: 50m
      memory: 128Mi

  daemonSet:
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 100m
        memory: 256Mi

injector:
  enabled: true
  # Use the Vault K8s Image https://github.com/hashicorp/vault-k8s/
  image:
    repository: "hashicorp/vault-k8s"
    tag: "latest"

  resources:
      requests:
        memory: 128Mi
        cpu: 125m
      limits:
        memory: 256Mi
        cpu: 250m

server:
  image:
    repository: "vault"
    tag: "1.7.0"
    # Overrides the default Image Pull Policy
    pullPolicy: IfNotPresent
  resources:
    requests:
      memory: 128Mi
      cpu: 125m
    limits:
      memory: 512Mi
      cpu: 250m

  # For HA configuration and because we need to manually init the vault,
  # we need to define custom readiness/liveness Probe settings
  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  livenessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 60

  # extraEnvironmentVars is a list of extra environment variables to set with the stateful set. These could be
  # used to include variables required for auto-unseal.
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/vault-server-tls/vault.ca
    GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/kms-creds/<project-name>-adeb5c46bc2b.json

  # extraVolumes is a list of extra volumes to mount. These will be exposed
  # to Vault in the path .
  extraVolumes:
    - type: secret
      name: vault-server-tls
    - type: secret
      name: 'kms-creds'

  # This configures the Vault Statefulset to create a PVC for audit logs.
  # See https://www.vaultproject.io/docs/audit/index.html to know more
  auditStorage:
    enabled: true

  standalone:
    enabled: false

  # Run Vault in "HA" mode.
  ha:
    enabled: true
    replicas: 5
    raft:
      enabled: true
      setNodeId: true

      config: |
        ui = true
        listener "tcp" {
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          tls_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
          tls_key_file = "/vault/userconfig/vault-server-tls/vault.key"
          tls_ca_cert_file = "/vault/userconfig/vault-server-tls/vault.ca"
        }

        seal "gcpckms" {
          project     = "test"
          region      = "global"
          key_ring    = "test"
          crypto_key  = "test-key"
        }

        storage "raft" {
          path = "/vault/data"
            retry_join {
            leader_api_addr = "https://vault-0.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/vault-server-tls/vault.ca"
            leader_client_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
            leader_client_key_file = "/vault/userconfig/vault-server-tls/vault.key"
          }
          retry_join {
            leader_api_addr = "https://vault-1.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/vault-server-tls/vault.ca"
            leader_client_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
            leader_client_key_file = "/vault/userconfig/vault-server-tls/vault.key"
          }
          retry_join {
            leader_api_addr = "https://vault-2.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/vault-server-tls/vault.ca"
            leader_client_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
            leader_client_key_file = "/vault/userconfig/vault-server-tls/vault.key"
          }
          retry_join {
              leader_api_addr = "https://vault-3.vault-internal:8200"
              leader_ca_cert_file = "/vault/userconfig/vault-server-tls/vault.ca"
              leader_client_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
              leader_client_key_file = "/vault/userconfig/vault-server-tls/vault.key"
          }
          retry_join {
              leader_api_addr = "https://vault-4.vault-internal:8200"
              leader_ca_cert_file = "/vault/userconfig/vault-server-tls/vault.ca"
              leader_client_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
              leader_client_key_file = "/vault/userconfig/vault-server-tls/vault.key"
          }
        }

        service_registration "kubernetes" {}

I set up TLS for both Vault and the CSI provider following this guide:

https://www.vaultproject.io/docs/platform/k8s/helm/examples/standalone-tls

In another namespace I have my SecretProviderClass manifest defined:

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  namespace: heimdall-dev
  name: heimdall-config
spec:
  provider: vault
  secretObjects:
  - secretName: heimdall-dev-secrets
    type: Opaque
    data:
    - objectName: DATABASE_URL # References the DATABASE_URL object under parameters below
      key: DATABASE_URL          # Key within k8s secret for this value
    - objectName: test
      key: test
  parameters:
    roleName: "heimdall-dev"
    vaultAddress: "https://vault.vault:8200"
    vaultCACertPath: "/vault/tls/vault.crt"
    objects: |
      - objectName: "DATABASE_URL"
        secretPath: "heimdall-dev/config/env"
        secretKey: "DATABASE_URL"
      - objectName: "test"
        secretPath: "heimdall-dev/config/env"
        secretKey: "test"

Volume mount in the Deployment:

                  volumeMounts:                     
                      - name: heimdall-config
                        mountPath: /mnt/secrets-store
                        readOnly: true

Volume

            volumes:
                - name: heimdall-config
                  csi:
                    driver: secrets-store.csi.k8s.io
                    readOnly: true
                    volumeAttributes:
                        providerName: vault
                        secretProviderClass: heimdall-config

When firing the above up I get this error from the deployment:

MountVolume.SetUp failed for volume "heimdall-config" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod heimdall-dev/heimdall-5c86564fcb-qq5px, err: rpc error: code = Unknown desc = error making mount request: failed to login: Post "https://vault.vault:8200/v1/auth/kubernetes/login": x509: certificate signed by unknown authority

and these from the vault:

2021-04-23T10:54:17.831Z [INFO] http: TLS handshake error from 10.20.2.24:60484: remote error: tls: bad certificate
2021-04-23T10:54:20.144Z [INFO] http: TLS handshake error from 10.20.2.24:60538: remote error: tls: bad certificate

My cluster is hosted inside GCP. I was thinking that the leaf certificates should be good because they are based on the Kubernetes intermediate CA, and we send the CA along too, which has a proper root, so everything should work like in the movies! However, it's a bad movie by now. We have cert-manager inside our cluster; is there a way to point that at the CSI provider, or at Vault for that matter?

Could you guys point me to a general guide/direction?

Thanks a lot!
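One diagnostic worth running from a pod on the cluster network (a sketch): check whether the CA bundle the provider is given actually verifies Vault's serving certificate. Note that vaultCACertPath above points at vault.crt; if that file is the leaf rather than the CA chain, verification would fail exactly like this.

openssl s_client -connect vault.vault:8200 \
  -CAfile /vault/tls/vault.crt </dev/null 2>/dev/null \
  | grep 'Verify return code'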

Mutating Web Hook or CSI Provider

From a HashiCorp or vault-csi-provider maintainer perspective, what is the lean for someone looking to automate Vault secret injection?

We are preparing to invest heavily in automating one or the other, and given that the operator and CSI patterns are both first-class citizens, I would love to hear what the community and HashiCorp have to say on where I should invest dev time to make this real.

Kubernetes gives a much lower bar to swap this out in the future; however, the list of opportunities is long and I want to bet on the contender that gets me farthest before revisiting.

Thanks in advance!

Support for secret encoding?

It seems that a common way of storing non-Unicode secrets in Vault is to base64-encode them. Is there currently a way to mount these through the CSI provider without adding a manual decoding process?
If not, the Azure driver supports an objectEncoding property (see Azure/secrets-store-csi-driver-provider-azure#236). Would you accept a PR that adds a similar feature?

Configure Vault connection details in Provider pod

Currently, Vault connection details are specified in the SecretProviderClass, e.g.:

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: vault-kv
spec:
  provider: vault
  parameters:
    roleName: "kv-role"
    vaultAddress: https://vault:8200
    vaultCACertPath: /mnt/tls/ca.crt
    vaultTLSClientCertPath: /mnt/tls/client.crt
    vaultTLSClientKeyPath: /mnt/tls/client.key
    objects: |
      - objectName: "secret-1"
        secretPath: "secret/data/kv1"
        secretKey: "bar1"

However, this awkwardly punctures the separation of concerns between the application namespace and the CSI provider's namespace. The application-namespaced SPC has to know about the contents of the provider pod's file system in order to configure which TLS certificates to use. The argument for vaultAddress being out of place is less clear-cut, but it would also arguably make more sense for it to be configured on the Vault provider side rather than on the application side.

If we move Vault connection details into the provider pod's configuration, in addition to a better separation of concerns, it will also allow us to safely deploy Vault Agent as a side car for the provider. That would then give us lots of nice features like caching and automatic lease renewal almost for free.

REQUEST: Use distroless image in final stage

FROM docker.mirror.hashicorp.services/alpine:3.13

I'd like to request that we move the final stage of the image to a distroless image, or even an image that has no shell.

It would also be great to do some image scanning for known CVE vulnerabilities, as it appears there are some in this image marked HIGH and CRITICAL.

Add --version flag to show current provider version

Add a new --version flag that shows the current provider version and the minimum supported driver version.

// providerVersion holds the current provider version
type providerVersion struct {
	// Version is the current provider version
	Version string `json:"version"`
	// BuildDate is the date the provider binary was built
	BuildDate string `json:"buildDate"`
	// MinDriverVersion is the minimum driver version the provider works with;
	// this can be used later for bidirectional compatibility checks between driver and provider
	MinDriverVersion string `json:"minDriverVersion"`
}
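Usage might then look like this (hypothetical invocation and output shape, inferred from the JSON tags above):

vault-csi-provider --version
# {"version":"...","buildDate":"...","minDriverVersion":"..."}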

Unable to run make test

The dependencies required to run the tests are not in the Makefile or defined in the README.

$ go get sigs.k8s.io/secrets-store-csi-driver

unable to get vault provider to work

2021-05-12T18:06:58.138Z [DEBUG] server.provider: setting Vault namespace: namespace=unix
2021-05-12T18:06:58.138Z [DEBUG] server.provider: performing vault login
2021-05-12T18:06:58.138Z [DEBUG] server.provider: creating service account token bound to pod: namespace=vault serviceAccountName=webapp-sa podName=webapp podUID=34db3d17-a08b-462a-bf64-ca6c6627b3b2
2021-05-12T18:06:58.140Z [INFO]  server: Finished unary gRPC call: grpc.method=/v1alpha1.CSIDriverProvider/Mount grpc.time=2.189259ms grpc.code=Unknown err="error making mount request: failed to create a service account token for requesting pod {webapp 34db3d17-a08b-462a-bf64-ca6c6627b3b2 vault webapp-sa}: the server could not find the requested resource"
2021-05-12T18:07:06.167Z [INFO]  server: Processing unary gRPC call: grpc.method=/v1alpha1.CSIDriverProvider/Mount
2021-05-12T18:07:06.167Z [DEBUG] server: Request contents: req="attributes:"{\"csi.storage.k8s.io/pod.name\":\"webapp\",\"csi.storage.k8s.io/pod.namespace\":\"vault\",\"csi.storage.k8s.io/pod.uid\":\"34db3d17-a08b-462a-bf64-ca6c6627b3b2\",\"csi.storage.k8s.io/serviceAccount.name\":\"webapp-sa\",\"objects\":\"- objectName: \\\"db-password\\\"\\n  secretPath: \\\"concourse/unixservices/test/db-pass\\\"\\n  secretKey: \\\"password\\\"\\n\",\"roleName\":\"database\",\"vaultAddress\":\"https://vault:8200\",\"vaultKubernetesMountPath\":\"test\",\"vaultNamespace\":\"unix\",\"vaultSkipTLSVerify\":\"true\"}" secrets:"null" target_path:"/var/lib/kubelet/pods/34db3d17-a08b-462a-bf64-ca6c6627b3b2/volumes/kubernetes.io~csi/secrets-store-inline/mount" permission:"420""

I am seeing the above with version 0.2.0. The issue happens during the JWT creation: https://github.com/hashicorp/vault-csi-provider/blob/master/internal/provider/provider.go#L45-L76.
It does have the proper ClusterRole with serviceaccounts/token create permissions. We are using an external Vault, and I can log in to Vault using the pod's service account:

vault write auth/test/login role=database jwt='xxx'
Key                                       Value
---                                       -----
token                                     s.X
token_accessor                            s.Y
token_duration                            20m
token_renewable                           true
token_policies                            ["default" "internal-app"]
identity_policies                         []
policies                                  ["default" "internal-app"]
token_meta_role                           database
token_meta_service_account_name           webapp-sa
token_meta_service_account_namespace      vault
token_meta_service_account_secret_name    webapp-sa-token-w2tqp
token_meta_service_account_uid            b1d5ff84-a3ff-4c89-90c8-93724674fe26

role contents
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: vault-csi-provider-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - serviceaccounts/token

Remove automatic addition of data to the path with using KV2 secrets engines

When defining the objectPath for the SecretProviderClass I expected that a KV2 secret path would need the entire path specified, /secret/data/database/config, versus what is supported, /secret/database/config. I saw that the provider detects the secret engine type and automatically adds data. This is inconsistent with the behavior of how the Vault Helm chart annotations work.
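For context on the /data/ insertion, this is standard KV v2 behavior: the CLI path and the raw API path differ, and per the above the provider mirrors the CLI form.

vault kv put secret/database/config username=db-user   # CLI path, no /data/
vault kv get secret/database/config                    # CLI read, no /data/
vault read secret/data/database/config                 # raw API path needs /data/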

Unable to run make test because golangci-lint is not specified in requirements

Make test fails because golangci-lint is NOT installed.

$ make test
rm -rf _output
Setup...
go env
GO111MODULE="on"
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/franklinwebber/Library/Caches/go-build"
GOENV="/Users/franklinwebber/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOINSECURE=""
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/franklinwebber/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/Cellar/go/1.14/libexec"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.14/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/Users/lynnfrank/hashicorp/secrets-store-csi-driver-provider-vault/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/ct/ywz_985n5cjd9fjjrz3p33kw0000gn/T/go-build851528317=/tmp/go-build -gno-record-gcc-switches -fno-common"
==> Running static validations and linters <==
golangci-lint run
make: golangci-lint: No such file or directory
make: *** [test-style] Error 1

It should be defined as a requirement or installed with the setup.

https://github.com/golangci/golangci-lint
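In the meantime, golangci-lint can be installed with its documented install script (pin whichever version you need):

curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh \
  | sh -s -- -b $(go env GOPATH)/bin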
