1password / connect-helm-charts


Official 1Password Helm Charts

Home Page: https://developer.1password.com

License: MIT License

1password 1password-connect kubernetes secrets-management service-accounts helm helm-charts k8s


connect-helm-charts's Issues

Please add support for (anti)affinity for all deployments, etc

Summary

Please include the option to define pod placement using affinity:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity

Use cases

We have specific node pools dedicated to different resource requirements, and we place our pods using node / pod affinity. This is much more flexible than using nodeSelector for our purposes.

Proposed solution

Change these (and any I missed):

{{- with .Values.operator.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
tolerations:
{{ toYaml .Values.operator.tolerations | indent 8 }}

{{- with .Values.connect.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}

to:

      {{- with .Values.<deployment>.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.<deployment>.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.<deployment>.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
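
For reference, a sketch of the values.yaml overrides this would enable; the affinity key shown here is the proposed addition, not an existing chart value, and the node-pool label is illustrative:

operator:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: node-pool
                operator: In
                values:
                  - secrets-workloads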

Is there a workaround to accomplish this today?

n/a

References & Prior Work

Operator attempts to list all namespaces, fails, after 1Pass item update

Your environment

Chart Version: 2.1.0

Helm Version:

Kubernetes Version: v1.20.4

What happened?

The operator logs endless permission errors while trying to list namespaces after a change to a 1Password vault item that had previously been created as a secret on the cluster.

E0808 23:25:10.319571       1 reflector.go:178] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224: Failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:1password-connect:onepassword-connect-operator" cannot list resource "namespaces" in API group "" at the cluster scope

What did you expect to happen?

Operator should have updated the secret successfully and restarted the deployment using it.

Steps to reproduce

  1. Deploy the v2.1.0 helm chart with both connect server and operator enabled, and the operator configured to watch the 1password-connect and default namespaces.
  2. Create the deployment resource below in the default namespace.
  3. Verify the k8s Secret associated with the example-secret 1Password item is created and available both in env vars and on disk in the deployment pod.
  4. Make a change to the value associated with some-key in the 1Password UI (e.g., some-value -> some-value-changed).
  5. Note the continual error messages logged by the operator (see above), that the k8s secret is not updated/recreated with the new value, and that the deployment is not rolled.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  labels:
    app.kubernetes.io/name: example-deployment
  annotations:
    operator.1password.io/item-path: 'vaults/sandbox-dev-k8s/items/example-secret'
    operator.1password.io/item-name: 'example-secret'
    operator.1password.io/auto-restart: 'true'
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: example-deployment
  template:
    metadata:
      labels:
        app.kubernetes.io/name: example-deployment
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
      containers:
        - name: example-deployment
          image: alpine:3
          command: ['/bin/sh', '-c', '--']
          args: ['while true; do sleep 30; done;']
          resources:
            limits:
              memory: 64Mi
            requests:
              memory: 64Mi
          # Example mounting contents of key/value pairs from the 1Password entry as env vars
          env:
            - name: SECRET_SOME_KEY
              valueFrom:
                secretKeyRef:
                  name: example-secret
                  key: some-key
            - name: SECRET_ANOTHER_KEY
              valueFrom:
                secretKeyRef:
                  name: example-secret
                  key: another-key
          # Example mounting contents of key/value pairs from the 1Password entry as files on disk
          volumeMounts:
            - name: secret
              mountPath: '/mnt/secrets'
              readOnly: true
      volumes:
        - name: secret
          secret:
            secretName: example-secret

Notes & Logs

I verified the helm release created a ClusterRole with list permission for the namespaces resource. I also verified the helm release created a RoleBinding in the watched namespaces.

Should the operator be attempting to list all namespaces when the helm release is configured with an explicit set of namespaces? Is listing all namespaces an allowed operation for RoleBindings (perhaps only for ClusterRoleBindings)?
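
For what it's worth, a RoleBinding only grants the referenced role's permissions within the RoleBinding's own namespace, so it cannot authorize a cluster-scoped namespace list even if it points at a ClusterRole; that requires a ClusterRoleBinding. A minimal, hedged workaround granting just that permission to the operator's service account (service account name and namespace taken from the error message; the ClusterRole/ClusterRoleBinding names here are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: onepassword-operator-namespace-list   # illustrative name
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: onepassword-operator-namespace-list   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: onepassword-operator-namespace-list
subjects:
  - kind: ServiceAccount
    name: onepassword-connect-operator        # from the error message
    namespace: 1password-connect              # from the error message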

incorrect placement in computed values

Your environment

Chart Version: 1.8.0

Helm Version: v3.9.0

Kubernetes Version: 1.21

What happened?

I am using the chart as dependency in my helm chart. My helm chart includes the following Chart.yaml

apiVersion: v2
name: 1password-connect-local
description: A Helm chart for deploying 1Password Connect and the 1Password Connect Kubernetes Operator
type: application
version: 0.1.0

appVersion: "1.5.4"

dependencies:
- name: connect
  version: 1.8.0
  repository: https://1password.github.io/connect-helm-charts

If i execute :

helm dependency build .
helm -n 1password install mychart   --create-namespace --dry-run --debug .

i get the following computed values:

connect:
  acceptanceTests:
    enabled: false
    fixtures: {}
  annotations: {}
  api:
    httpPort: 8080
    httpsPort: 8443
    imageRepository: 1password/connect-api
    name: connect-api
    resources: {}
  applicationName: onepassword-connect
  connect:
    annotations: {}
    api:
      httpPort: 8080
      httpsPort: 8443
      imageRepository: 1password/connect-api
      name: connect-api
      resources: {}
    applicationName: onepassword-connect
    credentialsKey: 1password-credentials.json
    credentialsName: op-credentials
    dataVolume:
      name: shared-data
      type: emptyDir
      values: {}
    imagePullPolicy: IfNotPresent
    labels: {}
    nodeSelector: {}
    podAnnotations: {}
    podLabels: {}

As you can see, the connect dictionary appears twice, and acceptanceTests ends up under one of the connect dictionaries instead of at the root.
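
Leaving aside whether the flattened copy of the connect keys is a chart bug or an artifact of how the overrides were passed, Helm nests a subchart's values under the dependency name in the parent's computed values, so connect.connect.* is the layout Helm produces for a subchart whose own values.yaml has a top-level connect: key. A sketch of how overrides would be written in the parent chart's values.yaml under that assumption:

connect:          # dependency name from Chart.yaml
  connect:        # the subchart's own top-level connect key
    applicationName: onepassword-connect
  operator:
    create: true
  acceptanceTests:
    enabled: false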

SQLITE ERROR when using PVC

Your environment

Chart Version: 1.10.0

Helm Version: 3.10.1

Kubernetes Version: v1.23.9

What happened?

I tried using the following to set up a PVC and attach it to the 1password-connect pod so that 1Password doesn't write to the node's disk (which, incidentally, was causing intermittent disk pressure warnings).

# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: onepassword-connect-shared-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
# values.yaml
connect:
  serviceType: ClusterIP
  dataVolume:
    type: persistentVolumeClaim
    values:
      claimName: onepassword-connect-shared-data
      readOnly: false

When the connect-sync container starts up I see the following:

{"log_message":"(I) disabling bus peer auto-discovery","timestamp":"2023-01-18T02:07:57.431543892Z","level":3}
{"log_message":"(W) did not initialize bus connection to peer localhost:11220. If the peer is currently booting, it may initialize the connection while starting. Details: failed to transport.CreateConnection: [transport-websocket] failed to Dial endpoint: dial tcp 127.0.0.1:11220: connect: connection refused. ","timestamp":"2023-01-18T02:07:57.435251551Z","level":2}
{"log_message":"(W) configured to use HTTP with no TLS","timestamp":"2023-01-18T02:07:57.435325947Z","level":2}
{"log_message":"(I) no existing database found, will initialize at /home/opuser/.op/data/1password.sqlite","timestamp":"2023-01-18T02:07:57.435664243Z","level":3}
Error: Server: (failed to OpenDefault), Wrapped: (failed to open db), unable to open database file: no such file or directory
Usage:
  connect-sync [flags]

Flags:
  -h, --help      help for connect-sync
  -v, --version   version for connect-sync

What did you expect to happen?

1password starts up without issue

Steps to reproduce

  1. Create a PVC
  2. Attach it to the pod by setting the config as described above

CRDs do not get upgraded

Hi 👋🏼 The way CRDs are handled in this repo, Helm does not upgrade them when the user upgrades the chart.

https://helm.sh/docs/chart_best_practices/custom_resource_definitions/#some-caveats-and-explanations

There is no support at this time for upgrading or deleting CRDs using Helm. This was an explicit decision after much community discussion due to the danger for unintentional data loss. Furthermore, there is currently no community consensus around how to handle CRDs and their lifecycle. As this evolves, Helm will add support for those use cases.

What I've seen most chart developers do is have an installCRDs option in the Helm values and template the CRDs out; take a look at the cert-manager and rook-ceph charts for examples of how they handle CRDs.

I would suggest an option in values.yaml, moving the crds folder into the templates folder, and wrapping each CRD in an if statement.

operator:
  create: false
  installCRDs: true
  autoRestart: false
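
A rough sketch of what that could look like once the CRD manifests are moved under templates/ (the file path is illustrative; the body would be the CRD currently shipped in the crds/ folder):

# templates/crds/onepassworditem-crd.yaml (illustrative path)
{{- if .Values.operator.installCRDs }}
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: onepassworditems.onepassword.com
# ... remainder of the existing CRD definition from crds/ ...
{{- end }}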

Secret injector deployment incorrectly marked as hook

Your environment

Chart Version: latest

Helm Version: latest

Kubernetes Version: 1.26

What happened?

The helm deploy is stuck, because the deployment is marked as a hook here: https://github.com/1Password/connect-helm-charts/blob/main/charts/secrets-injector/templates/deployment.yaml#L10

But since the hook never completes (because it's a deployment), the deploy never completes and helm gets stuck.

Removing the hook annotation fixes it.

What did you expect to happen?

The helm chart to work

Steps to reproduce

Try deploying the helm chart. I did it through Argo. It times out on waiting for the hook to complete.

connect operator deployment fails

I got this error:

Warning   FailedCreate       \
 replicaset/onepassword-connect-operator-95f9f56b7   \
Error creating: pods "onepassword-connect-operator-95f9f56b7-" \
is forbidden: error looking up service account default/onepassword-connect-operator: serviceaccount \
 "onepassword-connect-operator" not found

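A hedged workaround, assuming the release simply skipped creating the service account: either let the chart create it (the operator.serviceAccount.create flag is used in other installs in this thread) or create it manually in the namespace named in the error, keeping in mind that a manually created account still needs the appropriate role bindings:

helm upgrade connect 1password/connect \
  --reuse-values \
  --set operator.create=true \
  --set operator.serviceAccount.create=true

# or, manually (name and namespace taken from the error message):
kubectl create serviceaccount onepassword-connect-operator -n default
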
Proxy Support or custom env

Summary

When operating behind a corporate web proxy, I need to be able to set environment variables on my containers like https_proxy, http_proxy, no_proxy. These instruct most Linux applications that they should send requests via a specified proxy on their way out to the internet.

Use cases

When operating behind a corporate web proxy, all traffic is required to go via the web proxy for security reasons and no other route out of the network exists. I imagine there is a non-zero number of 1Password Connect users who want to make this scenario work; I'm quite surprised this is the first issue to mention it.

Proposed solution

The Helm template for the connect deployment doesn't allow any extra environment variables to be added to the containers; I needed to add the three mentioned above.
You could either allow custom environment variables in the values and append them to the bottom of the containers' env lists, or do something more structured for just my use case. I think allowing custom env variables is fine, but I realise this could have security implications that I am missing.

Is there a workaround to accomplish this today?

As a result of not being able to accomplish this using your Helm chart, I've had to use your containers in a generic application chart, which has a lot more management overhead for me.

References & Prior Work

I've never written a template like this myself, but I have applied my own env to a number of other Helm charts and they have handled it fine. I went digging and the first to pop up in my config was Datadog, which (although a messy, complicated chart) has this include statement in the template that gives an idea of how it could work:
https://github.com/DataDog/helm-charts/blob/e3133172449038caaca4c18342fecd2976be377a/charts/datadog/templates/cluster-agent-deployment.yaml#L297
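
For reference, a sketch of an append-only hook like the Datadog one; connect.extraEnv is a proposed value name (not an existing chart key), the proxy endpoint is illustrative, and the indentation would need to match the chart's deployment template:

# values.yaml (proposed)
connect:
  extraEnv:
    - name: HTTPS_PROXY
      value: http://proxy.internal:3128
    - name: NO_PROXY
      value: .svc,.cluster.local,10.0.0.0/8

# connect deployment template (proposed), appended after the chart-defined env entries
          env:
            # ... existing chart-defined variables ...
            {{- with .Values.connect.extraEnv }}
            {{- toYaml . | nindent 12 }}
            {{- end }}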

Setting a namespace to watch prevents the cluster role bind from being created.

What happened?

No ClusterRoleBinding was created, causing the operator pod to get stuck in a crash loop backoff.

What did you expect to happen?

The operator pod should work as intended.

Steps to reproduce

helm install connect 1password/connect \
    -n 1pass \
    --create-namespace \
    --set operator.create=true \
    --set operator.watchNamespace={default}

Notes & Logs

This seems to be the offending line:

https://github.com/1Password/connect-helm-charts/blob/main/charts/connect/templates/clusterrolebinding.yaml#L2

It doesn't create the binding when you specify a namespace to watch. If this is intended, then close this issue.

Error installing helm-charts - is invalid: spec.ports[0].nodePort: Invalid value: 31080: provided port is already allocated

Hey

Been testing out the new onepassword-connect k8s operator. Firstly, what a fantastic idea; our team is really keen to get this going.

Our architecture has several isolated namespaces inside a single gke cluster so our plan was to set up 1password connect and operator inside of each namespace.

Trying to install this into a single namespace (dev) initially results in an error.

helm repo add 1password https://raw.githubusercontent.com/1Password/connect-helm-charts/main

helm upgrade --install connect 1password/connect --namespace=dev --set-file connect.credentials=$SERVICES_BASE_PATH/gke-1password-dev-credentials.json --set operator.create=true --set operator.token.name=gke-1password-dev-access-token --set operator.token.value=REDACTED --set namespace=dev
version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}
"1password" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "soluto" chart repository
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "1password" chart repository
...Successfully got an update from the "stable" chart repository
...Unable to get an update from the "chartmuseum" chart repository (https://chartmuseum-gke.vizibl.co):
        Get "https://chartmuseum-gke.vizibl.co/index.yaml": context deadline exceeded
Update Complete. ⎈Happy Helming!⎈
Release "connect" does not exist. Installing it now.
Error: Service "onepassword-connect" is invalid: spec.ports[0].nodePort: Invalid value: 31080: provided port is already allocated


Hope someone can help us resolve this issue. Also, if there's any advice on how best to manage this architecture where there's an instance of 1password connect per namespace, that would be greatly appreciated.
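
One hedged workaround for the port collision itself: the fixed NodePort 31080 can only be allocated once per cluster, so per-namespace installs would need either a ClusterIP service (connect.serviceType appears as a chart value elsewhere in this thread) or distinct node ports, for example:

helm upgrade --install connect 1password/connect \
  --namespace=dev \
  --set connect.serviceType=ClusterIP \
  --set-file connect.credentials=$SERVICES_BASE_PATH/gke-1password-dev-credentials.json \
  --set operator.create=true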

Bump connect version to 1.5.7

Your environment

Chart Version:

Helm Version:

Kubernetes Version:

What happened?

What did you expect to happen?

Steps to reproduce

Notes & Logs

Wrongly generated serviceName when tls is enabled

Your environment

Chart Version: latest

Helm Version: 3.11.2

Kubernetes Version: 1.25.4

What happened?

The onepassword-connect-https service does not exist, because the service template generates a single onepassword-connect service exposing both HTTP and HTTPS when TLS is enabled.

What did you expect to happen?

The onepassword-connect service should be used by the ingress when TLS is enabled.

Steps to reproduce

  1. enable ingress and use the default values

Notes & Logs

Incorrect default operator.pollingInterval value

Your environment

Chart Version: 1.10.0

Helm Version: 3.11.3

Kubernetes Version: 1.23

What happened?

The POLLING_INTERVAL environment variable is set to 10 on the onepassword-connect-operator pod.

What did you expect to happen?

The polling interval should be set to 600 per the chart README.

Steps to reproduce

  1. Install operator helm chart, without specifying a value for operator.pollingInterval
  2. Observe the helm chart has a value of 10 for its pollingInterval
$ helm get values connect --all --namespace secrets-management --output json | jq .operator.pollingInterval
10
  3. Observe the running container has a value of 10 for its POLLING_INTERVAL environment variable.
$ kubectl get pod --namespace secrets-management --selector name=onepassword-connect --output json | jq '.items[].spec.containers[].env[] | select(.name == "POLLING_INTERVAL")'
{
  "name": "POLLING_INTERVAL",
  "value": "10"
}

Notes & Logs

Here is where the default chart value is getting set:

The default value should be 600 per the chart README here:

| operator.pollingInterval | integer | `600` | How often the 1Password Operator will poll for secrets updates. |

The operator README also states that the default value should be 600 here:
https://github.com/1Password/onepassword-operator/blob/fe930fef052d71516854568bf1042cb93af594a1/README.md?plain=1#L89

The default for the Helm chart and the operator itself need not necessarily be the same, but the fact that their READMEs agree suggests to me that this is a mistake in the default value rather than a mistake in the Helm chart README.
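
Until the default is corrected, explicitly pinning the documented value is a straightforward workaround:

helm upgrade connect 1password/connect \
  --namespace secrets-management \
  --reuse-values \
  --set operator.pollingInterval=600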

Use existing secrets

Summary

Add the option to use existing secrets instead of creating new ones.

Use cases

1Password connect and operator are installed via Flux (GitOps). This means the configuration is in Git and cannot contain any secrets. Initial secrets, like the ones used by 1Password, can be created by admins when the cluster is bootstrapped.

Proposed solution

Add a flag to connect and operator to use an existing secret. This flag can be checked in connect-credentials.yaml and operator-token.yaml.
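
A sketch of what the gate could look like in connect-credentials.yaml; connect.existingCredentialsSecret is a proposed value name, and the secret body shown is illustrative rather than the chart's exact template (credentialsName and credentialsKey are existing chart values):

{{- if not .Values.connect.existingCredentialsSecret }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.connect.credentialsName }}
type: Opaque
stringData:
  {{ .Values.connect.credentialsKey }}: {{ .Values.connect.credentials | quote }}
{{- end }}

The deployment would then reference either the existing secret name or the chart-created one.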

Chart does not work in Openshift Container Platform

Your environment

Chart Version: 1.7.1

Helm Version: 3.8.0

Kubernetes Version: OCP 4.10.10 (Kubernetes v1.23.5+9ce5071)

What happened?

A security error occurs when trying to create pods, because runAsUser uses a user ID that is not allowed.

What did you expect to happen?

Chart installed correctly

Steps to reproduce

I'm currently using ArgoCD to deploy this via their app-of-apps pattern, so this is the resource for the Application:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: one-password-connect
  namespace: openshift-gitops
  labels:
    app.kubernetes.io/instance: argocd
spec:
  project: default
  source:
    repoURL: https://1password.github.io/connect-helm-charts/
    targetRevision: 1.7.1
    path: apps/one-password-connect
    helm:
      releaseName: onepassword-connect
      values: |
        connect:
          credentials: |
            <redacted>
          tls:
            enabled: false
            secret: op-connect-tls
        operator:
          create: true
          token:
            value: <redacted>
      version: v3
    chart: connect
  destination:
    namespace: onepassword
    server: 'https://kubernetes.default.svc'
  syncPolicy:
    syncOptions:
    - CreateNamespace=true

Notes & Logs

Specific error from the logs:

pods "onepassword-connect-96776974f-" is forbidden: unable to validate
against any security context constraint: [provider "anyuid": Forbidden:
not usable by user or serviceaccount,
spec.containers[0].securityContext.runAsUser: Invalid value: 999: must
be in the ranges: [1000680000, 1000689999],
spec.containers[1].securityContext.runAsUser: Invalid value: 999: must
be in the ranges: [1000680000, 1000689999], provider "nonroot":
Forbidden: not usable by user or serviceaccount, provider
"hostmount-anyuid": Forbidden: not usable by user or serviceaccount,
provider "machine-api-termination-handler": Forbidden: not usable by
user or serviceaccount, provider "hostnetwork": Forbidden: not usable by
user or serviceaccount, provider "hostaccess": Forbidden: not usable by
user or serviceaccount, provider "node-exporter": Forbidden: not usable
by user or serviceaccount, provider "privileged": Forbidden: not usable
by user or serviceaccount]

Stackoverflow that led me to this line of thinking: https://stackoverflow.com/questions/69433216/helm-is-failing-in-openshift-due-to-security-context-error

Openshift Security Documentation that might be helpful: https://docs.openshift.com/container-platform/4.10/authentication/managing-security-context-constraints.html
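
A hedged workaround on the OpenShift side (requires cluster-admin) is to grant an SCC that permits the image's fixed UID to the release's service accounts; the service account names below are assumptions and should be checked against what the chart actually creates. A chart-side fix would likely mean making the runAsUser/securityContext configurable so OpenShift can assign a UID from the namespace's allowed range.

oc adm policy add-scc-to-user anyuid -z onepassword-connect -n onepassword
oc adm policy add-scc-to-user anyuid -z onepassword-connect-operator -n onepassword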

New release

Hi folks :)

Could we have a new tagged release in order to capture the changes involving .Release.Namespace?

Thank you!

Using with Helm

I am currently using standard Helm and a template like:

{{- $def := index .Values "default" -}}
apiVersion: v1
data:
  STRIPE_PUBLIC_KEY: {{ default $def.STRIPE_PUBLIC_KEY | b64enc }}
  STRIPE_SECRET_KEY: {{ default $def.STRIPE_SECRET_KEY | b64enc }}
  # .... etc ....
kind: Secret
metadata:
  name: api-env
type: Opaque

Then the deployment simply does:

envFrom:
- secretRef:
  name: api-env

Finally, in the values.yaml we specify the secrets like:

default:
  STRIPE_PUBLIC_KEY: foobar
  STRIPE_SECRET_KEY: secret-foobar

How would migrating to 1Password-operator in our Kubernetes cluster work?
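
A hedged sketch of the migration: instead of templating the api-env Secret from values, a OnePasswordItem resource asks the operator to create a Secret of the same name from a 1Password item, with the item's fields becoming the secret's keys (so the envFrom/secretRef above can stay unchanged). The vault and item names are placeholders:

apiVersion: onepassword.com/v1
kind: OnePasswordItem
metadata:
  name: api-env
spec:
  itemPath: "vaults/<your-vault>/items/<your-item>"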

Cannot deploy operator to multiple namespaces

Your environment

Operator Version: 1.0.0

Connect Server Version: 1.0.0

Kubernetes Version:1.18.16-gke.502

What happened?

helm install connect 1password/connect --namespace=dev \
            --set-file connect.credentials=dev-credentials.json \
            --set operator.serviceAccount.create=true \
            --set operator.clusterRole.create=true \
            --set operator.roleBinding.create=true \
            --set operator.create=true,operator.token.name=gke-1password-dev-access-token \
            --set namespace=dev
            
    kubectl create secret generic gke-1password-dev-access-token --from-literal=token=$OP_ACCESS_TOKEN \
            --namespace=dev
            
helm install connect 1password/connect --namespace=qa \
            --skip-crds \
            --set-file connect.credentials=qa-credentials.json \
            --set operator.serviceAccount.create=true \
            --set operator.clusterRole.create=true \
            --set operator.roleBinding.create=true \
            --set operator.create=true,operator.token.name=gke-1password-qa-access-token \
            --set namespace=qa
            
 Error: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "onepassworditems.onepassword.com" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "qa": current value is "dev"         

What did you expect to happen?

The second install command with skip-crds would not create the CRD again

Steps to reproduce

  1. Create two environments for Connect on 1Password
  2. Generate a token and credentials for each
  3. Create two namespaces
  4. Deploy Connect with the operator to the first namespace
  5. Attempt to create the operator in a second namespace

Notes & Logs

Not entirely sure if this is me missing something obvious or whether it's the checks on those annotation labels causing the issue. Essentially we have 4 isolated namespaces in our cluster. We want to have 4 environments, each with its own access token and credentials, which has access to a single vault (one per namespace).
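
On the CRD error specifically: CRDs are cluster-scoped, and Helm refuses to adopt a resource whose meta.helm.sh/release-* annotations point at another release, so two per-namespace releases cannot both own onepassworditems.onepassword.com. A hedged stopgap is to manage the CRD outside Helm entirely, or to hand ownership to the second release (at the cost of the first release hitting the same error on its next upgrade):

kubectl annotate crd onepassworditems.onepassword.com \
  meta.helm.sh/release-name=connect \
  meta.helm.sh/release-namespace=qa \
  --overwrite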

Passing credentials via credentials_base64 breaks the installation

Your environment

Chart Version: 1.7.1

Helm Version: 3.8.1

Kubernetes Version: v1.23.4

What happened?

I am using the op CLI tool to get the credentials.json from a 1Password vault and register it to an Ansible variable. Then I'm base64 encoding the JSON and passing it to the chart's connect.credentials_base64 value.

Because of the stringData: type in the secret, the base64 value gets encoded again.

What did you expect to happen?

If a _base64 variable is provided, it should be possible to pass the value without using --set-file flags. This would make it compatible with scenarios where you don't want to store the credentials JSON in an actual file or retrieve it from a script.

Steps to reproduce

  1. Pass a base64 encoded credentials.json string to the connect.credentials_base64 value
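
For context, a minimal illustration of the stringData/data distinction behind the double encoding (secret name and key taken from the chart defaults quoted earlier in this thread; the JSON value is a placeholder):

apiVersion: v1
kind: Secret
metadata:
  name: op-credentials
type: Opaque
stringData:
  1password-credentials.json: '{"verifier": "..."}'   # plaintext here; the API server encodes it
# data:
#   1password-credentials.json: <base64 string>       # an already-encoded value belongs here instead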

Please release a new helm chart version, 1.4.0 is currently 3 months stale

Your environment

Chart Version: 1.4.0
Helm Version: 3.6.3
Kubernetes Version: 1.2.0

What happened?

The current release of the helm chart contains several issues that have since been resolved and merged to main.

What did you expect to happen?

Regular releases of the helm chart occur to eliminate known, fixed issues and provide new features.

Steps to reproduce

  1. helm repo add onepassword-connect https://1password.github.io/connect-helm-charts
  2. helm search repo onepassword-connect
  3. Observe that the last release is version 1.4.0
  4. Observe that the tag for 1.4.0 in github shows the latest commit was merged in June 2021: https://github.com/1Password/connect-helm-charts/commits/connect-1.4.0

Notes & Logs

This is a particular issue for me as main contains changes that allow the operator to work across namespaces.

add templated Service annotations

Summary

add templated metadata.annotations to the Service

Use cases

useful for anyone that uses Service annotations to control things like load balancer controllers, etc.

Proposed solution

like what has been done for metadata.labels in the Service template, but for annotations

Is there a workaround to accomplish this today?

manually add to the manifest, post deploy

More frequently patched images for onepassword-connect

Would it be possible for the connect-api images to be patched more frequently, or a distroless version to be made available, so that supply-chain CVEs are kept to a minimum? Currently trivy is detecting 174 CVEs on the latest version.

The latest tag was updated 3 months ago.

latest helm chart contains operator 1.1.0

The onepassword-operator in the helm chart is falling behind.

helm repo add 1password https://1password.github.io/connect-helm-charts        
helm repo update
helm template connect 1password/connect --set operator.create=true | grep "image: 1password"

result
image: 1password/onepassword-operator:1.1.0

expected
v1.4.1 or 1.5

Implement liveness/readiness probes

Summary

The current chart doesn't have liveness/readiness probes. This means Kubernetes will only restart the pod if one of the PID-1 processes in the container crashes or exits.

Use cases

To adhere to best current practices, Kubernetes should detect and act when the service is misbehaving.

Proposed solution

  • Implement liveness/readiness probes, maybe based on the /health endpoint?
  • Maybe also allow a restart if the service hasn't synced to the main 1Password.com service for 24 hours?
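
A sketch of container probes against the endpoints visible in the logs later in this thread (/heartbeat on connect-api, /health on connect-sync); whether each endpoint reflects liveness for its container is an assumption that would need confirming:

# connect-api container (port 8080)
          livenessProbe:
            httpGet:
              path: /heartbeat
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 30
# connect-sync container (port 8081)
          livenessProbe:
            httpGet:
              path: /health
              port: 8081
            initialDelaySeconds: 15
            periodSeconds: 30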

Is there a workaround to accomplish this today?

No, unless the pod would implement that internally? Unable to verify due to lack of documentation

References & Prior Work

Probes are BCP and have been implemented by most other commonly used charts in the K8S ecosystem :)

Instructions are misleading

I can raise a PR to update the README but wanted to discuss this first with you folks.

If I have a 1password-credentials.json:

{
  "verifier": {
    "salt": "asd",
    "localHash": "asd"
  },
  "encCredentials": {
    "kid": "asd",
    "enc": "asd",
    "cty": "asd",
    "iv": "asd",
    "data": "asd"
  },
  "version": "2",
  "deviceUuid": "asd",
  "uniqueKey": {
    "alg": "asd",
    "ext": true,
    "k": "asd",
    "key_ops": [
      "encrypt",
      "decrypt"
    ],
    "kty": "oct",
    "kid": "asd"
  }
}

and run this:

helm install connect 1password/connect --set-file connect.credentials=<path/to/1password-credentials.json>

This is going to fail because we are passing a literal json file to the helm chart.

When what we really want is to do something like:

cat <path/to/1password-credentials.json> | base64 > <path/to/op-session>
helm install connect 1password/connect --set-file connect.credentials=<path/to/op-session>

FR: PodDisruptionBudget and Replicas

Summary

The current Helm chart only allows a single instance of Connect and the operator to be deployed.
This poses a higher availability risk.

Proposed solution

Add a replicas argument to values.yaml with a default value of 1.
Also add a PodDisruptionBudget with a max of 1.

Is there a workaround to accomplish this today?

No.
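
A sketch of the proposed PodDisruptionBudget; the selector labels are assumptions and would need to match the labels the chart puts on the Connect deployment:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: onepassword-connect
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: onepassword-connect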

consider using github pages instead of tgz in the repo

What

Please consider using github pages to host the index.yaml and tgz files that comprise a helm release, instead of committing binary blobs into the git repo

Why

Git is not well suited for hosting binary artifacts, and committing a tgz for each release will cause the repo to grow without bound (short of using some repo surgery commands to expunge old artifacts)

How

Take for example the kubernetes ingress-nginx repo:
https://github.com/kubernetes/ingress-nginx/blob/helm-chart-3.29.0/.github/workflows/helm.yaml although I'm sure there are others, that's just the most famous one that sprang to mind which uses this pattern

One can see the repo patterns in their install instructions

operator connect fails

When I run this command for an already existing service

helm upgrade --set operator.token=<token> connect 1password/connect

I get this error:

coalesce.go:200: warning: cannot overwrite table with non table for token (map[key:token name:onepassword-token value:<nil>])
Error: UPGRADE FAILED: template: connect/templates/operator-token.yaml:1:45: executing "connect/templates/operator-token.yaml" at <.Values.operator.token.value>: can't evaluate field value in type interface {}
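
The coalesce warning suggests operator.token (a map with name/value/key entries, per the warning) is being overwritten with a plain string, which then breaks the template's lookup of .Values.operator.token.value. Setting the nested key instead should avoid it:

helm upgrade connect 1password/connect --set operator.token.value=<token>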

Changing the Secret Type does not work with Operator v1.5.0

Your environment

Chart Version: 1.8.0

Helm Version: 3

Kubernetes Version: v1.22.6

What happened?

I upgraded the 1Password Helm chart revision since this was recently released: #103

Docker images: 1password/connect-api:1.5.4 1password/connect-sync:1.5.4 1password/onepassword-operator:1.5.0

I managed to upgrade the 1Password Helm chart from revision 1.7.1 to 1.8.0, so I am now on Connect Server 1.5.4 and Operator 1.5.0, which were recently released. I am using an ArgoCD Application template + Helm source like below.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: onepassword
  namespace: argocd
spec:
  destination:
    namespace: onepassword
    server: <SERVER>
  source:
    repoURL: 'https://1password.github.io/connect-helm-charts/'
    targetRevision: 1.8.0
    chart: connect
    helm:
      releaseName: connect
      values: |
        operator:
          create: true
  project: operations
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

However, I am getting an error from the operator when trying to take advantage of the new type feature by setting type: kubernetes.io/dockerconfigjson to deploy a GHCR token, as in the example below. I already tried redeploying 1Password and the same issue occurs.

apiVersion: onepassword.com/v1
kind: OnePasswordItem
type: kubernetes.io/dockerconfigjson
metadata:
  name: test-ghcr
  namespace: test
  annotations:
    operator.1password.io/auto-restart: "true"
spec:
  itemPath: "vaults/Test/items/test-ghcr"

Error when trying to create a OnePasswordItem with a new Secret type:

"error":"Failed to retrieve item: need at least version 1.3.0 of Connect for this function, detected version 1.2.0 (or earlier). Please update your Connect server"

What did you expect to happen?

1Password Operator creates the secret containing the GHCR token. As mentioned above, I upgraded the 1Password Connect Server to 1.5.4 and the Operator to 1.5.0 using the latest chart revision 1.8.0.

Steps to reproduce

  1. Deployed 1Password helm chart revision 1.7.1
  2. Enabled Connect Server and Operator with credentials and token
  3. Upgrade revision to 1.8.0
  4. Create OnePasswordItem with type: kubernetes.io/dockerconfigjson
  5. Error and does not create GHCR token secret

Notes & Logs

loadBalancerIP for service

Summary

There is no loadBalancerIP field for the service.

Use cases

We want to use the LoadBalancer service type.

Proposed solution

add loadBalancerIP field
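
A sketch of the proposed value and the corresponding Service template fragment; connect.loadBalancerIP is the proposed key (the IP is a placeholder), while connect.serviceType already appears as a chart value elsewhere in this thread:

# values.yaml (proposed)
connect:
  serviceType: LoadBalancer
  loadBalancerIP: 203.0.113.10

# service template (proposed)
spec:
  type: {{ .Values.connect.serviceType }}
  {{- with .Values.connect.loadBalancerIP }}
  loadBalancerIP: {{ . }}
  {{- end }}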

Is there a workaround to accomplish this today?

References & Prior Work

Helm chart does not allow deploying the Kubernetes operator without also deploying a 1Password Connect server

Summary

This Helm chart deploys 1Password Connect and optionally the Kubernetes operator plugin for 1Password Connect. There is no option to deploy the operator using this Helm chart without also deploying 1Password Connect.

Use cases

Deploying the operator without deploying 1Password Connect is useful when deploying multiple operators connected to a single 1Password Connect server. This is not currently possible using Helm.

Proposed solution

Add a new chart value and some corresponding logic to disable the 1Password Connect deployment, defaulting to the current behaviour to deploy 1Password Connect only.

Is there a workaround to accomplish this today?

It is currently possible to deploy the operator on its own (see https://github.com/1Password/onepassword-operator), but not by using Helm. The only semi-plausible workaround I can think of is to deploy Connect with 0 replicas.

References & Prior Work

How we optionally deploy the operator:

passing connect.credentials not possible in terraform & helm

Your environment

Chart Version: 1.4.0

Helm Version: 3.x

Kubernetes Version: latest

What happened?

Terraform supports applying helm charts: https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release
However, when setting connect.credentials like:

  set_sensitive {
    name  = "connect.credentials"
    type  = "string"
    value = " ${file("${path.root}/path/to/1password-credentials.json")} "
  }

the apply fails with Error: failed parsing key "connect.credentials" with value ...

This is due to this bug: hashicorp/terraform-provider-helm#618, which is apparently also reproducible in helm itself: hashicorp/terraform-provider-helm#618 (comment)

What did you expect to happen?

The above should not fail. Could the chart, for example, take only the file path and manage the JSON internally instead? Or would there be another possible workaround?

Steps to reproduce

  1. try to deploy the chart via terraform
  2. you get above error

E: found a workaround:

  set_sensitive {
    name  = "connect.credentials"
    type  = "string"
    value = " ${replace(file("${path.root}/path/to/1password-credentials.json"), ",", "\\,")}"
  }

Pod fails to create non-existent mountPath

Your environment

Amazon EKS

Chart Version: secrets-injector-1.0.0

Helm Version: v3.11.3

Kubernetes Version: 1.25

What happened?

If a mountPath specified in a Pod does not exist, it is not created.

What did you expect to happen?

The non-existent mountPath should be created

Steps to reproduce

This definition does not create /data and returns /bin/sh: can't create /data/file.txt: nonexistent directory:

apiVersion: v1
kind: Pod
metadata:
  name: testing
  namespace: default
  annotations:
    operator.1password.io/inject: "testing"
spec:
  containers:
  - name: testing
    image: busybox
    command: ["/bin/sh"]
    args: ["-c","while true; do echo $(date) >> /data/file.txt; sleep 5; done"]
    env:
    - name: OP_CONNECT_HOST
      value: "http://onepassword-connect:8080"
    envFrom:
    - secretRef:
        name: op-connect-token
    volumeMounts:
    - mountPath: /data/
      name: storage
  volumes:
  - name: storage
    emptyDir: {}

This definition that does not invoke the Injector works as expected:

apiVersion: v1
kind: Pod
metadata:
  name: testing2
  namespace: default
spec:
  containers:
  - name: testing2
    image: busybox
    command: ["/bin/sh"]
    args: ["-c","while true; do echo $(date) >> /data/file.txt; sleep 5; done"]
    env:
    - name: OP_CONNECT_HOST
      value: "http://onepassword-connect:8080"
    envFrom:
    - secretRef:
        name: op-connect-token
    volumeMounts:
    - mountPath: /data/
      name: storage
  volumes:
  - name: storage
    emptyDir: {}

Notes & Logs

Error 500 when trying to get a secret

Your environment

Chart Version: connect-1.9.0 | APP 1.5.7

Helm Version: v3.10.2

Kubernetes Version: v1.25.4-rc4+k3s1

What happened?

Followed the blog https://blog.bennycornelissen.nl/post/onepassword-on-kubernetes/
however, I don't get the k8s secret

What did you expect to happen?

get a Kubernetes secret for the 1password secret

Steps to reproduce

  1. install via helm
# Put our 1Password Connect access token in a variable
OP_TOKEN="< paste your token here >"

# Use Helm to install and configure everything. It will use the 
# credential file and the ${OP_TOKEN} variable to use the integration
# we set up earlier.
helm upgrade --install connect 1password/connect --set-file connect.credentials=1password-credentials.json --set operator.create=true --set operator.token.value="${OP_TOKEN}" --set "operator.watchNamespace={opconnect,default}" --namespace opconnect

and add the following yaml manifest after the 1password pods are started

apiVersion: onepassword.com/v1
kind: OnePasswordItem
metadata:
  name: password
spec:
  itemPath: "vaults/systems/items/dummy"

Notes & Logs

kubectl logs onepassworditem.onepassword.com/password
error: no kind "OnePasswordItem" is registered for version "onepassword.com/v1" in scheme "pkg/scheme/scheme.go:28"

The following has a 408 error:

kubectl get onepassworditem.onepassword.com/password -o yaml
apiVersion: onepassword.com/v1
kind: OnePasswordItem
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"onepassword.com/v1","kind":"OnePasswordItem","metadata":{"annotations":{},"name":"password","namespace":"default"},"spec":{"itemPath":"vaults/systems/items/dummy"}}
  creationTimestamp: "2022-11-22T02:35:26Z"
  finalizers:
    - onepassword.com/finalizer.secret
  generation: 1
  name: password
  namespace: default
  resourceVersion: "185126"
  uid: bf55e594-bb43-4c68-95ad-dd7a7ca06528
spec:
  itemPath: vaults/systems/items/dummy
status:
  conditions:
    - lastTransitionTime: "2022-11-22T02:35:27Z"
      message: 'Failed to retrieve item: status 408: sync did not complete within timeout,
        please retry the request'
      status: "False"
      type: Ready

Logs of the operator pod:
kubectl -n opconnect logs onepassword-connect-operator-7467685677-4j45g

{"level":"info","ts":1669084536.1768796,"logger":"controller_onepassworditem","msg":"Reconciling OnePasswordItem","Request.Namespace":"default","Request.Name":"password"}
{"level":"error","ts":1669084544.651247,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"onepassworditem-controller","request":"default/password","error":"Failed to retrieve item: status 500: failed to initiate, review service logs for details","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/workspace/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90"}
{"level":"info","ts":1669084544.7315106,"logger":"controller_onepassworditem","msg":"Reconciling OnePasswordItem","Request.Namespace":"default","Request.Name":"password"}

Add support for existing credentials

Summary

It would be great if we could specify e.g. operator.existingTokenSecret and connect.existingCredentialsSecret when installing the Helm chart

Use cases

I use this with ArgoCD and don't want the credentials/token in my Git repository.

Proposed solution

Specifying the name of a Secret with the existing token should negate the need to specify token/credentials on every install.

Make Connect sidecar of Operator

Summary

Making Connect a sidecar of the Operator would limit Connect's scope to inside the pod, which in turn would mean that no ports would need to be opened outside the pod. This would limit the risk of misconfiguration and of accidentally exposing Connect too widely.

Use cases

When you only need Connect to serve the Operator. For example, I only need Connect to serve the Operator, so I don't need the endpoints to be exposed to anything else. I would sleep better at night if it were abstracted away.

Proposed solution

Implement a possibility to make Connect sidecar of Operator

Is there a workaround to accomplish this today?

Not that I know.

E: Actually, this is exactly the reason why I'd rather keep Connect as a sidecar of the Operator: #65 It is too easy to expose the endpoints to the external world.

tolerations in helm-templates

Summary

I have nodes with some taints, therefore I want to specify tolerations for the deployments. In this Helm chart, I can't set those tolerations.

Use cases

I have a separate worker node. The normal workload should not be deployed there, so I tainted the node. But I want the Connect deployment on this node.

Proposed solution

Add tolerations to the Helm chart templates. I'll prepare a PR for this.
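
For reference, a sketch of the values this would enable; the tolerations key under connect is the proposed addition (the operator deployment already renders .Values.operator.tolerations, as quoted earlier in this thread), and the taint key/value are illustrative:

connect:
  tolerations:
    - key: dedicated
      operator: Equal
      value: secrets
      effect: NoSchedule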

1P Connect fails to talk to its cloud: unable to get credentials and initialize API

Your environment

Chart Version:

Helm Version: 1.8.1

Kubernetes Version: 1.24

What happened?

1Password Connect cannot talk to its cloud service. The logs say:

{"log_message":"(I) ### syncer credentials bootstrap ### ","timestamp":"2022-09-20T10:09:36.36339545Z","level":3}
{"log_message":"(E) Server: (unable to get credentials and initialize API, retrying in 30s), Wrapped: (failed to FindCredentialsUniqueKey), Wrapped: (failed to loadCredentialsFile), Wrapped: (LoadLocalAuthV2 failed to credentialsDataFromBase64), illegal base64 data at input byte 0","timestamp":"2022-09-20T10:09:36.363794629Z","level":1}

What did you expect to happen?

1P works

Steps to reproduce

I created the secret from file just as it says in the docs:

k create secret generic op-credentials --from-file 1password-credentials.json

I verified that the secret's value (the JSON) is passed into the pod via OP_SESSION.
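
The "illegal base64 data at input byte 0" error suggests the value handed to Connect via OP_SESSION is the raw JSON rather than base64. A hedged workaround, mirroring the "Instructions are misleading" issue earlier in this thread, is to base64-encode the file before creating the secret (key name taken from the chart's credentialsKey default; use plain base64 without -w 0 on macOS):

kubectl delete secret op-credentials
kubectl create secret generic op-credentials \
  --from-literal=1password-credentials.json="$(base64 -w 0 < 1password-credentials.json)"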

Allow log format parameter

Summary

Currently all containers default to the human-readable log format as described here https://developer.1password.com/docs/cli/reference/. It should be possible to set the desired format on the helm chart.

Use cases

Set log format via parameters to ensure that monitoring systems pass the logs correctly.

Proposed solution

Allow an operator.logFormat, connect.logFormat, and sync.logFormat, or a global logFormat parameter on the Helm chart. Alternatively, accept a set of extraEnvs that are propagated to all pods so that a user can set OP_FORMAT.

Is there a workaround to accomplish this today?

Not that I'm aware of.

Deploy Operator independently of Connect

Summary

Support deploying the Operator independently of the Connect.

Use cases

Currently, this chart only supports deploying the operator next to the connect component. If a user has multiple Kubernetes clusters, or has deployed 1Password Connect outside of Kubernetes, they currently can't use this chart.

Proposed solution

  • allow an option to enable / disable the connect component
  • allow a user to override the connect url in the operator deployment template.

Is there a workaround to accomplish this today?

The Operator repo has yaml files, but no chart.

References & Prior Work

Secrets will not populate and no useful error logs

Your environment

Chart Version: connect-1.4.0

Helm Version: version.BuildInfo{Version:"v3.5.4", GitCommit:"1b5edb69df3d3a08df77c9902dc17af864ff05d1", GitTreeState:"dirty", GoVersion:"go1.16.3"}

Kubernetes Version: Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-21T01:11:42Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

What happened?

1Password Connect does not create the Kubernetes secret although the OnePasswordItem was created; the containers show no useful logs or errors.

What did you expect to happen?

A Kubernetes secret should've been created automatically.

Steps to reproduce

  1. Install operator
helm repo add 1password https://1password.github.io/connect-helm-charts/ && \
helm upgrade -i connect 1password/connect --set-file connect.credentials=1password-credentials.json --set operator.token.value=$TOKEN
  2. Create secret
apiVersion: onepassword.com/v1
kind: OnePasswordItem
metadata:
  name: test
spec:
  itemPath: "vaults/fi2nz7kvpcg4p2fcpizeftbava/items/agm66y3qdnd7djfahzfiwuv4cq"

Notes & Logs

secrets

NAME                            TYPE                                  DATA   AGE
default-token-v6dhh             kubernetes.io/service-account-token   3      56m
op-credentials                  Opaque                                1      46m
sh.helm.release.v1.connect.v1   helm.sh/release.v1                    1      46m
sh.helm.release.v1.connect.v2   helm.sh/release.v1                    1      44m
sh.helm.release.v1.connect.v3   helm.sh/release.v1                    1      25m
sh.helm.release.v1.connect.v4   helm.sh/release.v1                    1      24m
sh.helm.release.v1.connect.v5   helm.sh/release.v1                    1      17m

connect-api

{"log_message":"(I) starting 1Password Connect API ...","timestamp":"2021-07-05T20:06:28.5647019Z","level":3}
{"log_message":"(I) serving on :8080","timestamp":"2021-07-05T20:06:28.564763Z","level":3}
{"log_message":"(I) [discovery-local] starting discovery, advertising endpoint 34473 /meta/message","timestamp":"2021-07-05T20:06:28.564569Z","level":3}
{"log_message":"(I) GET /heartbeat","timestamp":"2021-07-05T20:06:42.0665801Z","level":3,"scope":{"request_id":"0108f9dc-d4b7-4e96-96e0-d750ac0b49e8"}}
{"log_message":"(I) GET /heartbeat completed (200: OK)","timestamp":"2021-07-05T20:06:42.0668241Z","level":3,"scope":{"request_id":"0108f9dc-d4b7-4e96-96e0-d750ac0b49e8"}}
...

connect-sync

{"log_message":"(W) configured to use HTTP with no TLS","timestamp":"2021-07-05T20:06:27.6574652Z","level":2}
{"log_message":"(I) [discovery-local] starting discovery, advertising endpoint 45279 /meta/message","timestamp":"2021-07-05T20:06:27.6575066Z","level":3}
{"log_message":"(I) no existing database found, will initialize at /home/opuser/.op/data/1password.sqlite","timestamp":"2021-07-05T20:06:27.6579724Z","level":3}
{"log_message":"(I) starting 1Password Connect Sync ...","timestamp":"2021-07-05T20:06:27.6584934Z","level":3}
{"log_message":"(I) serving on :8081","timestamp":"2021-07-05T20:06:27.6585384Z","level":3}
{"log_message":"(I) database initialization complete","timestamp":"2021-07-05T20:06:27.6671754Z","level":3}
{"log_message":"(I) ### syncer credentials bootstrap ### ","timestamp":"2021-07-05T20:06:27.6673713Z","level":3}
{"log_message":"(I) GET /health","timestamp":"2021-07-05T20:06:45.405312Z","level":3,"scope":{"request_id":"d2065c03-f0c6-41d2-9051-769df7cbc670"}}
{"log_message":"(I) GET /health completed (200: OK)","timestamp":"2021-07-05T20:06:45.4056794Z","level":3,"scope":{"request_id":"d2065c03-f0c6-41d2-9051-769df7cbc670"}}
{"log_message":"(I) GET /heartbeat","timestamp":"2021-07-05T20:06:55.2725209Z","level":3,"scope":{"request_id":"709dfbec-267f-4c22-9483-59f7c087d4e8"}}
...
