1password / connect-helm-charts
Official 1Password Helm Charts
Home Page: https://developer.1password.com
License: MIT License
Please include the option to define pod placement using affinity:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
We have specific node pools dedicated to different resource requirements, and we place our pods using node/pod affinity. This is much more flexible than using nodeSelector for our purposes.
Change these (and any I missed):
connect-helm-charts/charts/connect/templates/operator-deployment.yaml
Lines 35 to 40 in 54faa0c
connect-helm-charts/charts/connect/templates/connect-deployment.yaml
Lines 35 to 38 in 54faa0c
to:
{{- with .Values.<deployment>.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.<deployment>.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.<deployment>.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
n/a
Currently the metadata references https://1password.com/img/logo-v1.svg as the icon for ArtifactHub.
Instead we should use https://avatars.githubusercontent.com/u/38230737 to match other registries that pull the icon in from GitHub.
Chart Version: 2.1.0
Helm Version:
Kubernetes Version: v1.20.4
Operator logs endless permission error listing namespaces after a change to a 1Password vault item previously created as a secret on the cluster.
E0808 23:25:10.319571 1 reflector.go:178] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224: Failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:1password-connect:onepassword-connect-operator" cannot list resource "namespaces" in API group "" at the cluster scope
Operator should have updated the secret successfully and restarted the deployment using it.
The operator watches the 1password-connect and default namespaces. The example-secret item is synced into the default namespace. The 1Password item is created and available both in env vars and on disk in the deployment pod. Change some-key in the 1Password UI (e.g., some-value -> some-value-changed).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  labels:
    app.kubernetes.io/name: example-deployment
  annotations:
    operator.1password.io/item-path: 'vaults/sandbox-dev-k8s/items/example-secret'
    operator.1password.io/item-name: 'example-secret'
    operator.1password.io/auto-restart: 'true'
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: example-deployment
  template:
    metadata:
      labels:
        app.kubernetes.io/name: example-deployment
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
      containers:
        - name: example-deployment
          image: alpine:3
          command: ['/bin/sh', '-c', '--']
          args: ['while true; do sleep 30; done;']
          resources:
            limits:
              memory: 64Mi
            requests:
              memory: 64Mi
          # Example mounting contents of key/value pairs from the 1Password entry as env vars
          env:
            - name: SECRET_SOME_KEY
              valueFrom:
                secretKeyRef:
                  name: example-secret
                  key: some-key
            - name: SECRET_ANOTHER_KEY
              valueFrom:
                secretKeyRef:
                  name: example-secret
                  key: another-key
          # Example mounting contents of key/value pairs from the 1Password entry as files on disk
          volumeMounts:
            - name: secret
              mountPath: '/mnt/secrets'
              readOnly: true
      volumes:
        - name: secret
          secret:
            secretName: example-secret
I verified the helm release created a ClusterRole with list permission for the namespaces resource. I also verified the helm release created a RoleBinding in the watched namespaces.
Should the operator be attempting to list all namespaces when the helm release is configured with an explicit set of namespaces? Is listing all namespaces an allowed operation for RoleBindings (or perhaps only for ClusterRoleBindings)?
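On the second question, a RoleBinding only grants permissions within its own namespace, even when it references a ClusterRole, so listing namespaces at cluster scope requires a ClusterRoleBinding. A minimal sketch (resource names are illustrative, not the chart's actual names):

```yaml
# Hypothetical sketch: a ClusterRoleBinding is required for cluster-scope
# access such as listing namespaces; a RoleBinding referencing the same
# ClusterRole would only grant access inside the binding's namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: onepassword-connect-operator-namespaces  # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: onepassword-connect-operator             # illustrative name
subjects:
  - kind: ServiceAccount
    name: onepassword-connect-operator
    namespace: 1password-connect
```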
Chart Version: 1.8.0
Helm Version: v3.9.0
Kubernetes Version: 1.21
I am using the chart as dependency in my helm chart. My helm chart includes the following Chart.yaml
apiVersion: v2
name: 1password-connect-local
description: A Helm chart for deploying 1Password Connect and the 1Password Connect Kubernetes Operator
type: application
version: 0.1.0
appVersion: "1.5.4"
dependencies:
  - name: connect
    version: 1.8.0
    repository: https://1password.github.io/connect-helm-charts
If I execute:
helm dependency build .
helm -n 1password install mychart --create-namespace --dry-run --debug .
I get the following computed values:
connect:
  acceptanceTests:
    enabled: false
    fixtures: {}
  annotations: {}
  api:
    httpPort: 8080
    httpsPort: 8443
    imageRepository: 1password/connect-api
    name: connect-api
    resources: {}
  applicationName: onepassword-connect
  connect:
    annotations: {}
    api:
      httpPort: 8080
      httpsPort: 8443
      imageRepository: 1password/connect-api
      name: connect-api
      resources: {}
    applicationName: onepassword-connect
    credentialsKey: 1password-credentials.json
    credentialsName: op-credentials
    dataVolume:
      name: shared-data
      type: emptyDir
      values: {}
    imagePullPolicy: IfNotPresent
    labels: {}
    nodeSelector: {}
    podAnnotations: {}
    podLabels: {}
As you can see, the connect dictionary exists twice, and acceptanceTests is under one of the connect dictionaries instead of at the root.
Chart Version: 1.10.0
Helm Version: 3.10.1
Kubernetes Version: v1.23.9
I tried using the following to set up a PVC and attach it to the 1password-connect pod so that 1Password doesn't write to the node's disk (which, by the way, was causing intermittent disk pressure warnings).
# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: onepassword-connect-shared-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
# values.yaml
connect:
  serviceType: ClusterIP
  dataVolume:
    type: persistentVolumeClaim
    values:
      claimName: onepassword-connect-shared-data
      readOnly: false
When the connect-sync container starts up, I see the following:
{"log_message":"(I) disabling bus peer auto-discovery","timestamp":"2023-01-18T02:07:57.431543892Z","level":3}
{"log_message":"(W) did not initialize bus connection to peer localhost:11220. If the peer is currently booting, it may initialize the connection while starting. Details: failed to transport.CreateConnection: [transport-websocket] failed to Dial endpoint: dial tcp 127.0.0.1:11220: connect: connection refused. ","timestamp":"2023-01-18T02:07:57.435251551Z","level":2}
{"log_message":"(W) configured to use HTTP with no TLS","timestamp":"2023-01-18T02:07:57.435325947Z","level":2}
{"log_message":"(I) no existing database found, will initialize at /home/opuser/.op/data/1password.sqlite","timestamp":"2023-01-18T02:07:57.435664243Z","level":3}
Error: Server: (failed to OpenDefault), Wrapped: (failed to open db), unable to open database file: no such file or directory
Usage:
connect-sync [flags]
Flags:
-h, --help help for connect-sync
-v, --version version for connect-sync
1password starts up without issue
Hi 👋🏼 The way CRDs are handled in this repo, Helm does not upgrade them when the user upgrades the chart.
https://helm.sh/docs/chart_best_practices/custom_resource_definitions/#some-caveats-and-explanations
There is no support at this time for upgrading or deleting CRDs using Helm. This was an explicit decision after much community discussion due to the danger for unintentional data loss. Furthermore, there is currently no community consensus around how to handle CRDs and their lifecycle. As this evolves, Helm will add support for those use cases.
What I've seen most chart developers do is have an installCRDs option in the helm values and template the CRDs out; take a look at the cert-manager and rook-ceph charts for an example of how they handle CRDs.
I would suggest an option in the values.yaml, moving the crds folder into the templates folder and wrapping it in an if statement.
operator:
  create: false
  installCRDs: true
  autoRestart: false
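A minimal sketch of the suggested approach, assuming a hypothetical operator.installCRDs value: the CRD manifests would move from crds/ into templates/ and be wrapped in a conditional like this.

```yaml
# templates/onepassworditem-crd.yaml -- hypothetical sketch, not the chart's
# current layout; the full CRD body would be pasted inside the guard
{{- if .Values.operator.installCRDs }}
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: onepassworditems.onepassword.com
# ... rest of the CRD definition ...
{{- end }}
```

With this in place, Helm treats the CRD as an ordinary templated resource, so chart upgrades update it (at the cost of Helm also deleting it on uninstall, which is why cert-manager gates it behind a flag).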
Chart Version: latest
Helm Version: latest
Kubernetes Version: 1.26
The helm deploy is stuck, because the deployment is marked as a hook here: https://github.com/1Password/connect-helm-charts/blob/main/charts/secrets-injector/templates/deployment.yaml#L10
But since the hook never completes (because it's a deployment), the deploy never completes and helm gets stuck.
Removing the hook annotation fixes it.
Expected: the helm chart deploys successfully.
Try deploying the helm chart. I did it through Argo; it times out waiting for the hook to complete.
Chart Version:
Helm Version:
Kubernetes Version:
I got this error:
Warning FailedCreate \
replicaset/onepassword-connect-operator-95f9f56b7 \
Error creating: pods "onepassword-connect-operator-95f9f56b7-" \
is forbidden: error looking up service account default/onepassword-connect-operator: serviceaccount \
"onepassword-connect-operator" not found
Please set the security context for all containers. Without this, pods are run as root, have a read/write filesystem, etc.
For a security related application, these are expected pieces of configuration.
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
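As a sketch of what such a configuration could look like (the values below are assumptions for illustration, not the chart's defaults; the UID must match the user baked into the image):

```yaml
# Hypothetical restrictive per-container securityContext
securityContext:
  runAsNonRoot: true
  runAsUser: 999                  # assumed UID; match the image's user
  readOnlyRootFilesystem: true    # Connect writes its data to a volume, not /
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
```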
When operating behind a corporate web proxy, I need to be able to set environment variables on my containers like https_proxy, http_proxy, no_proxy. These instruct most Linux applications that they should send requests via a specified proxy on their way out to the internet.
When operating behind a corporate web proxy, all traffic is required to go via the web proxy for security reasons, and no other route out of the network exists. I feel like there would be a non-zero number of 1Password Connect users wanting to make this scenario work; I'm quite surprised this is the first issue to mention it.
The helm template for the connect-deployment doesn't allow for any values to be added to environment variables for the containers. I needed to add the 3 mentioned above.
You could either allow in the values, for custom environment variables to be added and append these to the bottom of the env variable lists for the containers, or do something more structured for just my use case. I think allowing custom env variables is fine, but I realise this could have security implications that I am missing.
As a result of not being able to accomplish this using your helm chart, I've had to use your containers in a generic application chart, which has a lot more management overhead for me.
I've never written a template like this myself, but I have applied my own env to a number of other helm charts and they've been able to handle it OK. I went digging, and the first to pop up in my config was Datadog, which (although a messy, complicated chart) has this include statement in the template that gives an idea of how it could work.
https://github.com/DataDog/helm-charts/blob/e3133172449038caaca4c18342fecd2976be377a/charts/datadog/templates/cluster-agent-deployment.yaml#L297
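Adapted to this chart, a sketch could look like the following; connect.api.extraEnv is a hypothetical value name, not something the chart currently supports:

```yaml
# values.yaml (hypothetical value; proxy host is a placeholder)
connect:
  api:
    extraEnv:
      - name: https_proxy
        value: "http://proxy.internal:3128"
      - name: no_proxy
        value: ".cluster.local,.svc,10.0.0.0/8"
```

The deployment template would then append these under the container's env list with something like {{- with .Values.connect.api.extraEnv }}{{- toYaml . | nindent 10 }}{{- end }}.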
Currently we use an initContainer to correct the permissions on volume mounts so they are accessible by reduced-privilege user accounts. Starting with Kubernetes 1.20+ there is a beta feature to configure volume permissions and perform ownership changes.
We should update the chart to support this feature and replace the initContainer flow with it.
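The feature referred to is fsGroupChangePolicy, beta since Kubernetes 1.20. A sketch with an assumed group ID:

```yaml
# Pod-level security context; 999 is an assumed GID for illustration
spec:
  securityContext:
    fsGroup: 999
    fsGroupChangePolicy: "OnRootMismatch"  # only chown when root ownership differs
```

With this, the kubelet applies the group ownership at mount time, so the permission-fixing initContainer becomes unnecessary.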
You should use .Release.Namespace instead of a custom variable. Generally you would want to avoid using the default namespace, but Helm also has standard variables for a lot of things, the namespace being one of them.
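For example, any template can pick up the namespace the release was installed into:

```yaml
# Sketch: use the release's namespace rather than a custom value
metadata:
  name: onepassword-connect
  namespace: {{ .Release.Namespace }}
```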
No cluster role bind was created causing the operator pod to get stuck in a crash backoff loop.
The operator pod should work as intended.
helm install connect 1password/connect \
-n 1pass \
--create-namespace \
--set operator.create=true \
--set operator.watchNamespace={default}
This seems to be the offending line:
It doesn't create the binding when you specify a namespace to watch. If this is intended, then close this issue.
Hey, this issue seems to have returned in v1.5. If I downgrade to v1.4 then it works. #38 (comment)
@priscilasolis we tested the fixes from this PR, with the goal of using multiple standalone operators side by side. If that's your use case as well, you will soon notice that the fix indeed allows starting 1 operator standalone, but that spinning up a second causes an issue where both try to acquire the same lock. This is something that still needs to be fixed.
Originally posted by @Matthiasvanderhallen in #126 (comment)
Hey
Been testing out the new onepassword-connect k8s operator. Firstly, what a fantastic idea. Our team is really keen to get this going.
Our architecture has several isolated namespaces inside a single GKE cluster, so our plan was to set up 1Password Connect and the operator inside each namespace.
Trying to install this into a single namespace dev initially results in an error.
helm repo add 1password https://raw.githubusercontent.com/1Password/connect-helm-charts/main
helm upgrade --install connect 1password/connect --namespace=dev --set-file connect.credentials=$SERVICES_BASE_PATH/gke-1password-dev-credentials.json --set operator.create=true --set operator.token.name=gke-1password-dev-access-token --set operator.token.value=REDACTED --set namespace=dev
version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}
"1password" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "soluto" chart repository
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "1password" chart repository
...Successfully got an update from the "stable" chart repository
...Unable to get an update from the "chartmuseum" chart repository (https://chartmuseum-gke.vizibl.co):
Get "https://chartmuseum-gke.vizibl.co/index.yaml": context deadline exceeded
Update Complete. ⎈Happy Helming!⎈
Release "connect" does not exist. Installing it now.
Error: Service "onepassword-connect" is invalid: spec.ports[0].nodePort: Invalid value: 31080: provided port is already allocated
Hope someone can help us resolve this issue. Also, if there's any advice on how best to manage this architecture where there's an instance of 1password connect per namespace, that would be greatly appreciated.
Chart Version:
Helm Version:
Kubernetes Version:
Chart Version: latest
Helm Version: 3.11.2
Kubernetes Version: 1.25.4
The onepassword-connect-https service does not exist, because the service template generates a single onepassword-connect service carrying both http and https ports when TLS is enabled.
The onepassword-connect service should be used by the ingress when TLS is enabled.
Chart Version: 1.10.0
Helm Version: 3.11.3
Kubernetes Version: 1.23
The POLLING_INTERVAL environment variable is set to 10 on the onepassword-connect-operator pod.
The polling interval should be set to 600 per the chart README.
The release reports 10 for its operator.pollingInterval value:
$ helm get values connect --all --namespace secrets-management --output json | jq .operator.pollingInterval
10
The pod likewise has 10 for its POLLING_INTERVAL environment variable:
$ kubectl get pod --namespace secrets-management --selector name=onepassword-connect --output json | jq '.items[].spec.containers[].env[] | select(.name == "POLLING_INTERVAL")'
{
  "name": "POLLING_INTERVAL",
  "value": "10"
}
Here is where the default chart value is getting set:
The default value should be 600
per the chart README here:
connect-helm-charts/charts/connect/README.md
Line 109 in 0267c68
The operator README also states that the default value should be 600, here:
https://github.com/1Password/onepassword-operator/blob/fe930fef052d71516854568bf1042cb93af594a1/README.md?plain=1#L89
The default for the helm chart and the operator itself need not necessarily be the same, but the fact that there's agreement between their READMEs suggests to me that this is a mistake in the default value, rather than a mistake in the helm chart README.
Add the option to use existing secrets instead of creating new ones.
1Password connect and operator are installed via Flux (GitOps). This means the configuration is in Git and cannot contain any secrets. Initial secrets, like the ones used by 1Password, can be created by admins when the cluster is bootstrapped.
Add a flag to connect and operator to use an existing secret. This flag can be checked in connect-credentials.yaml and operator-token.yaml.
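A sketch of how the guard could look, using a hypothetical connect.existingCredentialsSecret value (credentialsName and credentialsKey are existing chart values; everything else here is an assumption):

```yaml
# templates/connect-credentials.yaml -- hypothetical sketch
# Skip creating the Secret when an admin-provisioned one is named in values.
{{- if not .Values.connect.existingCredentialsSecret }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.connect.credentialsName }}
type: Opaque
stringData:
  {{ .Values.connect.credentialsKey }}: {{ .Values.connect.credentials | toJson }}
{{- end }}
```

The deployment would then mount either .Values.connect.existingCredentialsSecret or .Values.connect.credentialsName, whichever is set.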
Chart Version: 1.7.1
Helm Version: 3.8.0
Kubernetes Version: OCP 4.10.10 (Kubernetes v1.23.5+9ce5071)
Security error when trying to create pods due to using runAsUser with a user id number that's not allowed
Chart installed correctly
I'm currently using ArgoCD to deploy this via their app-of-apps pattern, so this is the resource for the Application:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: one-password-connect
  namespace: openshift-gitops
  labels:
    app.kubernetes.io/instance: argocd
spec:
  project: default
  source:
    repoURL: https://1password.github.io/connect-helm-charts/
    targetRevision: 1.7.1
    path: apps/one-password-connect
    helm:
      releaseName: onepassword-connect
      values: |
        connect:
          credentials: |
            <redacted>
          tls:
            enabled: false
            secret: op-connect-tls
        operator:
          create: true
          token:
            value: <redacted>
      version: v3
    chart: connect
  destination:
    namespace: onepassword
    server: 'https://kubernetes.default.svc'
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
Specific error from the logs:
pods "onepassword-connect-96776974f-" is forbidden: unable to validate
against any security context constraint: [provider "anyuid": Forbidden:
not usable by user or serviceaccount,
spec.containers[0].securityContext.runAsUser: Invalid value: 999: must
be in the ranges: [1000680000, 1000689999],
spec.containers[1].securityContext.runAsUser: Invalid value: 999: must
be in the ranges: [1000680000, 1000689999], provider "nonroot":
Forbidden: not usable by user or serviceaccount, provider
"hostmount-anyuid": Forbidden: not usable by user or serviceaccount,
provider "machine-api-termination-handler": Forbidden: not usable by
user or serviceaccount, provider "hostnetwork": Forbidden: not usable by
user or serviceaccount, provider "hostaccess": Forbidden: not usable by
user or serviceaccount, provider "node-exporter": Forbidden: not usable
by user or serviceaccount, provider "privileged": Forbidden: not usable
by user or serviceaccount]
Stackoverflow that led me to this line of thinking: https://stackoverflow.com/questions/69433216/helm-is-failing-in-openshift-due-to-security-context-error
Openshift Security Documentation that might be helpful: https://docs.openshift.com/container-platform/4.10/authentication/managing-security-context-constraints.html
Hi folks :)
Could we have a new tagged release in order to capture the changes involving .Release.Namespace?
Thank you!
I am currently using standard Helm and a template like:
{{- $def := index .Values "default" -}}
apiVersion: v1
data:
  STRIPE_PUBLIC_KEY: {{ default $def.STRIPE_PUBLIC_KEY | b64enc }}
  STRIPE_SECRET_KEY: {{ default $def.STRIPE_SECRET_KEY | b64enc }}
  # .... etc ....
kind: Secret
metadata:
  name: api-env
type: Opaque
Then the deployment simply does:
envFrom:
  - secretRef:
      name: api-env
Finally, in the values.yaml we specify the secrets like:
default:
  STRIPE_PUBLIC_KEY: foobar
  STRIPE_SECRET_KEY: secret-foobar
How would migrating to 1Password-operator in our Kubernetes cluster work?
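A hedged sketch of the migration: each values-file secret would instead live as a 1Password item, declared in the cluster as a OnePasswordItem. The vault and item names below are placeholders.

```yaml
# Hypothetical migration sketch: the operator creates a Kubernetes Secret
# named after this resource, so the deployment can keep consuming it via
# the existing envFrom/secretRef stanza unchanged.
apiVersion: onepassword.com/v1
kind: OnePasswordItem
metadata:
  name: api-env                  # resulting Kubernetes Secret name
spec:
  itemPath: "vaults/<vault-name>/items/<item-name>"
```

The item's fields (e.g. STRIPE_PUBLIC_KEY, STRIPE_SECRET_KEY) become keys of the generated Secret, so the api-env Secret template and the default: block in values.yaml could then be deleted.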
Operator Version: 1.0.0
Connect Server Version: 1.0.0
Kubernetes Version: 1.18.16-gke.502
helm install connect 1password/connect --namespace=dev \
--set-file connect.credentials=dev-credentials.json \
--set operator.serviceAccount.create=true \
--set operator.clusterRole.create=true \
--set operator.roleBinding.create=true \
--set operator.create=true,operator.token.name=gke-1password-dev-access-token \
--set namespace=dev
kubectl create secret generic gke-1password-dev-access-token --from-literal=token=$OP_ACCESS_TOKEN \
--namespace=dev
helm install connect 1password/connect --namespace=qa \
--skip-crds \
--set-file connect.credentials=qa-credentials.json \
--set operator.serviceAccount.create=true \
--set operator.clusterRole.create=true \
--set operator.roleBinding.create=true \
--set operator.create=true,operator.token.name=gke-1password-qa-access-token \
--set namespace=qa
Error: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "onepassworditems.onepassword.com" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "qa": current value is "dev"
Expected: the second install command with --skip-crds would not create the CRD again.
Create two environments for connect on 1password
generate a token and credentials for each
create two namespces
deploy connect with the operator to the first namespace
attempt to create the operator in a second namespace
Not entirely sure if this is me missing something obvious or whether it's the checks for those annotation labels causing the issues. Essentially we have 4 isolated namespaces in our cluster. We want to have 4 environments, each with its own access token and credentials, each with access to a single vault (one per namespace).
Chart Version: 1.7.1
Helm Version: 3.8.1
Kubernetes Version: v1.23.4
I am using the op CLI tool to get the credentials.json from a 1Password vault and register it to an Ansible variable. Then I'm base64-encoding the JSON and passing it to the helm chart's connect.credentials_base64 value.
Because of the stringData: type in the secret, the base64 value gets encoded again.
If a _base64 variable is provided, it should be able to take values without using --set-file flags. This would make it compatible with scenarios where you don't want to store the credentials JSON in an actual file or retrieve it from a script.
connect.credentials_base64 value
Don't override the setting in 1Password/onepassword-operator#40
I want to configure as little as possible and don't have to redeploy when I add more namespaces.
Set OP_WATCH_NAMESPACES: [] in values.yml
Override the OP_WATCH_NAMESPACES configured by the Helm chart.
Chart Version: 1.4.0
Helm Version: 3.6.3
Kubernetes Version: 1.2.0
The current release of the helm chart contains several issues that have since been resolved and merged to main.
Regular releases of the helm chart occur to eliminate known, fixed issues and provide new features.
helm repo add onepassword-connect https://1password.github.io/connect-helm-charts
helm search repo onepassword-connect
This is a particular issue for me, as main contains changes that allow the operator to work across namespaces.
Add templated metadata.annotations to the Service.
Useful for anyone that uses Service annotations to control things like load balancer controllers, etc.
Like what has been done for metadata.labels in the Service template, but for annotations.
Current workaround: manually add them to the manifest, post deploy.
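A sketch of the requested templating, assuming a hypothetical connect.serviceAnnotations value:

```yaml
# service.yaml template (hypothetical sketch, mirroring the labels handling)
apiVersion: v1
kind: Service
metadata:
  name: onepassword-connect
  {{- with .Values.connect.serviceAnnotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
```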
Would it be possible for the connect-api images to be patched more frequently, or a distroless version to be made available, so that supply-chain CVEs are kept to a minimum? Currently Trivy detects 174 CVEs on the latest version:
The latest tag was updated 3 months ago:
The onepassword-operator in the helm chart is falling behind.
helm repo add 1password https://1password.github.io/connect-helm-charts
helm repo update
helm template connect 1password/connect --set operator.create=true | grep "image: 1password"
Result:
image: 1password/onepassword-operator:1.1.0
Expected:
v1.4.1 or 1.5
Please follow helm and kubernetes standards for labels. See https://helm.sh/docs/chart_best_practices/labels/.
The current chart doesn't have liveness/readiness probes. This means Kubernetes will only restart the pod if one of the PID-1 processes in the container crashes or exits.
To adhere to best current practices, Kubernetes should detect and act when the service is misbehaving.
No, unless the pod would implement that internally? Unable to verify due to lack of documentation
Probes are BCP and have been implemented by most other commonly used charts in the K8S ecosystem :)
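A sketch of what probes could look like, assuming the Connect API's documented /heartbeat and /health endpoints; the port and timing values are assumptions to be tuned:

```yaml
# Hypothetical probe configuration for the connect-api container
livenessProbe:
  httpGet:
    path: /heartbeat        # lightweight "process is alive" check
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 30
readinessProbe:
  httpGet:
    path: /health           # richer status, including sync state
    port: 8080
  periodSeconds: 10
```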
I can raise a PR to update the README but wanted to discuss this first with you folks.
If I have a 1password-credentials.json:
{
  "verifier": {
    "salt": "asd",
    "localHash": "asd"
  },
  "encCredentials": {
    "kid": "asd",
    "enc": "asd",
    "cty": "asd",
    "iv": "asd",
    "data": "asd"
  },
  "version": "2",
  "deviceUuid": "asd",
  "uniqueKey": {
    "alg": "asd",
    "ext": true,
    "k": "asd",
    "key_ops": [
      "encrypt",
      "decrypt"
    ],
    "kty": "oct",
    "kid": "asd"
  }
}
and run this:
helm install connect 1password/connect --set-file connect.credentials=<path/to/1password-credentials.json>
This is going to fail because we are passing a literal json file to the helm chart.
When what we really want is to do something like:
cat <path/to/1password-credentials.json> | base64 > <path/to/op-session>
helm install connect 1password/connect --set-file connect.credentials=<path/to/op-session>
The current helm chart only allows for a single instance of the connect and operator to be deployed.
This provides a higher risk in terms of availability.
Add a replicas argument to values.yml with a default value of 1.
Also add a PodDisruptionBudget with a max of 1.
No.
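A sketch of the requested PodDisruptionBudget; the label selector is illustrative and would need to match the labels the chart actually applies:

```yaml
# Hypothetical PodDisruptionBudget for the connect deployment
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: onepassword-connect
spec:
  maxUnavailable: 1                  # allow at most one pod down at a time
  selector:
    matchLabels:
      app: onepassword-connect       # assumed label; match the chart's labels
```

Note this only provides protection once replicas is greater than 1; with a single replica, maxUnavailable: 1 permits the lone pod to be evicted.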
Please consider using GitHub Pages to host the index.yaml and tgz files that comprise a helm release, instead of committing binary blobs into the git repo.
Git is not well suited for hosting binary artifacts, and committing a tgz for each release will cause the repo to grow without bound (short of using repo-surgery commands to expunge old artifacts).
Take for example the kubernetes ingress-nginx repo:
https://github.com/kubernetes/ingress-nginx/blob/helm-chart-3.29.0/.github/workflows/helm.yaml (I'm sure there are others; that's just the most famous one that sprang to mind which uses this pattern).
One can see the resulting repo pattern in their install instructions.
When I run this command for an already existing service
helm upgrade --set operator.token=<token> connect 1password/connect
I get this error:
coalesce.go:200: warning: cannot overwrite table with non table for token (map[key:token name:onepassword-token value:<nil>])
Error: UPGRADE FAILED: template: connect/templates/operator-token.yaml:1:45: executing "connect/templates/operator-token.yaml" at <.Values.operator.token.value>: can't evaluate field value in type interface {}
Chart Version: 1.8.0
Helm Version: 3
Kubernetes Version: v1.22.6
I upgraded the 1Password Helm chart revision since this was recently released: #103
Docker images: 1password/connect-api:1.5.4
1password/connect-sync:1.5.4
1password/onepassword-operator:1.5.0
I managed to upgrade 1Password Helm chart from revision 1.7.1 to 1.8.0. So, I upgraded to Connect Server 1.5.4 and Operator 1.5.0 that recently released. I am using ArgoCD App template + Helm source like below.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: onepassword
  namespace: argocd
spec:
  destination:
    namespace: onepassword
    server: <SERVER>
  source:
    repoURL: 'https://1password.github.io/connect-helm-charts/'
    targetRevision: 1.8.0
    chart: connect
    helm:
      releaseName: connect
      values: |
        operator:
          create: true
  project: operations
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
However, I am getting an error from the operator when trying to take advantage of the new type feature by setting type: kubernetes.io/dockerconfigjson in order to deploy a GHCR token, as in the example below. I already tried redeploying 1Password and the same issue occurs.
apiVersion: onepassword.com/v1
kind: OnePasswordItem
type: kubernetes.io/dockerconfigjson
metadata:
  name: test-ghcr
  namespace: test
  annotations:
    operator.1password.io/auto-restart: "true"
spec:
  itemPath: "vaults/Test/items/test-ghcr"
Error when trying to create a OnePasswordItem with a new Secret type:
"error":"Failed to retrieve item: need at least version 1.3.0 of Connect for this function, detected version 1.2.0 (or earlier). Please update your Connect server"
Expected: the 1Password Operator creates the secret containing the GHCR token from the OnePasswordItem with type: kubernetes.io/dockerconfigjson. As mentioned above, I upgraded 1Password Connect Server to 1.5.4 and the Operator to 1.5.0 using the latest chart revision 1.8.0.
There is no loadBalancerIP field for the service. We want to use service type LoadBalancer.
Add a loadBalancerIP field.
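A sketch of how the service template could expose this; connect.loadBalancerIP is a hypothetical value name:

```yaml
# service.yaml template (hypothetical sketch)
spec:
  type: {{ .Values.connect.serviceType }}
  {{- with .Values.connect.loadBalancerIP }}
  loadBalancerIP: {{ . }}
  {{- end }}
```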
This Helm chart deploys 1Password Connect and optionally the Kubernetes operator plugin for 1Password Connect. There is no option to deploy the operator using this Helm chart without also deploying 1Password Connect.
Deploying the operator without deploying 1Password Connect is useful when deploying multiple operators connected to a single 1Password Connect server. This is not currently possible using Helm.
Add a new chart value and some corresponding logic to disable the 1Password Connect deployment, defaulting to the current behaviour to deploy 1Password Connect only.
It is currently possible to deploy the operator on its own (see https://github.com/1Password/onepassword-operator), but not by using Helm. The only semi-plausible workaround I can think of is to deploy Connect with 0 replicas.
How we optionally deploy the operator:
Chart Version: 1.4.0
Helm Version: 3.x
Kubernetes Version: latest
Terraform supports applying helm charts: https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release
However, when setting connect.credentials
like:
set_sensitive {
  name  = "connect.credentials"
  type  = "string"
  value = " ${file("${path.root}/path/to/1password-credentials.json")} "
}
the apply fails with Error: failed parsing key "connect.credentials" with value ...
This is due to this bug: hashicorp/terraform-provider-helm#618, which is apparently also reproducible in helm itself: hashicorp/terraform-provider-helm#618 (comment)
The above should not fail. Could the chart e.g. take only the file path and manage the JSON internally instead? Or would there be another possible workaround?
Edit: found a workaround:
set_sensitive {
  name  = "connect.credentials"
  type  = "string"
  value = " ${replace(file("${path.root}/path/to/1password-credentials.json"), ",", "\\,")}"
}
Amazon EKS
Chart Version: secrets-injector-1.0.0
Helm Version: v3.11.3
Kubernetes Version: 1.25
If a mountPath specified in a Pod does not exist, it is not created.
The non-existent mountPath should be created.
This definition does not create /data and returns /bin/sh: can't create /data/file.txt: nonexistent directory:
apiVersion: v1
kind: Pod
metadata:
  name: testing
  namespace: default
  annotations:
    operator.1password.io/inject: "testing"
spec:
  containers:
    - name: testing
      image: busybox
      command: ["/bin/sh"]
      args: ["-c","while true; do echo $(date) >> /data/file.txt; sleep 5; done"]
      env:
        - name: OP_CONNECT_HOST
          value: "http://onepassword-connect:8080"
      envFrom:
        - secretRef:
            name: op-connect-token
      volumeMounts:
        - mountPath: /data/
          name: storage
  volumes:
    - name: storage
      emptyDir: {}
This definition that does not invoke the Injector works as expected:
apiVersion: v1
kind: Pod
metadata:
  name: testing2
  namespace: default
spec:
  containers:
    - name: testing2
      image: busybox
      command: ["/bin/sh"]
      args: ["-c","while true; do echo $(date) >> /data/file.txt; sleep 5; done"]
      env:
        - name: OP_CONNECT_HOST
          value: "http://onepassword-connect:8080"
      envFrom:
        - secretRef:
            name: op-connect-token
      volumeMounts:
        - mountPath: /data/
          name: storage
  volumes:
    - name: storage
      emptyDir: {}
Chart Version: connect-1.9.0 | APP 1.5.7
Helm Version: v3.10.2
Kubernetes Version: v1.25.4-rc4+k3s1
I followed the blog https://blog.bennycornelissen.nl/post/onepassword-on-kubernetes/; however, I don't get the k8s secret.
Expected: a Kubernetes secret for the 1Password secret.
# Put our 1Password Connect access token in a variable
OP_TOKEN="< paste your token here >"
# Use Helm to install and configure everything. It will use the
# credential file and the ${OP_TOKEN} variable to use the integration
# we set up earlier.
helm upgrade --install connect 1password/connect --set-file connect.credentials=1password-credentials.json --set operator.create=true --set operator.token.value="${OP_TOKEN}" --set "operator.watchNamespace={opconnect,default}" --namespace opconnect
and add the following yaml manifest after the 1password pods are started
apiVersion: onepassword.com/v1
kind: OnePasswordItem
metadata:
  name: password
spec:
  itemPath: "vaults/systems/items/dummy"
kubectl logs onepassworditem.onepassword.com/password
error: no kind "OnePasswordItem" is registered for version "onepassword.com/v1" in scheme "pkg/scheme/scheme.go:28"
The following returns a 408 error:
kubectl get onepassworditem.onepassword.com/password -o yaml
apiVersion: onepassword.com/v1
kind: OnePasswordItem
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"onepassword.com/v1","kind":"OnePasswordItem","metadata":{"annotations":{},"name":"password","namespace":"default"},"spec":{"itemPath":"vaults/systems/items/dummy"}}
  creationTimestamp: "2022-11-22T02:35:26Z"
  finalizers:
kubectl -n opconnect logs onepassword-connect-operator-7467685677-4j45g
{"level":"info","ts":1669084536.1768796,"logger":"controller_onepassworditem","msg":"Reconciling OnePasswordItem","Request.Namespace":"default","Request.Name":"password"}
{"level":"error","ts":1669084544.651247,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"onepassworditem-controller","request":"default/password","error":"Failed to retrieve item: status 500: failed to initiate, review service logs for details","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/workspace/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90"}
{"level":"info","ts":1669084544.7315106,"logger":"controller_onepassworditem","msg":"Reconciling OnePasswordItem","Request.Namespace":"default","Request.Name":"password"}
It would be great if we could specify e.g. operator.existingTokenSecret
and connect.existingCredentialsSecret
when installing the Helm chart.
I use this with ArgoCD and don't want the credentials/token in my Git repository.
Specifying the name of a Secret that already holds the token should remove the need to pass the token/credentials on every install.
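A sketch of what the requested values could look like. The existingTokenSecret and existingCredentialsSecret keys are hypothetical; they do not exist in the chart today, and the Secret names are examples:

```yaml
# Hypothetical values.yaml keys for this feature request
connect:
  existingCredentialsSecret: op-credentials   # Secret holding 1password-credentials.json
operator:
  existingTokenSecret: onepassword-token      # Secret holding the Connect token
```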
Making Connect a sidecar of the operator would limit Connect's scope to inside the pod, which in turn would mean no ports need to be opened outside the pod. This would reduce the risk of misconfiguration and of accidentally exposing Connect too widely.
This applies when you only need Connect to serve the operator. For example, I only need Connect for the operator, so the endpoints don't need to be exposed to anything else. I would sleep better at night if they were abstracted away.
Implement the possibility of running Connect as a sidecar of the operator.
Not that I know of.
Edit: Actually, this is exactly why I'd rather keep Connect as a sidecar for the operator: #65. It is too easy to expose the endpoints to the external world.
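A rough sketch of the requested layout, with the Connect containers co-located in the operator pod and no Service exposing them. The image names and port are taken from the existing deployment pattern but should be treated as assumptions:

```yaml
# Sketch: Connect as sidecars of the operator; nothing leaves the pod
spec:
  containers:
    - name: connect-api
      image: 1password/connect-api
    - name: connect-sync
      image: 1password/connect-sync
    - name: operator
      image: 1password/onepassword-operator
      env:
        # The operator reaches Connect over localhost inside the pod
        - name: OP_CONNECT_HOST
          value: "http://localhost:8080"
```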
I have nodes with taints, so I want to specify tolerations for the deployments. This Helm chart doesn't let me set them.
I have a separate worker node that normal workloads should not be scheduled on, so I tainted it. But I want the connect deployment on that node.
Add tolerations to the Helm chart templates. I'll prepare a PR for this.
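The template change could mirror the affinity snippet quoted earlier in this list. A sketch, where <deployment> stands for connect or operator:

```yaml
{{- with .Values.<deployment>.tolerations }}
tolerations:
  {{- toYaml . | nindent 8 }}
{{- end }}
```

A matching toleration (the key/value here are examples) would then go under connect.tolerations or operator.tolerations in values.yaml.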
Chart Version:
Helm Version: 1.8.1
Kubernetes Version: 1.24
1Password Connect cannot phone home. The logs say:
{"log_message":"(I) ### syncer credentials bootstrap ### ","timestamp":"2022-09-20T10:09:36.36339545Z","level":3}
{"log_message":"(E) Server: (unable to get credentials and initialize API, retrying in 30s), Wrapped: (failed to FindCredentialsUniqueKey), Wrapped: (failed to loadCredentialsFile), Wrapped: (LoadLocalAuthV2 failed to credentialsDataFromBase64), illegal base64 data at input byte 0","timestamp":"2022-09-20T10:09:36.363794629Z","level":1}
1Password Connect works.
I created the secret from the file just as the docs say:
kubectl create secret generic op-credentials --from-file 1password-credentials.json
I verified that the secret's value (the JSON) is passed into the pod via OP_SESSION.
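Based on the "illegal base64 data at input byte 0" error above, the syncer appears to expect the value inside the Secret to itself be base64-encoded rather than the raw JSON. A sketch of a Secret created that way (verify against the 1Password deployment docs before relying on this):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: op-credentials
type: Opaque
stringData:
  # Value must be the base64 of the raw JSON file, e.g. the output of:
  #   base64 1password-credentials.json | tr -d '\n'
  1password-credentials.json: <base64-encoded contents of 1password-credentials.json>
```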
Currently all containers default to the human-readable log format described at https://developer.1password.com/docs/cli/reference/. It should be possible to set the desired format through the Helm chart.
Set the log format via parameters so that monitoring systems can process the logs correctly.
Allow an operator.logFormat, connect.logFormat, and sync.logFormat, or a global logFormat parameter on the Helm chart. Alternatively, accept a set of extraEnvs
that are propagated to all pods so that a user can set OP_FORMAT.
Not that I'm aware of.
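A sketch of the alternative extraEnvs approach from the request above. The extraEnvs key is hypothetical and does not exist in the chart today; OP_FORMAT is the variable named in the request:

```yaml
# Hypothetical values.yaml: env vars propagated to all chart pods
extraEnvs:
  - name: OP_FORMAT
    value: json   # emit structured logs for monitoring systems
```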
Support deploying the operator independently of Connect.
Currently, this chart only supports deploying the operator
next to the Connect
component. If a user has multiple Kubernetes clusters, or has deployed 1Password Connect outside of Kubernetes, they currently can't use this chart.
The operator repo has YAML files, but no chart.
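A sketch of what an operator-only install could configure. The connect.create toggle and the external host key are hypothetical; the chart does not support this today:

```yaml
# Hypothetical values.yaml for an operator-only install
connect:
  create: false                               # don't deploy Connect from this chart
  host: https://connect.example.com           # existing Connect outside the cluster
operator:
  create: true
```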
Chart Version: connect-1.4.0
Helm Version: version.BuildInfo{Version:"v3.5.4", GitCommit:"1b5edb69df3d3a08df77c9902dc17af864ff05d1", GitTreeState:"dirty", GoVersion:"go1.16.3"}
Kubernetes Version: Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-21T01:11:42Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
1Password Connect does not create secrets even though the OnePasswordItem was created; the containers show no useful logs or errors.
A Kubernetes secret should have been created automatically.
helm repo add 1password https://1password.github.io/connect-helm-charts/ && \
helm upgrade -i connect 1password/connect --set-file connect.credentials=1password-credentials.json --set operator.token.value=$TOKEN
apiVersion: onepassword.com/v1
kind: OnePasswordItem
metadata:
  name: test
spec:
  itemPath: "vaults/fi2nz7kvpcg4p2fcpizeftbava/items/agm66y3qdnd7djfahzfiwuv4cq"
NAME TYPE DATA AGE
default-token-v6dhh kubernetes.io/service-account-token 3 56m
op-credentials Opaque 1 46m
sh.helm.release.v1.connect.v1 helm.sh/release.v1 1 46m
sh.helm.release.v1.connect.v2 helm.sh/release.v1 1 44m
sh.helm.release.v1.connect.v3 helm.sh/release.v1 1 25m
sh.helm.release.v1.connect.v4 helm.sh/release.v1 1 24m
sh.helm.release.v1.connect.v5 helm.sh/release.v1 1 17m
{"log_message":"(I) starting 1Password Connect API ...","timestamp":"2021-07-05T20:06:28.5647019Z","level":3}
{"log_message":"(I) serving on :8080","timestamp":"2021-07-05T20:06:28.564763Z","level":3}
{"log_message":"(I) [discovery-local] starting discovery, advertising endpoint 34473 /meta/message","timestamp":"2021-07-05T20:06:28.564569Z","level":3}
{"log_message":"(I) GET /heartbeat","timestamp":"2021-07-05T20:06:42.0665801Z","level":3,"scope":{"request_id":"0108f9dc-d4b7-4e96-96e0-d750ac0b49e8"}}
{"log_message":"(I) GET /heartbeat completed (200: OK)","timestamp":"2021-07-05T20:06:42.0668241Z","level":3,"scope":{"request_id":"0108f9dc-d4b7-4e96-96e0-d750ac0b49e8"}}
...
{"log_message":"(W) configured to use HTTP with no TLS","timestamp":"2021-07-05T20:06:27.6574652Z","level":2}
{"log_message":"(I) [discovery-local] starting discovery, advertising endpoint 45279 /meta/message","timestamp":"2021-07-05T20:06:27.6575066Z","level":3}
{"log_message":"(I) no existing database found, will initialize at /home/opuser/.op/data/1password.sqlite","timestamp":"2021-07-05T20:06:27.6579724Z","level":3}
{"log_message":"(I) starting 1Password Connect Sync ...","timestamp":"2021-07-05T20:06:27.6584934Z","level":3}
{"log_message":"(I) serving on :8081","timestamp":"2021-07-05T20:06:27.6585384Z","level":3}
{"log_message":"(I) database initialization complete","timestamp":"2021-07-05T20:06:27.6671754Z","level":3}
{"log_message":"(I) ### syncer credentials bootstrap ### ","timestamp":"2021-07-05T20:06:27.6673713Z","level":3}
{"log_message":"(I) GET /health","timestamp":"2021-07-05T20:06:45.405312Z","level":3,"scope":{"request_id":"d2065c03-f0c6-41d2-9051-769df7cbc670"}}
{"log_message":"(I) GET /health completed (200: OK)","timestamp":"2021-07-05T20:06:45.4056794Z","level":3,"scope":{"request_id":"d2065c03-f0c6-41d2-9051-769df7cbc670"}}
{"log_message":"(I) GET /heartbeat","timestamp":"2021-07-05T20:06:55.2725209Z","level":3,"scope":{"request_id":"709dfbec-267f-4c22-9483-59f7c087d4e8"}}
...