portainer / k8s
How to deploy Portainer inside a Kubernetes environment.
License: MIT License
Hello,
I noticed that, with the manifests (or the Helm template) in this repository, the pods for the Portainer server mount a service account with cluster-admin access. However, if I understand correctly, all interactions that Portainer has with Kubernetes are done through the agent, not directly by the server. Is there a reason why the server's pods also need cluster admin access to run?
We need to update our manifests to use the Always image pull policy so that users can simply re-deploy the Portainer application and upgrade to the latest version.
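For reference, the relevant change in the deployment's container spec would look something like this (a sketch; the image name is taken from the chart's default values elsewhere on this page):

```yaml
containers:
  - name: portainer
    image: portainer/portainer-ce:latest
    imagePullPolicy: Always  # re-pull the image every time a pod starts
```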
Hi everyone,
I was deploying Portainer 2.0.0 on a Kubernetes (k3s) cluster and ran into an issue with persistent volume allocation after deploying it. I added some content to the YAML file to work around it. I'm extremely sorry that I don't have the errors/issues in detail, because I was working on a client's server and was not able to take information from it. I am pasting the content for reference.
Edited:
I was facing two issues, as mentioned below.
While debugging, I found that my environment wasn't ready for auto-provisioning, so I needed to add a persistent volume to the YAML file, which I attached.
Hello,
Could you please add a loadBalancerIP option to the Helm chart in k8s/charts/portainer/templates/service.yaml?
It already supports setting .service.type to LoadBalancer, but it would be nice to also be able to set the desired IP directly.
Something like this maybe:
{{- if and .Values.service.loadBalancerIP (eq .Values.service.type "LoadBalancer") }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
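On the values side, usage might then look like this (the IP below is a placeholder):

```yaml
service:
  type: LoadBalancer
  loadBalancerIP: 192.0.2.10  # placeholder address; must be valid for your cloud provider
```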
Hello,
The instruction section for ClusterIP in NOTES.txt is not complete. Here is the diff to apply:
diff --git a/charts/portainer/templates/NOTES.txt b/charts/portainer/templates/NOTES.txt
index 604843e..cd3a259 100644
--- a/charts/portainer/templates/NOTES.txt
+++ b/charts/portainer/templates/NOTES.txt
@@ -18,6 +18,7 @@
echo http://$SERVICE_IP:{{ .Values.service.httpsPort }}
{{- else if contains "ClusterIP" .Values.service.type }}
Get the application URL by running these commands:
- export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "portainer.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].me$ echo "Visit http://127.0.0.1:9443 to use your application"
+ export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "portainer.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
+ echo "Visit http://127.0.0.1:9443 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 9443:9443
{{- end }}
Guillaume,
Hello,
Is Portainer for K8s compatible with AKS deployed in a private network?
I deployed it using Helm with the load balancer option. I can connect to the cluster during initial setup, but I get a failure message "Unable to retrieve namespaces" each time I try to browse something from the portal.
I tried the same deployment (same version, etc.) on a public AKS cluster and it works fine, so I suspect the issue is related to the private cluster.
helm uninstall portainer
helm repo add portainer https://portainer.github.io/k8s/
helm repo update
kubectl create namespace portainer
helm install portainer portainer/portainer -n portainer -f portainer-values.yml --debug
Yields:
Error: cannot re-use a name that is still in use
helm.sh/helm/v3/pkg/action.(*Install).availableName
/home/circleci/helm.sh/helm/pkg/action/install.go:435
helm.sh/helm/v3/pkg/action.(*Install).Run
/home/circleci/helm.sh/helm/pkg/action/install.go:181
main.runInstall
/home/circleci/helm.sh/helm/cmd/helm/install.go:242
main.newInstallCmd.func2
/home/circleci/helm.sh/helm/cmd/helm/install.go:120
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:850
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/[email protected]/command.go:958
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:895
main.main
/home/circleci/helm.sh/helm/cmd/helm/helm.go:80
runtime.main
/usr/local/go/src/runtime/proc.go:204
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1374
It is currently not possible to use the Edge compute feature of Portainer when deploying Portainer exposed over node port (works fine when exposed via load balancer).
The following manifest is invalid: https://github.com/portainer/k8s/blob/master/deploy/manifests/portainer/portainer.yaml
I'll open a PR to fix this manifest, but we'll need to check the Helm template as well to make sure it is not impacted.
I got the following warning recently while deploying the agent/nodeport manifest:
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
We should probably replace the deprecated resource.
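A sketch of the replacement resource, using the stable rbac.authorization.k8s.io/v1 API (the binding name is an assumption; the service account name matches the one used elsewhere in these manifests):

```yaml
apiVersion: rbac.authorization.k8s.io/v1  # v1beta1 is removed in Kubernetes 1.22+
kind: ClusterRoleBinding
metadata:
  name: portainer  # hypothetical name for illustration
subjects:
  - kind: ServiceAccount
    namespace: portainer
    name: portainer-sa-clusteradmin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
```

The v1 schema is otherwise identical to v1beta1 for ClusterRoleBinding, so this should be a drop-in apiVersion bump.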
Hi there,
I am using the default helm installation
helm install --create-namespace -n portainer portainer portainer/portainer \
--set service.type=ClusterIP
But the deployment keeps crashing because the liveness check fails (connection refused to the local pod IP address at port 9000).
Any idea what's going on?
I am trying to set an ingress path as below.
- path: /?(.*)
  pathType: Prefix
  backend:
    service:
      name: health-ui-clusterip-srv
      port:
        number: 80
Portainer will not let me save the path /?(.*), but kubectl accepts it when I apply the same path from a YAML manifest.
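For comparison, a full manifest with this path that kubectl accepts might look like the sketch below. The path, pathType, and backend come from the snippet above; the Ingress name and the nginx regex/rewrite annotations are assumptions added for a self-contained example:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: health-ui  # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"       # required for regex paths
    nginx.ingress.kubernetes.io/rewrite-target: /$1     # example rewrite using the capture group
spec:
  rules:
    - http:
        paths:
          - path: /?(.*)
            pathType: Prefix
            backend:
              service:
                name: health-ui-clusterip-srv
                port:
                  number: 80
```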
Hello all,
I'm trying to deploy portainer to my EKS using terraform.
Here is my code:
resource "helm_release" "portainer" {
name = "portainer"
repository = "https://portainer.github.io/k8s/"
chart = "portainer"
namespace = "portainer"
lint = false
cleanup_on_fail = true
create_namespace = true
values = [yamlencode({
service = { type = "ClusterIP" }
ingress = {
enabled = true
annotations = { "kubernetes.io/ingress.class" = "nginx" }
hosts = [{ host = "portainer.${local.tfstate["route53"]["internal"]["name"]}" }]
}
persistence = { storageClass = "gp2" }
})]
}
Here is my error output:
helm_release.portainer: Creating...
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Ingress.spec.rules[0].http): missing required field "paths" in io.k8s.api.networking.v1beta1.HTTPIngressRuleValue
I've said "okay" and added "paths":
ingress = {
enabled = true
annotations = { "kubernetes.io/ingress.class" = "nginx" }
hosts = [{
host = "portainer.${local.tfstate["route53"]["internal"]["name"]}"
paths = ["/"]
}]
}
And now I see this error:
helm_release.portainer: Creating...
Error: template: portainer/templates/ingress.yaml:35:21: executing "portainer/templates/ingress.yaml" at <.path>: can't evaluate field path in type interface {}
What am I doing wrong?
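Judging from the template error (`can't evaluate field path`), the chart appears to iterate over each path entry as a map with a path key, not a plain string. A values sketch under that assumption (host is a placeholder):

```yaml
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - host: portainer.example.com  # placeholder host
      paths:
        - path: /  # a map with a "path" key, not the bare string "/"
```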
helm install portainer/portainer --set persistence.enabled=false
does the same as without --set persistence.enabled.
After checking the codebase, I found that persistence.enabled is not in the code but only documented, so it is actually not possible to install this Helm chart without persistence.
Either it should be implemented as documented, or the docs should not mention values that do not exist.
It's not possible to have helm auto-create the portainer namespace (given the helm secrets must be stored in the namespace), but we can include the namespace YAML in the automatically-generated manifests. This will slightly reduce the friction for new users deploying using manifest files.
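The namespace manifest itself is tiny; prepending something like this to the generated manifests is all that's needed:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: portainer
```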
We need to enhance this repository to support the deployment of Portainer EE.
Todo:
portainer/portainer-ee
The pvc template in the current helm chart uses the deprecated kubernetes volume annotation:
{{- if .Values.persistence.enabled -}}
{{- if not .Values.persistence.existingClaim -}}
---
kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
name: {{ template "portainer.fullname" . }}
namespace: {{ .Release.Namespace }}
annotations:
{{- if .Values.persistence.storageClass }}
volume.beta.kubernetes.io/storage-class: {{ .Values.persistence.storageClass | quote }}
{{- else }}
volume.alpha.kubernetes.io/storage-class: "generic"
{{- end }}
{{- if .Values.persistence.annotations }}
{{ toYaml .Values.persistence.annotations | indent 2 }}
{{ end }}
labels:
io.portainer.kubernetes.application.stack: portainer
{{- include "portainer.labels" . | nindent 4 }}
spec:
accessModes:
- {{ default "ReadWriteOnce" .Values.persistence.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.size | quote }}
{{- if .Values.persistence.selector }}
selector:
{{ toYaml .Values.persistence.selector | indent 4 }}
{{ end }}
{{- end }}
{{- end }}
According to the Kubernetes documentation - https://kubernetes.io/docs/concepts/storage/persistent-volumes/
A PV can have a class, which is specified by setting the storageClassName attribute to the name of a [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/). A PV of a particular class can only be bound to PVCs requesting that class. A PV with no storageClassName has no class and can only be bound to PVCs that request no particular class.
In the past, the annotation volume.beta.kubernetes.io/storage-class was used instead of the storageClassName attribute. This annotation is still working; however, it will become fully deprecated in a future Kubernetes release
I noticed this when it caused an issue with some newer software, for example Longhorn (longhorn/longhorn#6264), which only looks at spec.storageClassName instead of the annotation.
We should push a change/enhancement to remove the deprecated annotation and instead use spec.storageClassName.
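A sketch of what the changed template could look like, moving the value from metadata.annotations into the spec while keeping the chart's existing value names (whether to keep a fallback for an empty storageClass is a design choice left open here):

```yaml
spec:
  {{- if .Values.persistence.storageClass }}
  storageClassName: {{ .Values.persistence.storageClass | quote }}
  {{- end }}
  accessModes:
    - {{ default "ReadWriteOnce" .Values.persistence.accessMode | quote }}
  resources:
    requests:
      storage: {{ .Values.persistence.size | quote }}
```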
In the latest 1.12 version, there does not seem to be an option to create a Template from an existing Stack or Application. This is definitely a needed function. I have dozens of microservices, that need to be installed on dozens of sites. The ability to create the templates from existing applications / stacks would save so much work.
Hi,
I do have the following helm command:
NAMESPACE=portainer
DOMAIN=test.com
helm upgrade portainer -n ${NAMESPACE} portainer/portainer \
--install \
--create-namespace \
--set service.type=ClusterIP \
--set ingress.enabled=true \
--set ingress.annotations."kubernetes\.io/ingress\.class"=nginx \
--set ingress.annotations."cert-manager\.io/cluster-issuer"=letsencrypt-staging \
--set ingress.hosts.host=portainer.${DOMAIN} \
--set ingress.tls[0].secretName=portainer.internal.${DOMAIN}-tls \
--set ingress.tls[0].hosts[0]=portainer.internal.${DOMAIN}
but the hosts and tls parts are causing problems; I get this error:
coalesce.go:202: warning: destination for hosts is a table. Ignoring non-table value [map[host:<nil> paths:[]]]
Error: template: portainer/templates/ingress.yaml:31:15: executing "portainer/templates/ingress.yaml" at <.host>: can't evaluate field host in type interface {}
Can anybody give me a clue what the correct syntax is for the hosts and tls part?
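The warning suggests the chart expects hosts to be a list of maps, which is awkward to express with --set. One way to sidestep the quoting and indexing pitfalls is a small values file; a sketch using the NAMESPACE and DOMAIN values from the command above:

```yaml
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-staging
  hosts:
    - host: portainer.test.com  # list of maps, each with a "host" key
      paths:
        - path: /
  tls:
    - secretName: portainer.internal.test.com-tls
      hosts:
        - portainer.internal.test.com
```

Installed with `helm upgrade portainer -n portainer portainer/portainer --install -f values.yaml`.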
A helm install on a cluster with more than one default storage class produces an error - the following fix corrects it.
persistence:
  size: "2Gi"
  annotations: {}
  storageClass: ""  # this value should be documented/included in values.yaml
When setting up portainer on my k3s cluster using the provided command kubectl apply -n portainer -f https://downloads.portainer.io/ce2-16/portainer-lb.yaml
I get an error saying: Error from server (BadRequest): error when creating "portainer/portainer-account.yml": ClusterRoleBinding in version "v1" cannot be handled as a ClusterRoleBinding: json: cannot unmarshal object into Go struct field ClusterRoleBinding.subjects of type []v1.Subject
I found that the subjects field in the ClusterRoleBinding was set as listed below.
subjects:
kind: ServiceAccount
namespace: portainer
name: portainer-sa-clusteradmin
The issue is that these entries must be set like this instead.
subjects:
- kind: ServiceAccount
namespace: portainer
name: portainer-sa-clusteradmin
After changing it I was able to start up Portainer and access my cluster environment. The repo looks to be correct, so I'm guessing the files hosted on downloads.portainer.io are out of date. I did notice some whitespace issues and other formatting problems, so I'll submit a PR to fix those. Hopefully that triggers a new build to be deployed to the download server.
I was following the documentation here: https://docs.portainer.io/advanced/ssl.
The secret was created without any errors thus confirming that the public key and private key matched.
microk8s.kubectl create secret tls portainer-tls-secret -n portainer --cert=/etc/ssl/certs/Bundle.pem --key=/etc/ssl/private/PrivateKey-Unprotected.pem
secret/portainer-tls-secret created
However, Portainer seemed to ignore this when I fired it up, until I changed --existingSecret=portainer-tls-secret to tls.existingSecret=portainer-tls-secret.
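For the record, the setting that worked, expressed as Helm values rather than a --set flag (grounded only in the flag above; other tls.* keys are not shown):

```yaml
tls:
  existingSecret: portainer-tls-secret  # must exist in the portainer namespace
```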
Now I'm getting:
2023/06/09 11:30PM INF main.go:568 > encryption key file not present | filename=portainer
2023/06/09 11:30PM INF main.go:602 > proceeding without encryption key |
in the Pod's logs.
My understanding from this is that Go only allows one to pass an unencrypted private key. So it would seem that I'm doing the correct thing, but Portainer's not a happy bunny about it.
Any idea what I'm doing wrong?
The CI tests currently run against the current default version of KinD (1.18.2). We should ideally test the chart against the last few major Kubernetes releases, (1.19 and 1.20 in this case), as well as run some kube-scoring against the generated manifests.
If I create a new namespace, but do not set any resource limits, I can save it. However, if I later open the namespace to change something, I am unable to save any changes to the namespace.
A workaround is to set Allow Resource Over-commit in the cluster setup form.
If I try to set a limit in the namespace on edit, I receive an error on save saying resource-quotas not found.
The Portainer namespace is missing from the manifests. I am fixing that.
I am receiving several errors with my Portainer instance. I've followed the instructions available on https://www.portainer.io/installation/
I am using Helm 3 on K8s v1.17.9-gke.1504. The service is exposed using Ingress.
This happens when I browse https://portainer.demos.clvr.cloud/#!/1/kubernetes/dashboard
$ helm list -n portainer
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
portainer portainer 2 2020-10-16 00:17:57.040719 +1300 NZDT deployed portainer-1.0.3 2.0.0
Values file
image:
pullPolicy: IfNotPresent
repository: portainer/portainer-ce
tag: latest
imagePullSecrets: []
ingress:
annotations:
certmanager.k8s.io/acme-challenge-type: http01
certmanager.k8s.io/cluster-issuer: xxxxx
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/auth-signin: xxxxxxxxxxxxxx
nginx.ingress.kubernetes.io/auth-url: xxxxxxxxxxx
nginx.ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
enabled: true
hosts:
- host: xxxxxxxxxx
paths:
- path: /
port: 9000
tls:
- hosts:
- xxxxxxxxxxxx
secretName: xxxxxxxxx-tls
persistence:
annotations: {}
size: 10Gi
replicaCount: 1
resources: {}
service:
annotations: {}
edgeNodePort: 30776
edgePort: 8000
httpNodePort: 30777
httpPort: 9000
type: ClusterIP
serviceAccount:
annotations: {}
name: portainer-sa-clusteradmin
Portainer Pod Log:
2020-10-15T11:21:38.314989155Z 2020/10/15 11:21:38 server: Reverse tunnelling enabled
2020-10-15T11:21:38.315053807Z 2020/10/15 11:21:38 server: Fingerprint 23:d5:ac:6e:2d:bd:65:0b:5e:45:24:5b:ee:6f:36:06
2020-10-15T11:21:38.315066086Z 2020/10/15 11:21:38 server: Listening on 0.0.0.0:8000...
2020-10-15T11:21:38.319101178Z 2020/10/15 11:21:38 Starting Portainer 2.0.0 on :9000
2020-10-15T11:21:38.319580475Z 2020/10/15 11:21:38 [DEBUG] [chisel, monitoring] [check_interval_seconds: 10.000000] [message: starting tunnel management process]
2020-10-15T11:21:53.802212793Z 2020/10/15 11:21:53 http error: Invalid JWT token (err=Invalid JWT token) (code=401)
Google Chrome DevTools Network:
GET https://portainer.demos.clvr.cloud/api/settings
401
{"message":"Invalid JWT token","details":"Invalid JWT token"}
[...]
Some queries to /namespaces, /tags and /status. No issues.
[...]
GET https://portainer.demos.clvr.cloud/api/endpoints/1/kubernetes/api/v1/namespaces/adm/resourcequotas/portainer-rq-adm
404
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "resourcequotas \"portainer-rq-cert-manager\" not found",
"reason": "NotFound",
"details": {
"name": "portainer-rq-cert-manager",
"kind": "resourcequotas"
},
"code": 404
}
[..]
I am trying to install portainer into a kubernetes cluster, using the instructions found on your website. But when I install the helm chart, the portainer pod remains in a Creating state.
If I describe the pod, I get:
Events:
Type Reason Age From Message
Warning FailedScheduling 3m39s (x3 over 3m43s) default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 3m34s default-scheduler Successfully assigned portainer/portainer-7b6c599757-9jltq to k8snode0
Warning FailedMount 94s kubelet MountVolume.MountDevice failed for volume "pvc-4c67a455-953e-439d-a8a2-049937d24b27" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
Warning FailedMount 94s kubelet MountVolume.MountDevice failed for volume "pvc-4c67a455-953e-439d-a8a2-049937d24b27" : rpc error: code = Aborted desc = operation locked due to in progress operation(s): ["volume_id_pvc-4c67a455-953e-439d-a8a2-049937d24b27"]
Warning FailedMount 91s kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data kube-api-access-c78bm]: timed out waiting for the condition
I have checked on the NAS, and the volume has been created. But your container seems to have an issue using it.
Here are my values yaml contents.
service:
  type: ClusterIP
ingress:
  enabled: true
  certManager: true
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
  hosts:
    - host: "portainer.*********.com"
      paths:
        - path: /
  tls:
    - hosts:
        - portainer.*********.com
      secretName: portainer-tls
persistence:
  size: "10Gi"
  storageclass: "freenas-iscsi-csi"
I just tried (several times) to deploy portainer via helm on my Kubernetes cluster. After starting it I don't have the possibility to create an administrator account. Database file is created in the PV. The log of the portainer pod is almost completely empty. I am attaching all kinds of information that could help finding the cause of the error to this issue.
Many thanks in advance
Note: the timestamps may be confusing because my MacBook is configured with a different time zone than the server.
service:
type: ClusterIP
ingress:
enabled: true
certManager: true
annotations:
cert-manager.io/cluster-issuer: letsencrypt
kubernetes.io/ingress.class: nginx
hosts:
- host: ** domain removed **
paths:
- path: /
tls:
- hosts:
- ** domain removed **
secretName: portainer-tls
# Volume on Nfs Server
persistence:
size: "25Gi"
2020-10-06T21:10:20.687275304Z 2020/10/06 21:10:20 Starting Portainer 2.0.0 on :9000
2020-10-06T21:10:20.687363185Z 2020/10/06 21:10:20 server: Reverse tunnelling enabled
2020-10-06T21:10:20.687411390Z 2020/10/06 21:10:20 server: Fingerprint 9f:09:27:51:0c:02:f1:68:f8:5d:23:f7:43:bc:58:cd
2020-10-06T21:10:20.687419634Z 2020/10/06 21:10:20 server: Listening on 0.0.0.0:8000...
2020-10-06T21:10:20.688052006Z 2020/10/06 21:10:20 [DEBUG] [chisel, monitoring] [check_interval_seconds: 10.000000] [message: starting tunnel management process]
drwxrwxrwx 5 root root 4096 Oct 6 23:00 .
drwxrwxrwx 10 nobody nobody 4096 Oct 6 22:59 ..
drwx------ 2 root root 4096 Oct 6 23:00 bin
drwx------ 2 root root 4096 Oct 6 23:00 compose
-rw-r--r-- 1 root root 389 Oct 6 23:10 config.json
-rw------- 1 root root 65536 Oct 6 23:10 portainer.db
-rw------- 1 root root 227 Oct 6 23:00 portainer.key
-rw------- 1 root root 190 Oct 6 23:00 portainer.pub
drwx------ 2 root root 4096 Oct 6 23:00 tls
I have created a namespace for our APIs and I am trying to configure ingress to use multiple paths. These paths require the rewrite-target and use-regex annotations to work.
I have tried to enter them, both on a new namespace, and by updating an existing namespace, but when I save them, they do not appear to be persisted, and the nginx controller does not seem to indicate it is using them.
In addition, the paths do not seem to be working, a further indication the annotations are not doing anything.
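For reference, these are the two annotations in raw manifest form (the rewrite target value here is an example; the right capture group depends on the path patterns in use):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1  # example value
```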
Hi there.
I've a feature request.
As said in the documentation, we have the possibility to skip the initial setup for the users section, but this only works for a plain Docker installation.
It would be cool if there were a way to pass initial user data as values on helm install. Also, it would be cool to pass the initial endpoint as configuration (or maybe automatically detect the local Kubernetes environment on startup, but that feature is more related to another repository).
Hey there,
as we're deploying Portainer to AKS using Helm we want to add annotations to the service definition. These are currently not available in the corresponding Helm template. I'm preparing a PR to add the field.