k8s's Introduction

Portainer Community Edition is a lightweight service delivery platform for containerized applications that can be used to manage Docker, Swarm, Kubernetes and ACI environments. It is designed to be as simple to deploy as it is to use. The application allows you to manage all your orchestrator resources (containers, images, volumes, networks and more) through a ‘smart’ GUI and/or an extensive API.

Portainer consists of a single container that can run on any cluster. It can be deployed as a Linux container or a Windows native container.

Portainer Business Edition builds on the open-source base and includes a range of advanced features and functions (like RBAC and Support) that are specific to the needs of business users.

Latest Version

Portainer CE is updated regularly. We aim to do an update release every couple of months.

Getting started
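One minimal way to try Portainer CE on Kubernetes, using the load-balancer manifest referenced in the issues further down this page (adjust the namespace and manifest version to your environment):

kubectl create namespace portainer
kubectl apply -n portainer -f https://downloads.portainer.io/ce2-16/portainer-lb.yaml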

Features & Functions

See this table to compare Portainer CE functionality with Portainer Business.

Getting help

Portainer CE is an open source project and is supported by the community. You can buy a supported version of Portainer at portainer.io.

Learn more about Portainer's community support channels here.

You can join the Portainer Community by visiting https://www.portainer.io/join-our-community. This gives you advance notice of events and other Portainer-related content.

Reporting bugs and contributing

  • Want to report a bug or request a feature? Please open an issue.
  • Want to help us build Portainer? Follow our contribution guidelines to build it locally and make a pull request.

Security

Work for us

If you are a developer and our code in this repo makes sense to you, we would love to hear from you. We are always on the hunt for awesome devs, either freelance or employed. Drop us a line at [email protected] with your details and/or visit our careers page.

Privacy

To make sure we focus our development effort in the right places we need to know which features get used most often. To give us this information we use Matomo Analytics, which is hosted in Germany and is fully GDPR compliant.

When Portainer first starts, you are given the option to DISABLE analytics. If you don't disable it, we collect anonymous usage data as per our privacy policy. Please note that no personally identifiable information is sent or stored at any time, and we only use the data to help us improve Portainer.

Limitations

Portainer supports "Current - 2 docker versions only. Prior versions may operate, however these are not supported.

Licensing

Portainer is licensed under the zlib license. See LICENSE for reference.

Portainer also contains code from open source projects. See ATTRIBUTIONS.md for a list.

k8s's Issues

No Access: Failure, Unable to verify administrator account existence

Issue

I just tried (several times) to deploy Portainer via Helm on my Kubernetes cluster. After it starts, I have no way to create an administrator account. The database file is created in the PV. The log of the Portainer pod is almost completely empty. I am attaching all the information that could help find the cause of the error to this issue.

Many thanks in advance

Note: the timestamps may be confusing because my MacBook is configured with a different time zone than the server.

Helm values file (portainer.yaml in my case)

service:
  type: ClusterIP

ingress:
  enabled: true
  certManager: true
  annotations: 
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
  hosts:
    - host: ** domain removed **
      paths:
        - path: /
  tls:
    - hosts:
        - ** domain removed **
      secretName: portainer-tls

# Volume on NFS server
persistence:
  size: "25Gi"

Result of Helm Installation:

[screenshot: Helm installation output]

Portainer WebApp

[screenshot: Portainer web app]

Portainer Pod Log

2020-10-06T21:10:20.687275304Z 2020/10/06 21:10:20 Starting Portainer 2.0.0 on :9000
2020-10-06T21:10:20.687363185Z 2020/10/06 21:10:20 server: Reverse tunnelling enabled
2020-10-06T21:10:20.687411390Z 2020/10/06 21:10:20 server: Fingerprint 9f:09:27:51:0c:02:f1:68:f8:5d:23:f7:43:bc:58:cd
2020-10-06T21:10:20.687419634Z 2020/10/06 21:10:20 server: Listening on 0.0.0.0:8000...
2020-10-06T21:10:20.688052006Z 2020/10/06 21:10:20 [DEBUG] [chisel, monitoring] [check_interval_seconds: 10.000000] [message: starting tunnel management process]

PV Content

drwxrwxrwx  5 root   root    4096 Oct  6 23:00 .
drwxrwxrwx 10 nobody nobody  4096 Oct  6 22:59 ..
drwx------  2 root   root    4096 Oct  6 23:00 bin
drwx------  2 root   root    4096 Oct  6 23:00 compose
-rw-r--r--  1 root   root     389 Oct  6 23:10 config.json
-rw-------  1 root   root   65536 Oct  6 23:10 portainer.db
-rw-------  1 root   root     227 Oct  6 23:00 portainer.key
-rw-------  1 root   root     190 Oct  6 23:00 portainer.pub
drwx------  2 root   root    4096 Oct  6 23:00 tls

PVC Template Uses Deprecated Volume Annotation For storageClass

The pvc template in the current helm chart uses the deprecated kubernetes volume annotation:

{{- if .Values.persistence.enabled -}}
{{- if not .Values.persistence.existingClaim -}}
---
kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
  name: {{ template "portainer.fullname" . }}
  namespace: {{ .Release.Namespace }}
  annotations:
  {{- if .Values.persistence.storageClass }}
    volume.beta.kubernetes.io/storage-class: {{ .Values.persistence.storageClass | quote }}
  {{- else }}
    volume.alpha.kubernetes.io/storage-class: "generic"
  {{- end }}
  {{- if .Values.persistence.annotations }}
  {{ toYaml .Values.persistence.annotations | indent 2 }}  
  {{ end }}
  labels:
    io.portainer.kubernetes.application.stack: portainer
    {{- include "portainer.labels" . | nindent 4 }}
spec:
  accessModes:
    - {{ default "ReadWriteOnce" .Values.persistence.accessMode | quote }}
  resources:
    requests:
      storage: {{ .Values.persistence.size | quote }}
  {{- if .Values.persistence.selector }}
  selector:
{{ toYaml .Values.persistence.selector | indent 4 }}
  {{ end }}
{{- end }}
{{- end }}

According to the Kubernetes documentation - https://kubernetes.io/docs/concepts/storage/persistent-volumes/:

A PV can have a class, which is specified by setting the storageClassName attribute to the name of a [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/). A PV of a particular class can only be bound to PVCs requesting that class. A PV with no storageClassName has no class and can only be bound to PVCs that request no particular class.

In the past, the annotation volume.beta.kubernetes.io/storage-class was used instead of the storageClassName attribute. This annotation is still working; however, it will become fully deprecated in a future Kubernetes release.

I noticed this when it caused an issue with some newer software (for example Longhorn: longhorn/longhorn#6264) that only looks for spec.storageClassName instead of the annotations.

We should push a change/enhancement to remove the deprecated annotation and use spec.storageClassName instead, as sketched below.
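A minimal sketch of the corrected spec section, reusing the chart's existing .Values.persistence.storageClass value and dropping the deprecated annotations:

spec:
  {{- if .Values.persistence.storageClass }}
  storageClassName: {{ .Values.persistence.storageClass | quote }}
  {{- end }}
  accessModes:
    - {{ default "ReadWriteOnce" .Values.persistence.accessMode | quote }}
  resources:
    requests:
      storage: {{ .Values.persistence.size | quote }}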

Unable to set an ingress path based on the root path.

I am trying to set an ingress path as below.

- path: /?(.*)
  pathType: Prefix
  backend:
    service:
      name: health-ui-clusterip-srv
      port:
        number: 80

Portainer will not let me save the path /?(.*), but it saves fine if I apply the YAML with kubectl.

Initial setup with config maps or secrets

Hi there.

I've a feature request.

As stated in the documentation, it is possible to skip the initial user-setup section, but this only works for a plain Docker installation.

It would be great if initial user data could be passed as values on helm install. It would also be useful to pass the initial endpoint as configuration (or perhaps automatically detect the local Kubernetes environment on startup, though that feature is more related to another repository). A hypothetical sketch follows.
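As an illustration only, the requested feature might look like this in values form (none of these initialAdmin keys exist in the chart today; they are hypothetical):

# hypothetical values -- not currently supported by the chart
initialAdmin:
  username: admin
  # name of an existing Secret holding the initial password
  existingSecret: portainer-admin-password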

Make service annotations available in Helm Chart

Hey there,

As we're deploying Portainer to AKS using Helm, we want to add annotations to the Service definition. These are currently not available in the corresponding Helm template. I'm preparing a PR to add the field; a sketch of the change is below.
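A sketch of the template change, assuming a new .Values.service.annotations map in the chart's values:

metadata:
  name: {{ template "portainer.fullname" . }}
  {{- with .Values.service.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}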

Beef up CI testing

The CI tests currently run against the current default version of KinD (1.18.2). We should ideally test the chart against the last few Kubernetes releases (1.19 and 1.20 in this case), as well as run kube-score against the generated manifests.

Attempting to fire up Portainer using Helm chart on Microk8s 1.27 using TLS Cert & key fails

I was following the documentation here: https://docs.portainer.io/advanced/ssl.

The secret was created without any errors thus confirming that the public key and private key matched.

microk8s.kubectl create secret tls portainer-tls-secret -n portainer  --cert=/etc/ssl/certs/Bundle.pem  --key=/etc/ssl/private/PrivateKey-Unprotected.pem

secret/portainer-tls-secret created

However, Portainer seemed to ignore this when I fired it up, until I changed --existingSecret=portainer-tls-secret to tls.existingSecret=portainer-tls-secret.
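For reference, the invocation that finally picked the secret up looked roughly like this (a sketch; the value name is taken from the flag change above):

helm upgrade --install portainer portainer/portainer -n portainer \
  --set tls.existingSecret=portainer-tls-secret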

Now I'm getting :

2023/06/09 11:30PM INF main.go:568 > encryption key file not present | filename=portainer
2023/06/09 11:30PM INF main.go:602 > proceeding without encryption key |

in the Pod's logs.

My understanding is that Go only allows one to pass an unencrypted private key, so it would seem that I'm doing the correct thing, but Portainer's not a happy bunny about it.

Any idea what I'm doing wrong?

Support of "private" Azure Kubernetes Services

Hello,

Is Portainer for K8s compatible with AKS deployed in a private network?

I deployed it using Helm with the load-balancer option. I can connect to the cluster during the initial setup, but I get a failure message "Unable to retrieve namespaces" every time I try to browse anything from the portal.

I tried the same deployment (same version, etc.) on a public AKS cluster and it works fine, so I suspect the issue is related to the private cluster.

Portainer server ServiceAccount

Hello,

I noticed that, with the manifests (or the Helm template) in this repository, the pods for the Portainer server mount a service account with cluster-admin access. However, if I understand correctly, all interactions that Portainer has with Kubernetes are done through the agent, not directly by the server. Is there a reason why the server's pods also need cluster admin access to run?

PV issue in version 2.0.0

Hi Everyone,
I was deploying Portainer 2.0.0 on a Kubernetes (k3s) cluster and ran into an issue with persistent volume allocation after deploying it. I added some content to the YAML file to work around it. I'm sorry I don't have the errors/issues in detail, because I was working on a client's server and was not able to take information from it. I am pasting the content for reference.

[screenshot: YAML content]

Edited:
I was facing two issues as mentioned below.

  1. I deployed Portainer on my Kubernetes cluster and am using the API for my own UI. Whenever Portainer or the server is restarted, the endpoint ID changes, and because of that I need to change my API routes. (For this I used the official YAML file and the issue was solved, but using the official YAML file made Portainer inaccessible; that is issue 2.)

  2. While debugging I found that my environment wasn't ready for auto-provisioning, so I needed to add a persistent volume to the YAML file, which I attached.

helm value persistence.enabled is only in docs, not in the code

helm install portainer/portainer --set persistence.enabled=false does the same as omitting --set persistence.enabled.
After checking the codebase, I found that persistence.enabled is not in the code but only in the documentation, so it is actually not possible to install this Helm chart without persistence.

Either it should be implemented as documented, or the docs should not mention values that do not exist. A sketch of what the implementation might look like is below.
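One way to implement it in the deployment template (an assumed structure, not the chart's actual code) would be to make the data volume conditional:

      volumes:
        - name: data
          {{- if .Values.persistence.enabled }}
          persistentVolumeClaim:
            claimName: {{ template "portainer.fullname" . }}
          {{- else }}
          emptyDir: {}
          {{- end }}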

Error: cannot re-use a name that is still in use

helm uninstall portainer

helm repo add portainer https://portainer.github.io/k8s/
helm repo update

kubectl create namespace portainer
helm install portainer portainer/portainer -n portainer -f portainer-values.yml --debug

Yields:
Error: cannot re-use a name that is still in use
helm.sh/helm/v3/pkg/action.(*Install).availableName
/home/circleci/helm.sh/helm/pkg/action/install.go:435
helm.sh/helm/v3/pkg/action.(*Install).Run
/home/circleci/helm.sh/helm/pkg/action/install.go:181
main.runInstall
/home/circleci/helm.sh/helm/cmd/helm/install.go:242
main.newInstallCmd.func2
/home/circleci/helm.sh/helm/cmd/helm/install.go:120
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:850
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/[email protected]/command.go:958
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:895
main.main
/home/circleci/helm.sh/helm/cmd/helm/helm.go:80
runtime.main
/usr/local/go/src/runtime/proc.go:204
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1374

Use of deprecated ClusterRoleBinding

I got the following warning recently while deploying the agent/nodeport manifest:

Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding

We should probably replace the deprecated resource; a sketch of the v1 version is below.
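A sketch of the updated resource, reusing the cluster-admin binding and service account name seen elsewhere in these manifests (the metadata name here is assumed for illustration):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: portainer-crb-clusteradmin   # assumed name
subjects:
  - kind: ServiceAccount
    namespace: portainer
    name: portainer-sa-clusteradmin
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io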

Include namespace in manifests

It's not possible to have Helm auto-create the portainer namespace (given that the Helm release secrets must be stored in the namespace), but we can include the namespace YAML in the automatically generated manifests. This will slightly reduce the friction for new users deploying with manifest files.
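The manifest addition itself is small:

apiVersion: v1
kind: Namespace
metadata:
  name: portainer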

Problem with hosts and tls in Helm chart

Hi,

I do have the following helm command:

NAMESPACE=portainer
DOMAIN=test.com

helm upgrade portainer -n ${NAMESPACE} portainer/portainer \
    --install \
    --create-namespace \
    --set service.type=ClusterIP \
    --set ingress.enabled=true \
    --set ingress.annotations."kubernetes\.io/ingress\.class"=nginx \
    --set ingress.annotations."cert-manager\.io/cluster-issuer"=letsencrypt-staging \
    --set ingress.hosts.host=portainer.${DOMAIN} \
    --set ingress.tls[0].secretName=portainer.internal.${DOMAIN}-tls \
    --set ingress.tls[0].hosts[0]=portainer.internal.${DOMAIN}

but the hosts and tls parts are causing problems; I get this error:

coalesce.go:202: warning: destination for hosts is a table. Ignoring non-table value [map[host:<nil> paths:[]]]
Error: template: portainer/templates/ingress.yaml:31:15: executing "portainer/templates/ingress.yaml" at <.host>: can't evaluate field host in type interface {}

Can anybody give me a clue what the correct syntax is for the hosts and tls parts?
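Judging from the values files shown further down this page, ingress.hosts is a list of maps with host and paths keys, so the likely-correct flags (a sketch, not verified against every chart version) are:

    --set ingress.hosts[0].host=portainer.${DOMAIN} \
    --set ingress.hosts[0].paths[0].path=/ \
    --set ingress.tls[0].secretName=portainer.internal.${DOMAIN}-tls \
    --set ingress.tls[0].hosts[0]=portainer.internal.${DOMAIN}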

Unable to save a namespace unless Allow Resource Over-commit is selected in Cluster Setup.

If I create a new namespace, but do not set any resource limits, I can save it. However, if I later open the namespace to change something, I am unable to save any changes to the namespace.

As a workaround, I must set Allow Resource Over-commit in the cluster setup form.

If I try to set a limit when editing the namespace, I receive an error on save saying resource-quotas not found.

Issue with PVC not ready

I am trying to install Portainer into a Kubernetes cluster using the instructions found on your website, but when I install the Helm chart, the Portainer pod remains in a Creating state.

If I describe the pod, I get:

Events:
Type     Reason             Age                    From                Message
----     ------             ----                   ----                -------
Warning FailedScheduling 3m39s (x3 over 3m43s) default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 3m34s default-scheduler Successfully assigned portainer/portainer-7b6c599757-9jltq to k8snode0
Warning FailedMount 94s kubelet MountVolume.MountDevice failed for volume "pvc-4c67a455-953e-439d-a8a2-049937d24b27" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
Warning FailedMount 94s kubelet MountVolume.MountDevice failed for volume "pvc-4c67a455-953e-439d-a8a2-049937d24b27" : rpc error: code = Aborted desc = operation locked due to in progress operation(s): ["volume_id_pvc-4c67a455-953e-439d-a8a2-049937d24b27"]
Warning FailedMount 91s kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data kube-api-access-c78bm]: timed out waiting for the condition

I have checked on the NAS and the volume has been created, but your container seems to have an issue using it.

Here are my values YAML contents:

service:
  type: ClusterIP
ingress:
  enabled: true
  certManager: true
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
  hosts:
    - host: "portainer.*********.com"
      paths:
        - path: /
  tls:
    - hosts:
        - portainer.*********.com
      secretName: portainer-tls
persistence:
  size: "10Gi"
  storageclass: "freenas-iscsi-csi"
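Standard kubectl checks for a stuck claim like this would be (the claim name portainer is assumed from the chart's fullname):

kubectl -n portainer get pvc
kubectl -n portainer describe pvc portainer
kubectl -n portainer get events --sort-by=.lastTimestamp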

Deployment with Nginx Ingress failed

Hello all,
I'm trying to deploy Portainer to my EKS cluster using Terraform.

Here is my code:

resource "helm_release" "portainer" {
  name             = "portainer"
  repository       = "https://portainer.github.io/k8s/"
  chart            = "portainer"
  namespace        = "portainer"
  lint             = false
  cleanup_on_fail  = true
  create_namespace = true

  values = [yamlencode({
    service = { type = "ClusterIP" }

    ingress = {
      enabled     = true
      annotations = { "kubernetes.io/ingress.class" = "nginx" }
      hosts       = [{ host = "portainer.${local.tfstate["route53"]["internal"]["name"]}" }]
    }

    persistence = { storageClass = "gp2" }
  })]
}

Here is my error output:

helm_release.portainer: Creating...

Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Ingress.spec.rules[0].http): missing required field "paths" in io.k8s.api.networking.v1beta1.HTTPIngressRuleValue

I said "okay" and added "paths":

    ingress = {
      enabled     = true
      annotations = { "kubernetes.io/ingress.class" = "nginx" }
      hosts = [{
        host  = "portainer.${local.tfstate["route53"]["internal"]["name"]}"
        paths = ["/"]
      }]
    }

And now I see this error:

helm_release.portainer: Creating...

Error: template: portainer/templates/ingress.yaml:35:21: executing "portainer/templates/ingress.yaml" at <.path>: can't evaluate field path in type interface {}

What am I doing wrong? 😐
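Judging from the template error and the YAML values used elsewhere on this page, each paths entry needs to be a map with a path key rather than a plain string, so a likely fix (a sketch) is:

    ingress = {
      enabled     = true
      annotations = { "kubernetes.io/ingress.class" = "nginx" }
      hosts = [{
        host  = "portainer.${local.tfstate["route53"]["internal"]["name"]}"
        paths = [{ path = "/" }]
      }]
    }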

Portainer on GKE - Errors querying Portainer API

I am receiving several errors with my Portainer instance. I've followed the instructions available on https://www.portainer.io/installation/

I am using Helm 3 on K8s v1.17.9-gke.1504. The service is exposed using Ingress.

This happens when I browse https://portainer.demos.clvr.cloud/#!/1/kubernetes/dashboard

[screenshot: Kubernetes dashboard errors]

❯ helm list  -n portainer
NAME     	NAMESPACE	REVISION	UPDATED                              	STATUS  	CHART          	APP VERSION
portainer	portainer	2       	2020-10-16 00:17:57.040719 +1300 NZDT	deployed	portainer-1.0.3	2.0.0

Values file

image:
  pullPolicy: IfNotPresent
  repository: portainer/portainer-ce
  tag: latest
imagePullSecrets: []
ingress:
  annotations:
    certmanager.k8s.io/acme-challenge-type: http01
    certmanager.k8s.io/cluster-issuer: xxxxx
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/auth-signin: xxxxxxxxxxxxxx
    nginx.ingress.kubernetes.io/auth-url: xxxxxxxxxxx
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  enabled: true
  hosts:
  - host: xxxxxxxxxx
    paths:
    - path: /
      port: 9000
  tls:
  - hosts:
    - xxxxxxxxxxxx
    secretName: xxxxxxxxx-tls
persistence:
  annotations: {}
  size: 10Gi
replicaCount: 1
resources: {}
service:
  annotations: {}
  edgeNodePort: 30776
  edgePort: 8000
  httpNodePort: 30777
  httpPort: 9000
  type: ClusterIP
serviceAccount:
  annotations: {}
  name: portainer-sa-clusteradmin

Portainer Pod Log:

2020-10-15T11:21:38.314989155Z 2020/10/15 11:21:38 server: Reverse tunnelling enabled
2020-10-15T11:21:38.315053807Z 2020/10/15 11:21:38 server: Fingerprint 23:d5:ac:6e:2d:bd:65:0b:5e:45:24:5b:ee:6f:36:06
2020-10-15T11:21:38.315066086Z 2020/10/15 11:21:38 server: Listening on 0.0.0.0:8000...
2020-10-15T11:21:38.319101178Z 2020/10/15 11:21:38 Starting Portainer 2.0.0 on :9000
2020-10-15T11:21:38.319580475Z 2020/10/15 11:21:38 [DEBUG] [chisel, monitoring] [check_interval_seconds: 10.000000] [message: starting tunnel management process]
2020-10-15T11:21:53.802212793Z 2020/10/15 11:21:53 http error: Invalid JWT token (err=Invalid JWT token) (code=401)

Google Chrome DevTools Network:

GET https://portainer.demos.clvr.cloud/api/settings
401
{"message":"Invalid JWT token","details":"Invalid JWT token"}
[...]
Some queries to /namespaces, /tags and /status: no issues.
[...]
GET https://portainer.demos.clvr.cloud/api/endpoints/1/kubernetes/api/v1/namespaces/adm/resourcequotas/portainer-rq-adm
404
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "resourcequotas \"portainer-rq-cert-manager\" not found",
  "reason": "NotFound",
  "details": {
    "name": "portainer-rq-cert-manager",
    "kind": "resourcequotas"
  },
  "code": 404
}
[..]

Support Portainer EE deployment

We need to enhance this repository to support the deployment of Portainer EE.

Todo:

  • Updated chart to support portainer EE deployment via portainer/portainer-ee
  • New manifests to deploy Portainer EE
  • Updated instructions with a section specific to Portainer EE
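Until first-class support lands, a plausible but untested interim sketch using the existing chart values (the image.* keys appear in the values file later on this page) might be the following; note that Portainer EE also requires a license, which this sketch does not handle:

helm install portainer portainer/portainer -n portainer \
  --set image.repository=portainer/portainer-ee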

Edit and redeploy external application

I created an application, and I can edit and redeploy applications created by me. But if an application has the external tag, I can't edit or redeploy it. Why not? How do I edit an external application (one with the external tag)?
[screenshot]

portainer-lb.yaml - cannot unmarshal object into Go struct field ClusterRoleBinding.subjects of type []v1.Subject

When setting up Portainer on my k3s cluster using the provided command kubectl apply -n portainer -f https://downloads.portainer.io/ce2-16/portainer-lb.yaml, I get an error: Error from server (BadRequest): error when creating "portainer/portainer-account.yml": ClusterRoleBinding in version "v1" cannot be handled as a ClusterRoleBinding: json: cannot unmarshal object into Go struct field ClusterRoleBinding.subjects of type []v1.Subject. I found that subjects in the ClusterRoleBinding was set as listed below.

subjects:
  kind: ServiceAccount
  namespace: portainer
  name: portainer-sa-clusteradmin

The issue is that these entries must be set as a list, like this instead:

subjects:
- kind: ServiceAccount
  namespace: portainer
  name: portainer-sa-clusteradmin

After changing it I was able to start up Portainer and access my cluster environment. The repo looks to be correct, so I'm guessing that the files hosted on downloads.portainer.io are out of date. I did notice some whitespace issues and other formatting problems, so I'll submit a PR to fix those. Hopefully that triggers a new build to be deployed to the download server.

Create Template from Stack.

In the latest 1.12 version, there does not seem to be an option to create a template from an existing stack or application. This is definitely a needed function: I have dozens of microservices that need to be installed on dozens of sites, and the ability to create templates from existing applications/stacks would save so much work.

Ingress Annotations not saved or applied

I have created a namespace for our APIs and I am trying to configure ingress to use multiple paths. These paths require the rewrite-target and use-regex annotations to work.

I have tried to enter them, both on a new namespace and by updating an existing namespace, but when I save them they do not appear to be persisted, and the nginx controller does not seem to indicate it is using them.

In addition, the paths do not seem to be working, a further indication that the annotations are not doing anything. For reference, the annotations in question are shown below.
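These are the standard ingress-nginx annotations; the exact rewrite value below is illustrative, not taken from the report:

nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/use-regex: "true"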

ClusterIP instruction in NOTES.txt is not complete

Hello,

The instruction section for ClusterIP in NOTES.txt is incomplete. Here is the diff to apply:

diff --git a/charts/portainer/templates/NOTES.txt b/charts/portainer/templates/NOTES.txt
index 604843e..cd3a259 100644
--- a/charts/portainer/templates/NOTES.txt
+++ b/charts/portainer/templates/NOTES.txt
@@ -18,6 +18,7 @@
   echo http://$SERVICE_IP:{{ .Values.service.httpsPort }}
 {{- else if contains "ClusterIP" .Values.service.type }}
   Get the application URL by running these commands:
-  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "portainer.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].me$  echo "Visit http://127.0.0.1:9443 to use your application"
+  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "portainer.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
+  echo "Visit http://127.0.0.1:9443 to use your application"
   kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 9443:9443
 {{- end }}

Guillaume,

Please add loadBalancerIP option to helm chart

Hello,

Could you please add a loadBalancerIP option to the Helm chart (k8s/charts/portainer/templates/service.yaml)? It already supports setting .Values.service.type to LoadBalancer, but it would be nice to also be able to set the desired IP directly.

Something like this maybe:

{{- if and .Values.service.loadBalancerIP (eq .Values.service.type "LoadBalancer") }}
  loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
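which would be driven by values such as these (loadBalancerIP being the proposed new key; the IP is an example):

service:
  type: LoadBalancer
  loadBalancerIP: 192.0.2.10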

liveness check failed

Hi there,

I am using the default Helm installation:

helm install --create-namespace -n portainer portainer portainer/portainer \
--set service.type=ClusterIP

But the deployment keeps crashing because the liveness check fails (connection refused to the local pod IP address at port 9000).

Any idea what's going on?
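Standard first checks with plain kubectl (the label selector is assumed from the chart's NOTES.txt shown in the issue above) would be:

kubectl -n portainer get pods
kubectl -n portainer describe pod -l app.kubernetes.io/name=portainer
kubectl -n portainer logs -l app.kubernetes.io/name=portainer --previous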
