
Nextcloud Helm Charts

A Helm repository for charts related to Nextcloud that can be installed on Kubernetes.

Add Helm repository

To add the repository, run:

helm repo add nextcloud https://nextcloud.github.io/helm/
helm repo update

Helm Charts

  • nextcloud

    helm install my-release nextcloud/nextcloud

For more information, please check out the chart-level README.md.

Support and Contribution

This Helm chart is community-maintained and is not supported by Nextcloud GmbH. Please also review the official Nextcloud Code of Conduct and this repo's contributing doc before contributing.

Questions and Discussions

GitHub Discussion

Bugs and other Issues

If you have a bug to report or a feature to request, you can first search the GitHub Issues, and if you can't find what you're looking for, feel free to open an issue.

Contributing to the Code

We're always happy to review a pull request :) Please just be sure to check the pull request template to make sure you fulfill all the required checks, most importantly the DCO.


helm's Issues

Document nextcloud.persistence.subPath

At the moment it is not really possible to use nextcloud.persistence.subPath because it is not clear what values can be entered.
It would be good if an example could be given here, similar to nextcloud.configs.
For example, how to integrate a custom app.
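A hedged example of the kind of documentation that would help, assuming subPath is passed through to the container's volumeMount as in a typical chart (the claim name and directory here are hypothetical):

```yaml
persistence:
  enabled: true
  # Reuse an existing claim that is shared with other workloads
  existingClaim: shared-data   # hypothetical claim name
  # Mount only this directory of the volume instead of its root
  subPath: nextcloud
```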

With cronjob enabled, the Chart does not install

From the values.yaml: "Nextcloud image is used as default but only curl is needed". I chose curlimages/curl, but this doesn't work.

With the following:

    cronjob:
      enabled: true
      image:
        repository: curlimages/curl
        tag: 7.73.0

The nextcloud-cron pods fail to get in the READY state. The logs:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (22) The requested URL returned error: 503 

The main nextcloud pod waits for these, eventually times out, and the installation fails.

Edit: Never mind, the chart is broken even when using the default nextcloud image. Deploy file can be found here.

It looks related to #16. The change broke it for somebody else too:

#16 (comment)

The error message is different (503 vs 400). Total guess: it's running curl and expecting the Nextcloud instance to be up, but Nextcloud won't be up until after the cron pod itself is up.

SMTP settings broken when using existingSecret

When nextcloud.existingSecret.enabled is set to true, the secret referenced below is not created, which causes Nextcloud to fail to come up when SMTP is enabled.

        - name: SMTP_NAME
          valueFrom:
            secretKeyRef:
              name: {{ template "nextcloud.fullname" . }}
              key: smtp-username
        - name: SMTP_PASSWORD
          valueFrom:
            secretKeyRef:
              name: {{ template "nextcloud.fullname" . }}
              key: smtp-password
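A possible fix, sketched against the template above, would be to point the secretKeyRef at the existing secret when one is configured (the secretName field here is an assumption about how the chart's existingSecret block might be shaped):

```yaml
        - name: SMTP_PASSWORD
          valueFrom:
            secretKeyRef:
              {{- if .Values.nextcloud.existingSecret.enabled }}
              name: {{ .Values.nextcloud.existingSecret.secretName }}
              {{- else }}
              name: {{ template "nextcloud.fullname" . }}
              {{- end }}
              key: smtp-password
```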

unable to use nodePort service without setting a nodePort

nodePort should be optional, not required, for a NodePort Service:
https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
My nextcloud.yaml file, used with helm install nextcloud nextcloud/nextcloud -f nextcloud.yaml:

persistence:
  enabled: true
service:
  type: NodePort

ERROR:

Error: unable to build kubernetes objects from release manifest: error validating "":
error validating data: ValidationError(Service.spec.ports[0].nodePort):
invalid type for io.k8s.api.core.v1.ServicePort.nodePort: got "string", expected "integer"
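Until the template treats nodePort as optional, a workaround (assuming the chart exposes a service.nodePort value) is to set an explicit integer port:

```yaml
service:
  type: NodePort
  nodePort: 30080   # must be an integer in the cluster's NodePort range (default 30000-32767)
```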

Include path in ingress

Hi, could you please include an option to configure the path in ingress.yaml?

spec:
  rules:
  - host: {{ .Values.nextcloud.host }}
    http:
      paths:
      - backend:
          serviceName: {{ template "nextcloud.fullname" . }}
          servicePort: {{ .Values.service.port }}
        path: {{ .Values.ingress.path }}
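To stay backward-compatible, the new value could fall back to the current behaviour, e.g. (a sketch, not the chart's actual template):

```yaml
        path: {{ .Values.ingress.path | default "/" }}
```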

Logs complaining about password for Redis when set to no password

I have redis set to enabled in my values, and that's all. According to the values.yaml, usePassword should default to false. When I check the logging in my install, I am getting a ton of these errors non-stop. It should default to no password, but it's still complaining that none is configured.
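For reference, the values in question are presumably just:

```yaml
redis:
  enabled: true
  # values.yaml documents this as defaulting to false,
  # yet Nextcloud still calls AUTH against Redis
  usePassword: false
```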


Here is the full message:

[no app in context] Error: RedisException: ERR AUTH <password> called without any password configured for the default user. Are you sure your configuration is correct? at <<closure>>

 0. /var/www/html/lib/private/RedisFactory.php line 94
    Redis->auth(false)
 1. /var/www/html/lib/private/RedisFactory.php line 108
    OC\RedisFactory->create()
 2. /var/www/html/lib/private/Memcache/Redis.php line 43
    OC\RedisFactory->getInstance()
 3. /var/www/html/lib/private/Memcache/Factory.php line 135
    OC\Memcache\Redis->__construct("5124fc20e78cd15f0fe08373bd06dfb6/lock")
 4. /var/www/html/lib/private/Server.php line 1025
    OC\Memcache\Factory->createLocking("lock")
 5. /var/www/html/3rdparty/pimple/pimple/src/Pimple/Container.php line 118
    OC\Server->OC\{closure}("*** sensitive parameters replaced ***")
 6. /var/www/html/lib/private/ServerContainer.php line 124
    Pimple\Container->offsetGet("OCP\\Lock\\ILockingProvider")
 7. /var/www/html/lib/private/Server.php line 1975
    OC\ServerContainer->query("OCP\\Lock\\ILockingProvider")
 8. /var/www/html/lib/private/Files/View.php line 118
    OC\Server->getLockingProvider()
 9. /var/www/html/lib/private/Server.php line 813
    OC\Files\View->__construct()
10. /var/www/html/3rdparty/pimple/pimple/src/Pimple/Container.php line 118
    OC\Server->OC\{closure}("*** sensitive parameters replaced ***")
11. /var/www/html/lib/private/ServerContainer.php line 124
    Pimple\Container->offsetGet("OCP\\Http\\Client\\IClientService")
12. /var/www/html/lib/private/AppFramework/DependencyInjection/DIContainer.php line 388
    OC\ServerContainer->query("OCP\\Http\\Client\\IClientService", true)
13. /var/www/html/lib/private/AppFramework/Utility/SimpleContainer.php line 71
    OC\AppFramework\DependencyInjection\DIContainer->query("OCP\\Http\\Client\\IClientService", true)
14. /var/www/html/lib/private/AppFramework/Utility/SimpleContainer.php line 101
    OC\AppFramework\Utility\SimpleContainer->buildClass(ReflectionClass  ... "})
15. /var/www/html/lib/private/AppFramework/Utility/SimpleContainer.php line 116
    OC\AppFramework\Utility\SimpleContainer->resolve("OCA\\Support\\S ... e")
16. /var/www/html/lib/private/AppFramework/DependencyInjection/DIContainer.php line 414
    OC\AppFramework\Utility\SimpleContainer->query("OCA\\Support\\S ... e")
17. /var/www/html/lib/private/AppFramework/DependencyInjection/DIContainer.php line 385
    OC\AppFramework\DependencyInjection\DIContainer->queryNoFallback("OCA\\Support\\S ... e")
18. /var/www/html/lib/private/AppFramework/Utility/SimpleContainer.php line 71
    OC\AppFramework\DependencyInjection\DIContainer->query("OCA\\Support\\S ... e", true)
19. /var/www/html/lib/private/AppFramework/Utility/SimpleContainer.php line 101
    OC\AppFramework\Utility\SimpleContainer->buildClass(ReflectionClass  ... "})
20. /var/www/html/lib/private/AppFramework/Utility/SimpleContainer.php line 116
    OC\AppFramework\Utility\SimpleContainer->resolve("OCA\\Support\\S ... r")
21. /var/www/html/lib/private/AppFramework/DependencyInjection/DIContainer.php line 414
    OC\AppFramework\Utility\SimpleContainer->query("OCA\\Support\\S ... r")
22. /var/www/html/lib/private/AppFramework/DependencyInjection/DIContainer.php line 385
    OC\AppFramework\DependencyInjection\DIContainer->queryNoFallback("OCA\\Support\\S ... r")
23. /var/www/html/apps/support/lib/AppInfo/Application.php line 48
    OC\AppFramework\DependencyInjection\DIContainer->query("OCA\\Support\\S ... r")
24. /var/www/html/apps/support/appinfo/app.php line 27
    OCA\Support\AppInfo\Application->register()
25. /var/www/html/lib/private/legacy/OC_App.php line 266
    require_once("/var/www/html/a ... p")
26. /var/www/html/lib/private/legacy/OC_App.php line 155
    OC_App::requireAppFile(OCA\Support\AppInfo\Application {})
27. /var/www/html/lib/private/legacy/OC_App.php line 128
    OC_App::loadApp("support")
28. /var/www/html/lib/base.php line 648
    OC_App::loadApps(["session"])
29. /var/www/html/lib/base.php line 1094
    OC::init()
30. /var/www/html/index.php line 35
    require_once("/var/www/html/lib/base.php")

GET /apps/logreader/poll?lastReqId=0AxQAsgtDZizEZxmFSoE
from 10.233.118.0 at 2020-10-19T16:50:57+00:00

Stuck at "Initializing Nextcloud..." when attached to NFS PVC

Doing my best to duplicate helm/charts#22920 over to the new repo, as I am experiencing this issue as well. I have refined the details a bit, as this issue appears to be specific to NFS-based storage.

Describe the bug

When bringing up the nextcloud pod via the helm chart, the logs show the pod as being stuck at:

2020-08-31T19:00:42.054297154Z Configuring Redis as session handler
2020-08-31T19:00:42.098305129Z Initializing nextcloud 19.0.1.1 ...

Even backing the liveness/readiness probes out to over 5 minutes does not give it enough time to finish.
If I instead switch the PVC to my storageClass for Rancher Longhorn (iSCSI), for example, the Nextcloud install initializes in seconds.

Version of Helm and Kubernetes:

helm: v3.3.0
kubernetes: v1.18.6

Which chart:

nextcloud/helm

What happened:

  • Namespace is created.
  • Helm creates NFS PVC, or it is created manually
  • Helm instantiates Nextcloud pod
  • Nextcloud pod attaches PVC, and starts
  • Nextcloud container is stuck at the above line

What you expected to happen:

Nextcloud finishes initialization
Nextcloud files appear with correct permissions on NFS volume

How to reproduce it (as minimally and precisely as possible):

Set up an NFS provisioner:

helm install stable/nfs-client-provisioner nfs  \
--set nfs.server=x.x.x.x --set nfs.path=<path>

OR
Configure an NFS PV and PVC manually

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nextcloud-data
  labels:
    app: cloud
    type: data
spec:
  capacity:
    storage: 100Ti
  nfs:
    path: <path>
    server: <server>
  mountOptions:
    - async
    - nfsvers=4.2
    - noatime
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-manual
  volumeMode: Filesystem
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nextcloud-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Ti
  storageClassName: nfs-manual
  volumeMode: Filesystem
  selector:
    matchLabels:
      app: cloud
      type: data

Install nextcloud
helm install -f values.yaml nextcloud/helm nextcloud --namespace=nextcloud

values.yaml:

image:
  repository: nextcloud
  tag: 19
readinessProbe:
  initialDelaySeconds: 560
livenessProbe:
  initialDelaySeconds: 560
resources:
  requests:
    cpu: 200m
    memory: 500Mi
  limits:
    cpu: 2
    memory: 1Gi
ingress:
  enabled: true
  annotations:
    cert-manager.io/cluster-issuer: acme
    kubernetes.io/ingress.class: nginx
    # nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  hosts:
    - "cloud.myhost.com"
  tls:
    - hosts:
        - "cloud.myhost.com"
      secretName: prod-cert
  path: /
nextcloud:
  username: admin
  password: admin1
  # datadir: /mnt/data
  host: "cloud.myhost.com"
internalDatabase:
  enabled: true
externalDatabase:
  enabled: false
persistence:
  enabled: true
  # accessMode: ReadWriteMany
  # storageClass: nfs-client if creating via provisioner
  existingClaim: nextcloud-data # comment out if creating new PVC via provisioner

Direct upgrade from 17.0 to 19.0 not supported, breaks existing installations

Hi,

the latest commit 5c9f27e upgrades Nextcloud from version 17 to 19. However, a direct upgrade is not supported: users need to go to 18.0 first.

Therefore, upgrading an existing release of this chart (with default parameters, i.e., Nextcloud 17) will break on upgrade.
There should be documentation stating that one needs to go to 18.0 first and that this is a manual step.

Workaround after it happened: change the version number from 19.x back to 17.0.9 in html/version.php (which is on the persistent volume). If 19.x is mentioned there, the 18.x upgrade will abort without even seeing that the database is still on 17.x.
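One way to perform the intermediate step is to pin the image to an 18.x tag for one upgrade cycle before moving on to 19 (the exact tag below is only an example):

```yaml
image:
  repository: nextcloud
  tag: 18.0.10-apache   # example intermediate version; upgrade to a 19.x tag afterwards
```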

connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.x, server: , request: "GET /status.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "nc.xxxx.com"

Describe the bug

I tried to deploy Nextcloud with MariaDB, but it fails with:

kubectl -n nextcloud logs nextcloud-74b56fb9dd-c4smn nextcloud-nginx
2020/04/15 19:03:54 [error] 9#9: *3 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.x, server: , request: "GET /status.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "nc.xxxx.com"
2020/04/15 19:03:54 [error] 7#7: *2 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.x, server: , request: "GET /status.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "nc.xxxx.com"
x.x.x.x - - [15/Apr/2020:19:03:54 +0000] "GET /status.php HTTP/1.1" 502 157 "-" "kube-probe/1.17" "-"
x.x.x.x - - [15/Apr/2020:19:03:54 +0000] "GET /status.php HTTP/1.1" 502 157 "-" "kube-probe/1.17" "-"
x.x.x.x - - [15/Apr/2020:19:03:54 +0000] "GET /status.php HTTP/1.1" 499 0 "-" "kube-probe/1.17" "-"

This is my values.yaml

ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 4G
    # nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    cert-manager.io/cluster-issuer: "letsencrypt-prd"
  tls:
    - secretName: nextcloud-tls
      hosts:
        - nc.xxx.com
  labels: {}
nextcloud:
  host: xxxx
  username: xxxx
  password: xxx
  update: 0
  datadir: /var/www/html/data
  tableprefix:
  phpConfigs: {}
  defaultConfigs:
    .htaccess: true
    redis.config.php: true
    apache-pretty-urls.config.php: true
    apcu.config.php: true
    apps.config.php: true
    autoconfig.php: true
    smtp.config.php: true
  configs: {}

nginx:
  enabled: true
  image:
    repository: nginx
    tag: alpine
    pullPolicy: IfNotPresent
  config:
    default: true
  resources: {}

internalDatabase:
  enabled: false
  name: nextcloud

externalDatabase:
  enabled: false


mariadb:
  enabled: true
  volumePermissions:
    enabled: true
  securityContext:
    fsGroup: 82
    runAsUser: 33
  db:
    name: nextcloud
    user: nextcloud
    password: xxxx
  persistence:
    enabled: false
    accessMode: ReadWriteMany
    size: 8Gi
  master:
    persistence:
      accessModes: 
        - ReadWriteMany
  slave:
    persistence:
      accessModes:
        - ReadWriteMany
redis:
  enabled: false
  usePassword: false
cronjob:
  enabled: false


service:
  type: ClusterIP
  port: 8080
  loadBalancerIP: nil

persistence:
  enabled: true
  annotations: {}
  storageClass: "kadalu.replica3"
  accessMode: ReadWriteMany
  size: 500Gi

Version of Helm and Kubernetes:

helm version
version.BuildInfo{Version:"v3.1.2", GitCommit:"d878d4d45863e42fd5cff6743294a11d28a9abce", GitTreeState:"clean", GoVersion:"go1.13.8"}
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"archive", BuildDate:"2020-02-29T16:37:45Z", GoVersion:"go1.14", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T20:55:23Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

Which chart:

stable/nextcloud

What happened:

pod/nextcloud-nginx logs

kubectl -n nextcloud logs nextcloud-74b56fb9dd-c4smn nextcloud-nginx
2020/04/15 19:03:54 [error] 9#9: *3 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.x, server: , request: "GET /status.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "nc.xxxx.com"
2020/04/15 19:03:54 [error] 7#7: *2 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.x, server: , request: "GET /status.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "nc.xxxx.com"
x.x.x.x - - [15/Apr/2020:19:03:54 +0000] "GET /status.php HTTP/1.1" 502 157 "-" "kube-probe/1.17" "-"
x.x.x.x - - [15/Apr/2020:19:03:54 +0000] "GET /status.php HTTP/1.1" 502 157 "-" "kube-probe/1.17" "-"
x.x.x.x - - [15/Apr/2020:19:03:54 +0000] "GET /status.php HTTP/1.1" 499 0 "-" "kube-probe/1.17" "-"

What you expected to happen:

No error

How to reproduce it (as minimally and precisely as possible):

Run

helm install nextcloud stable/nextcloud --namespace nextcloud -f nextcloud.values.yml    

Anything else we need to know:

n/a

mariadb-isalive container stuck with "ERROR 1045 (28000): Access denied for user..."

Hi, I updated the Helm chart to 2.5.5 today, which led to the following error repeating in the mariadb-isalive container:

ERROR 1045 (28000): Access denied for user 'nextcloud'@'10.42.0.236' (using password: YES) waiting for mysql

Now the DB isn't starting and the pod is stuck in its init state.

status:
  conditions:
    - lastProbeTime: null
      lastTransitionTime: '2021-02-16T21:01:20Z'
      message: 'containers with incomplete status: [mariadb-isalive]'
      reason: ContainersNotInitialized
      status: 'False'
      type: Initialized

Is there any more data I can provide to help fix this issue?

Nextcloud stuck at ContainerCreating

When I install the Nextcloud chart in my cluster with these values, it gets stuck at ContainerCreating with the reason

0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.

even though the persistent volume claim gets bound to a PersistentVolume created by OpenEBS cStor dynamic provisioning. So why does it report unbound immediate PVCs when only one PVC is created, and it is immediately bound to a PersistentVolume?

Upgrade from 2.2.0 to 2.2.1 is broken

Running a helm upgrade from 2.2.0 to 2.2.1 results in the following error:

Error: UPGRADE FAILED: cannot patch "nextcloud-dev" with kind Deployment: Deployment.apps "nextcloud-dev" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/component":"app", "app.kubernetes.io/instance":"nextcloud-dev", "app.kubernetes.io/name":"nextcloud"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

file CAN_INSTALL is missing from your config directory

I cannot install Nextcloud with the v2.5.0 Helm chart and the configuration below.

Nextcloud pod logfile before first restart

Initializing nextcloud 19.0.3.1 ...
Initializing finished
New nextcloud instance
Installing with MySQL database
starting nextcloud installation
Error while trying to create admin user: Failed to connect to the database: An exception occurred in driver: SQLSTATE[HY000] [2002] Connection timed out
 -> 
retrying install...

Error message in web GUI

The Nextcloud pod keeps restarting and does not continue with the installation. In the web GUI this message is displayed.

It looks like you are trying to reinstall your Nextcloud. However the file CAN_INSTALL is missing from your config directory. Please create the file CAN_INSTALL in your config folder to continue.

Configuration

nextcloud_hostname="nextcloud.example.com"
nextcloud_username="myusername"
nextcloud_password="my-password"
namespace="default"
nextcloud_helm_version="2.5.0"

helm repo add nextcloud https://nextcloud.github.io/helm/
cat "nextcloud.override.yaml"
ingress:
  enabled: true
  tls:
    - secretName: nextcloud-tls
      hosts:
        - "${nextcloud_hostname}"
  annotations:
    kubernetes.io/ingress.class: "contour"
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
    ingress.kubernetes.io/force-ssl-redirect: "true"

persistence:
  enabled: true
  accessMode: ReadWriteOnce
  size: 8Gi

nextcloud:
  host: "${nextcloud_hostname}"
  username: "${nextcloud_username}"
  password: "${nextcloud_password}"

  configs:
    custom.config.php: |-
      <?php
        \$CONFIG = array (
          'overwrite.cli.url' => "https://${nextcloud_hostname}",
          'overwritehost' => "${nextcloud_hostname}",
          'overwriteprotocol' => 'https',
        );

internalDatabase:
  enabled: false
  
mariadb:
  enabled: true
  
  master:
    persistence:
      enabled: true
      # storageClass: ""
      accessMode: ReadWriteOnce
      size: 8Gi
install_or_upgrade="install"
helm "${install_or_upgrade}" nextcloud nextcloud/nextcloud --namespace "${namespace}" \
--version "${nextcloud_helm_version}" --values "nextcloud.override.yaml"

Database

It is correct that the database is not yet up when Nextcloud begins its installation; Nextcloud needs to wait for it.

Solution

I do not know what the problem is, but I am guessing it's the lack of readiness checks against external dependencies.

  • Bitnami MariaDB has a readiness probe whose status could be queried by an init container before the nextcloud pod is allowed to start.
  • Another solution would be to implement something like wait-for-it in an init container.
  • Best would be if the Nextcloud installation failed gracefully, so that after a restart of the nextcloud pod the installation is retried.
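The wait-for-it idea could be sketched as an init container like the following (the service name and port are assumptions for a default MariaDB deployment):

```yaml
initContainers:
  - name: wait-for-db
    image: busybox:1.36
    command:
      - sh
      - -c
      # Block pod startup until the database service accepts TCP connections
      - until nc -z nextcloud-mariadb 3306; do echo "waiting for db"; sleep 2; done
```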

PostgreSQL needs to be started first when using externalDatabase

I found that if PostgreSQL is not running before the nextcloud Docker image is launched, I get some weird issues:

Initializing nextcloud 19.0.3.1 ...
Initializing finished
New nextcloud instance
Installing with PostgreSQL database
starting nextcloud installation
PostgreSQL username and/or password not valid
 -> You need to enter details of an existing account.
retrying install...
An unhandled exception has been thrown:
OC\DatabaseException: An exception occurred while executing 'SHOW SERVER_VERSION':
Failed to connect to the database: An exception occurred in driver: SQLSTATE[08006] [7] could not connect to server: Connection refused
	Is the server running on host "nextcloud-postgresql.nextcloud" (10.43.125.77) and accepting
	TCP/IP connections on port 5432? in /var/www/html/lib/private/legacy/OC_DB.php:73
Stack trace:
#0 /var/www/html/lib/private/legacy/OC_DB.php(139): OC_DB::prepare('SHOW SERVER_VER...', NULL, NULL)
#1 /var/www/html/lib/private/legacy/OC_Util.php(971): OC_DB::executeAudited(Array)
#2 /var/www/html/lib/private/legacy/OC_Util.php(951): OC_Util::checkDatabaseVersion()
#3 /var/www/html/lib/private/Console/Application.php(161): OC_Util::checkServer(Object(OC\SystemConfig))
#4 /var/www/html/console.php(99): OC\Console\Application->loadCommands(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#5 /var/www/html/occ(11): require_once('/var/www/html/c...')
#6 {main}retrying install...
An unhandled exception has been thrown:
OC\DatabaseException: An exception occurred while executing 'SHOW SERVER_VERSION':
Failed to connect to the database: An exception occurred in driver: SQLSTATE[08006] [7] could not connect to server: Connection refused
	Is the server running on host "nextcloud-postgresql.nextcloud" (10.43.125.77) and accepting
	TCP/IP connections on port 5432? in /var/www/html/lib/private/legacy/OC_DB.php:73
Stack trace:
#0 /var/www/html/lib/private/legacy/OC_DB.php(139): OC_DB::prepare('SHOW SERVER_VER...', NULL, NULL)
#1 /var/www/html/lib/private/legacy/OC_Util.php(971): OC_DB::executeAudited(Array)
#2 /var/www/html/lib/private/legacy/OC_Util.php(951): OC_Util::checkDatabaseVersion()
#3 /var/www/html/lib/private/Console/Application.php(161): OC_Util::checkServer(Object(OC\SystemConfig))
#4 /var/www/html/console.php(99): OC\Console\Application->loadCommands(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#5 /var/www/html/occ(11): require_once('/var/www/html/c...')
#6 {main}retrying install...

nextcloud-values.yml:

...
externalDatabase:
  enabled: true
  type: postgresql
  host: nextcloud-postgresql.nextcloud:5432
  user: CHANGEME
  password: CHANGEME
  database: nextcloud


postgresql:
  enabled: true
  image:
    registry: docker.io
    repository: postgres
    tag: 13.1
    debug: true
  ...
...

Sometimes I get the Nextcloud setup screen (asking to create a new admin account); sometimes I just get a 503 error indefinitely.

If I take the same values and deploy the Bitnami PostgreSQL chart manually (using the same config), and then deploy Nextcloud (with postgresql.enabled = false this time), everything works great.
Currently, I have a script which does two deployments with a sleep in between to work around this issue:

helm install postgresql bitnami/postgresql --values ./postgresql-values.yml -n nextcloud
sleep 60
helm install nextcloud nextcloud/nextcloud --values ./nextcloud-values.yml -n nextcloud

Nextcloud securityContext and rootlessness

Goal

It should be possible to configure securityContext in a values.yaml file.

With the configuration available, there should be default rules to secure the container to make it run as www-data instead of root (this may require running on a different port).
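A minimal sketch of such defaults, assuming the official image's www-data UID/GID of 33 and a container listening on an unprivileged port:

```yaml
securityContext:
  runAsUser: 33    # www-data in the official nextcloud image
  runAsGroup: 33
  fsGroup: 33
  runAsNonRoot: true
```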

Multiple nextcloud releases cannot coexist

If you create more than one release of this chart in the same namespace, they will interfere: some requests for one of the sites end up at a pod that's part of the other site, and vice versa.

I think this is because the service spec.selector is app.kubernetes.io/name: {{ include "nextcloud.name" . }}, which is, by default at least, just the chart name "nextcloud", and thus the same for both instances.

The selector should select on something that is unique for every release, such as the app.kubernetes.io/instance label.
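Concretely, the selector could be extended along these lines (a sketch of the usual label convention, not the chart's current template):

```yaml
  selector:
    app.kubernetes.io/name: {{ include "nextcloud.name" . }}
    # Unique per release, so two releases in one namespace no longer overlap
    app.kubernetes.io/instance: {{ .Release.Name }}
```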

303 response code on login for fpm-alpine with mariadb

Unable to log in due to a 303 response.
Any suggestions on how to fix this? Thanks

Logs:

kubectl logs nextcloud-6ff79b69d6-vd4lw -c nextcloud -f
127.0.0.1 -  24/Jan/2021:06:13:23 +0000 "POST /index.php" 303
127.0.0.1 -  24/Jan/2021:06:13:23 +0000 "GET /index.php" 303
127.0.0.1 -  24/Jan/2021:06:13:23 +0000 "GET /index.php" 200
127.0.0.1 -  24/Jan/2021:06:13:24 +0000 "GET /cron.php" 200

Helm values:

image:
  tag: 19.0.3-fpm-alpine
ingress:
  enabled: true
  tls:
    - secretName: nextcloud-tls
      hosts:
        - nextcloud.<my-domain>.com
  annotations: # Uncommented all values
nginx:
  enabled: true
internalDatabase:
  enabled: false  
mariadb:
  enabled: true
  replication:
    enabled: true
  master:
    persistence:
      enabled: true
      storageClass: "<my-nfs-storage-class>"
      accessMode: ReadWriteMany    
redis:
  enabled: true
persistence:
  enabled: true
  storageClass: "<my-nfs-storage-class>"
  accessMode: ReadWriteMany        
livenessProbe:
  enabled: false
readinessProbe:
  enabled: false

Run sidecar containers

It would be great to have the ability to run user-specified sidecar containers. The most obvious use is backing up the data, in particular for ReadWriteOnce PVCs, but other cases might be useful too, like logging or a service mesh. Is the idea acceptable? If so, I can propose a PR.
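Such a PR would presumably add a passthrough value rendered into the pod spec; for example (the value name extraSidecarContainers and the backup image are hypothetical):

```yaml
nextcloud:
  extraSidecarContainers:
    - name: backup
      image: example/backup-agent:latest   # hypothetical image
      volumeMounts:
        - name: nextcloud-data
          mountPath: /backup-source
          readOnly: true
```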

Nextcloud tries to reinitialize the database after a crash

I forced a crash of the nextcloud pod (by deleting it) just to see what would happen if Nextcloud really crashed. When a new pod was created, I got this error:

Initializing nextcloud 19.0.3.1 ...
Initializing finished
New nextcloud instance
Installing with PostgreSQL database
starting nextcloud installation
Error while trying to initialise the database: An exception occurred while executing 'CREATE TABLE oc_migrations (app VARCHAR(255) NOT NULL, version VARCHAR(255) NOT NULL, PRIMARY KEY(app, version))':
SQLSTATE[42P07]: Duplicate table: 7 ERROR:  relation "oc_migrations" already exists
 -> 
retrying install...
Error while trying to initialise the database: An exception occurred while executing 'CREATE TABLE oc_migrations (app VARCHAR(255) NOT NULL, version VARCHAR(255) NOT NULL, PRIMARY KEY(app, version))':
SQLSTATE[42P07]: Duplicate table: 7 ERROR:  relation "oc_migrations" already exists
 -> 
retrying install...
Error while trying to initialise the database: An exception occurred while executing 'CREATE TABLE oc_migrations (app VARCHAR(255) NOT NULL, version VARCHAR(255) NOT NULL, PRIMARY KEY(app, version))':
SQLSTATE[42P07]: Duplicate table: 7 ERROR:  relation "oc_migrations" already exists
 -> 
retrying install...
Error while trying to initialise the database: An exception occurred while executing 'CREATE TABLE oc_migrations (app VARCHAR(255) NOT NULL, version VARCHAR(255) NOT NULL, PRIMARY KEY(app, version))':
SQLSTATE[42P07]: Duplicate table: 7 ERROR:  relation "oc_migrations" already exists
 -> 
retrying install...
Error while trying to initialise the database: An exception occurred while executing 'CREATE TABLE oc_migrations (app VARCHAR(255) NOT NULL, version VARCHAR(255) NOT NULL, PRIMARY KEY(app, version))':
SQLSTATE[42P07]: Duplicate table: 7 ERROR:  relation "oc_migrations" already exists
 -> 
retrying install...
Error while trying to initialise the database: An exception occurred while executing 'CREATE TABLE oc_migrations (app VARCHAR(255) NOT NULL, version VARCHAR(255) NOT NULL, PRIMARY KEY(app, version))':
SQLSTATE[42P07]: Duplicate table: 7 ERROR:  relation "oc_migrations" already exists
 -> 
retrying install...

It seems that Nextcloud tries to reinitialize the database when it should, IMO, just use it. After all these errors, I get a first-setup screen asking me to create a new admin account. If I try to do so, the Nextcloud interface gives me an error (spoiler: the same one as above):

Error
Error while trying to initialise the database: An exception occurred while executing 'CREATE TABLE oc_migrations (app VARCHAR(255) NOT NULL, version VARCHAR(255) NOT NULL, PRIMARY KEY(app, version))': SQLSTATE[42P07]: Duplicate table: 7 ERROR: relation "oc_migrations" already exists

Easy way to configure 'trusted_domains' in config.php

I am using Nextcloud with Pico CMS to host a couple of public-facing sites operating on their own domains.

I can create appropriate ingresses for them, but requests to the sites still get blocked because the domains are not in the 'trusted_domains' variable in config/config.php.

If I add them manually it works, but this is very cumbersome because it has to be done inside the container.

Is there a way to configure the list of trusted hosts via the Helm chart?

If not, what is the least painful way to extend the list now and again?
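One approach that should work with the existing nextcloud.configs mechanism (shown elsewhere on this page) is to drop in an extra config file that sets trusted_domains; the domains below are placeholders:

```yaml
nextcloud:
  configs:
    trusted-domains.config.php: |-
      <?php
        $CONFIG = array (
          'trusted_domains' => array (
            0 => 'cloud.example.com',
            1 => 'site1.example.org',
          ),
        );
```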

Hardcoded initContainer image name

Please don't hardcode image names into the Helm chart.
I use a custom PostgreSQL image because Bitnami doesn't support armv8.

values.yaml:

postgresql:
  enabled: true
  image:
    registry: docker.io
    repository: postgres
    tag: 11.1

The Helm chart should respect custom image values.

Introduced in: 8c75469
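A possible fix in the chart template, sketched under the assumption that the postgresql subchart exposes registry/repository/tag under .Values.postgresql.image (as in the reporter's values above):

```yaml
# templates/deployment.yaml (sketch): derive the init container image
# from the postgresql subchart values instead of a hardcoded name
initContainers:
  - name: postgresql-isready
    image: "{{ .Values.postgresql.image.registry }}/{{ .Values.postgresql.image.repository }}:{{ .Values.postgresql.image.tag }}"
```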

Add startupProbe for container

Due to the installation during the initial setup, startup takes some time. Currently, this is handled with (high) initialDelaySeconds values. A better solution would be to implement a startupProbe as described here.

If you agree, I would create a PR with this change.
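For reference, a startupProbe along these lines would poll the same endpoint as the existing probes and only hand over to them once the installation finishes (thresholds are illustrative, not chart defaults):

```yaml
startupProbe:
  httpGet:
    path: /status.php
    port: http
  # allow up to 30 × 10 s = 5 min for the initial installation
  failureThreshold: 30
  periodSeconds: 10
```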

Chart unable to deploy

I'm trying to deploy this chart via Helm 3 on a Kubernetes 1.19 cluster. I tried with my usual options and it failed, so after debugging for a while I tried to deploy just the bare chart, and it errors in the same way.

$ kubectl -n testcloud describe pod/testcloud-nextcloud-5cb9bcc74b-bjc6x
Name:         testcloud-nextcloud-5cb9bcc74b-bjc6x
Namespace:    testcloud
Priority:     0
Node:         alice/10.0.4.51
Start Time:   Fri, 23 Oct 2020 19:34:00 +0000
Labels:       app.kubernetes.io/component=app
              app.kubernetes.io/instance=testcloud
              app.kubernetes.io/name=nextcloud
              pod-template-hash=5cb9bcc74b
Annotations:  kubernetes.io/psp: privileged
Status:       Running
IP:           10.1.2.195
IPs:
  IP:           10.1.2.195
Controlled By:  ReplicaSet/testcloud-nextcloud-5cb9bcc74b
Containers:
  nextcloud:
    Container ID:   containerd://e9d53cff149296b0a2ef7cf1115b61b144f818b82795a86f6efdfcb4c4da80e3
    Image:          nextcloud:19.0.3-apache
    Image ID:       docker.io/library/nextcloud@sha256:9347cbf381cdd06038a8d4e392aa7c40119bd13c77f37fb03b29bc47434bd393
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    135
      Started:      Fri, 23 Oct 2020 19:37:00 +0000
      Finished:     Fri, 23 Oct 2020 19:37:00 +0000
    Ready:          False
    Restart Count:  5
    Liveness:       http-get http://:http/status.php delay=30s timeout=5s period=15s #success=1 #failure=3
    Readiness:      http-get http://:http/status.php delay=30s timeout=5s period=15s #success=1 #failure=3
    Environment:
      SQLITE_DATABASE:            nextcloud
      NEXTCLOUD_ADMIN_USER:       <set to the key 'nextcloud-username' in secret 'testcloud-nextcloud'>  Optional: false
      NEXTCLOUD_ADMIN_PASSWORD:   <set to the key 'nextcloud-password' in secret 'testcloud-nextcloud'>  Optional: false
      NEXTCLOUD_TRUSTED_DOMAINS:  nextcloud.kube.home
      NEXTCLOUD_DATA_DIR:         /var/www/html/data
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ppqfd (ro)
      /var/www/ from nextcloud-data (rw,path="root")
      /var/www/html from nextcloud-data (rw,path="html")
      /var/www/html/config from nextcloud-data (rw,path="config")
      /var/www/html/custom_apps from nextcloud-data (rw,path="custom_apps")
      /var/www/html/data from nextcloud-data (rw,path="data")
      /var/www/html/themes from nextcloud-data (rw,path="themes")
      /var/www/tmp from nextcloud-data (rw,path="tmp")
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  nextcloud-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  default-token-ppqfd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-ppqfd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m14s                default-scheduler  Successfully assigned testcloud/testcloud-nextcloud-5cb9bcc74b-bjc6x to alice
  Normal   Pulled     95s (x5 over 3m14s)  kubelet            Container image "nextcloud:19.0.3-apache" already present on machine
  Normal   Created    95s (x5 over 3m14s)  kubelet            Created container nextcloud
  Normal   Started    95s (x5 over 3m14s)  kubelet            Started container nextcloud
  Warning  BackOff    94s (x10 over 3m3s)  kubelet            Back-off restarting failed container

The logs are not very interesting:

AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.1.2.195. Set the 'ServerName' directive globally to suppress this message
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.1.2.195. Set the 'ServerName' directive globally to suppress this message

Is there a better option than chart 2.2.1?

packaged postgresql is not setup properly

In the values.yaml you can activate .Values.postgresql.enabled: true. An appropriate PostgreSQL container is spawned; however, the PostgreSQL service endpoint is not configured within the Nextcloud deployment configuration. A MySQL configuration is still included instead.
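Until the deployment template wires the subchart in, a possible workaround (a sketch, assuming the subchart's service is named &lt;release&gt;-postgresql) is to point the externalDatabase settings at the packaged service:

```yaml
postgresql:
  enabled: true
externalDatabase:
  enabled: true
  type: postgresql
  host: my-release-postgresql   # assumed subchart service name
  user: nextcloud
  database: nextcloud
  password: changeme            # placeholder
```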

occ cli tool usage?

I have to log into the pod and run the following command: occ db:add-missing-primary-keys

The thing is, the occ CLI tool must be run as the www-data user, which has no shell, and kubectl does not support a --user option with the exec command. So is there a way to achieve that?

$ ./occ
Console has to be executed with the user that owns the file config/config.php
Current user id: 0
Owner id of config.php: 33
Try adding 'sudo -u #33' to the beginning of the command (without the single quotes)
If running with 'docker exec' try adding the option '-u 33' to the docker command (without the single quotes)
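One workaround, untested here and assuming the default image layout (config/config.php owned by uid 33 / www-data), is to switch user inside the container with su rather than at the kubectl level; namespace and release name below are hypothetical:

```shell
# run occ as www-data inside the running pod
kubectl -n nextcloud exec deploy/nextcloud -- \
  su -s /bin/sh -c "php /var/www/html/occ db:add-missing-primary-keys" www-data
```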

Extra configs on fresh install causes error

Including extra configs (like trusted_proxies) on the first install of the chart appears to result in Nextcloud failing with the following:

Error. It looks like you are trying to reinstall your NextCloud. However the file CAN_INSTALL is missing from your config directory. Please create the file CAN_INSTALL in your config folder to continue.

Adding the config after the fact works as intended.
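As the error message itself suggests, a workaround until the root cause is fixed is to create the flag file inside the running container (namespace and release name are illustrative):

```shell
kubectl -n nextcloud exec deploy/nextcloud -- \
  touch /var/www/html/config/CAN_INSTALL
```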

ARM Support

Bitnami currently does not support ARM (I would like to use this Helm chart on Raspberry Pis).

If there is interest, I will probably experiment with using the official MariaDB Docker image instead of the Bitnami version.
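If the bitnami/mariadb subchart honours custom image values the way its README documents (an assumption worth verifying, since the Bitnami chart's scripts may not work with the official image), the experiment could start as small as:

```yaml
mariadb:
  enabled: true
  image:
    registry: docker.io
    repository: mariadb   # official multi-arch image
    tag: "10.5"
```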

## Official nextcloud image version
## ref: https://hub.docker.com/r/library/nextcloud/tags/
##
image:
  repository: nextcloud
  tag: 17.0.0-apache
  pullPolicy: IfNotPresent
  # pullSecrets:
  #   - myRegistryKeySecretName

nameOverride: ""
fullnameOverride: ""

# Number of replicas to be deployed
replicaCount: 1

## Allowing use of ingress controllers
## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/
##
ingress:
  enabled: false
  annotations: {}
  #  nginx.ingress.kubernetes.io/proxy-body-size: 4G
  #  kubernetes.io/tls-acme: "true"
  #  certmanager.k8s.io/cluster-issuer: letsencrypt-prod
  #  nginx.ingress.kubernetes.io/server-snippet: |-
  #    server_tokens off;
  #    proxy_hide_header X-Powered-By;

  #    rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
  #    rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
  #    rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json;
  #    location = /.well-known/carddav {
  #      return 301 $scheme://$host/remote.php/dav;
  #    }
  #    location = /.well-known/caldav {
  #      return 301 $scheme://$host/remote.php/dav;
  #    }
  #    location = /robots.txt {
  #      allow all;
  #      log_not_found off;
  #      access_log off;
  #    }
  #    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
  #      deny all;
  #    }
  #    location ~ ^/(?:autotest|occ|issue|indie|db_|console) {
  #      deny all;
  #    }
  #  tls:
  #    - secretName: nextcloud-tls
  #      hosts:
  #        - nextcloud.kube.home
  labels: {}


# Allow configuration of lifecycle hooks
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
lifecycle: {}
  # postStartCommand: []
  # preStopCommand: []

nextcloud:
  host: nextcloud.<HOST>
  username: <USER>
  password: <PASSWORD>
  update: 0
  datadir: /var/www/html/data
  tableprefix:
  persistence:
    enabled: true
    existingClaim: "nextcloud-storage"
    accessMode: ReadWriteOnce
    size: 50Gi
  mail:
    enabled: false
    fromAddress: user
    domain: domain.com
    smtp:
      host: domain.com
      secure: ssl
      port: 465
      authtype: LOGIN
      name: user
      password: pass
  # PHP Configuration files
  # Will be injected in /usr/local/etc/php/conf.d
  phpConfigs: {}
  # Default config files
  # IMPORTANT: Will be used only if you put extra configs, otherwise default will come from nextcloud itself
  # Default configurations can be found here: https://github.com/nextcloud/docker/tree/master/16.0/apache/config
  defaultConfigs:
    # To protect /var/www/html/config
    .htaccess: true
    # Redis default configuration
    redis.config.php: true
    # Apache configuration for rewrite urls
    apache-pretty-urls.config.php: true
    # Define APCu as local cache
    apcu.config.php: true
    # Apps directory configs
    apps.config.php: true
    # Used for auto configure database
    autoconfig.php: true
    # SMTP default configuration
    smtp.config.php: true
  # Extra config files created in /var/www/html/config/
  # ref: https://docs.nextcloud.com/server/15/admin_manual/configuration_server/config_sample_php_parameters.html#multiple-config-php-file
  configs: {}

  # For example, to use S3 as primary storage
  # ref: https://docs.nextcloud.com/server/13/admin_manual/configuration_files/primary_storage.html#simple-storage-service-s3
  #
  #  configs:
  #    s3.config.php: |-
  #      <?php
  #      $CONFIG = array (
  #        'objectstore' => array(
  #          'class' => '\\OC\\Files\\ObjectStore\\S3',
  #          'arguments' => array(
  #            'bucket'     => 'my-bucket',
  #            'autocreate' => true,
  #            'key'        => 'xxx',
  #            'secret'     => 'xxx',
  #            'region'     => 'us-east-1',
  #            'use_ssl'    => true
  #          )
  #        )
  #      );

  ## Strategy used to replace old pods
  ## IMPORTANT: use with care, it is suggested to leave as that for upgrade purposes
  ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
  strategy:
    type: Recreate
    # type: RollingUpdate
    # rollingUpdate:
    #   maxSurge: 1
    #   maxUnavailable: 0

  ##
  ## Extra environment variables
  extraEnv:
  #  - name: SOME_SECRET_ENV
  #    valueFrom:
  #      secretKeyRef:
  #        name: nextcloud
  #        key: secret_key

  # Extra mounts for the pods. Example shown is for connecting a legacy NFS volume
  # to NextCloud pods in Kubernetes. This can then be configured in External Storage
  extraVolumes:
  #  - name: nfs
  #    nfs:
  #      server: "10.0.0.1"
  #      path: "/nextcloud_data"
  #      readOnly: false
  extraVolumeMounts:
  #  - name: nfs
  #    mountPath: "/legacy_data"

nginx:
  ## You need to set an fpm version of the image for nextcloud if you want to use nginx!
  enabled: false
  image:
    repository: nginx
    tag: alpine
    pullPolicy: IfNotPresent

  config:
    # This generates the default nginx config as per the nextcloud documentation
    default: true
    # custom: |-
    #     worker_processes  1;..

  resources: {}

internalDatabase:
  enabled: true
  name: nextcloud

##
## External database configuration
##
externalDatabase:
  enabled: false

  ## Supported database engines: mysql or postgresql
  type: mysql

  ## Database host
  host:

  ## Database user
  user: nextcloud

  ## Database password
  password:

  ## Database name
  database: nextcloud

  ## Use a existing secret
  existingSecret:
    enabled: false
    # secretName: nameofsecret
    # usernameKey: username
    # passwordKey: password

##
## MariaDB chart configuration
##
mariadb:
  ## Whether to deploy a mariadb server to satisfy the applications database requirements. To use an external database set this to false and configure the externalDatabase parameters
  enabled: true

  db:
    name: nextcloud
    user: nextcloud
    password: <PASSWORD>

  ## Enable persistence using Persistent Volume Claims
  ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  persistence:
    enabled: true
    existingClaim: "nextcloud-storage"
    accessMode: ReadWriteOnce
    size: 50Gi

redis:
  enabled: false
  usePassword: false

## Cronjob to execute Nextcloud background tasks
## ref: https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html#cron-jobs
##
cronjob:
  enabled: false
  # The Nextcloud image is used by default, but only curl is needed
  image: {}
    # repository: nextcloud
    # tag: 16.0.3-apache
    # pullPolicy: IfNotPresent
    # pullSecrets:
    #   - myRegistryKeySecretName
  # Every 15 minutes
  # Note: Setting this to any other value than 15 minutes might
  #  cause issues with how Nextcloud background jobs are executed
  schedule: "*/15 * * * *"
  annotations: {}
  # Set curl's insecure option if you use e.g. self-signed certificates
  curlInsecure: false
  failedJobsHistoryLimit: 5
  successfulJobsHistoryLimit: 2
  # If not set, the values from the Nextcloud deployment will be used
  # resources:
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    # limits:
    #  cpu: 100m
    #  memory: 128Mi
    # requests:
    #  cpu: 100m
    #  memory: 128Mi

  # If not set, the value from the Nextcloud deployment will be used
  # nodeSelector: {}

  # If not set, the value from the Nextcloud deployment will be used
  # tolerations: []

  # If not set, the value from the Nextcloud deployment will be used
  # affinity: {}

service:
  type: ClusterIP
  port: 8080
  loadBalancerIP: nil
  nodePort: nil

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  enabled: true
  existingClaim: "nextcloud-storage"
  # Nextcloud Data (/var/www/html)
  annotations: {}
  ## nextcloud data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"

  ## A manually managed Persistent Volume and Claim
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  # existingClaim:

  accessMode: ReadWriteOnce
  size: 50Gi

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #  cpu: 100m
  #  memory: 128Mi
  # requests:
  #  cpu: 100m
  #  memory: 128Mi

## Liveness and readiness probe values
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 15
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 1
readinessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 15
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 1

## Enable pod autoscaling using HorizontalPodAutoscaler
## ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
##
hpa:
  enabled: false
  cputhreshold: 60
  minPods: 1
  maxPods: 10

nodeSelector: {}

tolerations: []

affinity: {}


## Prometheus Exporter / Metrics
##
metrics:
  enabled: false

  replicaCount: 1
  # The metrics exporter needs to know how you serve Nextcloud either http or https
  https: false
  timeout: 5s

  image:
    repository: xperimental/nextcloud-exporter
    tag: v0.3.0
    pullPolicy: IfNotPresent

  ## Metrics exporter resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  # resources: {}

  ## Metrics exporter pod Annotation and Labels
  # podAnnotations: {}

  # podLabels: {}

  service:
    type: ClusterIP
    ## Use serviceLoadBalancerIP to request a specific static IP,
    ## otherwise leave blank
    # loadBalancerIP:
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9205"
    labels: {}

PVC gets deleted with release

I found that uninstalling the Helm release also deletes the PVC associated with the Nextcloud data. Other Helm charts explicitly do not delete the PVCs for their persistent data, as that might result in data loss.

I would suggest that Nextcloud's Helm chart adopt a similar approach.
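In the meantime, Helm's resource-policy annotation can protect the claim; the chart already exposes persistence.annotations, so something like this should survive helm uninstall (hedged on the chart actually templating the annotations onto the PVC):

```yaml
persistence:
  enabled: true
  annotations:
    # tells Helm to leave the PVC in place when the release is deleted
    helm.sh/resource-policy: keep
```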

enabling mariadb affects admin account creation

Installing Nextcloud without any values changed creates an admin account with the credentials admin/changeme. But when enabling MariaDB like so:

internalDatabase:
    enabled: false
mariadb:
    enabled: true
    master:
        persistence:
            enabled: true

Nextcloud asks you to create an admin account first and starts the installation procedure.

Unable to login when using redis as memcache

Hi,
I'm unable to log in to a freshly installed instance with Redis enabled.

config.php section:

'memcache.local' => '\\OC\\Memcache\\Redis',
  'filelocking.enabled' => 'true',
  'memcache.distributed' => '\\OC\\Memcache\\Redis',
  'memcache.locking' => '\\OC\\Memcache\\Redis',
  'redis' => 
  array (
    'host' => 'nextcloud-redis-master',
    'port' => '6379',
  ),

After login, I'm redirected to the login?redirect_url=/apps/files/ URL, but then the login page just reloads and I'm stuck on the login page again. There are no entries in nextcloud.log.

It works on version 18.0.12.

postgresql-isready breaks installations on arm64

Hi,
the new init container postgresql-isready does not work on the arm64 architecture:

kubectl logs -f nextcloud-566d5c5459-gxl99 -c postgresql-isready
standard_init_linux.go:219: exec user process caused: exec format error

Is it possible to at least make the init container optional with a values.yaml flag?

CronJob should curl service name rather than ingress host

Current template:

command: [ "curl" ]
args:
{{- if .Values.cronjob.curlInsecure }}
- "-k"
{{- end }}
- "--fail"
- "-L"
{{- if .Values.ingress.tls }}
- "https://{{ .Values.nextcloud.host }}/cron.php"
{{- else }}
- "http://{{ .Values.nextcloud.host }}/cron.php"
{{- end }}

The problems:

  1. the CronJob requests an external URL, which may not exist if we did not enable the ingress
  2. even if we enable the ingress, it may use ports other than the normal HTTP port (80) or HTTPS port (443)
  3. even if we use the normal HTTP(S) ports, .Values.ingress.tls is not the only way to enable HTTPS

The solution:

Just curl the service name and service port, like this:

              command: [ "curl" ]
              args:
                - "--fail"
                - "-L"
                - "http://{{ template "nextcloud.fullname" . }}:{{ .Values.service.port }}/cron.php"

If we use this solution, .Values.cronjob.curlInsecure can be removed.

The test: (screenshot of the result omitted)

Error updating Endpoint Slices for Service nextcloud/nextcloud

The Nextcloud pod does not start and eventually fails. I can see this event logged:

8m40s       Warning   FailedToUpdateEndpointSlices   service/nextcloud                Error updating Endpoint Slices for Service nextcloud/nextcloud: Error deleting nextcloud-lv57c EndpointSlice for Service nextcloud/nextcloud: endpointslices.discovery.k8s.io "nextcloud-lv57c" not found

The pod itself is stuck at

Initializing nextcloud 19.0.0.8 ...

Reproduction steps

Installation using the Helm chart and the values.yml given below:

helm upgrade nextcloud nextcloud/nextcloud --namespace nextcloud -f nextcloud.values.yml

kubectl version

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"archive", BuildDate:"2020-09-03T15:34:56Z", GoVersion:"go1.15.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:51:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Values.yml

image:
  repository: nextcloud
  tag: 19.0.0
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

replicaCount: 1

ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 4G
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prd"
  tls:
    - secretName: nextcloud-tls
      hosts:
        - REDACTED
  labels: {}

nextcloud:
  host: REDACTED
  username: REDACTED
  password: REDACTED
  update: 0
  datadir: /var/www/html/data
  tableprefix:
  defaultConfigs:
    .htaccess: true
    redis.config.php: true
    apache-pretty-urls.config.php: true
    apcu.config.php: true
    apps.config.php: true
    autoconfig.php: true
    smtp.config.php: true

nginx:
  enabled: false
  image:
    repository: nginx
    tag: alpine
    pullPolicy: IfNotPresent
  config:
    default: true

  resources: {}

internalDatabase:
  enabled: false
  name: nextcloud

externalDatabase:
  enabled: false

mariadb:
  enabled: true
  volumePermissions:
    enabled: true
    securityContext:
      fsGroup: 1001
      runAsUser: 1001
  db:
    name: nextcloud
    user: REDACTED
    password: REDACTED
  persistence:
    enabled: true
    accessMode: ReadWriteOnce
    size: 8Gi
  master:
    persistence:
      accessModes: 
        - ReadWriteOnce
  slave:
    persistence:
      accessModes:
        - ReadWriteOnce
redis:
  enabled: false
  usePassword: false

cronjob:
  enabled: true
  schedule: "*/15 * * * *"
  annotations: {}
  curlInsecure: false
  failedJobsHistoryLimit: 5
  successfulJobsHistoryLimit: 2
service:
  type: ClusterIP
  port: 8080
  loadBalancerIP: nil

persistence:
  enabled: true
  storageClass: "kadalu.replica3"
  accessMode: ReadWriteOnce
  size: 500Gi

livenessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 15
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 1
readinessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 15
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 1

Additional info

Interestingly, I also only see one MariaDB pod, whereas I would expect a master and a slave according to my initial tries using the Helm chart from the previous repo:

nextcloud-1599540300-bvnmc   0/1     Error              0          8m35s
nextcloud-1599540300-jhskn   0/1     Error              0          7m53s
nextcloud-1599540300-k7qjq   0/1     Error              0          2m41s
nextcloud-1599540300-kzpds   0/1     Error              0          6m41s
nextcloud-1599540300-smvxj   0/1     Error              0          7m42s
nextcloud-1599540300-tnwgf   0/1     Error              0          7m22s
nextcloud-75fff49d4c-pcc2z   0/1     CrashLoopBackOff   5          15m
nextcloud-mariadb-0          1/1     Running            0          19h
kubectl -n nextcloud get pvc
NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
nextcloud-nextcloud   Bound    pvc-bd1cc26a-c959-422a-a49e-02714da099ef   500Gi      RWO            kadalu.replica3   19h

HPA non-functional due to incorrect LabelSelector

Issue

Nextcloud App pod HPA does not work

Cause

The HPA is unable to get the CPU requested/usage/limit for the application pods.

Why

The HPA references the nextcloud deployment. This deployment does not define a component label. Therefore, if you also deploy the nextcloud-metrics deployment, one of the following will happen:

  1. Metrics are returned for both the metrics pod and the application pod, and are averaged together (if resource requests/limits are defined on the metrics pod)
  2. If no resource requests/limits are defined on the metrics pod, metrics are still returned, but the HPA throws an error: while the application pod may have resources defined, the metrics pod does not, so the HPA cannot compute an average of requests/limits

What happens

(screenshot of metrics-server debug output omitted)

The metrics server logs do not return an error, but their debug output is helpful in highlighting the issue further. In the above, the HPAs I have deployed for nginx ingress controllers specifically call out the component label in their request. These HPAs are working without issue.

Suggested Fix

Add app.kubernetes.io/component: app to the nextcloud application deployment, and update the spec accordingly:

spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: nextcloud
      app.kubernetes.io/component: app
      app.kubernetes.io/name: nextcloud
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: nextcloud
        app.kubernetes.io/name: nextcloud
        app.kubernetes.io/component: app
        nextcloud-redis-client: 'true'

I am happy to open a PR for this issue if this seems like the correct approach.

IMAP authentication fails deployment

I'm trying to change the nextcloud.phpConfigs value to authenticate Nextcloud against an IMAP server. For that I copied the given example from the tutorial into the nextcloud.phpConfigs value:

nextcloud:
  phpConfigs: 
  - "\'user_backends\' => array( 
        array( 
            \'class\' => \'OC_User_IMAP\', 
            \'arguments\' => array( 
                \'127.0.0.1\', 993, \'ssl\', \'example.com\', true, false 
            ), 
        ), 
    ),"

With helm template nextcloud -f values.yaml nextcloud/nextcloud this renders to the following ConfigMap:

# Source: nextcloud/templates/php-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nextcloud-phpconfig
  labels:
    app.kubernetes.io/name: nextcloud
    helm.sh/chart: nextcloud-2.5.11
    app.kubernetes.io/instance: nextcloud
    app.kubernetes.io/managed-by: Helm
data:
  0: |-
    'user_backends' => array( array( 'class' => 'OC_User_IMAP', 'arguments' => array( '127.0.0.1', 993, 'ssl', 'example.com', true, false ), ), ),

This looks right to me, as data is an array and the first value is a multiline string (|-), but when deploying with helm install nextcloud -f values.yaml nextcloud/nextcloud I get this error:

coalesce.go:196: warning: cannot overwrite table with non table for phpConfigs (map[])
coalesce.go:196: warning: cannot overwrite table with non table for phpConfigs (map[])
W0226 20:53:03.325034   10154 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Error: Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.InitContainers: []v1.Container: Containers: []v1.Container: v1.Container.VolumeMounts: []v1.VolumeMount: v1.VolumeMount.SubPath: ReadString: expects " or n, but found 0, error found in #10 byte of ...|subPath":0}]}],"init|..., bigger context ...|s(int=0)","name":"nextcloud-phpconfig","subPath":0}]}],"initContainers":[{"command":["sh","-c","unti|...

I hope you can help me with this issue.
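The immediate problem is that phpConfigs was given as a YAML list, so the rendered ConfigMap key is the integer 0, which then fails when used as a volumeMount subPath. The chart expects a map of filename → content; and since user_backends is Nextcloud configuration rather than PHP ini settings, it arguably belongs under nextcloud.configs instead. A sketch (the filename is arbitrary):

```yaml
nextcloud:
  configs:
    imap.config.php: |-
      <?php
      $CONFIG = array (
        'user_backends' => array (
          array (
            'class' => 'OC_User_IMAP',
            'arguments' => array (
              '127.0.0.1', 993, 'ssl', 'example.com', true, false
            ),
          ),
        ),
      );
```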

Cannot configure TLS while using MetalLB

I'm trying to install this chart on a bare-metal cluster at home. It seems to work perfectly fine using MetalLB right now, but the problem is that the chart is ignoring my TLS configuration. I understand that using port 80 on the load balancer will not work, but the pod seems to expose only port 80 anyway.

Please have a look at my values.yml:

USER-SUPPLIED VALUES:
ingress:
  enabled: true
  hosts:
    - "nextcloud.domain.tld"
  tls:
    - hosts:
        - "nextcloud.domain.tld"
      secretName: wildcard-domain-tld-tls
  path: /
nextcloud:
  host: "nextcloud.domain.tld"
service:
  type: LoadBalancer
  port: 80
  loadBalancerIP: 172.16.100.100
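Note that the Nextcloud container itself only listens on plain HTTP (port 80), so TLS has to be terminated in front of it. A common pattern, sketched here with the hostnames from the values above, is to keep the service as ClusterIP and let an ingress controller (itself exposed through MetalLB) terminate TLS:

```yaml
# Sketch: TLS terminated at the ingress controller, not at the pod
service:
  type: ClusterIP
  port: 8080
ingress:
  enabled: true
  tls:
    - hosts:
        - nextcloud.domain.tld
      secretName: wildcard-domain-tld-tls
nextcloud:
  host: nextcloud.domain.tld
```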
