
mother-of-all-self-hosting / mash-playbook


🐋 Ansible playbook which helps you host various FOSS services as Docker containers on your own server

License: GNU Affero General Public License v3.0

Languages: Just 34.91%, Python 65.09%
Topics: ansible-playbook, docker, self-hosting, collabora-online, gitea, miniflux, nextcloud, radicale, uptime-kuma, vaultwarden

mash-playbook's Introduction

Support room on Matrix · Donate

Mother-of-All-Self-Hosting Ansible playbook

MASH (Mother-of-All-Self-Hosting) is an Ansible playbook that helps you self-host services as Docker containers on your own server.

By running services in containers, we can have a predictable and up-to-date setup, across multiple supported distros and CPU architectures.

This project allows self-hosting of a large number of services and will continue to grow by adding support for more FOSS services.

Installation, upgrades, and some maintenance tasks are automated using Ansible (see our Ansible guide).

Supported services

See the full list of supported services here.

Installation

To configure and install services on your own server, follow the README in the docs/ directory.

Changes

This playbook evolves over time, sometimes with backward-incompatible changes.

When updating the playbook, refer to the changelog to catch up with what's new.

Support

Why create such a mega playbook?

We used to maintain separate playbooks for various services (Matrix, Nextcloud, Gitea, Gitlab, Vaultwarden, PeerTube, ..). They re-used Ansible roles (for Postgres, Traefik, etc.), but were still hard to maintain due to the large duplication of effort.

Most of these playbooks hosted services which require a Postgres database, a Traefik reverse-proxy, a backup solution, etc. All of them needed to come with documentation, etc. All these things need to be created and kept up-to-date in each and every playbook.

Having to use a dedicated Ansible playbook for each and every piece of software means that you have to juggle many playbooks and make sure they don't conflict with one another when installing services on the same server. All these related playbooks interoperated nicely, but still required at least a bit of manual configuration to achieve this interoperability.

Using specialized Ansible playbooks also means that trying out new software is difficult. Despite the playbooks being similar (which eases the learning curve), each one is still a new git repository you need to clone and maintain, etc.

Furthermore, not all pieces of software are large enough to justify having their own dedicated Ansible playbook. They have no home, so no one uses them.

We found the need for a single playbook which combines all of this, so that:

  • you don't need to juggle multiple Ansible playbooks
  • you can try out various services easily - a few lines of extra configuration and you're ready to go
  • small pieces of software (like Miniflux, powered by the miniflux Ansible role) which don't have their own playbook can finally find a home
  • you can use a single playbook with the quality you know and trust
  • shared services (like Postgres) are maintained in one single place
  • backups are made easy, because everything lives together (same base data path, same Postgres instance)
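In practice, the "few lines of extra configuration" look like this (a sketch following the vars.yml examples elsewhere in this document; the hostname is a placeholder):

```yaml
# vars.yml — enabling an additional service is typically a toggle plus a hostname
miniflux_enabled: true
miniflux_hostname: rss.example.com
```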

Having one large playbook with all services does not necessarily mean you need to host everything on the same server though. Feel free to use as many servers as you see fit. While containers provide some level of isolation, it's still better to not put all your eggs in one basket and create a single point of failure.

All of the aforementioned playbooks have been absorbed into this one. See the full list of supported services here. The Matrix playbook will remain separate, because it contains a huge number of components and will likely grow even more. It deserves to stand on its own.

What's with the name?

Our goal is to create a large Ansible playbook which can be your all-in-one-toolkit for self-hosting services in a clean and reliable way.

We like the MASH acronym, and mashing is popular in the alcohol brewing industry. The result of all that mash is an enjoyable (at least by some) product.

Then, there's mixing and mashing stuff, which is also what this Ansible playbook is all about - you can mix and mash various pieces of software to create the self-hosted stack of your dreams!

mash-playbook's People

Contributors

adam-kress, beastafk, bergruebe, brush, etkecc, hooger, iucca, jahanson, kinduff, moan0s, mxwmnn, neonminnen, nielscil, qeded, sagat79, sidewinder94, spantaleev, spatteriight, sudo-tiz, zenkyma


mash-playbook's Issues

Add Immich

Immich is a really cool Google Photos alternative for photo backup and library management, with some ML features too. It would be nice to have it in the playbook.

Docker compose example for Ansible

https://github.com/mother-of-all-self-hosting/mash-playbook/blob/main/docs/ansible.md#running-ansible-in-a-container-on-the-server-itself this only provides a docker run command, which tooling like Dockge unsuccessfully attempts to convert.

version: "3.3"
services:
  ansible:
    stdin_open: true
    tty: true
    privileged: true
    pid: host
    working_dir: /work
    volumes:
      - "`pwd`:/work"
    entrypoint:
      - /bin/sh
    image: docker.io/devture/ansible:2.14.5-r0-0
networks: {}
validating /opt/stacks/ansible/compose.yaml: volumes Additional property `pwd` is not allowed

Ideally, the guide would include a working example for docker compose
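A compose file equivalent to the documented docker run command might look like this (a sketch, untested; command substitution like `pwd` is not available in compose files, so a relative bind mount is used instead):

```yaml
services:
  ansible:
    image: docker.io/devture/ansible:2.14.5-r0-0
    stdin_open: true
    tty: true
    privileged: true
    pid: host
    working_dir: /work
    volumes:
      # compose resolves relative paths against the compose file's directory,
      # replacing the shell's `pwd` substitution from the docker run example
      - .:/work
    entrypoint: /bin/sh
```

Run from the playbook directory, e.g. with `docker compose run ansible`.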

[mash-uptime-kuma] Trouble using Docker monitor: connect EACCES /var/run/docker.sock

Hello,

Fairly simple issue: I installed Uptime Kuma using the MASH playbook and it works well. Now I'd like to try Docker monitoring to monitor the machine hosting Uptime Kuma and other Docker containers, but when I try to set it up I get this error: connect EACCES /var/run/docker.sock

How can I allow Uptime Kuma to connect to it?

########################################################################
#                                                                      #
# uptime-kuma                                                          #
#                                                                      #
########################################################################

uptime_kuma_enabled: true
uptime_kuma_hostname: up.example.com

uptime_kuma_container_extra_arguments:
  - '--env "PGID=1006"'
  - '--env "PUID=996"'
  - '-v /var/run/docker.sock:/var/run/docker.sock'

########################################################################
#                                                                      #
# /uptime-kuma                                                         #
#                                                                      #
########################################################################

Source How to Monitor Docker Containers
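The EACCES error usually means the container's user (the PUID/PGID set above) lacks permission on the host's /var/run/docker.sock. One common approach (a sketch, not an official playbook option) is to also grant the container membership in the group that owns the socket:

```yaml
uptime_kuma_container_extra_arguments:
  - '-v /var/run/docker.sock:/var/run/docker.sock'
  # grant the container process membership in the group that owns docker.sock;
  # replace 999 with the actual docker group GID from `getent group docker`
  - '--group-add 999'
```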

Collabora page is blank showing only an "OK"

Hi! I just did a MASH setup with Nextcloud, Redis and Collabora on a fresh Ubuntu 22.04 VM. The Collabora page only returns a blank page with "OK" on it and nothing else, and I cannot open document files in Nextcloud Office after running the command `just run-tags install-nextcloud-app-collabora`.


Here is my vars.yml:

########################################################################
#                                                                      #
# nextcloud                                                            #
#                                                                      #
########################################################################

nextcloud_enabled: true

nextcloud_hostname: cloud.mydom.com

nextcloud_redis_hostname: "{{ redis_identifier }}"

nextcloud_systemd_required_services_list_custom:
  - "{{ redis_identifier }}.service"

nextcloud_container_additional_networks_custom:
  - "{{ redis_identifier }}"

nextcloud_collabora_app_wopi_url: "{{ collabora_online_url }}"

########################################################################
#                                                                      #
# /nextcloud                                                           #
#                                                                      #
########################################################################

########################################################################
#                                                                      #
# redis                                                                #
#                                                                      #
########################################################################

redis_enabled: true

########################################################################
#                                                                      #
# /redis                                                               #
#                                                                      #
########################################################################

########################################################################
#                                                                      #
# collabora-online                                                     #
#                                                                      #
########################################################################

collabora_online_enabled: true

collabora_online_hostname: collabora.mydom.com

collabora_online_env_variable_password: 'mypasswd'

collabora_online_aliasgroup: "https://{{ nextcloud_hostname | replace('.', '\\.') }}:443"

########################################################################
#                                                                      #
# /collabora-online                                                    #
#                                                                      #
########################################################################

Mobilizon returns 502 Bad Gateway

Hello!

I am trying to run Mobilizon on my server. Unfortunately, even with a fresh installation I only get a 502, and I can't see what I'm doing wrong. My configuration:

mash_playbook_generic_secret_key: ***
mash_playbook_docker_installation_enabled: true
devture_docker_sdk_for_python_installation_enabled: true
devture_timesync_installation_enabled: true
mash_playbook_reverse_proxy_type: playbook-managed-traefik
devture_traefik_config_certificatesResolvers_acme_email: ***
mobilizon_enabled: true
mobilizon_hostname: events.***.com

Can somebody give me a hint?

Owncast Port 1935 not Open

It looks like port 1935 is open for UDP but not TCP, which is what RTMP uses.

This makes it impossible to stream to the software.

freshrss service not working

First and foremost, thanks for this awesome collection of services. It has saved me a lot of time and effort with setup and maintenance, as well as allowed me to discover new useful services to try out.

The only issue I have encountered so far is that I can't get the freshrss service to work. After setting it up (with the default values) I always get a 404 status code when accessing the "/freshrss" path on my server. I noticed the following error message, printed multiple times, in the logs of the mash-freshrss systemd service:

[unixd:alert] [pid 38] (1)Operation not permitted: AH02157: initgroups: unable to set groups for User www-data and Group 33

I tried this with multiple versions of the freshrss docker image, but always with the same result. Any help with this would be very much appreciated.

Radicale role not working

Just after enabling radicale, when I run the playbook it throws the following error:

TASK [galaxy/radicale : Ensure git repo is configured] ****************************************
failed: [<mydomain>] (item=user.email nobody@nowhere) => {"ansible_loop_var": "item", "changed": true, "cmd": ["git", "config", "user.email", "nobody@nowhere"], "delta": "0:00:00.004751", "e
nd": "2023-06-18 12:08:30.849207", "item": "user.email nobody@nowhere", "msg": "non-zero return code", "rc": 128, "start": "2023-06-18 12:08:30.844456", "stderr": "fatal: not in a git directo
ry", "stderr_lines": ["fatal: not in a git directory"], "stdout": "", "stdout_lines": []}      
failed: [<mydomain>] (item=user.name Radicale) => {"ansible_loop_var": "item", "changed": true, "cmd": ["git", "config", "user.name", "Radicale"], "delta": "0:00:00.004018", "end": "2023-06-
18 12:08:31.128675", "item": "user.name Radicale", "msg": "non-zero return code", "rc": 128, "start": "2023-06-18 12:08:31.124657", "stderr": "fatal: not in a git directory", "stderr_lines": 
["fatal: not in a git directory"], "stdout": "", "stdout_lines": []}

On the second run it works. I uninstalled Radicale again to verify this, and again it failed on the first run.

// On the second run it works because it skips those tasks. Radicale then does not work because it's not correctly initialized as a git repo. I will look more into this.
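The failure suggests the `git config` tasks run before the data directory has been initialized as a repository. A sketch of a fix on the role side (the `radicale_data_path` variable name is an assumption, not necessarily the role's actual variable):

```yaml
- name: Ensure Radicale data directory is a git repository
  ansible.builtin.command:
    cmd: git init
    chdir: "{{ radicale_data_path }}"
    creates: "{{ radicale_data_path }}/.git"

- name: Ensure git repo is configured
  ansible.builtin.command:
    cmd: "git config {{ item }}"
    chdir: "{{ radicale_data_path }}"
  with_items:
    - user.email nobody@nowhere
    - user.name Radicale
```

The `creates` guard makes the `git init` task idempotent across repeated playbook runs.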

Failure setting up AdGuard Home (`[fatal] This is the first launch of AdGuard Home. You must run it as Administrator.`)

Hi,

I'm setting up AdGuard Home for the first time, and I'm seeing the following failure:

Jan 14 15:26:51 services mash-adguard-home[2682244]: 2024/01/14 20:26:51.185065 [info] AdGuard Home, version v0.107.43
Jan 14 15:26:51 services mash-adguard-home[2682244]: 2024/01/14 20:26:51.185164 [info] This is the first time AdGuard Home is launched
Jan 14 15:26:51 services mash-adguard-home[2682244]: 2024/01/14 20:26:51.185188 [info] Checking if AdGuard Home has necessary permissions
Jan 14 15:26:51 services mash-adguard-home[2682244]: 2024/01/14 20:26:51.185196 [fatal] This is the first launch of AdGuard Home. You must run it as Administrator.
Jan 14 15:27:01 services systemd[1]: mash-adguard-home.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 15:27:01 services systemd[1]: mash-adguard-home.service: Failed with result 'exit-code'.

I found some other upstream issues that seem related to this problem (e.g., AdguardTeam/AdGuardHome#4714). If a workaround is indeed required, I believe the documentation should be updated to reflect that.

Thanks!

[feat] Add funkwhale

This is an issue to track the progress of adding Funkwhale to the playbook

  • Create role repository: mother-of-all-self-hosting/ansible-role-funkwhale
  • Add funkwhale-api
  • Add funkwhale-frontend
  • Add celery-worker
  • Add celery-beat
  • Add redis
  • Add to requirements & setup
  • Wire together with group_vars
  • Document basic setup
  • Document setup on a subdomain?
  • Document advanced configuration like multiple redis instances on a host

As orientation we can use https://dev.funkwhale.audio/funkwhale/funkwhale/-/blob/develop/deploy/docker-compose.yml

Failure to deploy certain services using postgresql-16

Heya,

So, ever since we moved to postgresql-16 I've started experiencing some weird problems when deploying certain services. I first noticed this when trying to deploy roundcube in a local VM for testing purposes. The VM is backed by a spinning disk, so the storage is a bit slow, but nothing out of the ordinary. When using the following vars.yml:

roundcube_enabled: true
roundcube_hostname: "10.166.183.192"

roundcube_default_imap_host: 'ssl://mail.example.com'
roundcube_default_imap_port: 993
roundcube_smtp_server: 'ssl://mail.example.com'
roundcube_smtp_port: 465

roundcube_container_http_host_bind_port: "8080"

I see a problem when mash-roundcube starts and attempts to populate the database. Here's the journalctl output for mash-postgres:

Dec 03 05:10:17 roundcube systemd[1]: Starting mash-postgres.service - Postgres server (mash-postgres)...
Dec 03 05:10:17 roundcube systemd[1]: Started mash-postgres.service - Postgres server (mash-postgres).
Dec 03 05:10:18 roundcube mash-postgres[1033]: chmod: /var/run/postgresql: Operation not permitted
Dec 03 05:10:18 roundcube mash-postgres[1033]: The files belonging to this database system will be owned by user "mash".
Dec 03 05:10:18 roundcube mash-postgres[1033]: This user must also own the server process.
Dec 03 05:10:18 roundcube mash-postgres[1033]: The database cluster will be initialized with this locale configuration:
Dec 03 05:10:18 roundcube mash-postgres[1033]:   provider:    libc
Dec 03 05:10:18 roundcube mash-postgres[1033]:   LC_COLLATE:  C
Dec 03 05:10:18 roundcube mash-postgres[1033]:   LC_CTYPE:    C
Dec 03 05:10:18 roundcube mash-postgres[1033]:   LC_MESSAGES: en_US.utf8
Dec 03 05:10:18 roundcube mash-postgres[1033]:   LC_MONETARY: en_US.utf8
Dec 03 05:10:18 roundcube mash-postgres[1033]:   LC_NUMERIC:  en_US.utf8
Dec 03 05:10:18 roundcube mash-postgres[1033]:   LC_TIME:     en_US.utf8
Dec 03 05:10:18 roundcube mash-postgres[1033]: The default text search configuration will be set to "english".
Dec 03 05:10:18 roundcube mash-postgres[1033]: Data page checksums are disabled.
Dec 03 05:10:18 roundcube mash-postgres[1033]: fixing permissions on existing directory /var/lib/postgresql/data ... ok
Dec 03 05:10:18 roundcube mash-postgres[1033]: creating subdirectories ... ok
Dec 03 05:10:18 roundcube mash-postgres[1033]: selecting dynamic shared memory implementation ... posix
Dec 03 05:10:18 roundcube mash-postgres[1033]: selecting default max_connections ... 100
Dec 03 05:10:18 roundcube mash-postgres[1033]: selecting default shared_buffers ... 128MB
Dec 03 05:10:18 roundcube mash-postgres[1033]: selecting default time zone ... UTC
Dec 03 05:10:18 roundcube mash-postgres[1033]: creating configuration files ... ok
Dec 03 05:10:18 roundcube mash-postgres[1033]: running bootstrap script ... ok
Dec 03 05:10:19 roundcube mash-postgres[1033]: performing post-bootstrap initialization ... sh: locale: not found
Dec 03 05:10:19 roundcube mash-postgres[1033]: 2023-12-03 05:10:19.162 UTC [21] WARNING:  no usable system locales were found
Dec 03 05:10:19 roundcube mash-postgres[1033]: ok
Dec 03 05:10:21 roundcube mash-postgres[1033]: syncing data to disk ... ok
Dec 03 05:10:21 roundcube mash-postgres[1033]: initdb: warning: enabling "trust" authentication for local connections
Dec 03 05:10:21 roundcube mash-postgres[1033]: initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
Dec 03 05:10:21 roundcube mash-postgres[1033]: Success. You can now start the database server using:
Dec 03 05:10:21 roundcube mash-postgres[1033]:     pg_ctl -D /var/lib/postgresql/data -l logfile start
Dec 03 05:10:22 roundcube mash-postgres[1033]: waiting for server to start....2023-12-03 05:10:22.143 UTC [27] LOG:  starting PostgreSQL 16.1 on x86_64-pc-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
Dec 03 05:10:22 roundcube mash-postgres[1033]: 2023-12-03 05:10:22.143 UTC [27] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
Dec 03 05:10:22 roundcube mash-postgres[1033]: 2023-12-03 05:10:22.234 UTC [30] LOG:  database system was shut down at 2023-12-03 05:10:19 UTC
Dec 03 05:10:22 roundcube mash-postgres[1033]: 2023-12-03 05:10:22.297 UTC [27] LOG:  database system is ready to accept connections
Dec 03 05:10:22 roundcube mash-postgres[1033]:  done
Dec 03 05:10:22 roundcube mash-postgres[1033]: server started
Dec 03 05:10:22 roundcube mash-postgres[1033]: CREATE DATABASE
Dec 03 05:10:22 roundcube mash-postgres[1033]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
Dec 03 05:10:22 roundcube mash-postgres[1033]: 2023-12-03 05:10:22.692 UTC [27] LOG:  received fast shutdown request
Dec 03 05:10:22 roundcube mash-postgres[1033]: waiting for server to shut down....2023-12-03 05:10:22.840 UTC [27] LOG:  aborting any active transactions
Dec 03 05:10:22 roundcube mash-postgres[1033]: 2023-12-03 05:10:22.843 UTC [27] LOG:  background worker "logical replication launcher" (PID 33) exited with exit code 1
Dec 03 05:10:22 roundcube mash-postgres[1033]: 2023-12-03 05:10:22.846 UTC [28] LOG:  shutting down
Dec 03 05:10:23 roundcube mash-postgres[1033]: 2023-12-03 05:10:23.043 UTC [28] LOG:  checkpoint starting: shutdown immediate
Dec 03 05:10:24 roundcube mash-postgres[1033]: .2023-12-03 05:10:24.071 UTC [28] LOG:  checkpoint complete: wrote 923 buffers (0.4%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.153 s, sync=0.466 s, total=1.225 s; sync files=301, longest=0.388 s, average=0.002 s; distance=4256 kB, estimate=4256 kB; lsn=0/1912678, redo lsn=0/1912678
Dec 03 05:10:24 roundcube mash-postgres[1033]: 2023-12-03 05:10:24.095 UTC [27] LOG:  database system is shut down
Dec 03 05:10:24 roundcube mash-postgres[1033]:  done
Dec 03 05:10:24 roundcube mash-postgres[1033]: server stopped
Dec 03 05:10:24 roundcube mash-postgres[1033]: PostgreSQL init process complete; ready for start up.
Dec 03 05:10:24 roundcube mash-postgres[1033]: 2023-12-03 05:10:24.191 UTC [1] LOG:  starting PostgreSQL 16.1 on x86_64-pc-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
Dec 03 05:10:24 roundcube mash-postgres[1033]: 2023-12-03 05:10:24.191 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
Dec 03 05:10:24 roundcube mash-postgres[1033]: 2023-12-03 05:10:24.191 UTC [1] LOG:  listening on IPv6 address "::", port 5432
Dec 03 05:10:24 roundcube mash-postgres[1033]: 2023-12-03 05:10:24.211 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
Dec 03 05:10:24 roundcube mash-postgres[1033]: 2023-12-03 05:10:24.247 UTC [43] LOG:  database system was shut down at 2023-12-03 05:10:24 UTC
Dec 03 05:10:24 roundcube mash-postgres[1033]: 2023-12-03 05:10:24.284 UTC [1] LOG:  database system is ready to accept connections
Dec 03 05:10:40 roundcube systemd[1]: Stopping mash-postgres.service - Postgres server (mash-postgres)...
Dec 03 05:10:40 roundcube mash-postgres[1297]: mash-postgres
Dec 03 05:10:40 roundcube systemd[1]: mash-postgres.service: Main process exited, code=exited, status=137/n/a
Dec 03 05:10:40 roundcube systemd[1]: mash-postgres.service: Failed with result 'exit-code'.
Dec 03 05:10:40 roundcube systemd[1]: Stopped mash-postgres.service - Postgres server (mash-postgres).
Dec 03 05:10:59 roundcube systemd[1]: Starting mash-postgres.service - Postgres server (mash-postgres)...
Dec 03 05:10:59 roundcube systemd[1]: Started mash-postgres.service - Postgres server (mash-postgres).
Dec 03 05:10:59 roundcube mash-postgres[1722]: chmod: /var/run/postgresql: Operation not permitted
Dec 03 05:10:59 roundcube mash-postgres[1722]: PostgreSQL Database directory appears to contain a database; Skipping initialization
Dec 03 05:11:00 roundcube mash-postgres[1722]: 2023-12-03 05:11:00.067 UTC [1] LOG:  starting PostgreSQL 16.1 on x86_64-pc-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
Dec 03 05:11:00 roundcube mash-postgres[1722]: 2023-12-03 05:11:00.068 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
Dec 03 05:11:00 roundcube mash-postgres[1722]: 2023-12-03 05:11:00.068 UTC [1] LOG:  listening on IPv6 address "::", port 5432
Dec 03 05:11:00 roundcube mash-postgres[1722]: 2023-12-03 05:11:00.111 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
Dec 03 05:11:00 roundcube mash-postgres[1722]: 2023-12-03 05:11:00.187 UTC [15] LOG:  database system was interrupted; last known up at 2023-12-03 05:10:24 UTC
Dec 03 05:11:01 roundcube mash-postgres[1722]: 2023-12-03 05:11:01.068 UTC [15] LOG:  database system was not properly shut down; automatic recovery in progress
Dec 03 05:11:01 roundcube mash-postgres[1722]: 2023-12-03 05:11:01.092 UTC [15] LOG:  redo starts at 0/19126F0
Dec 03 05:11:01 roundcube mash-postgres[1722]: 2023-12-03 05:11:01.166 UTC [15] LOG:  invalid record length at 0/1D3B9F8: expected at least 24, got 0
Dec 03 05:11:01 roundcube mash-postgres[1722]: 2023-12-03 05:11:01.166 UTC [15] LOG:  redo done at 0/1D3B9C0 system usage: CPU: user: 0.00 s, system: 0.01 s, elapsed: 0.07 s
Dec 03 05:11:02 roundcube mash-postgres[1722]: 2023-12-03 05:11:02.344 UTC [13] LOG:  checkpoint starting: end-of-recovery immediate wait
Dec 03 05:11:03 roundcube mash-postgres[1722]: 2023-12-03 05:11:03.155 UTC [17] FATAL:  the database system is not yet accepting connections
Dec 03 05:11:03 roundcube mash-postgres[1722]: 2023-12-03 05:11:03.155 UTC [17] DETAIL:  Consistent recovery state has not been yet reached.
Dec 03 05:11:06 roundcube mash-postgres[1722]: 2023-12-03 05:11:06.337 UTC [13] LOG:  checkpoint complete: wrote 932 buffers (0.4%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.292 s, sync=3.175 s, total=4.050 s; sync files=308, longest=1.159 s, average=0.011 s; distance=4260 kB, estimate=4260 kB; lsn=0/1D3B9F8, redo lsn=0/1D3B9F8
Dec 03 05:11:06 roundcube mash-postgres[1722]: 2023-12-03 05:11:06.377 UTC [1] LOG:  database system is ready to accept connections

You can see a few interesting things here:

  1. The service is stopped cold turkey (mash-postgres.service: Main process exited, code=exited, status=137/n/a);

  2. This leads postgresql to consider that the database was not properly shut down, which requires a recovery procedure to kick in. I don't know if that's the root cause of this bug, but it does seem to take a little bit of time (5 seconds) for postgresql to be ready to accept connections again.

Meanwhile, here's the log for mash-roundcube:

Dec 03 05:10:59 roundcube systemd[1]: Starting mash-roundcube.service - Roundcube (mash-roundcube)...
Dec 03 05:11:00 roundcube mash-roundcube[1800]: d6ce1b43948b6f60daf1b75f94d3ac69382284e4246aaa276eef532aa10a040e
Dec 03 05:11:00 roundcube systemd[1]: Started mash-roundcube.service - Roundcube (mash-roundcube).
Dec 03 05:11:02 roundcube mash-roundcube[1845]: roundcubemail not found in /var/www/html - copying now...
Dec 03 05:11:02 roundcube mash-roundcube[1845]: Complete! ROUNDCUBEMAIL has been successfully copied to /var/www/html
Dec 03 05:11:02 roundcube mash-roundcube[1845]: wait-for-it.sh: waiting 30 seconds for mash-postgres:5432
Dec 03 05:11:02 roundcube mash-roundcube[1845]: wait-for-it.sh: mash-postgres:5432 is available after 0 seconds
Dec 03 05:11:02 roundcube mash-roundcube[1845]: Write root config to /var/www/html/config/config.inc.php
Dec 03 05:11:02 roundcube mash-roundcube[1845]: Write Docker config to /var/www/html/config/config.docker.inc.php
Dec 03 05:11:03 roundcube mash-roundcube[1845]: ERROR: SQLSTATE[08006] [7] connection to server at "mash-postgres" (172.18.0.2), port 5432 failed: FATAL:  the database system is not yet accepting connections
Dec 03 05:11:03 roundcube mash-roundcube[1845]: DETAIL:  Consistent recovery state has not been yet reached.
Dec 03 05:11:03 roundcube mash-roundcube[1845]: ERROR: Failed to connect to database
Dec 03 05:11:03 roundcube mash-roundcube[1845]: Failed to initialize/update the database. Please start with an empty database and restart the container.
Dec 03 05:11:03 roundcube mash-roundcube[1845]: Generating locales (this might take a while)...
Dec 03 05:11:04 roundcube mash-roundcube[1845]:   en_US.UTF-8... done
Dec 03 05:11:04 roundcube mash-roundcube[1845]: Generation complete.
Dec 03 05:11:04 roundcube mash-roundcube[1845]: [Sun Dec 03 05:11:04.670799 2023] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.57 (Debian) PHP/8.1.25 configured -- resuming normal operations
Dec 03 05:11:04 roundcube mash-roundcube[1845]: [Sun Dec 03 05:11:04.671134 2023] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'

As you can see, it tries to contact the database at exactly the same time that the database recovery is in progress. For that reason, the connection fails. Arguably, Roundcube could be more resilient here and retry the connection a few times, but unfortunately it doesn't, and the service is left in an unconfigured state.
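A generic retry wrapper of the kind an entrypoint could use to ride out the recovery window (a sketch; `pg_isready` and the hostname in the usage comment are only illustrative):

```shell
#!/bin/sh
# retry: run a command up to ATTEMPTS times, sleeping DELAY seconds between
# attempts; returns 0 on the first success, 1 if every attempt fails
retry() {
  attempts="$1"
  delay="$2"
  shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Illustrative usage: wait for Postgres to finish recovery before initializing
# retry 10 3 pg_isready -h mash-postgres -p 5432
```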

A few things are worth noting:

  1. When I was developing ansible-role-roundcube, I did several local tests using the same setup and they were all successful. The only noticeable difference was that mash-playbook was still using postgresql-15 at the time.

  2. I'm also experiencing the same problem with a production system where I have vaultwarden and authentik deployed.

  3. To mitigate the issue, I'm setting the following parameter inside vars.yml:

devture_systemd_service_manager_up_verification_delay_seconds: 29

mash-authentik-server.service was not detected to be running

TASK [galaxy/com.devture.ansible.role.systemd_service_manager : Fail if service isn't detected to be running] **************
failed: [workspace.di.xyz] (item=mash-authentik-server.service) => {"ansible_loop_var": "item", "changed": false, "item": "mash-authentik-server.service", "msg": "mash-authentik-server.service was not detected to be running. It's possible that there's a configuration problem or another service on your server interferes with it (uses the same ports, etc.). Try running `systemctl status mash-authentik-server.service` and `journalctl -fu mash-authentik-server.service` on the server to investigate. If you're on a slow or overloaded server, it may be that services take a longer time to start and that this error is a false-positive. You can consider raising the value of the `devture_systemd_service_manager_up_verification_delay_seconds` variable. See `/home/ubuntu/mash-playbook/roles/galaxy/com.devture.ansible.role.systemd_service_manager/defaults/main.yml` for more details about that."}
failed: [workspace.di.xyz] (item=mash-authentik-worker.service) => {"ansible_loop_var": "item", "changed": false, "item": "mash-authentik-worker.service", "msg": "mash-authentik-worker.service was not detected to be running. It's possible that there's a configuration problem or another service on your server interferes with it (uses the same ports, etc.). Try running `systemctl status mash-authentik-worker.service` and `journalctl -fu mash-authentik-worker.service` on the server to investigate. If you're on a slow or overloaded server, it may be that services take a longer time to start and that this error is a false-positive. You can consider raising the value of the `devture_systemd_service_manager_up_verification_delay_seconds` variable. See `/home/ubuntu/mash-playbook/roles/galaxy/com.devture.ansible.role.systemd_service_manager/defaults/main.yml` for more details about that."}

systemctl status mash-authentik-server.service

● mash-authentik-server.service - Authentik Server (mash-authentik-server)
     Loaded: loaded (/etc/systemd/system/mash-authentik-server.service; enabled; vendor preset: enabled)
     Active: activating (auto-restart) (Result: exit-code) since Fri 2023-11-10 19:04:50 EET; 27s ago
    Process: 4060307 ExecStartPre=/usr/bin/env sh -c /usr/bin/env docker kill mash-authentik-server 2>/dev/null || true (code=exited, status=0/SUCCESS)
    Process: 4060315 ExecStartPre=/usr/bin/env sh -c /usr/bin/env docker rm mash-authentik-server 2>/dev/null || true (code=exited, status=0/SUCCESS)
    Process: 4060323 ExecStartPre=/usr/bin/env docker create --rm --name=mash-authentik-server --log-driver=none --user=997:1003 --cap-drop=ALL --read-only --network=mash-authentik --env-file=>
    Process: 4060329 ExecStartPre=/usr/bin/env docker network connect traefik mash-authentik-server (code=exited, status=0/SUCCESS)
    Process: 4060336 ExecStartPre=/usr/bin/env docker network connect mash-postgres mash-authentik-server (code=exited, status=0/SUCCESS)
    Process: 4060343 ExecStartPre=/usr/bin/env docker network connect mash-authentik-redis mash-authentik-server (code=exited, status=0/SUCCESS)
    Process: 4060349 ExecStart=/usr/bin/env docker start --attach mash-authentik-server (code=exited, status=1/FAILURE)
   Main PID: 4060349 (code=exited, status=1/FAILURE)
        CPU: 145ms

Radicale doesn't include 'radicale_auth_matrix' python module

I've enabled Radicale and configured it to authenticate via Matrix, as in defaults/main.yml:

radicale_enabled: true

radicale_hostname: 'mash.mydomain.com'
radicale_path_prefix: '/radicale'

radicale_auth_type: 'radicale_auth_matrix'
radicale_auth_matrix_server: 'matrix.mydomain.com'

But got in the journal:

Jul 18 17:08:00 mash systemd[1]: Starting mash-radicale.service - radicale (mash-radicale)...
Jul 18 17:08:00 mash mash-radicale[71682]: 1d869aafed024ae341875f53545b2d602b9df3d9b8756c30e38f68ef4194f5a5
Jul 18 17:08:00 mash systemd[1]: Started mash-radicale.service - radicale (mash-radicale).
Jul 18 17:08:01 mash mash-radicale[71695]: [2023-07-18 12:08:01 +0000] [1] [CRITICAL] An exception occurred during server startup: Failed to load auth module 'radicale_auth_matrix': No module named 'radicale_auth_matrix'
Jul 18 17:08:01 mash systemd[1]: mash-radicale.service: Main process exited, code=exited, status=1/FAILURE
Jul 18 17:08:01 mash systemd[1]: mash-radicale.service: Failed with result 'exit-code'.
Jul 18 17:08:32 mash systemd[1]: mash-radicale.service: Scheduled restart job, restart counter is at 1.
Jul 18 17:08:32 mash systemd[1]: Stopped mash-radicale.service - radicale (mash-radicale).

How can I add the 'radicale_auth_matrix' module to the Radicale Docker image? I assumed the image already included it, but it doesn't.
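For reference, one possible direction, assuming the playbook's image-customization pattern (seen elsewhere for other roles) also applies to Radicale. The variable names below are guesses, and the pip package name for the plugin is an assumption, so check roles/galaxy/radicale/defaults/main.yml before relying on this:

```yaml
# Hypothetical variables, modeled on the playbook's image-customization pattern.
# The pip package name is assumed to match the module name.
radicale_container_image_customizations_enabled: true
radicale_container_image_customizations_dockerfile_body_custom: |
  RUN pip install radicale-auth-matrix
```

If the role offers no such hooks, the fallback is building a derived image manually and pointing the role's container-image variable at it.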

Add Gotify Push

It would be awesome to have a Gotify server to send/receive push notifications.

Services from this project that can use Gotify

  • Changedetection
  • Healthchecks
  • Uptime Kuma

Some services outside of this project that can use Gotify

  • Proxmox VE version 8.1+
  • Netdata
  • Acme.sh

Project Homepage | Installation Instructions | Github Page

ARM support for focalboard

Enabling focalboard_enabled on an ARM host fails with this playbook: the platform warning and "exec format error" in the logs below indicate that the Focalboard image is only published for linux/amd64.

Always fails at this step:

TASK [galaxy/com.devture.ansible.role.systemd_service_manager : Fail if service isn't detected to be running] *********************
failed: [mash.redacted.com] (item=mash-focalboard.service) => {"ansible_loop_var": "item", "changed": false, "item": "mash-focalboard.service", "msg": "mash-focalboard.service was not detected to be running. It's possible that there's a configuration problem or another service on your server interferes with it (uses the same ports, etc.). Try running `systemctl status mash-focalboard.service` and `journalctl -fu mash-focalboard.service` on the server to investigate. If you're on a slow or overloaded server, it may be that services take a longer time to start and that this error is a false-positive. You can consider raising the value of the `devture_systemd_service_manager_up_verification_delay_seconds` variable. See `/home/daries/mash-playbook/roles/galaxy/com.devture.ansible.role.systemd_service_manager/defaults/main.yml` for more details about that."}

journalctl

Mar 21 20:14:55 daries systemd[1]: mash-focalboard.service: Failed with result 'exit-code'.
Mar 21 20:15:26 daries systemd[1]: mash-focalboard.service: Scheduled restart job, restart counter is at 20.
Mar 21 20:15:26 daries systemd[1]: Stopped Focalboard (mash-focalboard).
Mar 21 20:15:26 daries systemd[1]: Starting Focalboard (mash-focalboard)...
Mar 21 20:15:26 daries mash-focalboard[4118180]: WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
Mar 21 20:15:26 daries mash-focalboard[4118180]: c69ada62b00a9789f656f00fc50119fee77f0816f1b24a85845fada520e45d1a
Mar 21 20:15:26 daries systemd[1]: Started Focalboard (mash-focalboard).
Mar 21 20:15:26 daries mash-focalboard[4118240]: exec /opt/focalboard/bin/focalboard-server: exec format error
Mar 21 20:15:27 daries systemd[1]: mash-focalboard.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 20:15:27 daries systemd[1]: mash-focalboard.service: Failed with result 'exit-code'.

Using Authentik to secure services (especially those without authentication)

Hello,

Now that we have Authentik (and Keycloak), I want to propose that we add flags that allow us to secure any service with Authentik. We may want to add more services in the future, including services that do not have their own login methods. We could easily secure them behind Authentik now.

If I understood the documentation correctly, the proxy provider sounds like it is made for exactly this scenario.
This supports Traefik's forwardAuth labels, so in theory, a few simple flags could already solve this.

Scope of this proposal:

  • Discuss if this is suitable. This probably must be added to every service, but after it has been set up once it can probably be copy-pasted to every other container.
  • Discuss which to use: Forward Auth (single application) or Forward Auth (domain level). From my understanding, both have their benefits. The latter seems "simpler" and "easier to set up", but I have not yet read much of the documentation.
  • Investigate: can we also securely route internal addresses via the "Proxy" setting (e.g. router, smart home, Proxmox, etc.)?
  • Implement a proof-of-concept for one service, then migrate it to all other services with HTTP endpoints.

Benefits of this proposal:

  • With this feature, we can add services that have no authentication of their own and still make them available to the public (behind Authentik), thus mitigating the risk of exposure.

Drawbacks of this proposal:

  • We have to integrate the flags for every container. If we can simply add all extra (Traefik) flags without touching every container, that would make it quite simple. Otherwise, the setup should be the same for every service and can thus be re-used as a template.
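To make this concrete, here is a hedged sketch of what the Traefik side could look like for a single service, using the existing *_container_labels_additional_labels mechanism and authentik's proxy-outpost endpoint. The service name "example" and the outpost address are placeholder assumptions; the forwardAuth options themselves are standard Traefik:

```yaml
# "example" is a placeholder service name; the outpost address assumes the
# authentik server container is reachable on port 9000 in a shared network
example_container_labels_additional_labels: |
  traefik.http.middlewares.mash-example-forwardauth.forwardauth.address=http://mash-authentik-server:9000/outpost.goauthentik.io/auth/traefik
  traefik.http.middlewares.mash-example-forwardauth.forwardauth.trustForwardHeader=true
  traefik.http.middlewares.mash-example-forwardauth.forwardauth.authResponseHeaders=X-authentik-username,X-authentik-groups,X-authentik-email
```

The remaining piece is attaching the middleware to the service's router, which is what the proposed flags would automate.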

Service wishlist: What services would be nice to support

  • etke.cc/cleanup (requires adjustments)
  • etke.cc/etherpad
  • etke.cc/grafana
  • mdad/prometheus (requires full overhaul)
  • etke.cc/languagetool
  • etke.cc/ntfy
  • etke.cc/dnsmasq (requires full overhaul)
  • etke.cc/soft-serve (requires full overhaul)
  • etke.cc/prometheus_postgres_exporter
  • etke.cc/prometheus_blackbox_exporter (requires full overhaul)
  • etke.cc/security (requires adjustments)
  • etke.cc/swap (requires adjustments)
  • Vikunja - todo app - I may do this one some day, out of curiosity. I've been running my own todo app for many years and the chance of switching to another one is small, but who knows.
  • appsmith - no-code app building platform. I don't know much about this, but am researching it for work and it seems curious. Perhaps some people would find it useful.
  • RustDesk - I will likely do this one. I'm currently hosting 2 instances of this manually and I'd like to convert them to mash.
  • PhotoPrism - AI-Powered Photos App
  • Healthchecks.io - Cron Job Monitoring. I may do this one. I'm currently hosting this manually (docker-compose) and I'd like to convert it to mash.
  • wg-easy - WireGuard. I may do this one if I don't like our Firezone service. I'm currently operating a few wg-easy instances hosted manually. I may try switching to Firezone and forgetting about wg-easy.

mash-playbook not integrating with matrix-traefik

I have the matrix-playbook with Traefik, fronted by Caddy 2. I'm trying to let the mash-playbook use the Traefik instance from the matrix-playbook. I get a lot of log lines from matrix-traefik right after startup, and both subdomains run into a 404 Not Found.

Jun 24 10:13:52 tbaer.de matrix-traefik[2976747]: time="2023-06-24T08:13:52Z" level=error msg="the router mash-gitea@docker uses a non-existent resolver: default"
Jun 24 10:13:52 tbaer.de matrix-traefik[2976747]: time="2023-06-24T08:13:52Z" level=error msg="the router mash-uptime-kuma@docker uses a non-existent resolver: default"
Jun 24 10:14:21 tbaer.de matrix-traefik[2976747]: time="2023-06-24T08:14:21Z" level=error msg="entryPoint \"web-secure\" doesn't exist" routerName=mash-gitea@docker entryPointName=web-secure
Jun 24 10:14:21 tbaer.de matrix-traefik[2976747]: time="2023-06-24T08:14:21Z" level=error msg="no valid entryPoint for this router" routerName=mash-gitea@docker
Jun 24 10:14:21 tbaer.de matrix-traefik[2976747]: time="2023-06-24T08:14:21Z" level=error msg="entryPoint \"web-secure\" doesn't exist" entryPointName=web-secure routerName=mash-uptime-kuma@docker
Jun 24 10:14:21 tbaer.de matrix-traefik[2976747]: time="2023-06-24T08:14:21Z" level=error msg="no valid entryPoint for this router" routerName=mash-uptime-kuma@docker
Jun 24 10:14:21 tbaer.de matrix-traefik[2976747]: time="2023-06-24T08:14:21Z" level=error msg="entryPoint \"web-secure\" doesn't exist" entryPointName=web-secure routerName=mash-gitea@docker
Jun 24 10:14:21 tbaer.de matrix-traefik[2976747]: time="2023-06-24T08:14:21Z" level=error msg="no valid entryPoint for this router" routerName=mash-gitea@docker
Jun 24 10:14:21 tbaer.de matrix-traefik[2976747]: time="2023-06-24T08:14:21Z" level=error msg="entryPoint \"web-secure\" doesn't exist" routerName=mash-uptime-kuma@docker entryPointName=web-secure
Jun 24 10:14:21 tbaer.de matrix-traefik[2976747]: time="2023-06-24T08:14:21Z" level=error msg="no valid entryPoint for this router" routerName=mash-uptime-kuma@docker
Jun 24 10:14:21 tbaer.de matrix-traefik[2976747]: time="2023-06-24T08:14:21Z" level=error msg="the router mash-gitea@docker uses a non-existent resolver: default"
Jun 24 10:14:21 tbaer.de matrix-traefik[2976747]: time="2023-06-24T08:14:21Z" level=error msg="the router mash-uptime-kuma@docker uses a non-existent resolver: default"

mash-playbook config:

mash_playbook_reverse_proxy_type: other-traefik-container
mash_playbook_reverse_proxyable_services_additional_network: traefik

Everything looks OK, but I guess I have missed some config somewhere, or there is an issue.
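Judging from the errors, the mash-generated labels reference an entrypoint (web-secure) and a certificate resolver (default) that the matrix playbook's Traefik doesn't define. If the playbook exposes knobs for those two names, pointing them at whatever the matrix Traefik actually uses should line things up. The two extra variable names below are hypothetical; check the playbook's reverse-proxy documentation for the real ones:

```yaml
mash_playbook_reverse_proxy_type: other-traefik-container
mash_playbook_reverse_proxyable_services_additional_network: traefik

# Hypothetical variable names; the values must match the TLS entrypoint and
# ACME certificate resolver actually defined by the matrix playbook's Traefik
traefik_entrypoint_primary: https
traefik_certResolver_primary: le
```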

Consider using smaller container networks to avoid exhausting the networks pool

Ref: https://straz.to/2021-09-08-docker-address-pools/

This is a nice post. It's a bit intrusive for the playbook to be changing the default address pool.

Although.. I guess that if the playbook is managing your Docker installation (which is optional and some may decide to turn it off), it may as well do some reconfiguration as it sees fit.

Then the question is.. how do we do this nicely? It seems like the ansible-role-docker role we're currently using has a docker_daemon_options variable, which influences /etc/docker/daemon.json.

So.. we may be able to set some options.

I suppose networks that had already been created will not be affected.. they'd need to be recreated to become small.

Or worse yet.. the new Docker pool definition (with the tiny networks) may be in conflict with the existing large ones.. and we'd need to force-delete them first and recreate them.

There are many things to figure out, but.. it seems like it's a possibility we should research.
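If we go this route, the relevant daemon setting is default-address-pools. A hedged sketch via the docker_daemon_options variable mentioned above (the base/size values are arbitrary examples; a /16 base carved into /24 networks yields 256 small networks):

```yaml
# Example values only; pick a base range that doesn't collide with your LAN/VPN
docker_daemon_options:
  default-address-pools:
    - base: "10.128.0.0/16"
      size: 24
```

As noted, networks created before this change keep their old subnets until they are recreated.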

Support for external PostgreSQL?

Hey all,

Thanks for the great playbook, it looks great, and the documentation is solid as well. Are there any plans to be able to support an external PostgreSQL server? Currently I'm running an external instance, and would love to be able to use it.

For example, I'm able to provide database variables to the Nextcloud role via the nextcloud_database_* variables, but unfortunately they don't do anything with the integrated devture_postgres* database disabled.

The Borg backup documentation also references an external Postgres server, but the link just 404s:

By default, if you're using the integrated Postgres database server (as opposed to [an external Postgres server](configuring-playbook-external-postgres.md)) or MariaDB as MySQL server, Borg backups will also include dumps of your databases. An alternative solution for backing up the Postgres database is [postgres backup](configuring-playbook-postgres-backup.md). If you decide to go with another solution, you can disable Postgres-backup support for Borg using the `backup_borg_postgresql_enabled` variable.
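For context, here is a hedged sketch of what an external-database setup might look like if it were supported. Only the nextcloud_database_* prefix and devture_postgres* come from the existing variables mentioned above; the rest of the names are guesses, so treat this as an illustration of the request rather than working configuration:

```yaml
devture_postgres_enabled: false

# Hypothetical variable names illustrating the desired external-database support
nextcloud_database_hostname: postgres.example.com
nextcloud_database_port: 5432
nextcloud_database_name: nextcloud
nextcloud_database_username: nextcloud
nextcloud_database_password: 'SECRET'
```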

[Question] Nextcloud php increase upload/memory size?

I have a Nextcloud 28 instance running, but I want to increase the PHP upload and memory limits.

I found the PHP production ini in the mash-nextcloud-server container, but I'm not sure whether the one in /usr/local/etc/php is actually used.

I'm not familiar with Traefik yet, so I don't know whether I need to tweak some settings there. It would be nice if someone could point me in the right direction!

Thanks!
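For what it's worth, the upstream Nextcloud (Apache) Docker image applies the PHP_MEMORY_LIMIT and PHP_UPLOAD_LIMIT environment variables at container startup, so this may not require touching the ini file or Traefik at all. A hedged sketch, assuming the role exposes a hook for extra container environment variables (the variable name below is a guess; check the role's defaults/main.yml):

```yaml
# Hypothetical variable name for injecting extra container environment variables;
# PHP_MEMORY_LIMIT and PHP_UPLOAD_LIMIT are honored by the upstream image
nextcloud_environment_variables_additional_variables: |
  PHP_MEMORY_LIMIT=1G
  PHP_UPLOAD_LIMIT=4G
```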

Woodpecker CI server hosted at subpath

Hi,

According to woodpecker-ci/woodpecker#1636 (and woodpecker-ci/woodpecker@b90e790), it seems that it should now be possible to run Woodpecker CI at its own subpath. I decided to give it a try but I'm still seeing problems with the setup.

Here are the mash-traefik logs:

[21/Feb/2024:03:19:35 +0000] "GET /ci/ HTTP/2.0" 200 857 "-" "-" 65 "mash-woodpecker-ci-server@docker" "http://172.20.0.4:8000" 0ms
[21/Feb/2024:03:19:35 +0000] "GET /assets/custom.css HTTP/2.0" 404 0 "-" "-" 69 "mash-forgejo@docker" "http://172.20.0.3:3000" 0ms
[21/Feb/2024:03:19:35 +0000] "GET /assets/index-u3ngPXwk.css HTTP/2.0" 404 0 "-" "-" 68 "mash-forgejo@docker" "http://172.20.0.3:3000" 1ms
[21/Feb/2024:03:19:35 +0000] "GET /assets/index-XiIDA5Cc.js HTTP/2.0" 404 0 "-" "-" 67 "mash-forgejo@docker" "http://172.20.0.3:3000" 1ms
[21/Feb/2024:03:19:35 +0000] "GET /assets/custom.js HTTP/2.0" 404 0 "-" "-" 70 "mash-forgejo@docker" "http://172.20.0.3:3000" 0ms
[21/Feb/2024:03:19:35 +0000] "GET /web-config.js HTTP/2.0" 404 11 "-" "-" 66 "mash-forgejo@docker" "http://172.20.0.3:3000" 8ms
[21/Feb/2024:03:19:35 +0000] "GET /assets/index-u3ngPXwk.css HTTP/2.0" 404 0 "-" "-" 71 "mash-forgejo@docker" "http://172.20.0.3:3000" 0ms
[21/Feb/2024:03:19:35 +0000] "GET /assets/custom.css HTTP/2.0" 404 0 "-" "-" 72 "mash-forgejo@docker" "http://172.20.0.3:3000" 0ms
[21/Feb/2024:03:19:35 +0000] "GET /assets/custom.js HTTP/2.0" 404 0 "-" "-" 73 "mash-forgejo@docker" "http://172.20.0.3:3000" 0ms

The problem seems to happen because the first request goes to the right container, but the subsequent requests (without the /ci/ prefix) are going to the wrong container (Forgejo, in this case).

It's not clear to me if this is a problem with the playbook, Woodpecker, or both, but I decided to start by filing a bug here.

mash_playbook_generic_secret_key not defined but it is

Hello,
I am quite new to Ansible, but as far as I can tell I followed your documentation to the letter: https://github.com/mother-of-all-self-hosting/mash-playbook/blob/main/docs/configuring-playbook.md

I've populated mash_playbook_generic_secret_key in inventory/host_vars/<your-domain>/vars.yml (with <your-domain> replaced, of course), but when I run the playbook the following error is reported:

TASK [mash/playbook_base : Fail if required mash playbook settings not defined] ***********************************************************************************************************************************************************************************************
failed: [<your-domain>] (item=mash_playbook_generic_secret_key) => {"ansible_loop_var": "item", "changed": false, "item": "mash_playbook_generic_secret_key", "msg": "You need to define a required configuration setting (`mash_playbook_generic_secret_key`) for using this role."}

Is there an error in the documentation or the playbook, or am I doing something wrong?
Thanks!
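For comparison, a minimal inventory/host_vars/<your-domain>/vars.yml that satisfies this check only needs the key defined at the top level of the file (the value below is a placeholder; generate your own long random string):

```yaml
---
mash_playbook_generic_secret_key: 'REPLACE-WITH-A-LONG-RANDOM-STRING'
```

A common pitfall is the host_vars directory name not matching the hostname used in inventory/hosts exactly; in that case Ansible never loads the file and the variable appears undefined.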

Add kanidm and lldap services

It would be neat to add both kanidm and lldap. Kanidm is an all-in-one IDM solution while lldap is a lightweight LDAP server. (I'd suggest OpenLDAP as well, but OpenLDAP is, from my experience, an absolute pain to set up.)

Unable to find image 'localhost/nextcloud:27.1.4-apache-customized' locally

I am running just install-all on a fresh Ubuntu 22.04 (ARM) server.

I am getting this error for Nextcloud:

failed: [XXX.de] (item={'name': 'mash-nextcloud-server.service', 'priority': 2000, 'groups': ['mash', 'nextcloud', 'nextcloud-server']}) => changed=false 
  ansible_loop_var: item
  item:
    groups:
    - mash
    - nextcloud
    - nextcloud-server
    name: mash-nextcloud-server.service
    priority: 2000
  msg: |-
    Unable to start service mash-nextcloud-server.service: Job for mash-nextcloud-server.service failed because the control process exited with error code.
    See "systemctl status mash-nextcloud-server.service" and "journalctl -xeu mash-nextcloud-server.service" for details.

The image is downloaded.
nextcloud 27.1.4-apache 4ffe31245259 11 days ago 1.21GB

The journalctl log:

Dec 10 15:55:46 ubuntu-4gb-nbg1-2 systemd[1]: mash-nextcloud-server.service: Control process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ An ExecStartPre= process belonging to unit mash-nextcloud-server.service has exited.
░░ 
░░ The process' exit code is 'exited' and its exit status is 1.
Dec 10 15:55:46 ubuntu-4gb-nbg1-2 systemd[1]: mash-nextcloud-server.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ The unit mash-nextcloud-server.service has entered the 'failed' state with result 'exit-code'.
Dec 10 15:55:46 ubuntu-4gb-nbg1-2 systemd[1]: Failed to start Nextcloud Server (mash-nextcloud-server).
░░ Subject: A start job for unit mash-nextcloud-server.service has failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A start job for unit mash-nextcloud-server.service has finished with a failure.
░░ 
░░ The job identifier is 4659 and the job result is failed.
...skipping...
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A stop job for unit mash-nextcloud-server.service has finished.
░░ 
░░ The job identifier is 4659 and the job result is done.
Dec 10 15:55:46 ubuntu-4gb-nbg1-2 systemd[1]: Starting Nextcloud Server (mash-nextcloud-server)...
░░ Subject: A start job for unit mash-nextcloud-server.service has begun execution
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A start job for unit mash-nextcloud-server.service has begun execution.
░░ 
░░ The job identifier is 4659.
Dec 10 15:55:46 ubuntu-4gb-nbg1-2 mash-nextcloud-server[8901]: Unable to find image 'localhost/nextcloud:27.1.4-apache-customized' locally
Dec 10 15:55:46 ubuntu-4gb-nbg1-2 mash-nextcloud-server[8901]: Error response from daemon: error parsing HTTP 404 response body: invalid character 'p' after top-level value: "404 page not found\n"
Dec 10 15:55:46 ubuntu-4gb-nbg1-2 systemd[1]: mash-nextcloud-server.service: Control process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ An ExecStartPre= process belonging to unit mash-nextcloud-server.service has exited.
░░ 
░░ The process' exit code is 'exited' and its exit status is 1.
Dec 10 15:55:46 ubuntu-4gb-nbg1-2 systemd[1]: mash-nextcloud-server.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ The unit mash-nextcloud-server.service has entered the 'failed' state with result 'exit-code'.
Dec 10 15:55:46 ubuntu-4gb-nbg1-2 systemd[1]: Failed to start Nextcloud Server (mash-nextcloud-server).
░░ Subject: A start job for unit mash-nextcloud-server.service has failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A start job for unit mash-nextcloud-server.service has finished with a failure.
░░ 
░░ The job identifier is 4659 and the job result is failed.

I don't have the nextcloud_container_image_customized parameter set. I don't know why the playbook would want to use a customized container. Any idea?

New Authentik installation: can not connect to redis & can not uninstall

Greetings!

I wanted to test and validate the new Authentik changes.

While doing so, I ran into two issues:

  1. Neither the worker nor the server can connect to Redis (using the default Redis configuration right from the docs). I verified that Redis is running and open for connections (using journalctl -fu mash-redis.service).
  2. When disabling Authentik, you run into another issue: the playbook cannot find authentik_frontend_identifier and thus fails when it tries to remove the service.

Sadly I do not have the time to troubleshoot right now, but I wanted to let you know about this issue nonetheless.

Config

########################################################################
#                                                                      #
# redis                                                                #
#                                                                      #
# Required for Authentik                                               #
########################################################################

redis_enabled: true

########################################################################
#                                                                      #
# /redis                                                               #
#                                                                      #
########################################################################

########################################################################
#                                                                      #
# authentik                                                            #
#                                                                      #
########################################################################

authentik_enabled: true
authentik_hostname: auth.DOMAIN.net
authentik_secret_key: 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'

# Base configuration as shown above

# Point authentik to the shared Redis instance
authentik_config_redis_hostname: "{{ redis_identifier }}"

# Make sure the authentik service (mash-authentik.service) starts after the shared Redis service (mash-redis.service)
authentik_systemd_required_services_list_custom:
  - "{{ redis_identifier }}.service"

# Make sure the authentik container is connected to the container network of the shared Redis service (mash-redis)
authentik_container_additional_networks_custom:
  - "{{ redis_identifier }}"


########################################################################
#                                                                      #
# /authentik                                                           #
#                                                                      #
########################################################################

1. Errors after starting

Neither the worker nor the server can connect to Redis, and thus they cannot start (and they spam the logs).

> sudo journalctl -fu mash-authentik-worker.service

Apr 23 12:05:18 matrix mash-authentik-worker[1103356]: {"event": "Redis Connection failed, retrying... (Error -2 connecting to mash-redis:6379. Name or service not known.)", "level": "info", "logger": "authentik.lib.config", "timestamp": 1682244318.5348532, "redis_url": "redis://:@mash-redis:6379/0"}

2. Error after disabling redis and authentik:

When disabling Authentik, you get another error, which stops the entire playbook from setting up and starting things.
This persists over multiple runs of the command (one can hope, right?).

> ansible-playbook -i inventory/hosts setup.yml --tags=setup-all,start

TASK [galaxy/authentik : Ensure authentik services are stopped and removed] *********************************************************************************************************
fatal: [DOMAIN.net]: FAILED! => {"msg": "'authentik_frontend_identifier' is undefined"}

Rocky Linux 9.1 support

I was able to install docker on Rocky Linux 9.1 by:

  1. adding the centos docker-ce repo
  2. enabling epel-release to install python3-docker
  3. disabling timesyncd in vars.yml and opting for the chronyd that's already installed.

add_prereqs.sh

sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install epel-release -y
sudo dnf install python3-docker -y

vars.yml

devture_timesync_installation_enabled: false

Proposal: Allow the user to add traefik middlewares in order to add IPWhitelist to services

Hello there!

We just wanted to set up a Traefik IPWhitelist middleware, and got it to work, too.
For this to work, we need to do the following:

  • Add a middleware to Traefik
  • Add the middleware to the service(s)

I propose two working methods that provide this functionality. As an example, I will modify roles/galaxy/nextcloud/templates/labels.j2 to add the IPWhitelist functionality to Nextcloud.

Note: For the sake of keeping this issue simple, I used {% if VARIABLE is defined %} instead of adding the variable(s) to roles/galaxy/nextcloud/defaults/main.yml.

1. Proposal: Adding a NAME_container_labels_additional_middleware variable to register traefik middlewares

In this scenario we simply allow the modification of the middlewares variable. This is the most versatile variant, as it will allow us to add every middleware we may want to add in the future. I propose that we add this (or similar) functionality to all roles.
We will add nextcloud_container_labels_additional_middleware, which will simply be appended to the middlewares variable.

# We find this line in `roles/galaxy/nextcloud/templates/labels.j2`
{% set middlewares = [] %}

# The following lines are new:
{% if nextcloud_container_labels_additional_middleware is defined %}
{% set middlewares = middlewares + [nextcloud_container_labels_additional_middleware] %}
{% endif %}

With this simple change, I can do the following in our vars.yml:

nextcloud_container_labels_additional_labels: |
  traefik.http.middlewares.mash-nextcloud-whitelist.ipwhitelist.sourcerange=172.23.0.0/24, 10.0.0.0/24, 192.168.160.0/24

nextcloud_container_labels_additional_middleware: mash-nextcloud-whitelist

That's it. Nextcloud now has a middleware that provides an IP Whitelist.
This can also be used to add any amount of different middlewares we want.

2. Proposal: Adding NAME_container_labels_ipwhitelist_sourcerange to allow a simplified and automated setting of the iprange

I will now describe a second scenario, just for the sake of completeness.
In this scenario we create a new variable nextcloud_container_labels_ipwhitelist_sourcerange in roles/galaxy/nextcloud/templates/labels.j2. Setting this variable automates the settings from the first proposal.

{% if nextcloud_container_labels_ipwhitelist_sourcerange is defined %}
traefik.http.middlewares.{{ nextcloud_identifier }}-whitelist.ipwhitelist.sourcerange={{ nextcloud_container_labels_ipwhitelist_sourcerange }}
{% set middlewares = middlewares + [nextcloud_identifier + '-whitelist'] %}
{% endif %}

In the vars.yml we can now simply write:

nextcloud_container_labels_ipwhitelist_sourcerange: 172.23.0.0/24, 10.0.0.0/24, 192.168.160.0/24

The second approach may be a bit easier to use, but is not as versatile as the first approach. We could, however, use both approaches at the same time.


I think a change like Proposal 1 would be beneficial for the entire project, as it will allow more fine-grained control over Traefik middlewares.

Vaultwarden 1.28.0 released

Please update the playbook to use the new 1.28.0 release of the Vaultwarden server.

Also, simply adding the variable vaultwarden_version: 1.28.0 doesn't work. mash-vaultwarden.service exits with an error; in the log:

Mar 29 10:13:53 mash systemd[1]: Starting Vaultwarden (mash-vaultwarden)...
Mar 29 10:13:54 mash mash-vaultwarden[1567406]: 9c9b3eddd1afcd635187f5486762da020d983b1ab0639861726bfd2abc7486e8
Mar 29 10:13:54 mash systemd[1]: Started Vaultwarden (mash-vaultwarden).
Mar 29 10:13:55 mash mash-vaultwarden[1567491]: /start.sh: exec: line 25: /vaultwarden: Operation not permitted
Mar 29 10:13:55 mash systemd[1]: mash-vaultwarden.service: Main process exited, code=exited, status=126/n/a
Mar 29 10:13:55 mash systemd[1]: mash-vaultwarden.service: Failed with result 'exit-code'.
Mar 29 10:14:25 mash systemd[1]: mash-vaultwarden.service: Scheduled restart job, restart counter is at 1.
Mar 29 10:14:25 mash systemd[1]: Stopped Vaultwarden (mash-vaultwarden).

It seems the Docker --cap-drop=ALL option is the root of this issue; on 1.27.0 it works well.

After removing the --cap-drop=ALL line from the service template, the container runs successfully. This is a workaround, not a fix, though.

Change Detection - Playwright Driver Issues & Proposed Fixes

There are two problems with ansible-role-changedetection regarding the Playwright driver.

I was able to mess around with a couple of files on my local instance to get things working.

The first issue is this error: chrome_crashpad_handler: --database is required. It can be fixed by adding the following environment variables to the container's env file, /mash/changedetection/playwright-env:

XDG_CONFIG_HOME=/tmp/.chromium
XDG_CACHE_HOME=/tmp/.chromium

The second issue is this error: TargetCloseError: Protocol error (Target.setAutoAttach): Target closed. It can be fixed by modifying the tmpfs flag specified in the systemd unit file.

This is the original:

--tmpfs=/tmp:rw,noexec,nosuid,size=128m \

And this is the modified flag, which results in a working container:

--tmpfs=/tmp:rw,noexec,nosuid,size=512m \

It is not clear to me why increasing this value fixes the problem; I only discovered it by changing a bunch of things randomly until it worked.

I will soon open a merge request against ansible-role-changedetection, but wanted to at least document my findings in the meantime.

Questions about access restriction and configuration preservation.

First of all, great project!
Thanks for sharing your work and knowledge.

The questions I have:

  • Is there any way to restrict access to some/all of the installed services by IP address or country?

For example: I want to make Healthchecks and Uptime Kuma available from one country, while keeping WG Easy accessible from only two IP addresses. I suppose I would have to add additional lines to the /mash/<service>/labels file. And if that is true, here comes the second question:

  • How can I make this custom configuration persistent on subsequent Ansible playbook runs?

Thanks.

"just run-tags adjust-nextcloud-config" should probably also update trusted_domains

To reproduce:

Create mash-nextcloud for testing using a temporary URL.

Then change the URL, run just setup-all again, and then run just run-tags adjust-nextcloud-config.

In config/config.php, trusted_domains still points to the old URL, and an error is thrown.

workaround:

  • docker cp mash-nextcloud-server:/var/www/html/config/config.php .
  • vi config.php
  • change the domain
  • docker cp config.php mash-nextcloud-server:/var/www/html/config/config.php
  • however, the file now has the wrong owner
  • enter the Docker container as root to fix it. Note that this is tricky, as apparently you can't just use -u root for some reason (that root does not have superuser privileges)
  • instead, use:
    • docker inspect --format '{{.State.Pid}}' mash-nextcloud-server
    • then sudo nsenter --target {pid of process} --mount --uts --ipc --net --pid
    • then use ls -al and chown to fix the owner of config/config.php

setup-all error: swapoff: File or directory not found

Hi!
First off, thanks for the work on this. Setting it up is fast and fun.

When I run ansible-playbook -i inventory/hosts setup.yml --tags=setup-all,start I run into the following issue:

TASK [galaxy/swap : Ensure swap is disabled] 
****************************************************************************************************************************************
fatal: [domain]: FAILED! => {"changed": true, "cmd": ["swapoff", "/var/swap"], "delta": "0:00:00.002638", "end": "2023-03-23 20:29:58.919579", "msg": "non-zero return code", "rc": 4, "start": "2023-03-23 20:29:58.916941", "stderr": "swapoff: /var/swap: swapoff failed: Datei oder Verzeichnis nicht gefunden", "stderr_lines": ["swapoff: /var/swap: swapoff failed: Datei oder Verzeichnis nicht gefunden"], "stdout": "", "stdout_lines": []}

Sorry for the German logs — "Datei oder Verzeichnis nicht gefunden" means "file or directory not found".
install-all runs without issue.
Explicitly setting system_swap_enabled: false does not fix it either.
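A possible fix in the swap role would be to guard the swapoff call behind a stat check, so a missing swap file is not treated as an error — a hedged sketch (task names and the /var/swap path follow the error above; this is not the role's actual code):

```yaml
# Sketch: only attempt swapoff when the swap file actually exists
- name: Check whether the swap file exists
  ansible.builtin.stat:
    path: /var/swap
  register: swap_file_stat

- name: Ensure swap is disabled
  ansible.builtin.command: swapoff /var/swap
  when: swap_file_stat.stat.exists
```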

If you need any further information, please let me know.

Nextcloud not working with matrix-docker-ansible-deploy and non-root nextcloud_path_prefix

Hello,
I've configured mash with Nextcloud. The playbook finishes without errors, but when I access the Nextcloud URL the page looks broken (screenshot omitted).
The browser console throws errors indicating that assets are not found, e.g.:

GET https://domain.de/core/css/server.css?v=ba222ded25d957b900c03bef914333cd
Status: 404 Not Found
Version: HTTP/2
Transferred: 175 B (0 B size)
Referrer Policy: no-referrer

Response headers:
    content-length: 19
    content-type: text/plain; charset=utf-8
    date: Sat, 27 May 2023 11:44:27 GMT
    x-content-type-options: nosniff
    X-Firefox-Spdy: h2

Request headers:
    Accept: text/css,*/*;q=0.1
    Accept-Encoding: gzip, deflate, br
    Accept-Language: de,en-US;q=0.7,en;q=0.3
    Connection: keep-alive
    Cookie: oc5w7620bgj3=12b1e895fc534e91251e5eab6cf45ea2
    DNT: 1
    Host: domain.de
    Sec-Fetch-Dest: style
    Sec-Fetch-Mode: no-cors
    Sec-Fetch-Site: same-origin
    Sec-GPC: 1
    TE: trailers
    User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/112.0

I am wondering why the URL is https://domain.de/core/css/server.css... . Shouldn't it be https://domain.de/nextcloud/core/css/server.css...?

My Nextcloud config in vars.yml is

########################################################################
#                                                                      #
# nextcloud                                                            #
#                                                                      #
########################################################################

nextcloud_enabled: true

nextcloud_hostname: domain.de
nextcloud_path_prefix: /nextcloud

# Redis configuration, as described below

########################################################################
#                                                                      #
# /nextcloud                                                           #
#                                                                      #
########################################################################

I've substituted the real domain with "domain" everywhere here.

I use the Traefik from the Matrix playbook and configured the mash vars accordingly:

mash_playbook_reverse_proxy_type: other-traefik-container
mash_playbook_reverse_proxyable_services_additional_network: traefik

Any idea what's wrong?
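In case it helps debugging: when Nextcloud is served under a subpath, Nextcloud itself needs to know the web root, otherwise it generates asset URLs relative to / — a hedged sketch of setting it via occ (container name and values assume the setup above; whether the playbook is supposed to set this automatically, I don't know):

```shell
# Tell Nextcloud it lives under /nextcloud so asset URLs are prefixed correctly
docker exec -u www-data mash-nextcloud-server \
  php occ config:system:set overwritewebroot --value=/nextcloud
docker exec -u www-data mash-nextcloud-server \
  php occ config:system:set overwrite.cli.url --value=https://domain.de/nextcloud
```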

Changedetection - changedetection_path_prefix does not work

I could not get changedetection to be accessible in a subpath.

changedetection_enabled: true
changedetection_hostname: change.myhost.org
changedetection_path_prefix: /changedetection
changedetection_playwright_driver_enabled: true

This did not work; I got a 404 error when trying to access myhost.org/changedetection.

The only way I could get it to work was to comment out changedetection_path_prefix:

changedetection_enabled: true
changedetection_hostname: change.myhost.org
#changedetection_path_prefix:
changedetection_playwright_driver_enabled: true

It's not great, because the example in the docs actually uses changedetection_path_prefix.

Issues When Running mash alongside matrix-docker-ansible-deploy

Howdy!

Apologies if this doesn't belong here and should instead be opened on matrix-docker-ansible-deploy. I can move it if you wish.

I used both the mash playbook and the matrix-docker-ansible-deploy playbook yesterday and had both services working for a brief period of time. I also followed the steps described in matrix-docker-ansible-deploy to deploy a static site, and that's where the issue came about. After following the step to stop the playbook from modifying index.html, the traefik service now fails when I run the matrix-docker-ansible-deploy playbook with this error:

failed: [matrix.edgerunners.club] (item=matrix-traefik.service) => changed=false
  ansible_loop_var: item
  item: matrix-traefik.service
  msg: matrix-traefik.service was not detected to be running. It's possible that there's a configuration problem or another service on your server interferes with it (uses the same ports, etc.). Try running `systemctl status matrix-traefik.service` and `journalctl -fu matrix-traefik.service` on the server to investigate. If you're on a slow or overloaded server, it may be that services take a longer time to start and that this error is a false-positive. You can consider raising the value of the `devture_systemd_service_manager_up_verification_delay_seconds` variable. See `/home/alex/matrix-docker-ansible-deploy/roles/galaxy/com.devture.ansible.role.systemd_service_manager/defaults/main.yml` for more details about that.

And when I check the systemd service I get this:

matrix-traefik.service - Traefik (matrix-traefik)
     Loaded: loaded (/etc/systemd/system/matrix-traefik.service; enabled; preset: enabled)
     Active: activating (auto-restart) (Result: exit-code) since Tue 2023-11-14 18:49:29 UTC; 28s ago
    Process: 648521 ExecStartPre=/usr/bin/env sh -c /usr/bin/env docker kill matrix-traefik 2>/dev/null || true (code=e>
    Process: 648529 ExecStartPre=/usr/bin/env sh -c /usr/bin/env docker rm matrix-traefik 2>/dev/null || true (code=exi>
    Process: 648536 ExecStartPre=/usr/bin/env docker create --rm --name=matrix-traefik --log-driver=none --user=999:100>
    Process: 648542 ExecStartPre=/usr/bin/env docker network connect matrix-container-socket-proxy matrix-traefik (code>
    Process: 648548 ExecStart=/usr/bin/env docker start --attach matrix-traefik (code=exited, status=1/FAILURE)
   Main PID: 648548 (code=exited, status=1/FAILURE)
        CPU: 112ms

I checked journalctl as the error suggested, and it returned this:

Nov 14 18:50:30 edgerunners.club systemd[1]: matrix-traefik.service: Failed with result 'exit-code'.
Nov 14 18:51:00 edgerunners.club systemd[1]: matrix-traefik.service: Scheduled restart job, restart counter is at 2442.
Nov 14 18:51:00 edgerunners.club systemd[1]: Stopped matrix-traefik.service - Traefik (matrix-traefik).
Nov 14 18:51:00 edgerunners.club systemd[1]: Starting matrix-traefik.service - Traefik (matrix-traefik)...
Nov 14 18:51:01 edgerunners.club matrix-traefik[649321]: c012b83bdd26d758a391d1773bd6a459862ab58af81ee06d1d6542849bb394f6
Nov 14 18:51:01 edgerunners.club systemd[1]: Started matrix-traefik.service - Traefik (matrix-traefik).
Nov 14 18:51:01 edgerunners.club matrix-traefik[649334]: time="2023-11-14T18:51:01Z" level=error msg="error waiting for container: "
Nov 14 18:51:01 edgerunners.club matrix-traefik[649334]: Error response from daemon: driver failed programming external connectivity on endpoint matrix-traefik (96a4296d9f15d92d0173d6c5ac22f542d0c8bf365883adc7027df486e0a38cce): Bind for 0.0.0.0:80 failed: port is already allocated
Nov 14 18:51:01 edgerunners.club systemd[1]: matrix-traefik.service: Main process exited, code=exited, status=1/FAILURE
Nov 14 18:51:01 edgerunners.club systemd[1]: matrix-traefik.service: Failed with result 'exit-code'.

Here is my mash vars file, with keys and passwords redacted:

---

# Below is an example which installs a few services on the host, in different configuration.
# You should tweak this example as you see fit and enable the services that you need.

########################################################################
#                                                                      #
# Playbook                                                             #
#                                                                      #
########################################################################

# Put a strong secret below, generated with `pwgen -s 64 1` or in another way
# Various other secrets will be derived from this secret automatically.
mash_playbook_generic_secret_key: 'redacted'

########################################################################
#                                                                      #
# /Playbook                                                            #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# Docker                                                               #
#                                                                      #
########################################################################

# To disable Docker installation (in case you'd be installing Docker in another way),
# remove the line below.
mash_playbook_docker_installation_enabled: true

# To disable Docker SDK for Python installation (in case you'd be installing the SDK in another way),
# remove the line below.
devture_docker_sdk_for_python_installation_enabled: true

########################################################################
#                                                                      #
# /Docker                                                              #
#                                                                      #
########################################################################



########################################################################
#                                                                      #
# com.devture.ansible.role.timesync                                    #
#                                                                      #
########################################################################

# To ensure the server's clock is synchronized (using systemd-timesyncd/ntpd),
# we enable the timesync service.

devture_timesync_installation_enabled: true

########################################################################
#                                                                      #
# /com.devture.ansible.role.timesync                                   #
#                                                                      #
########################################################################



########################################################################
#                                                                      #
# devture-traefik                                                      #
#                                                                      #
########################################################################

# Most services require a reverse-proxy, so we enable Traefik here.
#
# Learn more about the Traefik service in docs/services/traefik.md

mash_playbook_reverse_proxy_type: other-traefik-container
mash_playbook_reverse_proxyable_services_additional_network: traefik



########################################################################
#                                                                      #
# /devture-traefik                                                     #
#                                                                      #
########################################################################



########################################################################
#                                                                      #
# devture-postgres                                                     #
#                                                                      #
########################################################################

# Most services require a Postgres database, so we enable Postgres here.
#
# Learn more about the Postgres service in docs/services/postgres.md

devture_postgres_enabled: true

# Put a strong password below, generated with `pwgen -s 64 1` or in another way
devture_postgres_connection_password: 'redacted'

########################################################################
#                                                                      #
# /devture-postgres                                                    #
#                                                                      #
########################################################################



########################################################################
#                                                                      #
# miniflux                                                             #
#                                                                      #
########################################################################

# Learn more about the Miniflux service in docs/services/miniflux.md
#
# This service is only here as an example. If you don't wish to use the
# Miniflux service, remove the whole section.

miniflux_enabled: true

miniflux_hostname: news.edgerunners.club

miniflux_admin_login: redacted
miniflux_admin_password: redacted

########################################################################
#                                                                      #
# /miniflux                                                            #
#                                                                      #
########################################################################



########################################################################
#                                                                      #
# uptime-kuma                                                          #
#                                                                      #
########################################################################

# Learn more about the Uptime-kuma service in docs/services/uptime-kuma.md
#
# This service is only here as an example. If you don't wish to use the
# Uptime-kuma service, remove the whole section.

uptime_kuma_enabled: true

uptime_kuma_hostname: uptime-kuma.edgerunners.club

# For now, hosting uptime-kuma under a path is not supported.
# See: https://github.com/louislam/uptime-kuma/issues/147
# uptime_kuma_path_prefix: /uptime-kuma

########################################################################
#                                                                      #
# /uptime-kuma                                                         #
#                                                                      #
########################################################################


# You can add additional services here, as you see fit.
# To discover new services and configuration, see docs/supported-services.md

########################################################################
#                                                                      #
# forgejo                                                              #
#                                                                      #
########################################################################

forgejo_enabled: true

# Forgejo uses port 22 by default.
# We recommend that you move your regular SSH server to another port,
# and stick to this default.
#
# If you wish to use another port, uncomment the variable below
# and adjust the port as you see fit.
# forgejo_ssh_port: 222

forgejo_hostname: git.edgerunners.club

########################################################################
#                                                                      #
# /forgejo                                                             #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# nextcloud                                                            #
#                                                                      #
########################################################################

nextcloud_enabled: true

nextcloud_hostname: cloud.edgerunners.club

# Redis configuration, as described below

########################################################################
#                                                                      #
# /nextcloud                                                           #
#                                                                      #
########################################################################

And now my Matrix vars file:

# The bare domain name which represents your Matrix identity.
# Matrix user ids for your server will be of the form (`@user:<matrix-domain>`).
#
# Note: this playbook does not touch the server referenced here.
# Installation happens on another server ("matrix.<matrix-domain>").
#
# If you've deployed using the wrong domain, you'll have to run the Uninstalling step,
# because you can't change the Domain after deployment.
#
# Example value: example.com
matrix_domain: edgerunners.club

# The Matrix homeserver software to install.
# See:
#  - `roles/custom/matrix-base/defaults/main.yml` for valid options
# - the `docs/configuring-playbook-IMPLEMENTATION_NAME.md` documentation page, if one is available for your implementation choice
matrix_homeserver_implementation: synapse

# A secret used as a base, for generating various other secrets.
# You can put any string here, but generating a strong one is preferred (e.g. `pwgen -s 64 1`).
matrix_homeserver_generic_secret_key: 'redacted'

# By default, the playbook manages its own Traefik (https://doc.traefik.io/traefik/) reverse-proxy server.
# It will retrieve SSL certificates for you on-demand and forward requests to all other components.
# For alternatives, see `docs/configuring-playbook-own-webserver.md`.
matrix_playbook_reverse_proxy_type: playbook-managed-traefik

# This is something which is provided to Let's Encrypt when retrieving SSL certificates for domains.
#
# In case SSL renewal fails at some point, you'll also get an email notification there.
#
# If you decide to use another method for managing SSL certificates (different than the default Let's Encrypt),
# you won't be required to define this variable (see `docs/configuring-playbook-ssl-certificates.md`).
#
# Example value: [email protected]
devture_traefik_config_certificatesResolvers_acme_email: 'redacted'

# A Postgres password to use for the superuser Postgres user (called `matrix` by default).
#
# The playbook creates additional Postgres users and databases (one for each enabled service)
# using this superuser account.
devture_postgres_connection_password: 'redacted'

# By default, we configure Coturn's external IP address using the value specified for `ansible_host` in your `inventory/hosts` file.
# If this value is an external IP address, you can skip this section.
#
# If `ansible_host` is not the server's external IP address, you have 2 choices:
# 1. Uncomment the line below, to allow IP address auto-detection to happen (more on this below)
# 2. Uncomment and adjust the line below to specify an IP address manually
#
# By default, auto-detection will be attempted using the `https://ifconfig.co/json` API.
# Default values for this are specified in `matrix_coturn_turn_external_ip_address_auto_detection_*` variables in the Coturn role
# (see `roles/custom/matrix-coturn/defaults/main.yml`).
#
# If your server has multiple IP addresses, you may define them in another variable which allows a list of addresses.
# Example: `matrix_coturn_turn_external_ip_addresses: ['1.2.3.4', '4.5.6.7']`
#
# matrix_coturn_turn_external_ip_address:



# my additions
matrix_synapse_admin_enabled: true
matrix_nginx_proxy_base_domain_serving_enabled: true

mash-playbook + matrix-docker-ansible-deploy playbook

What is the best way to combine this playbook with the Matrix playbook on the same machine?
Are there any nuances around Traefik/Postgres? Since a Traefik service is present in both playbooks, is it enough to use it from only one of them (e.g. if I already use the Matrix playbook, can I skip enabling Traefik in the mash playbook)?
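For reference, pointing mash at an externally managed Traefik is done with vars like these (they appear in other reports in this thread; the network name traefik is assumed to match the Matrix playbook's Traefik container network — verify it against your own setup):

```yaml
# In the mash playbook's vars.yml: reuse a Traefik container managed elsewhere
# instead of letting mash install its own.
mash_playbook_reverse_proxy_type: other-traefik-container
mash_playbook_reverse_proxyable_services_additional_network: traefik
```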

[mash-netbox] Insufficient write permission to the media root

Hello everybody!

I'm having issues with image upload and I'm stuck. I think my permissions are correct, but apparently not. I've been looking through other threads for clues, but no luck. This is what I get:

<class 'PermissionError'>

[Errno 13] Permission denied: '/opt/netbox/netbox/media/image-attachments'

Python version: 3.10.6
NetBox version: 3.4.6

folder permissions:

drwxr-x--- 2 mash mash 4096 Apr  1 09:45 media
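Judging by the listing, the directory mode is drwxr-x---, so the group has no write access; if the container process runs as the mash group but not the mash user, uploads will fail. A hedged sketch of a host-side fix (the host path is an assumption — substitute wherever the playbook maps /opt/netbox/netbox/media on your host):

```shell
# Grant group write on NetBox's media directory so the container user can upload
chmod g+w /path/to/netbox/media
```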

What do I need to do?
