
infra's Introduction

notthebee/infra

Superseded by nix-config

Original README:

An Ansible playbook that sets up an Ubuntu-based server with reasonable security, auto-updates, and e-mail notifications for S.M.A.R.T. and Snapraid errors. Currently being completely rewritten.

It assumes a fresh Ubuntu Server 20.04 install, access to a non-root user with sudo privileges, and a public SSH key. These can be configured during the installation process.

The playbook is mostly being developed for personal use, so stuff is going to be constantly changing and breaking. Use at your own risk and don't expect any help in setting it up on your machine.

## Special thanks
* David Stephens for his [Ansible NAS](https://github.com/davestephens/ansible-nas) project. This is where I got the idea and "borrowed" a lot of concepts and implementations from.
* Jeff Geerling for his book, [Ansible for DevOps](https://www.ansiblefordevops.com/) and his [Ansible 101 series](https://www.youtube.com/watch?v=goclfp6a2IQ&list=PL2_OBreMn7FqZkvMYt6ATmgC0KAGGJNAN) on YouTube.
* Jonathan Hanson for his [SSH port juggling](https://gist.github.com/triplepoint/1ad6c6060c0f12112403d98180bcf0b4) implementation.
* Alex Kretzschmar and Chris Fisher from [Self Hosted Show](https://selfhosted.show/) for introducing me to the idea of Infrastructure as Code.
* TylerAlterio for the [mergerfs](https://github.com/tyalt1/mediaserver/tree/master/roles/mergerfs) role
* Jake Howard and Alex Kretzschmar for the [snapraid](https://github.com/RealOrangeOne/ansible-role-snapraid/commits?author=IronicBadger) role

## Services included:
* [Home Assistant](https://hub.docker.com/r/homeassistant/home-assistant) 
* [Phoscon-GW](https://hub.docker.com/r/marthoc/deconz) 
* [nginx-proxy-manager](https://nginxproxymanager.com/) 


infra's Issues

Add "minute" parameter to crontab daily tasks

- name: Schedule a library scan at 1 AM every day
  cron:
    name: Scan the PhotoPrism library
    hour: "1"
    job: "/usr/bin/docker exec photoprism /photoprism/bin/photoprism index"

- name: Schedule a Nextcloud library scan at midnight every day
  cron:
    name: Scan the Nextcloud library
    hour: "0"
    job: "docker exec nextcloud sudo -u abc php8 /config/www/nextcloud/occ files:scan --all"

Add minute: "0", or these jobs will be executed every minute.
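For reference, a minimal sketch of the first task with the fix applied (same job and container name as above):

- name: Schedule a library scan at 1 AM every day
  cron:
    name: Scan the PhotoPrism library
    # Without "minute", the cron module defaults the field to "*",
    # so the job would run every minute of the 1 AM hour.
    minute: "0"
    hour: "1"
    job: "/usr/bin/docker exec photoprism /photoprism/bin/photoprism index"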

TASK [network/swag : Populate the dictionary of all containers]

I installed Ubuntu 20.04 in VMware as a test; it only failed on:

TASK [network/swag : Populate the dictionary of all containers] ******************************************************** fatal: [home]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'ansible.parsing.yaml.objects.AnsibleUnicode object' has no attribute 'nextcloud'\n\nThe error appears to be in '/home/hpower/infra/tasks/list_services.yml': line 27, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Populate the dictionary of all containers\n ^ here\n"}

Missing variables

Hello,
There are a lot of variables that are not defined, such as namecheap_host, blog_host, host_local, etc.
Could you update the host_vars/all/vars.yml file?

Thank you!
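For illustration only, the kind of placeholder entries meant here (the variable names come from the playbook; the values and comments below are guesses, not documentation of what the playbook actually expects):

# host_vars/all/vars.yml (hypothetical placeholders)
namecheap_host: "home"             # guess: DDNS record to update at Namecheap
blog_host: "blog.example.com"      # guess: hostname served by the blog container
host_local: "server.local"         # guess: the server's LAN hostname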

external_services.yml

Hey Wolfgang, would you mind sharing an example of this? Thanks for sharing the project, great work mate

[WARNING]: Could not match supplied host pattern, ignoring: fleet

Hi,
I was trying to adapt this repo to my use, but I can't get it to work. I get "[WARNING]: Could not match supplied host pattern, ignoring: fleet".
I'm trying to run it locally on my Ubuntu VM, so I changed the hosts file to

[kombajn]
127.0.0.1  ansible_user=[redacted] ansible_connection=local

I tried naming the subfolder in host_vars both "kombajn" and "127.0.0.1", but it doesn't seem to make a difference.
I am a complete noob at Ansible; the only playbook I have used in the past is Jeff Geerling's internet-pi, but I hope you can help me.
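For what it's worth, that warning usually just means the playbook targets a group name that doesn't exist in the inventory. Assuming the playbook still has something like hosts: fleet (which would explain the warning), one option is to keep the group name and give the host an alias ("youruser" is a placeholder):

[fleet]
kombajn ansible_host=127.0.0.1 ansible_user=youruser ansible_connection=local

The host_vars folder then has to match the inventory alias (host_vars/kombajn), not the group name.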

Needs two runs to complete the script

Some disclaimers before I continue:

I've been heavily editing the code for my own purposes, so the bug may not show up the same way, but I did try to make sure it appears in the original.

Essentially the code breaks down at "Stop and remove all the disabled containers" inside the docker role. This seems to be because the docker group membership hasn't taken effect yet, even though the user is added to the docker group in an earlier task.

I found two fixes for this, but there may be better ways to go about it.

  1. Change become to yes for all subsequent Docker tasks.
  2. Perform a reboot before this step (a connection-reset alternative is sketched after this list).
     Here I ran into several issues: the default reboot in Ansible will try to reconnect via the original port 22, so if you change your SSH port this task will hang indefinitely.
     To get around this, I essentially made another port juggle with a reboot delay. I tried to set ansible_port before the reboot, but this caused errors, so I went with a more brute-force solution and left it at that.
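As an alternative sketch (task names and the username variable are illustrative): instead of a full reboot, resetting the SSH connection right after the group change makes the new docker group membership visible to subsequent tasks, without touching the SSH port:

- name: Add the user to the docker group
  user:
    name: "{{ username }}"   # placeholder for the playbook's main user variable
    groups: docker
    append: true

- name: Reset the SSH connection so the new group membership takes effect
  meta: reset_connection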

SWAG for internal network

Hi Wolfgang, newbie question. I understood that SWAG comes in two instances: one internal, with most services meant to be used on the LAN, and one public (e.g. Nextcloud, Authelia, ...). I understand that for the public one you need either a personal domain you can validate with Cloudflare, or a free subdomain (like DuckDNS) that you can validate with the specific plugin. But what about the internal one? Does it have to be validated anyway? I see you are using *.box; that's fine.

  • I was going with *.server; I thought any name could work, but nginx complains about that not being a valid domain name (i.e. it wants something like domain.com). Is yours a compliant one?
  • Do we need DNS validation even for names used on the LAN only?
    Thank you in advance
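For reference, certificates for LAN-only names still have to be issued for a real, publicly registered domain; a wildcard certificate obtained through DNS validation is the usual approach. A rough sketch of how an internal SWAG instance is commonly configured (the domain, DNS plugin, and container name are placeholders, not necessarily what this playbook does):

- name: Create the internal SWAG container (illustrative values)
  docker_container:
    name: swag_internal
    image: ghcr.io/linuxserver/swag
    env:
      URL: "example.com"        # must be a real, registered domain
      SUBDOMAINS: "wildcard"    # request a *.example.com certificate
      VALIDATION: "dns"         # DNS-01 challenge works for hosts not exposed to the internet
      DNSPLUGIN: "cloudflare"   # placeholder; any supported DNS provider plugin
      ONLY_SUBDOMAINS: "true"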

Error on [containers/swag : Populate the dictionary of all containers]

I have followed the steps in your readme and only ran into the following:

fatal: [home]: FAILED! => { "msg": "The task includes an option with an undefined variable. The error was: 'ansible.parsing.yaml.objects.AnsibleUnicode object' has no attribute 'nextcloud'\n\nThe error appears to be in '/home/hpower/infra/roles/containers/swag/tasks/main.yml': line 36, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Populate the dictionary of all containers\n ^ here\n" }

License

Hi,
can you please add a license file to this repo? Without a license, this repo should be considered non-free software and is therefore unusable by third parties.

Thank you, have a nice day

Homer config.yml Format

{% for item in web_applications if "media" in item.category and item.url != "" %}

Could you send an example of the config for that?
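As a hypothetical sketch of how that loop might be used in a Homer config.yml.j2 template (the web_applications structure and its fields are assumptions based on the snippet above):

services:
  - name: Media
    items:
{% for item in web_applications if "media" in item.category and item.url != "" %}
      - name: "{{ item.name }}"
        url: "{{ item.url }}"
{% endfor %}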

Error in crowdsec installation

(Installing on an Ubuntu Server 20.04.)
Hey, I am trying to adapt your playbook to my usage and, while exploring every role, found an error during the installation of Crowdsec.

- name: Add Crowdsec GPG apt Key
  apt_key:
    url: https://packagecloud.io/crowdsec/crowdsec/gpgkey
    state: present
- name: Add Crowdsec Repository
  apt_repository:
    repo: deb https://packagecloud.io/crowdsec/crowdsec/ubuntu bionic stable
    state: present

These two tasks do not allow Crowdsec to be installed with apt (with the next task in the file). I tried a bit on my side and found this workaround:

- name: add Crowdsec repo and gpg apt key
  shell:
    cmd: curl -s https://packagecloud.io/install/repositories/crowdsec/crowdsec/script.deb.sh | sudo bash

This task fixes the issue and allows Crowdsec to be installed.
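If you would rather keep the tasks declarative instead of piping a script into bash, something along these lines might also work (a sketch only; the repo line uses the release codename fact, and the component name should be checked against the packagecloud instructions):

- name: Add Crowdsec GPG apt key
  apt_key:
    url: https://packagecloud.io/crowdsec/crowdsec/gpgkey
    state: present

- name: Add Crowdsec repository for the detected Ubuntu release
  apt_repository:
    # "bionic" was hardcoded above; using the fact avoids pointing 20.04 (focal) at an 18.04 repo
    repo: "deb https://packagecloud.io/crowdsec/crowdsec/ubuntu {{ ansible_distribution_release }} main"
    state: present
    update_cache: true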

(BTW, thank you very much for this repository and your videos; you have given me a nice objective to work toward as I try to create my own private server. Hope this helps!)

Container specific networks

If it is not too sensitive, can I ask why you want each container to join a kind of personal network apart from the internal/external one? I think Docker will create those as "bridge/nat" by default, and in that case each container will be the only peer in its own network. I guess I am missing something.

EDIT: I noticed it's just photoprism and nextcloud, so maybe it's something related to mariadb?
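For what it's worth, a dedicated per-application network is a common way of making a database reachable only by its own application. A hypothetical sketch (names and image are placeholders, not taken from the playbook):

- name: Create a private network shared only by Nextcloud and its database
  docker_network:
    name: nextcloud_private

- name: Attach MariaDB only to the private network
  docker_container:
    name: nextcloud_mariadb
    image: mariadb:10.6
    networks:
      - name: nextcloud_private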

Ubuntu port

Can you make an Ubuntu 20.04 port, or at least a guide on replacing the Mac-specific commands for Ubuntu?

In depth Video

Hi, is it possible to create an in-depth video/documentation about your Ansible playbook and, if possible, make it available for Linux/Windows Subsystem for Linux as well, as these options are used more often than macOS? Thanks.

[FEATURE] Post-Installation Documentation

Hey,

sorry for bothering (the community), but what are the intended post-installation steps? I hoped to gain that knowledge from your Ansible videos, but sadly they're discontinued.
I built my server and ran the Ansible playbook successfully; how do I continue?

Thanks in advance.

SWAG not routing to homer or other services

The issue I am facing is that SWAG is serving its default welcome page (the one mentioning linuxserver.io), both when going to 192.168.1.x and when using its public address (or name). I see that the two Docker networks (internal and public) are created, and all Docker services are joined to one network or the other accordingly. Subdomain configurations are copied and loaded by SWAG. I don't understand the reason or where to look deeper.

Some info:

$ docker network inspect swag_internal_network
---
...
            "1dffff2839ec87b3287634735ae2da7574dde6bc34913a625f12c30ee4b8e6f9": {
                "Name": "homer",
                "EndpointID": "9cdc8d599f26ca01e11dc1472bf70440295b3655fe3b4dfb3548172b43590b49",
                "MacAddress": "02:42:ac:12:00:08",
                "IPv4Address": "172.18.0.8/16",
                "IPv6Address": ""
            },
            "39b227b6a17ec4f898f1cad059d897cf6774707e47814ef11aa60e46ea9521a3": {
                "Name": "radarr",
                "EndpointID": "594c489df6e30d1302e625d15eb6654a293fc5f3d570e6ff4db44c8265f77c2b",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
...
$ docker logs swag_internal
---
...
**** The following reverse proxy confs have different version dates than the samples that are shipped. ****
**** This may be due to user customization or an update to the samples. ****
**** You should compare them to the samples in the same folder to make sure you have the latest updates. ****
/config/nginx/proxy-confs/sonarr.subdomain.conf
/config/nginx/proxy-confs/radarr.subdomain.conf
/config/nginx/proxy-confs/photoprism.subdomain.conf
/config/nginx/proxy-confs/jackett.subdomain.conf
/config/nginx/proxy-confs/homer.subdomain.conf

[cont-init.d] 70-templates: exited 0.
[cont-init.d] 90-custom-folders: executing...
[cont-init.d] 90-custom-folders: exited 0.
[cont-init.d] 99-custom-files: executing...
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-files: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Server ready
...

I don't want to bother you too much here; I think it's something specific to my setup. I am just asking for a little hint on where I should look to debug this issue. Is there a default access log for SWAG?

EDIT: I found a hint. Basically, 192.168.1.x is served by swag_public. I don't think that is expected, is it?

...   ghcr.io/linuxserver/swag             "/init"         ...         80/tcp, 0.0.0.0:443->443/tcp   swag_public
...   ghcr.io/linuxserver/swag             "/init"         ...         80/tcp, 443/tcp                swag_internal
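For context, that 0.0.0.0:443->443 mapping means the host's port 443 (including LAN requests to 192.168.1.x) is forwarded to swag_public, while swag_internal has no published port at all and can only be reached over Docker's internal networking. One hypothetical way to split the two would be to bind each instance to a specific host address (the IP below is a placeholder):

- name: Publish the internal SWAG instance only on the LAN address (illustrative)
  docker_container:
    name: swag_internal
    image: ghcr.io/linuxserver/swag
    published_ports:
      - "192.168.1.10:443:443"   # the public instance would bind the WAN-facing address instead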

Backup strategy when cache is involved

First, let me see if I understood the logic behind your filesystem layout (once MergerFS is in action):

  • Throughout the playbook, the services usually point to {{ mergerfs_root }}; maybe the only exception is the transcoding folder, which bypasses the cache directly.
  • The MergerFS root folder is an array of [cache1 + cache2 + ... + disk1 + disk2 + ...].
  • Because of the MergerFS policies, the cache has priority and is filled first. I could see that the MergerFS create policy is lfs (i.e. it gives priority to the disk with the least free space).
  • The {{ cache_root }} pool has the mfs policy, which balances occupancy between the disks. However, it doesn't really seem to be directly used by the services.
  • A smart cron job moves files that haven't been accessed from the cache disks to the non-cache disks, so that the available cache space stays at or above a specific percentage.
  • At the beginning, the cache has priority simply because it is smaller. This effectively imposes a sorting from the smallest (in capacity) to the biggest. There is also a hidden necessary condition: the slow disks must never be more full than the cache, or they would gain priority over it. This is accomplished by minfreespace=500G, which must be bigger than the free space preserved by the un-caching script (see the mount sketch after this list).
  • Snapraid works at the (slow) disk level, hence the cache is not considered.

Correct?
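For reference, the layout described above would roughly correspond to mounts like these (a sketch only; the branch paths, pool paths, and exact option set are assumptions, not taken from the playbook):

- name: Mount the combined MergerFS pool (cache + slow disks)
  mount:
    path: /mnt/storage
    src: "/mnt/cache*:/mnt/disk*"
    fstype: fuse.mergerfs
    # lfs writes to the branch with the least free space above minfreespace,
    # so the (smaller) cache disks fill first
    opts: "allow_other,category.create=lfs,minfreespace=500G"
    state: mounted

- name: Mount the cache-only pool
  mount:
    path: /mnt/cache-pool
    src: "/mnt/cache*"
    fstype: fuse.mergerfs
    # mfs writes to the branch with the most free space, balancing the cache disks
    opts: "allow_other,category.create=mfs"
    state: mounted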

My question now is:

  1. Did you consider a backup strategy for the elements that will live (only) in the cache? Maybe they are accessed often and never get the chance to be "promoted" to the slow disks, for instance the Docker data backup folders.
  2. Are you aware of an inverse solution for caching back files that have been moved to the slow disks and, for whatever reason, become useful again?
