davestephens / ansible-nas
Build a full-featured home server or NAS replacement with an Ubuntu box and this playbook.
License: MIT License
Short problem description
The older docker role causes deprecation warnings during playbook execution. Related to closed issue #12.
Environment
- `git rev-parse --short HEAD`: 56b28fa
- `ansible --version` (on the machine you run the playbook from):
  ansible 2.5.1
  config file = /home/craig/ansible-nas/ansible.cfg
  configured module search path = [u'/home/craig/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]
- `uname -a` (on the Ansible-NAS box): Linux nas-test 4.15.0-43-generic #46-Ubuntu SMP Thu Dec 6 14:45:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
- `python --version` (on the Ansible-NAS box): Python 2.7.15rc1
Expected behavior
TASK [geerlingguy.docker : Ensure curl is present (on older systems without SNI).] **********************************
skipping: [localhost]
TASK [geerlingguy.docker : Add Docker apt key (alternative for older systems without SNI).] *************************
skipping: [localhost]
Actual behavior
TASK [geerlingguy.docker : Ensure curl is present (on older systems without SNI).] **********************************
[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using `result|failed` instead use `result is
failed`. This feature will be removed in version 2.9. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
skipping: [localhost]
TASK [geerlingguy.docker : Add Docker apt key (alternative for older systems without SNI).] *************************
[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using `result|failed` instead use `result is
failed`. This feature will be removed in version 2.9. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
skipping: [localhost]
Steps to reproduce
Run the playbook using the older docker role as defined in the current release's requirements.yml.
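For anyone hitting this, the warning comes from conditionals that use Jinja2 tests as filters. A before/after sketch of the syntax change; the task body and the `add_repository_key` variable are illustrative, not copied from the geerlingguy.docker role:

```yaml
# Deprecated form: a Jinja2 test ("failed") used as a filter.
- name: Add Docker apt key (alternative for older systems without SNI).
  shell: curl -sSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  when: add_repository_key is defined and add_repository_key|failed

# Replacement required from Ansible 2.9: the "is" test syntax.
- name: Add Docker apt key (alternative for older systems without SNI).
  shell: curl -sSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  when: add_repository_key is defined and add_repository_key is failed
```

Newer releases of the geerlingguy.docker role already use the `is` syntax, so updating the pinned role version in requirements.yml should be enough.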
It would be nice to implement a whiptail-based bash installer that builds all.yml and the inventory, and sets up other options as necessary (e.g. Transmission VPN) before running the playbook. It would make the whole package a lot easier for Linux novices to run. They get twitchy when they have to work out how to use nano (or worse, vi!) and edit files themselves. I feel this would take the software one step closer to a FreeNAS replacement and make a great v1 milestone!
On that note: if the Git repository is later updated with more built-in containers, does everything require a full reinstall, or can the inventory be edited and the playbook run again? If a re-run is possible without a fresh install of Ubuntu, it would be worth building a cache into the whiptail script - i.e. if the inventory already exists, load the values from it, or use a separate cache file to remember the choices and feed them back into the whiptail script if grabbing specific lines from the inventory is hard.
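As a starting point for the caching idea, here is a minimal sketch assuming a flat `key: value` all.yml; `previous_choice` and the file paths are hypothetical names, not anything that exists in the repo:

```bash
#!/usr/bin/env bash
# Hypothetical helper for a whiptail-based installer: remember the user's
# previous choices by reading them back out of an existing config file.

# previous_choice KEY FILE DEFAULT
# Prints the value of the first "KEY: value" line in FILE, or DEFAULT if absent.
previous_choice() {
  local key="$1" file="$2" default="$3" val
  if [ -f "$file" ]; then
    val=$(grep -E "^${key}:" "$file" | head -n1 | sed -E "s/^${key}:[[:space:]]*//")
    if [ -n "$val" ]; then
      echo "$val"
      return
    fi
  fi
  echo "$default"
}

# A whiptail prompt seeded with the cached value would then look like this
# (needs a terminal, so shown commented out):
# enabled=$(whiptail --inputbox "Enable Transmission with VPN?" 8 60 \
#   "$(previous_choice transmission_with_openvpn_enabled group_vars/all.yml false)" \
#   3>&1 1>&2 2>&3)
```

Seeding each prompt from the existing all.yml means a re-run of the installer after a `git pull` only asks about genuinely new options.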
I have been searching for an easy method to integrate hass.io, but unfortunately my Docker and Ansible knowledge is not sufficient.
During my investigation I found the following script that might make it easier:
https://github.com/home-assistant/hassio-installer/blob/master/hassio_install.sh
All ports are currently mapped within each container's task, and not centralised in one place.
It should be possible to see and edit what service runs on what port, without having to find and change numerous files to do so.
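One possible shape for this, sketched with hypothetical variable names: collect every published port in group_vars/all.yml and have each container task reference it.

```yaml
# group_vars/all.yml - one place to see and change every port
transmission_port: "9091"
plex_port: "32400"

# tasks/transmission.yml then publishes the centralised value
- name: Transmission Docker Container
  docker_container:
    name: transmission
    image: linuxserver/transmission
    ports:
      - "{{ transmission_port }}:9091"
```

Only the host side of each mapping needs to be a variable; the container side is fixed by the image.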
Short problem description
After changes to config files, attempting to run the test version with tests/test-vagrant.sh as specified in README.md produces the following error message in a slightly scary red font:
This Vagrant environment has specified that it requires the Vagrant
version to satisfy the following version requirements: >= 2.2.2
You are running Vagrant 2.0.2, which does not satisfy
these requirements. (...)
Then the Vagrant VM cannot be accessed. This is on stock Ubuntu 18.04.2 LTS with Vagrant installed via sudo apt install vagrant.
Environment
- `git rev-parse --short HEAD`: 417f997
- `ansible --version` (on the machine you run the playbook from):
  ansible 2.5.1
  config file = /home/scot/Programming/ansible-nas/ansible.cfg
  configured module search path = [u'/home/scot/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]
- `cat /etc/lsb-release` (on the Ansible-NAS box; if this is anything other than Ubuntu 18.04, help will be limited):
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=18.04
  DISTRIB_CODENAME=bionic
  DISTRIB_DESCRIPTION="Ubuntu 18.04.2 LTS"
- `cat /etc/lsb-release` (on the Ansible-NAS box): (as above)
- `python --version` (on the Ansible-NAS box): (as above)
- `docker --version` (on the Ansible-NAS box): Docker version 18.09.2, build 6247962
- `journalctl -u docker.service` (on the Ansible-NAS box): n/a, and too long
- `vagrant --version`: Vagrant 2.0.2
Expected behavior
Create virtual machine for testing.
Actual behavior
Error message as above, VM won't run.
Steps to reproduce
Attempt to run tests/test-vagrant.sh
as specified in the docs.
Playbook Output
n/a
It is pretty likely that users will have one or more ZFS datasets ("shares") for Plex/Emby/Jellyfin, so it might be worth providing an example Sanoid configuration for snapshots of these filesystems. This could possibly be made into an Ansible thingie to automate?
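As a starting point, a sanoid.conf for a couple of media datasets might look like the sketch below; the dataset names and retention numbers are illustrative, not a recommendation:

```ini
# /etc/sanoid/sanoid.conf - hypothetical example
[tank/movies]
        use_template = media

[tank/tv]
        use_template = media

[template_media]
        frequently = 0
        hourly = 0
        daily = 30
        monthly = 3
        yearly = 0
        autosnap = yes
        autoprune = yes
```

Media rarely changes minute to minute, so daily snapshots with a modest retention window keep the snapshot count manageable.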
This is going to come up sooner or later, and since we've just had a major security problem with Docker (https://duo.com/decipher/docker-bug-allows-root-access-to-host-file-system), this might be the time to discuss this: Would Ansible-NAS, which is supposed to be a home user class NAS, be better served with Podman (https://podman.io/) instead of Docker, which is industrial-scale?
For those unfamiliar with Podman, it runs the same containers as Docker transparently with the same commands (their slogan is alias podman=docker
, see https://developers.redhat.com/blog/2019/02/21/podman-and-buildah-for-docker-users/). The main difference is that containers can be run with a normal user's permissions (not root), and there is no daemon running them, but they use the normal fork-exec process. So even if there is some bug somewhere, evil people might get into the system, but not as root. Also, less complex because no daemon.
The disadvantages of Podman as I understand them (and I'd like to stress I'm really not an expert), are that it seems to be less mature and (in our case) that its origins are with RedHat -- Canonical doesn't exactly seem to be bending over backwards to support it, you need a PPA to install it on Ubuntu (https://github.com/containers/libpod/blob/master/install.md).
The best introduction I have found so far is (unfortunately) in German at https://www.heise.de/developer/artikel/Podman-Linux-Container-einfach-gemacht-Teil-1-4329067.html . Google Translate from German to English is rather good, or you can use the https://github.com/containers/libpod/blob/master/docs/tutorials/podman_tutorial.md tutorials from Podman themselves.
When building and rebuilding containers that will reach out to letsencrypt for certs, we can very quickly hit the rate limit for production certs.
It is better to, by default, use the letsencrypt staging environment. This ensures that any testing and build processes being followed don't hit the prod rate limits. When you are ready to do a production run of your ansible playbook, you can just change the setting to use prod environment, delete your traefik container and let it run. This would assume that router port forwarding is setup and working (define test method in doco?)
Documentation and a "How To" for this would be required. Even if you run without firewall port forwarding, you can still consume rate limits if you aren't careful.
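With Traefik (which Ansible-NAS uses for certificates), switching to the staging CA is a one-line change in the ACME configuration. A hedged sketch for a Traefik 1.x traefik.toml; the template in this repo may be laid out differently:

```toml
[acme]
  email = "you@example.com"
  storage = "acme.json"
  # Point at the staging CA while testing to avoid production rate limits;
  # remove this line (the production CA is the default) for real certificates.
  caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
```

Remember to delete acme.json (or the container) when switching environments, since staging certificates are not trusted by browsers.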
The docker-transmission-openvpn container provides a web proxy that routes traffic through the VPN tunnel it creates. This should be exposed as a service provided by Ansible-NAS.
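For reference, the image switches its proxy on via an environment variable; a hedged docker_container sketch based on the haugene/transmission-openvpn documentation (task name and host ports illustrative):

```yaml
- name: Transmission with OpenVPN Docker Container
  docker_container:
    name: transmission-openvpn
    image: haugene/transmission-openvpn
    env:
      WEBPROXY_ENABLED: "true"
    ports:
      - "9091:9091"   # Transmission web UI
      - "8888:8888"   # web proxy tunnelled through the VPN
```

Clients on the LAN can then point their HTTP proxy settings at port 8888 on the NAS to browse through the VPN.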
Thanks for the great playbook.
OnlyOffice is a system that can be integrated with Nextcloud to open office documents online, like G Suite or Office 365 online, and it also has a Docker version.
Could you please consider adding a task to install OnlyOffice integrated with Nextcloud?
Short problem description
I keep getting an error while trying to install Docker using the playbook. I am running in a virtual machine to test until I can get it stable enough to deploy on my server. In my all.yml I have the Docker home directory set to /tank/docker on my ZFS tank, and ZFS is set as the storage driver.
Environment
- `git rev-parse --short HEAD`: 867441e
- `ansible --version` (on the machine you run the playbook from): 2.5.1
- `cat /etc/lsb-release` (on the Ansible-NAS box; if this is anything other than Ubuntu 18.04, help will be limited): Ubuntu 18.04.2
- `cat /etc/lsb-release` (on the Ansible-NAS box): Not listed
- `python --version` (on the Ansible-NAS box): 2.7.15
- `docker --version` (on the Ansible-NAS box): 19.03.1 Build 74ble89
- `journalctl -u docker.service` (on the Ansible-NAS box): posted below
- `vagrant --version`:
Expected behavior
Playbook should run successfully
Actual behavior
Playbook errors out at TASK [geerlingguy.docker : Ensure Docker is started and enabled at boot.]
Steps to reproduce
Attempt to install on a clean install of Ubuntu 18.04.2 LTS
Playbook Output
TASK [geerlingguy.docker : Reload systemd daemon if template is changed.] ***********************************************************************************************************************************
task path: /root/.ansible/roles/geerlingguy.docker/tasks/docker-1809-shim.yml:13
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}
TASK [geerlingguy.docker : Ensure Docker is started and enabled at boot.] ***********************************************************************************************************************************
task path: /root/.ansible/roles/geerlingguy.docker/tasks/main.yml:18
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/system/systemd.py
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~ && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1566747879.76-34933015883010 `" && echo ansible-tmp-1566747879.76-34933015883010="` echo /root/.ansible/tmp/ansible-tmp-1566747879.76-34933015883010 `" ) && sleep 0'
<localhost> PUT /root/.ansible/tmp/ansible-local-22197oGN7Pr/tmp8o7BaX TO /root/.ansible/tmp/ansible-tmp-1566747879.76-34933015883010/systemd.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1566747879.76-34933015883010/ /root/.ansible/tmp/ansible-tmp-1566747879.76-34933015883010/systemd.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1566747879.76-34933015883010/systemd.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1566747879.76-34933015883010/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"daemon_reload": false,
"enabled": true,
"masked": null,
"name": "docker",
"no_block": false,
"state": "started",
"user": false
}
},
"msg": "Unable to start service docker: Job for docker.service failed because the control process exited with error code.\nSee \"systemctl status docker.service\" and \"journalctl -xe\" for details.\n"
}
PLAY RECAP **************************************************************************************************************************************************************************************************
localhost : ok=24 changed=0 unreachable=0 failed=1
Output of systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2019-08-25 11:44:46 EDT; 7min ago
Docs: https://docs.docker.com
Process: 23469 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
Main PID: 23469 (code=exited, status=1/FAILURE)
Aug 25 11:44:46 ansible-nas systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Aug 25 11:44:46 ansible-nas systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Aug 25 11:44:46 ansible-nas systemd[1]: Stopped Docker Application Container Engine.
Aug 25 11:44:46 ansible-nas systemd[1]: docker.service: Start request repeated too quickly.
Aug 25 11:44:46 ansible-nas systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 25 11:44:46 ansible-nas systemd[1]: Failed to start Docker Application Container Engine.
Output of journalctl -xe
-- the configured Restart= setting for the unit.
Aug 25 11:44:46 ansible-nas systemd[1]: Stopped Docker Application Container Engine.
-- Subject: Unit docker.service has finished shutting down
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit docker.service has finished shutting down.
Aug 25 11:44:46 ansible-nas systemd[1]: Closed Docker Socket for the API.
-- Subject: Unit docker.socket has finished shutting down
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit docker.socket has finished shutting down.
Aug 25 11:44:46 ansible-nas systemd[1]: Stopping Docker Socket for the API.
-- Subject: Unit docker.socket has begun shutting down
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit docker.socket has begun shutting down.
Aug 25 11:44:46 ansible-nas systemd[1]: Starting Docker Socket for the API.
-- Subject: Unit docker.socket has begun start-up
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit docker.socket has begun starting up.
Aug 25 11:44:46 ansible-nas systemd[1]: Listening on Docker Socket for the API.
-- Subject: Unit docker.socket has finished start-up
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit docker.socket has finished starting up.
--
-- The start-up result is RESULT.
Aug 25 11:44:46 ansible-nas systemd[1]: docker.service: Start request repeated too quickly.
Aug 25 11:44:46 ansible-nas systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 25 11:44:46 ansible-nas systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit docker.service has failed.
--
-- The result is RESULT.
Aug 25 11:44:46 ansible-nas systemd[1]: docker.socket: Failed with result 'service-start-limit-hit'.
If any more info is needed I will do my best to provide it.
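For what it's worth, a Docker data root on ZFS is configured through /etc/docker/daemon.json, roughly as below. One plausible cause of this exact start-limit failure is dockerd refusing to start because the configured data-root is not (yet) a mounted ZFS dataset when the service comes up; that is a guess, not a confirmed diagnosis:

```json
{
  "storage-driver": "zfs",
  "data-root": "/tank/docker"
}
```

Running `dockerd` manually in the foreground usually prints the actual storage-driver error that the systemd log truncates.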
Ansible-NAS currently exports Samba shares, but some people live in a world of Linux machines and would rather work with NFS instead. An ansible thingie for that would make life easier. See https://advishnuprasad.com/blog/2016/03/29/setup-nfs-server-and-client-using-ansible/ for an example and https://help.ubuntu.com/community/SettingUpNFSHowTo for the background information.
(I dimly remember reading a recommendation that you shouldn't use the same directory/share for NFS and Samba because, uh, it was very bad and mice will eat your brain or something. Not sure if we should check for that?)
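A minimal sketch of the Ansible side, assuming an existing /mnt/tank/tv share; paths, network range, and task names are illustrative:

```yaml
- name: Install the NFS server
  apt:
    name: nfs-kernel-server
    state: present

- name: Export the TV share
  lineinfile:
    path: /etc/exports
    line: "/mnt/tank/tv 192.168.1.0/24(rw,sync,no_subtree_check)"

- name: Re-export NFS shares
  command: exportfs -ra
```

In a real task file the `exportfs -ra` step would be a handler triggered only when /etc/exports changes.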
For a DevOps Engineer, I've done a pretty shitty job of managing releases of Ansible-NAS. Right now everything is on master and branches, which is fine for those with Git skills, but merging updates into your own installation can be a ballache if not.
Ansible-NAS should have proper versioned releases with specific instructions for upgrading from one version to the next, with the specific changes required to your all.yml noted in the release notes.
For those wanting to live on the edge or have the latest features, instructions should be provided for running from the master branch.
Feature request: As you probably know, Pi-hole (https://pi-hole.net/) is a DNS server that blocks ads. It was originally made for the RPi (hence the name) but also is available as a Docker image at https://hub.docker.com/r/pihole/pihole/ . If you have a NAS running anyway 24/7, you might want to have it running. (Note: AFAIK there is no way of running Pi-hole on FreeNAS/FreeBSD without installing a Linux VM).
And thanks for all the work!
As of now ansible-nas is untested on Ubuntu 18.04, although it should work fine. Test it.
If transmission_with_openvpn_enabled is set to true, we should check that vpn_credentials.yml exists at the start of the playbook.
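A sketch of what that pre-flight check could look like; the file location and task wording are illustrative, only the flag and file names come from the issue:

```yaml
- name: Check for the VPN credentials file
  stat:
    path: "{{ playbook_dir }}/vpn_credentials.yml"
  register: vpn_credentials_file
  when: transmission_with_openvpn_enabled

- name: Fail early if VPN credentials are missing
  fail:
    msg: "transmission_with_openvpn_enabled is true but vpn_credentials.yml was not found."
  when: transmission_with_openvpn_enabled and not vpn_credentials_file.stat.exists
```

Failing in a pre_task gives a clear message instead of a confusing error halfway through the Transmission task.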
We currently suggest that people test Ansible-NAS in VirtualBox (https://github.com/davestephens/ansible-nas/blob/master/docs/overview.md) without giving much detail how. It might be worth adding a few more lines including the required steps like creating two extra virtual drives to test ZFS installation etc.
Add integration of a system that monitors containers, checks upstream images and, where required, self-updates running containers. Notifications would be a bonus.
I keep my Docker home directory on the boot drive, an SSD with limited space. I'd like to avoid using it for large files such as backup data.
Other services tend to separate the config and data directories - it would be good if this was also true for the Time Machine service so that logs were kept under the Docker home directory and backups had the option of being stored elsewhere.
Flagging this feature enhancement idea: Automatic CD and DVD ripping.
Since our laptops, tablets, and phones lack optical drives, it would be extremely convenient to have the optical drive of the Ansible-NAS box rip CDs and DVDs directly to the media shares automatically on insertion, and then eject when finished. This could be done with some clever scripting around abcde and HandBrakeCLI.
Most Ansible NAS boxes will lack a GUI, and you don't necessarily want to have to SSH into your box to rip a CD with a command-line interface. I envision that you pop in the CD and 5 minutes later it ejects and you see it in your music share. Hands-free.
If I ever get this working I will submit a PR. If anyone beats me to it, let us all know here.
hey @davestephens, thanks for putting this together! i can't seem to get samba shares working... it's not connectable, and i've tried messing around with some obvious settings/permissions and i'm getting nowhere
i also can't get the playbook to run on ubuntu server without commenting out this line: https://github.com/davestephens/ansible-nas/blob/master/tasks/general.yml#L20
this is my first time using ansible... am i missing something obvious?
Docker error
Error with Docker when running Ansible.
Environment
- `git rev-parse --short HEAD`: d7f97ae
- `ansible --version` (on the machine you run the playbook from): ansible 2.5.1
- `cat /etc/lsb-release` (on the Ansible-NAS box; if this is anything other than Ubuntu 18.04, help will be limited): Ubuntu 18.04.2 LTS
- `cat /etc/lsb-release` (on the Ansible-NAS box):
- `python --version` (on the Ansible-NAS box): 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]
- `docker --version` (on the Ansible-NAS box): Docker version 18.09.3, build 774a1f4
- `vagrant --version`: Unknown, first time
Steps to reproduce
ansible-playbook -i inventory nas.yml -b -K -vvv
Playbook Output
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"daemon_reload": false,
"enabled": true,
"masked": null,
"name": "docker",
"no_block": false,
"state": "started",
"user": false
}
},
"msg": "Unable to start service docker: Job for docker.service failed because the control process exited with error code.\nSee \"systemctl status docker.service\" and \"journalctl -xe\" for details.\n"
}
Check disks for SMART alert statuses and alert accordingly.
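A minimal sketch of what this could look like with smartmontools' smartd; the package name and smartd.conf directives are standard smartmontools, but the task shape and mail address are illustrative:

```yaml
- name: Install smartmontools
  apt:
    name: smartmontools
    state: present

- name: Configure smartd to scan all disks and mail on SMART failures
  copy:
    dest: /etc/smartd.conf
    content: |
      DEVICESCAN -a -m root@localhost -M daily
  # plus a handler (not shown) that restarts the smartd service
```

`DEVICESCAN -a` monitors all attributes on every detected disk, and `-M daily` repeats the warning mail daily while a failure condition persists.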
Please, I need help. I'm new to Linux. Please explain step by step how to fix the error; I don't know how to solve it.
#66
#63
Docker error
Error with Docker when running Ansible.
Environment
- `git rev-parse --short HEAD`:
- `ansible --version` (on the machine you run the playbook from): ansible 2.5.1
- `cat /etc/lsb-release` (on the Ansible-NAS box; if this is anything other than Ubuntu 18.04, help will be limited): Ubuntu 18.04.2 LTS
- `cat /etc/lsb-release` (on the Ansible-NAS box): Ubuntu 18.04.2 LTS
- `python --version` (on the Ansible-NAS box): Python 2.7.15rc1
- `docker --version` (on the Ansible-NAS box): Docker version 18.09.3, build 774a1f4
- `vagrant --version`: Unknown, first time
Steps to reproduce
ansible-playbook -i inventory nas.yml -b -K -vvv
Playbook Output
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"daemon_reload": false,
"enabled": true,
"masked": null,
"name": "docker",
"no_block": false,
"state": "started",
"user": false
}
},
"msg": "Unable to start service docker: Job for docker.service failed because the control process exited with error code.\nSee \"systemctl status docker.service\" and \"journalctl -xe\" for details.\n"
}
As title.
The roles/samba and roles/docker directories are causing me some issues. GitKraken (on Windows) thinks they are supposed to be submodules, but I can't see a .git directory in there at all.
Regards,
CJ
The SickRage container used is no longer supported; the project has moved over to a new container called SickChill: linuxserver-archive/docker-sickrage#45
It would be nice to see Ansible-NAS move over to this container instead: https://github.com/linuxserver/docker-sickchill
Ansible-NAS currently assumes that the user will know how to setup and configure ZFS, including snapshot management. It would be nice to at least point out that a program such as Sanoid (https://github.com/jimsalterjrs/sanoid/) can be used to automate the snapshot part of the NAS.
(Possibly this could be included in a "default ZFS scenario" for people not so familiar with these things as a reference where to start: Install root on "normal" ext4 file system, create pool assuming two mirrored hard drives, set up datasets for TV, Samba shares, movies, etc, set up snapshots of said datasets.)
Short problem description
Docker daemon is unable to start on Ubuntu 18.10.
Environment
- `git rev-parse --short HEAD`: 183fee3
- `ansible --version` (on the machine you run the playbook from):
  ansible 2.6.5
  config file = /home/manderso/ansible-nas/ansible.cfg
  configured module search path = [u'/home/manderso/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.15+ (default, Oct 2 2018, 22:12:08) [GCC 8.2.0]
- Ansible-NAS operating system (`uname -a` on the Ansible-NAS box): Linux anderson_nas 4.18.0-16-generic #17-Ubuntu SMP Fri Feb 8 00:06:57 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
- Ansible-NAS Python version (`python --version` on the Ansible-NAS box): Python 2.7.15+
- Vagrant version, if testing (`vagrant --version`):
Expected behavior
Docker should start. I also expected a service file in /etc/systemd/system/, but no docker* file exists in that location.
Actual behavior
Docker does not start; also, no docker* file exists in /etc/systemd/system/.
Steps to reproduce
Run playbook on Ubuntu 18.10?
Playbook Output
TASK [geerlingguy.docker : Ensure Docker is started and enabled at boot.] *****************************************************************************************************
task path: /home/manderso/.ansible/roles/geerlingguy.docker/tasks/main.yml:18
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: manderso
<localhost> EXEC /bin/sh -c 'echo ~manderso && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/manderso/.ansible/tmp/ansible-tmp-1552048474.75-45135276588821 `" && echo ansible-tmp-1552048474.75-45135276588821="` echo /home/manderso/.ansible/tmp/ansible-tmp-1552048474.75-45135276588821 `" ) && sleep 0'
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/system/systemd.py
<localhost> PUT /home/manderso/.ansible/tmp/ansible-local-29160_HKPmy/tmp9JoKVl TO /home/manderso/.ansible/tmp/ansible-tmp-1552048474.75-45135276588821/systemd.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/manderso/.ansible/tmp/ansible-tmp-1552048474.75-45135276588821/ /home/manderso/.ansible/tmp/ansible-tmp-1552048474.75-45135276588821/systemd.py && sleep 0'
<localhost> EXEC /bin/sh -c 'sudo -H -S -p "[sudo via ansible, key=jmjugdguhbelnzhmmojntibzfwohebda] password: " -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-jmjugdguhbelnzhmmojntibzfwohebda; /usr/bin/python3 /home/manderso/.ansible/tmp/ansible-tmp-1552048474.75-45135276588821/systemd.py'"'"' && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /home/manderso/.ansible/tmp/ansible-tmp-1552048474.75-45135276588821/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"daemon_reload": false,
"enabled": true,
"force": null,
"masked": null,
"name": "docker",
"no_block": false,
"state": "started",
"user": false
}
},
"msg": "Unable to start service docker: Job for docker.service failed because the control process exited with error code.\nSee \"systemctl status docker.service\" and \"journalctl -xe\" for details.\n"
}
PLAY RECAP ********************************************************************************************************************************************************************
localhost : ok=18 changed=0 unreachable=0 failed=1
systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2019-03-08 07:34:41 EST; 7min ago
Docs: https://docs.docker.com
Process: 31129 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
Main PID: 31129 (code=exited, status=1/FAILURE)
Mar 08 07:34:41 anderson_nas systemd[1]: docker.service: Service RestartSec=2s expired, scheduling restart.
Mar 08 07:34:41 anderson_nas systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Mar 08 07:34:41 anderson_nas systemd[1]: Stopped Docker Application Container Engine.
Mar 08 07:34:41 anderson_nas systemd[1]: docker.service: Start request repeated too quickly.
Mar 08 07:34:41 anderson_nas systemd[1]: docker.service: Failed with result 'exit-code'.
Mar 08 07:34:41 anderson_nas systemd[1]: Failed to start Docker Application Container Engine.
journalctl -xe
Mar 08 07:34:41 anderson_nas systemd[1]: Listening on Docker Socket for the API.
-- Subject: Unit docker.socket has finished start-up
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit docker.socket has finished starting up.
--
-- The start-up result is RESULT.
Mar 08 07:34:41 anderson_nas systemd[1]: docker.service: Start request repeated too quickly.
Mar 08 07:34:41 anderson_nas systemd[1]: docker.service: Failed with result 'exit-code'.
Mar 08 07:34:41 anderson_nas systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit docker.service has failed.
--
-- The result is RESULT.
Mar 08 07:34:41 anderson_nas systemd[1]: docker.socket: Failed with result 'service-start-limit-hit'.
Apologies for not following the bug template - seems overkill for such a simple bug.
It would appear that recent versions of Ansible enforce checks on ambiguous environment options:
TASK [Create Heimdall container] ****************************************************************************************************************************************************
fatal: [[email protected]]: FAILED! => {"changed": false, "msg": "Non-string value found for env option. Ambiguous env options must be wrapped in quotes to avoid them being interpreted. Key: PUID"}
The above was when running the playbook remotely on Ansible 2.8.1 on macOS. Manually quoting the options in tasks/heimdall.yml also fixed the issue in this instance, although it would go on to fail during later steps for the same reason.
When running the playbook locally with an older version of Ansible (2.5.1), everything ran fine. I suppose quoting all the environment options would resolve this.
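The fix is mechanical: wrap every env value in quotes. An illustrative sketch (the real task is in tasks/heimdall.yml; the values shown are typical linuxserver.io settings, not copied from it):

```yaml
- name: Create Heimdall container
  docker_container:
    name: heimdall
    image: linuxserver/heimdall
    env:
      # Unquoted numerics trigger "Non-string value found for env option"
      # on Ansible >= 2.8; quote every value.
      PUID: "1000"
      PGID: "1000"
```

The same quoting applies to booleans (`"true"` rather than `true`), which Ansible would otherwise coerce to `True` and reject.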
The number of entries in the lower parts of all.yml is getting large enough that finding things is becoming hard. Starting at about line 243 - the part the user shouldn't touch anyway - we should sort them by name to make everybody's life easier. (I should be able to do a PR later today.)
Hey!
I've never heard of Ansible before, so I might have some real noobish questions:
Should I have a clean Ubuntu server 18.04 installation on the machine that will run the "nas"?
What is the partition map that you recommend? I mean, how much space on each partition do you recommend?
Do you have a "real-life" example of a configuration?
Thanks!
The Grafana container gets created, but fails to fully launch. The container logs show:
GF_PATHS_DATA='/var/lib/grafana' is not writable.,
You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migration-from-a-previous-version-of-the-docker-container-to-5-1-or-later,
mkdir: cannot create directory '/var/lib/grafana/plugins': Permission denied
The referenced URL mentions some major changes to how storage is handled from Grafana version 5.1 and above. The recommended solution is to use a Docker volume instead of bind mountpoints. Seeing as the whole Docker directory has been moved to a custom location, it probably isn't a big deal to use a volume.
I'll work up a fix and raise a PR.
Regards,
CJ
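For reference, the volume-based approach would look something like the sketch below (a guess at the shape of the fix, not the actual PR; task and volume names are illustrative):

```yaml
- name: Grafana Docker Container
  docker_container:
    name: grafana
    image: grafana/grafana
    # Named volume instead of a bind mount, so the container's grafana user
    # can own /var/lib/grafana regardless of host directory permissions.
    volumes:
      - grafana_data:/var/lib/grafana
    ports:
      - "3000:3000"
```

Docker creates the named volume on first run and chowns it for the image's user, which sidesteps the Permission denied error in the logs above.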
If new parameters are added to all.yml.dist but tests/test.yml is forgotten, it becomes impossible to spin up a Vagrant box, with an error like:
fatal: [default]: FAILED! => {"msg": "The conditional check 'watchtower_enabled' failed....
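On top of keeping tests/test.yml in sync, such conditionals could be given a default so a missing variable fails soft. A sketch, not the repo's current style; the task shown is illustrative:

```yaml
- name: Watchtower Docker Container
  docker_container:
    name: watchtower
    image: v2tec/watchtower
  when: watchtower_enabled | default(false)
```

With `default(false)`, a forgotten test variable simply skips the task instead of aborting the whole run.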
When I set keep_packages_updated: true in my all.yml I received this error:
TASK [Upgrade all packages] ******************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unable to find APTITUDE in path. Please make sure to have APTITUDE in path or use 'force_apt_get=True'"}
PLAY RECAP ***********************************************************************************************************************************************************************
localhost : ok=15 changed=0 unreachable=0 failed=1
SOLUTION:
After running apt install aptitude it worked. So I'm thinking of adding a condition: if this setting is true, check whether aptitude is installed and install it if not. I've got no idea if this is doable - I'm very new to Ansible - but I thought this would be a great start. Hopefully a PR soon.
Short problem description
The playbook assumes that the universe repository is enabled. This is fine for a local run, where the installation instructions say to enable the repo (needed to install Ansible itself), but this shouldn't be necessary when running against a remote host.
Environment
git rev-parse --short HEAD
): b7d94d2ansible --version
on the machine you run the playbook from):ansible 2.5.1
config file = /home/craig/source/ansible-nas/ansible.cfg
configured module search path = [u'/home/craig/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]
Ansible-NAS operating system (uname -a on the Ansible-NAS box):
Linux utest 4.15.0-43-generic #46-Ubuntu SMP Thu Dec 6 14:45:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Ansible-NAS Python version (python --version on the Ansible-NAS box):
craig@utest:~$ python --version
Command 'python' not found, but can be installed with:
sudo apt install python3
sudo apt install python
sudo apt install python-minimal
You also have python3 installed, you can run 'python3' instead.
Expected behavior
Playbook should run through smoothly with no errors during the "install some packages" task.
Actual behavior
Playbook fails to install the "lm-sensors" package.
Steps to reproduce
Run the playbook on a fresh server that does not have the universe repo enabled
Playbook Output
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/packaging/os/apt.py
<nas-test.thecave> ESTABLISH SSH CONNECTION FOR USER: None
<nas-test.thecave> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/craig/.ansible/cp/6473105f95 nas-test.thecave '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<nas-test.thecave> (0, '/home/craig\n', '')
<nas-test.thecave> ESTABLISH SSH CONNECTION FOR USER: None
<nas-test.thecave> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/craig/.ansible/cp/6473105f95 nas-test.thecave '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/craig/.ansible/tmp/ansible-tmp-1546667728.98-114848792558113 `" && echo ansible-tmp-1546667728.98-114848792558113="` echo /home/craig/.ansible/tmp/ansible-tmp-1546667728.98-114848792558113 `" ) && sleep 0'"'"''
<nas-test.thecave> (0, 'ansible-tmp-1546667728.98-114848792558113=/home/craig/.ansible/tmp/ansible-tmp-1546667728.98-114848792558113\n', '')
<nas-test.thecave> PUT /home/craig/.ansible/tmp/ansible-local-21114rj2Vwf/tmppUFSID TO /home/craig/.ansible/tmp/ansible-tmp-1546667728.98-114848792558113/apt.py
<nas-test.thecave> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/craig/.ansible/cp/6473105f95 '[nas-test.thecave]'
<nas-test.thecave> (0, 'sftp> put /home/craig/.ansible/tmp/ansible-local-21114rj2Vwf/tmppUFSID /home/craig/.ansible/tmp/ansible-tmp-1546667728.98-114848792558113/apt.py\n', '')
<nas-test.thecave> ESTABLISH SSH CONNECTION FOR USER: None
<nas-test.thecave> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/craig/.ansible/cp/6473105f95 nas-test.thecave '/bin/sh -c '"'"'chmod u+x /home/craig/.ansible/tmp/ansible-tmp-1546667728.98-114848792558113/ /home/craig/.ansible/tmp/ansible-tmp-1546667728.98-114848792558113/apt.py && sleep 0'"'"''
<nas-test.thecave> (0, '', '')
<nas-test.thecave> ESTABLISH SSH CONNECTION FOR USER: None
<nas-test.thecave> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/craig/.ansible/cp/6473105f95 -tt nas-test.thecave '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=nhfhgbiupaoitjwkhkvudpxvocxwpceo] password: " -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-nhfhgbiupaoitjwkhkvudpxvocxwpceo; /usr/bin/python3 /home/craig/.ansible/tmp/ansible-tmp-1546667728.98-114848792558113/apt.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<nas-test.thecave> (1, '\r\n\r\n\r\n{"msg": "No package matching \'lm-sensors\' is available", "failed": true, "exception": " File \\"/tmp/ansible_qok2538s/ansible_module_apt.py\\", line 326, in package_status\\n pkg = cache[pkgname]\\n File \\"/usr/lib/python3/dist-packages/apt/cache.py\\", line 266, in __getitem__\\n raise KeyError(\'The cache has no package named %r\' % key)\\n", "invocation": {"module_args": {"name": ["smartmontools", "htop", "zfsutils-linux", "bonnie++", "unzip", "lm-sensors"], "state": "present", "package": ["smartmontools", "htop", "zfsutils-linux", "bonnie++", "unzip", "lm-sensors"], "cache_valid_time": 0, "purge": false, "force": false, "dpkg_options": "force-confdef,force-confold", "autoremove": false, "autoclean": false, "only_upgrade": false, "force_apt_get": false, "allow_unauthenticated": false, "update_cache": null, "deb": null, "default_release": null, "install_recommends": null, "upgrade": null}}}\r\n', 'Shared connection to nas-test.thecave closed.\r\n')
<nas-test.thecave> ESTABLISH SSH CONNECTION FOR USER: None
<nas-test.thecave> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/craig/.ansible/cp/6473105f95 nas-test.thecave '/bin/sh -c '"'"'rm -f -r /home/craig/.ansible/tmp/ansible-tmp-1546667728.98-114848792558113/ > /dev/null 2>&1 && sleep 0'"'"''
<nas-test.thecave> (0, '', '')
The full traceback is:
File "/tmp/ansible_qok2538s/ansible_module_apt.py", line 326, in package_status
pkg = cache[pkgname]
File "/usr/lib/python3/dist-packages/apt/cache.py", line 266, in __getitem__
raise KeyError('The cache has no package named %r' % key)
fatal: [nas-test.thecave]: FAILED! => {
"attempts": 3,
"changed": false,
"invocation": {
"module_args": {
"allow_unauthenticated": false,
"autoclean": false,
"autoremove": false,
"cache_valid_time": 0,
"deb": null,
"default_release": null,
"dpkg_options": "force-confdef,force-confold",
"force": false,
"force_apt_get": false,
"install_recommends": null,
"name": [
"smartmontools",
"htop",
"zfsutils-linux",
"bonnie++",
"unzip",
"lm-sensors"
],
"only_upgrade": false,
"package": [
"smartmontools",
"htop",
"zfsutils-linux",
"bonnie++",
"unzip",
"lm-sensors"
],
"purge": false,
"state": "present",
"update_cache": null,
"upgrade": null
}
},
"msg": "No package matching 'lm-sensors' is available"
}
PLAY RECAP ********************************************************************************************************************
nas-test.thecave : ok=25 changed=0 unreachable=0 failed=1
Currently, Ansible-NAS is set up to support torrent downloads only, and the defined directory structure is limited to that configuration.
I suggest a couple of core changes to support multiple download clients.
Running the script gets to creating a container, waits for a bit, then fails. If the script is run again, that container is created and the next container fails.
Environment
Ansible-NAS revision (git rev-parse --short HEAD): 183fee3
Ansible version (ansible --version on the machine you run the playbook from):
ansible 2.6.5
config file = /home/manderso/ansible-nas/ansible.cfg
configured module search path = [u'/home/manderso/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Oct 2 2018, 22:12:08) [GCC 8.2.0]
Ansible-NAS operating system (uname -a on the Ansible-NAS box):
Linux anderson_nas 4.18.0-16-generic #17-Ubuntu SMP Fri Feb 8 00:06:57 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Ansible-NAS Python version (python --version on the Ansible-NAS box): Python 2.7.15+
Vagrant version, if testing (vagrant --version):
Expected behavior
The Ansible playbook should complete and all containers should be created on the first try.
Actual behavior
Container fails to create on first try.
Steps to reproduce
Run the ansible playbook.
TASK [Watchtower Docker Container] *****************************************************************************************************************************************************************
task path: /home/manderso/ansible-nas/tasks/watchtower.yml:2
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: manderso
<localhost> EXEC /bin/sh -c 'echo ~manderso && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/manderso/.ansible/tmp/ansible-tmp-1552064809.88-273857946939080 `" && echo ansible-tmp-1552064809.88-273857946939080="` echo /home/manderso/.ansible/tmp/ansible-tmp-1552064809.88-273857946939080 `" ) && sleep 0'
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/cloud/docker/docker_container.py
<localhost> PUT /home/manderso/.ansible/tmp/ansible-local-7452RnWvzH/tmp4Yu2Ay TO /home/manderso/.ansible/tmp/ansible-tmp-1552064809.88-273857946939080/docker_container.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/manderso/.ansible/tmp/ansible-tmp-1552064809.88-273857946939080/ /home/manderso/.ansible/tmp/ansible-tmp-1552064809.88-273857946939080/docker_container.py && sleep 0'
<localhost> EXEC /bin/sh -c 'sudo -H -S -p "[sudo via ansible, key=oqitjckthtyotebiaxnjjxdvlxvdphtb] password: " -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-oqitjckthtyotebiaxnjjxdvlxvdphtb; /usr/bin/python3 /home/manderso/.ansible/tmp/ansible-tmp-1552064809.88-273857946939080/docker_container.py'"'"' && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /home/manderso/.ansible/tmp/ansible-tmp-1552064809.88-273857946939080/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
File "/tmp/ansible_b9ewlwjq/ansible_module_docker_container.py", line 1986, in container_create
new_container = self.client.create_container(image, **create_parameters)
File "/usr/local/lib/python3.6/dist-packages/docker/api/container.py", line 135, in create_container
return self.create_container_from_config(config, name)
File "/usr/local/lib/python3.6/dist-packages/docker/api/container.py", line 145, in create_container_from_config
res = self._post_json(u, data=config, params=params)
File "/usr/local/lib/python3.6/dist-packages/docker/client.py", line 198, in _post_json
return self._post(url, data=json.dumps(data2), **kwargs)
File "/usr/local/lib/python3.6/dist-packages/docker/utils/decorators.py", line 47, in inner
return f(self, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/docker/client.py", line 135, in _post
return self.post(url, **self._set_request_timeout(kwargs))
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 567, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 520, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 630, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 521, in send
raise ReadTimeout(e, request=request)
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"api_version": "auto",
"auto_remove": false,
"blkio_weight": null,
"cacert_path": null,
"capabilities": null,
"cert_path": null,
"cleanup": false,
"command": "--schedule '0 0 5 * * *' --debug",
"cpu_period": null,
"cpu_quota": null,
"cpu_shares": null,
"cpuset_cpus": null,
"cpuset_mems": null,
"debug": false,
"detach": true,
"devices": null,
"dns_opts": null,
"dns_search_domains": null,
"dns_servers": null,
"docker_host": "unix://var/run/docker.sock",
"domainname": null,
"entrypoint": null,
"env": {
"TZ": "America/New_York"
},
"env_file": null,
"etc_hosts": null,
"exposed_ports": null,
"force_kill": false,
"groups": null,
"hostname": null,
"ignore_image": false,
"image": "v2tec/watchtower",
"init": false,
"interactive": false,
"ipc_mode": null,
"keep_volumes": true,
"kernel_memory": null,
"key_path": null,
"kill_signal": null,
"labels": null,
"links": null,
"log_driver": null,
"log_options": null,
"mac_address": null,
"memory": "1g",
"memory_reservation": null,
"memory_swap": null,
"memory_swappiness": null,
"name": "watchtower",
"network_mode": null,
"networks": null,
"oom_killer": null,
"oom_score_adj": null,
"paused": false,
"pid_mode": null,
"privileged": false,
"published_ports": null,
"pull": true,
"purge_networks": false,
"read_only": false,
"recreate": false,
"restart": false,
"restart_policy": "unless-stopped",
"restart_retries": null,
"security_opts": null,
"shm_size": null,
"ssl_version": null,
"state": "started",
"stop_signal": null,
"stop_timeout": null,
"sysctls": null,
"timeout": 60,
"tls": false,
"tls_hostname": "localhost",
"tls_verify": false,
"tmpfs": null,
"trust_image_content": false,
"tty": false,
"ulimits": null,
"user": null,
"userns_mode": null,
"uts": null,
"volume_driver": null,
"volumes": [
"/var/run/docker.sock:/var/run/docker.sock"
],
"volumes_from": null,
"working_dir": null
}
},
"msg": "Error creating container: UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)"
}
PLAY RECAP *****************************************************************************************************************************************************************************************
localhost : ok=46 changed=3 unreachable=0 failed=1
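A possible workaround, assuming the read timeout is caused by a slow first-run image pull: raise the docker_container module's timeout above the default 60 seconds shown in the failure. A sketch using only parameters visible in the output above (all other parameters unchanged and omitted):

```yaml
# Sketch: same Watchtower task with a longer module timeout.
- name: Watchtower Docker Container
  docker_container:
    name: watchtower
    image: v2tec/watchtower
    pull: true
    restart_policy: unless-stopped
    timeout: 300  # default is 60s, which the first-run pull can exceed
```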
When running the Ansible playbook I receive these deprecation warnings. I'll try to fix it and put in a PR.
TASK [geerlingguy.docker : Ensure curl is present (on older systems without SNI).] ***********************************************************************************************
[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using `result|failed` instead use `result is failed`. This feature will be removed in version 2.9.
Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
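The fix the warning asks for is mechanical: replace the `|failed` filter form with the `is failed` test form. An illustrative sketch (the task and variable names below are not taken from the geerlingguy.docker role itself):

```yaml
- name: A task whose result we check later
  command: /bin/true
  register: result
  ignore_errors: true

# Deprecated filter form (triggers the warning, removed in Ansible 2.9):
- debug:
    msg: "it failed"
  when: result|failed

# Replacement test form:
- debug:
    msg: "it failed"
  when: result is failed
```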
Add Plex for media transcoding/streaming
Docker container: https://hub.docker.com/r/linuxserver/plex/
Homepage: http://plex.tv
Short problem description
When doing a completely vanilla (no additional packages) Ubuntu Server 18.04.1 LTS install, Python 2 is not installed by default. This means that remote playbook execution generates errors.
Environment
Ansible-NAS revision (git rev-parse --short HEAD): b7d94d2
Ansible version (ansible --version on the machine you run the playbook from):
ansible 2.5.1
config file = /home/craig/source/ansible-nas/ansible.cfg
configured module search path = [u'/home/craig/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]
Ansible-NAS operating system (uname -a on the Ansible-NAS box):
Linux utest 4.15.0-43-generic #46-Ubuntu SMP Thu Dec 6 14:45:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Ansible-NAS Python version (python --version on the Ansible-NAS box):
craig@utest:~$ python --version
Command 'python' not found, but can be installed with:
sudo apt install python3
sudo apt install python
sudo apt install python-minimal
You also have python3 installed, you can run 'python3' instead.
Expected behavior
Ansible-NAS playbook should run remotely without pre-installing any prerequisites (other than ZFS if you are using ZFS)
Actual behavior
playbook fails with:
fatal: [nas-test.thecave]: FAILED! => {
"changed": false,
"module_stderr": "Shared connection to nas-test.thecave closed.\r\n",
"module_stdout": "\r\n\r\n/bin/sh: 1: /usr/bin/python: not found\r\n",
"msg": "MODULE FAILURE",
"rc": 127
}
Steps to reproduce
Install Ubuntu Server 18.04.1 from http://mirror.aarnet.edu.au/pub/ubuntu/archive/
Use "Entire Disk". No LVM.
Playbook Output
As above.
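One common fix, sketched here as an assumption about how the playbook could bootstrap the host: a play using the raw module (which needs no Python on the target) to install python-minimal before any play that gathers facts.

```yaml
# Hypothetical bootstrap play; must run before any play that gathers facts.
- hosts: all
  gather_facts: false
  become: true
  tasks:
    - name: Install Python 2 so Ansible modules can run
      raw: test -e /usr/bin/python || (apt-get update && apt-get install -y python-minimal)
      changed_when: false
```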
ansible-nas/templates/traefik/traefik.toml, lines 181 onwards.
Currently Traefik will request certificates for all subdomains whether we require them or not. This is not good practice, as we will hit the Let's Encrypt daily rate limits when initially setting up. It would be better to use an Ansible task to create the traefik.toml file based upon the services loaded. This is a little beyond my Ansible expertise though! Currently I can only think that a series of plays would be required, building up a variable string as each service is enabled and then using a sed command to merge it into the traefik.toml file. Something like this (shameless copy/paste of code from elsewhere to save typing!):
- name: Initialize an empty list for our strings
  set_fact:
    my_strings: []

- name: Setup a string variable
  set_fact:
    my_name: "Max"

- name: Append string to list
  set_fact:
    my_strings: "{{ my_strings + [ my_name ] }}"

- debug: var=my_strings

- name: Append another item to the list
  set_fact:
    my_strings: "{{ my_strings + [ 'Power' ] }}"

- debug: var=my_strings
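Rather than building strings and merging with sed, a Jinja2 template could emit one ACME domain entry per enabled service. A sketch under loud assumptions: the flag names (plex_enabled, emby_enabled) and the ansible_nas_domain variable are hypothetical, not verified Ansible-NAS variable names.

```
# templates/traefik/traefik.toml.j2 (hypothetical fragment)
[acme]
{% if plex_enabled %}
  [[acme.domains]]
    main = "plex.{{ ansible_nas_domain }}"
{% endif %}
{% if emby_enabled %}
  [[acme.domains]]
    main = "emby.{{ ansible_nas_domain }}"
{% endif %}
```

This way only the subdomains of enabled services are ever submitted to Let's Encrypt.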
Music and podcast paths are not using the music_root and podcast_root paths.
I will create a PR today.
I've found that when working with a lot of containers ctop can be an invaluable tool when something is going wrong. It makes it very simple to see exactly which container is eating cpu/ram or causing the whole box to lag. While top and htop are useful here they don't quite come as close as ctop since you can use the features such as exec shell and view container logs straight from the top-style overview.
While it can be implemented as a container, I find it much easier to place it into /usr/local/bin/ on the host.
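A host install could be sketched as a single Ansible task. The version number and release-asset URL pattern below are assumptions; check the ctop releases page before using:

```yaml
# Sketch: fetch a ctop release binary onto the host.
- name: Install ctop
  get_url:
    url: "https://github.com/bcicen/ctop/releases/download/v0.7.2/ctop-0.7.2-linux-amd64"
    dest: /usr/local/bin/ctop
    mode: "0755"
```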
I saw you recently removed nginx_enabled and replaced nginx with Traefik. Looks like maybe you missed a spot? I can't get the Ansible run to go all the way through.
I worked around it by adding a line that says nginx_enabled: false, and then it moved past that point.
TASK [Create Nginx Directories] **************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'nginx_enabled == true' failed. The error was: error while evaluating conditional (nginx_enabled == true): 'nginx_enabled' is undefined\n\nThe error appears to have been in '/home/peter/ansible-nas/tasks/nginx.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: Create Nginx Directories\n ^ here\n"}
PLAY RECAP ***********************************************************************************************************************************************************
localhost
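Two possible fixes, sketched as assumptions (the module body in the second option is illustrative, not the real task):

```yaml
# Option 1: restore a default in group_vars/all.yml.dist
nginx_enabled: false

# Option 2: make the leftover task tolerate the missing variable
- name: Create Nginx Directories
  debug:
    msg: "would create nginx directories here"
  when: nginx_enabled is defined and nginx_enabled
```

Option 1 matches the workaround described above; removing tasks/nginx.yml entirely would of course also resolve it.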
Could be used for notifications generated by Ansible-NAS, check out https://github.com/gotify/server.
We've already included two media servers, Plex and Emby, so this is probably a "nice to have". However, since it is a fork of Emby, it should be trivial to add (I know, I know, famous last words).
https://github.com/jellyfin/jellyfin
https://hub.docker.com/r/jellyfin/jellyfin/
Jellyfin is a Free Software Media System that puts you in control of managing and streaming your media. It is an alternative to the proprietary Emby and Plex, to provide media from a dedicated server to end-user devices via multiple apps. Jellyfin is descended from Emby's 3.5.2 release and ported to the .NET Core framework to enable full cross-platform support. There are no strings attached, no premium licenses or features, and no hidden agendas (...)
Would like to add Gogs for private Git source control
How about netdata instead of Grafana? It supports many monitoring parameters out of the box, and it even supports ZFS parameters.