linuxfabrik / lfops

LFOps is an Ansible Collection of generic Roles, Playbooks and Plugins for managing Linux-based Cloud Infrastructures.

Home Page: https://linuxfabrik.ch

License: The Unlicense

Languages: Python 14.42%, Jinja 84.44%, Shell 1.14%
Topics: ansible, ansible-roles, ansible-playbooks, ansible-plugins, debops, playbook, sysadmin, data-center, cloud, self-hosted

lfops's People

Contributors

bzblf, geertsky, lfchris, markuslf, navidsassan, paasi6666, philipp-zollinger, rooso, sandrolf, slalomsk8er


lfops's Issues

role:hetzner_vm: improve handling of ip addresses (new hetzner features)

New Hetzner features:

  • Choose from 3 networking options: server with two public IP addresses (IPv4 and IPv6), server with one public IP address (IPv4 or IPv6), or server without any public IP addresses.
  • Servers without a public network: adding the server to a private network enables you to create the server without any public IPs, and thus without a connection to a public network.
  • New flexibility with Primary IPs: after a server has been created, you can still add, remove, or swap the server's Primary IPs. To keep your Primary IP even if you delete the server it is assigned to, you can simply disable the "Auto Delete" option.
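A sketch of what the role could support, using the hetzner.hcloud collection. The module and parameter names (hcloud_server with enable_ipv4/enable_ipv6, hcloud_server_network) should be verified against the installed collection version:

```yaml
# Sketch only - verify module/parameter names against the installed
# hetzner.hcloud collection version.
- name: 'Create a server without any public IP'
  hetzner.hcloud.hcloud_server:
    name: 'vm01'
    server_type: 'cx21'
    image: 'rocky-8'
    enable_ipv4: false
    enable_ipv6: false
    state: 'present'

- name: 'Attach the server to a private network instead'
  hetzner.hcloud.hcloud_server_network:
    server: 'vm01'
    network: 'my-private-net'
    state: 'present'
```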

role:repo_epel: Wrong name message

Happened on h164 (Rocky 8, not CentOS 7):

TASK [linuxfabrik.lfops.repo_epel : copy /tmp/ansible.RPM-GPG-KEY-EPEL-7 to /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7] changed: [h164...]

After that, a file listing on h164 shows:

ll /etc/pki/rpm-gpg/
total 16
-rw-r--r--. 1 root root 1627  2. Dez 14:29 RPM-GPG-KEY-EPEL-8
-rw-r-----. 1 root root 1723  2. Dez 14:40 RPM-GPG-KEY-ICINGA
-rw-r--r--. 1 root root 1672 22. Dez 03:25 RPM-GPG-KEY-rockyofficial
-rw-r--r--. 1 root root 1672 22. Dez 03:25 RPM-GPG-KEY-rockytesting
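The task name (and possibly the source path) appears hardcoded to EPEL-7. A hedged sketch of how both could be derived from the distribution major version instead (the exact task shape in the role is an assumption):

```yaml
# Sketch (assumption: the role templates the key name instead of
# hardcoding "7"):
- name: 'copy the EPEL GPG key matching the OS major version'
  ansible.builtin.copy:
    src: 'RPM-GPG-KEY-EPEL-{{ ansible_facts["distribution_major_version"] }}'
    dest: '/etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-{{ ansible_facts["distribution_major_version"] }}'
    owner: 'root'
    group: 'root'
    mode: '0644'
```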

basic_setup: Failed to set locale, defaulting to C.UTF-8

I set up a Rocky 8 Minimal (on VMware) using the graphical Anaconda installer, with a Swiss German keyboard and the English language setting. After running basic_setup, I get "Failed to set locale, defaulting to C.UTF-8" when running dnf list | grep php (for example).

$ localectl
   System Locale: LANG=en_US.UTF-8
       VC Keymap: ch
      X11 Layout: ch

On a machine without that error:

$ localectl
   System Locale: LANG=en_US.UTF-8
       VC Keymap: us
      X11 Layout: us

Still happens after localectl set-keymap us, localectl set-x11-keymap us, SSH logout and login.

Implement Dynamic Scaling

How about this scenario?

  • Imagine an Ansible deployment host
  • In inventory, for each VM, define the VM's working hours:
    • Mo-Fr 08:00-18:00 t-shirt-size "medium"
    • otherwise "small"
    • 15th of each month 14:00-16:45 "superbig" because of peak usage
  • A systemd-timer job runs regularly and adjusts the t-shirt size of each VM

Is this possible (including up- and downscaling of the VM without downtime)? It could save costs and improve response times.
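The scheduling logic such a systemd-timer job could run can be sketched as follows. The schedule encoding and the size names are assumptions for illustration, not LFOps inventory syntax:

```python
from datetime import datetime

# Hypothetical sketch of the decision a regularly-running timer job could make.
# The windows mirror the scenario above: Mo-Fr 08:00-18:00 "medium",
# the 15th of each month 14:00-16:45 "superbig", otherwise "small".
def desired_size(now: datetime) -> str:
    """Return the t-shirt size a VM should have at the given time."""
    # Peak window: the 15th of each month, 14:00-16:45 (tuple comparison
    # handles hour/minute bounds).
    if now.day == 15 and (14, 0) <= (now.hour, now.minute) <= (16, 45):
        return 'superbig'
    # Working hours: Monday-Friday (weekday() 0-4), 08:00-18:00.
    if now.weekday() < 5 and 8 <= now.hour < 18:
        return 'medium'
    return 'small'
```

The timer job would then compare the current size of each VM against `desired_size()` and resize only on a mismatch.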

role:cockpit: remove cockpit users

user               ! pw ! uid  ! gid  ! comment                       ! home_dir                ! user_shell    
-------------------+----+------+------+-------------------------------+-------------------------+---------------
cockpit-ws         ! x  ! 995  ! 993  ! User for cockpit web service  ! /nonexisting            ! /sbin/nologin 
cockpit-wsinstance ! x  ! 994  ! 992  ! User for cockpit-ws instances ! /nonexisting            ! /sbin/nologin 
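Removing the two service users could look like this (a sketch; whether the role should also remove the packages first is open):

```yaml
# Sketch: remove the Cockpit service users listed above.
- name: 'Remove the Cockpit service users'
  ansible.builtin.user:
    name: '{{ item }}'
    state: 'absent'
  loop:
    - 'cockpit-ws'
    - 'cockpit-wsinstance'
```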

role:php: flush_handlers task does not support when conditional

TASK [linuxfabrik.lfops.php : flush handlers so that the mariadb can be used by other roles later] *****************************************************************************************************
Tuesday 14 June 2022  17:41:19 +0200 (0:00:00.396)       0:00:20.368 ********** 
[WARNING]: flush_handlers task does not support when conditional
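As the warning states, Ansible ignores `when` on `meta: flush_handlers`. Since flushing is a no-op when no handlers have been notified, one straightforward fix is to run the meta task unconditionally and keep the condition on the notifying tasks instead (a sketch):

```yaml
# Sketch: flushing with no pending handlers does nothing, so the 'when'
# can simply be dropped here and kept on the tasks that notify handlers.
- name: 'flush handlers so that mariadb can be used by other roles later'
  ansible.builtin.meta: 'flush_handlers'
```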

role:login: Needs a switch to be aggressive or not

Today, the login role distributes the SSH keys that it knows from the host/group variables and deletes any other keys that already exist on the target system. On systems that predate LFOps, this behavior causes various problems.

A switch login__aggressive_key_management: true/false might help:

  • login__aggressive_key_management: true (default): behavior as today - delete all keys in ~/.ssh/authorized_keys and distribute only the defined ones.
  • login__aggressive_key_management: false: manage only the keys defined in the host/group variables (these then need a present or absent state) and leave all other keys untouched.
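The proposed variables could look like this (a sketch; the variable structure and the `state` key are suggestions, not implemented):

```yaml
# Sketch of the proposed, not-yet-implemented variables.
login__aggressive_key_management: false

login__users:
  - name: 'alice'
    ssh_keys:
      - key: 'ssh-ed25519 AAAA... alice@example.com'
        state: 'present'
      - key: 'ssh-rsa AAAA... old-key'
        state: 'absent'
```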

CONTRIBUTING, all roles: define how to handle "OS not supported"

Currently, we partly rely on the os-specific task files. However, this only works if we can include the tasks at the start of the role, which is not possible for the monitoring_plugins role.
We are also adding when statements based on the OS in the playbooks - is that enough?
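One possible convention where an early include is not possible is an explicit assert at the top of the role (a sketch; the list of supported distributions is illustrative):

```yaml
# Sketch of a fail-fast convention for unsupported operating systems.
- name: 'assert that the OS is supported'
  ansible.builtin.assert:
    that:
      - 'ansible_facts["distribution"] in ["CentOS", "Fedora", "RedHat", "Rocky"]'
    fail_msg: 'This role does not support {{ ansible_facts["distribution"] }}'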

Sometimes /usr/local/bin is removed from $PATH

This causes some side effects: duplicity, glances and update-and-reboot of course no longer work.

We are currently not sure what the cause is, but it happened on a machine with this history:

  • fresh setup with Rocky 8.5 minimal (not the DVD)
  • did a dnf upgrade to 8.6 and a reboot
  • ran the linuxfabrik.lfops.basic_setup
  • after that $PATH was missing /usr/local/bin

Although the admin is no longer sure whether this was all he did (and in what order), we saw the same effect (missing PATH) on another machine a few days ago, too.

Could be Rocky 8.5, Rocky 8.6, Minimal vs. DVD image - or LFOps.

role:tools: prompt.sh - We should see the Distro

We need to be aware of which distro we are working on - this is very important.

Instead of today's [15:26:54 root@fw01 ~]$ it should be:

  • [15:26:54 root@fw01 fedora35 ~]$
  • [15:26:54 root@fw01 rhel8 ~]$
  • [15:26:54 root@fw01 ubuntu18 ~]$
  • [15:26:54 root@fw01 debian11 ~]$
    etc.
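The prompts above can be built from /etc/os-release. A minimal sketch, assuming prompt.sh runs on distros that ship an os-release file (as all the examples above do):

```shell
# Sketch for prompt.sh: derive a short distro token such as "rocky8",
# "fedora35" or "ubuntu18" from an os-release style file.
distro_token() {
    # $1: path to the os-release file (normally /etc/os-release)
    local ID VERSION_ID
    . "$1"
    # Drop the minor version: "8.6" -> "8", "18.04" -> "18"
    printf '%s%s' "$ID" "${VERSION_ID%%.*}"
}
```

The prompt could then embed `$(distro_token /etc/os-release)` between the hostname and the working directory.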

module:bitwarden_item: Improve handling of attachments

Currently, if an attachment is specified in the task and one with the same basename is already uploaded, we assume it is the same and do not change anything.
This can lead to outdated files when a server is re-installed, for example. Should we always re-upload attachments? That would cause the task to always report changed whenever an attachment is specified. Or should we actually download the existing file from Bitwarden and diff them (md5sum)?
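The md5 comparison itself is trivial; the open question is only the extra download. A sketch of the comparison, where fetching the remote attachment bytes is left to whatever (hypothetical) Bitwarden call the module would use:

```python
import hashlib

# Sketch of the proposed md5 diff. How the module obtains remote_bytes
# (downloading the existing attachment from Bitwarden) is not shown here.
def attachments_differ(local_bytes: bytes, remote_bytes: bytes) -> bool:
    """Return True if the local file and the downloaded attachment differ."""
    return hashlib.md5(local_bytes).hexdigest() != hashlib.md5(remote_bytes).hexdigest()
```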

role:sshd: ModuleNotFoundError: No module named 'seobject'

TASK [linuxfabrik.lfops.sshd : semanage port --add --type ssh_port_t --proto tcp 22] **************************************************************************************
Tuesday 31 May 2022  16:41:46 +0200 (0:00:01.007)       0:01:46.800 *********** 
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ModuleNotFoundError: No module named 'seobject'
fatal: [hostname]: FAILED! => changed=false 
  msg: Failed to import the required Python library (policycoreutils-python) on hostname's Python /usr/libexec/platform-python. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter

role:duplicity: duba should delete orphaned files in Swift storage

Imagine a folder nextcloud/data that is backed up in parallel (divide: True). If at some point a subfolder is deleted, duplicity has no way to apply its retention mechanism to it, so duba has to do this directly via the Swift interface (using the same retention times).
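The orphan detection could be sketched like this. Listing and deleting the Swift objects is left to the (hypothetical) Swift client calls duba would use; only the set logic is shown:

```python
# Sketch: given the object prefixes found in Swift storage and the backup
# sources duba currently knows about, find prefixes that no source covers.
def find_orphans(swift_prefixes, backup_sources):
    """Return Swift prefixes that no longer correspond to any backup source."""
    return {
        prefix for prefix in swift_prefixes
        if not any(prefix == src or prefix.startswith(src + '/')
                   for src in backup_sources)
    }
```

duba would then apply the configured retention times to each orphaned prefix directly via the Swift interface.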

role:duplicity: duba.json is not re-configured correctly

Assume you have a duba.json with just the defaults, no further host-based duplicity__ config settings. The generated config then looks like this:

...
  "backup_sources": [
    {
     	"divide": false,
        "path": "/backup"
    },
    {
     	"divide": false,
        "path": "/etc"
    },
    {
     	"divide": false,
        "path": "/home"
    },
    {
     	"divide": false,
        "path": "/opt"
    },
    {
     	"divide": false,
        "path": "/root"
    },
    {
     	"divide": false,
        "path": "/var/spool/cron"
    },
...

Configure a host-specific setting:

duplicity__host_backup_sources:
  - path: '/opt'
    divide: true

A wrong config file is generated (/opt is configured twice, with contradictory values):

  "backup_sources": [
    {
     	"divide": false,
        "path": "/backup"
    },
    {
     	"divide": false,
        "path": "/etc"
    },
    {
     	"divide": false,
        "path": "/home"
    },
    {
     	"divide": false,
        "path": "/opt"
    },
    {
     	"divide": false,
        "path": "/root"
    },
    {
     	"divide": false,
        "path": "/var/spool/cron"
    },
    {
     	"divide": true,
        "path": "/opt"
    }
...

Even if you change duba.json on the server and delete

    {
     	"divide": false,
        "path": "/opt"
    },

beforehand.
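The fix is presumably to merge the default and host-specific backup sources by path before rendering duba.json, letting the host-specific entry win. A sketch of that merge:

```python
# Sketch: de-duplicate backup sources by 'path'; a host-specific entry
# overrides the default entry for the same path.
def merge_backup_sources(defaults, host_specific):
    merged = {src['path']: src for src in defaults}
    for src in host_specific:
        merged[src['path']] = src  # host-specific wins
    return list(merged.values())
```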

role:stig: Improve aide.conf

Ignore these files and directories:

.cache
/boot/grub2/grubenv
/etc/pihole/pihole-FTL.db-journal
/root/.cache/duplicity
/root/.gnupg
/root/.gnupg/S.gpg-agent
/root/.gnupg/S.gpg-agent.browser
/root/.gnupg/S.gpg-agent.extra
/root/.gnupg/S.gpg-agent.ssh
/var/log/boot.log
/var/log/dmesg
/var/log/dmesg.old
/var/log/fail2ban.log
/var/log/hawkey.log
/var/log/lighttpd/error.log
/var/log/maillog
/var/log/php-fpm
/var/log/sssd/sssd.log
/var/log/sssd/sssd_implicit_files.log
/var/log/sssd/sssd_kcm.log
/var/log/sssd/sssd_nss.log
/var/log/sssd/sssd_pac.log
/var/log/sssd/sssd_pam.log
/var/log/sssd/sssd_ssh.log
/var/log/sssd/sssd_sudo.log
/var/run/utmp

Installation issue: FileNotFoundError: [Errno 2] No such file or directory: b'/home/ansible/.ansible/tmp/ansible-local-19314c4wtoxpj/tmpfh46yqw1/lfops18sjzssa/plugins/modules/lib'

$ /home/ansible/.local/bin/ansible-galaxy collection install git+https://github.com/Linuxfabrik/lfops.git -vvv

[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current
version: 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]. This feature will be removed
from ansible-core in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in
ansible.cfg.
ansible-galaxy [core 2.11.9]
  config file = /home/ansible/ansible/ansible.cfg
  configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/ansible/.local/lib/python3.6/site-packages/ansible
  ansible collection location = /home/ansible/.ansible/collections:/usr/share/ansible/collections
  executable location = /home/ansible/.local/bin/ansible-galaxy
  python version = 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
  jinja version = 3.0.3
  libyaml = True
Using /home/ansible/ansible/ansible.cfg as config file
Cloning into '/home/ansible/.ansible/tmp/ansible-local-19314c4wtoxpj/tmpfh46yqw1/lfops18sjzssa'...
remote: Enumerating objects: 4208, done.
remote: Counting objects: 100% (1039/1039), done.
remote: Compressing objects: 100% (465/465), done.
remote: Total 4208 (delta 555), reused 878 (delta 469), pack-reused 3169
Receiving objects: 100% (4208/4208), 725.49 KiB | 3.15 MiB/s, done.
Resolving deltas: 100% (1908/1908), done.
Your branch is up to date with 'origin/main'.
Starting galaxy collection install process
Found installed collection linuxfabrik.lfops:1.0.1 at '/home/ansible/.ansible/collections/ansible_collections/linuxfabrik/lfops'
Process install dependency map
Starting collection install process
Installing 'linuxfabrik.lfops:1.0.1' to '/home/ansible/.ansible/collections/ansible_collections/linuxfabrik/lfops'
Skipping '/home/ansible/.ansible/tmp/ansible-local-19314c4wtoxpj/tmpfh46yqw1/lfops18sjzssa/.git' for collection build
Skipping '/home/ansible/.ansible/tmp/ansible-local-19314c4wtoxpj/tmpfh46yqw1/lfops18sjzssa/.github' for collection build
Skipping '/home/ansible/.ansible/tmp/ansible-local-19314c4wtoxpj/tmpfh46yqw1/lfops18sjzssa/.gitignore' for collection build
Skipping '/home/ansible/.ansible/tmp/ansible-local-19314c4wtoxpj/tmpfh46yqw1/lfops18sjzssa/galaxy.yml' for collection build
ERROR! Unexpected Exception, this is probably a bug: [Errno 2] No such file or directory: b'/home/ansible/.ansible/tmp/ansible-local-19314c4wtoxpj/tmpfh46yqw1/lfops18sjzssa/plugins/modules/lib'
the full traceback was:

Traceback (most recent call last):
  File "/home/ansible/.local/bin/ansible-galaxy", line 135, in <module>
    exit_code = cli.run()
  File "/home/ansible/.local/lib/python3.6/site-packages/ansible/cli/galaxy.py", line 552, in run
    return context.CLIARGS['func']()
  File "/home/ansible/.local/lib/python3.6/site-packages/ansible/cli/galaxy.py", line 75, in method_wrapper
    return wrapped_method(*args, **kwargs)
  File "/home/ansible/.local/lib/python3.6/site-packages/ansible/cli/galaxy.py", line 1188, in execute_install
    artifacts_manager=artifacts_manager,
  File "/home/ansible/.local/lib/python3.6/site-packages/ansible/cli/galaxy.py", line 1217, in _execute_install_collection
    artifacts_manager=artifacts_manager,
  File "/home/ansible/.local/lib/python3.6/site-packages/ansible/galaxy/collection/__init__.py", line 539, in install_collections
    install(concrete_coll_pin, output_path, artifacts_manager)
  File "/home/ansible/.local/lib/python3.6/site-packages/ansible/galaxy/collection/__init__.py", line 1079, in install
    install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
  File "/home/ansible/.local/lib/python3.6/site-packages/ansible/galaxy/collection/__init__.py", line 1162, in install_src
    collection_manifest, file_manifest,
  File "/home/ansible/.local/lib/python3.6/site-packages/ansible/galaxy/collection/__init__.py", line 1005, in _build_collection_dir
    existing_is_exec = os.stat(src_file).st_mode & stat.S_IXUSR
FileNotFoundError: [Errno 2] No such file or directory: b'/home/ansible/.ansible/tmp/ansible-local-19314c4wtoxpj/tmpfh46yqw1/lfops18sjzssa/plugins/modules/lib'
