
ansible-mariadb-galera-cluster's Introduction

Table of Contents generated with DocToc

ansible-mariadb-galera-cluster

An Ansible role to install/configure a MariaDB-Galera Cluster

Requirements

  • Collections:
    • community.mysql

Role Variables

defaults/main.yml

Dependencies

None

Example Playbook

Example playbook

License

MIT

Author Information

Larry Smith Jr.


ansible-mariadb-galera-cluster's People

Contributors

anthonytissot, bodenhaltung, bufanda, chsilae, dependabot[bot], disaster, elcomtik, eradical, error756, fhufenreuter, hamza-tumturk, jpylypiw, kwizart, mrlesmithjr, oukooveu, redochka, sdwilsh, styks1987, swiff, tvenieris, vrelk, zerwes, zllovesuki



ansible-mariadb-galera-cluster's Issues

Documentation for setting variables

There's no guide for customizing variables for our environment. For example, I got an error for the interface, and then I understood that I should edit main.yml to replace the interface name (eth0) with mine (ens160, the default in Ubuntu).

Could you document the required variables to be replaced? Thanks

'dict object' has no attribute 'galera-cluster-nodes'

Hi!

I'm new to Ansible, and I'm trying to install a Galera cluster in my test environment. But I get this error:

Traceback (most recent call last):
File "/usr/local/Cellar/ansible/6.1.0/libexec/lib/python3.10/site-packages/ansible/template/__init__.py", line 1096, in do_template
res = self.environment.concat(rf)
File "/usr/local/Cellar/ansible/6.1.0/libexec/lib/python3.10/site-packages/ansible/template/native_helpers.py", line 70, in ansible_eval_concat
head = list(islice(nodes, 2))
File "", line 16, in root
File "/usr/local/Cellar/ansible/6.1.0/libexec/lib/python3.10/site-packages/jinja2/runtime.py", line 852, in _fail_with_undefined_error
raise self._undefined_exception(self._undefined_message)
jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'galera-cluster-nodes'

I tried to find out what the problem was, but I couldn't find this variable anywhere.

Ansible version:
ansible [core 2.13.1]
config file = None
configured module search path = ['/Users/imagyar/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/6.1.0/libexec/lib/python3.10/site-packages/ansible
ansible collection location = /Users/imagyar/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.5 (main, Jun 23 2022, 17:15:25) [Clang 13.1.6 (clang-1316.0.21.2.5)]
jinja version = 3.1.2
libyaml = True

playbook.yml:

- name: Example Playbook
  hosts: all
  tasks:
    - name: Include ansible-mariadb-galera-cluster
      include_role:
        name: mrlesmithjr.mariadb_galera_cluster

Thanks!
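The undefined attribute in the traceback looks like an inventory group name. A minimal sketch of what may be missing, assuming the role's default group name is galera-cluster-nodes (overridable via galera_cluster_nodes_group, a variable that appears in other issues below):

```yaml
# inventory.yml (hypothetical): the role appears to resolve its node list
# from the hostvars of an inventory group, so that group must exist.
all:
  children:
    galera-cluster-nodes:
      hosts:
        db1:
        db2:
        db3:
```

Pointing the play's hosts: at that group, or setting galera_cluster_nodes_group to a group you already have, should make the lookup resolvable.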

Adding PyMySQL dependency

When running a playbook (on Ubuntu 18 LTS) that sets the Python interpreter to python3 before the mariadb-galera-cluster role, I get this error:

TASK [mariadb-galera-cluster : debian | adding debian-sys-maintenance permissions to mysql] *****************************************************************
System info:
  Ansible 2.10.3; Darwin
---------------------------------------------------
The PyMySQL (Python 2.7 and Python 3.X) or MySQL-python (Python 2.X) module
is required.
fatal: [xxx]: FAILED! => {"changed": false}

When adding python3-pymysql to the vars/Debian.yml -> mariadb_pre_req_packages it seems to work properly.

Solution found in this ansible-role-mysql issue.
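A sketch of the same workaround without editing the role itself, assuming mariadb_pre_req_packages can be overridden from the playbook (any package names besides python3-pymysql are illustrative):

```yaml
# group_vars/all.yml (hypothetical): override the Debian pre-req package list
# so the Python 3 MySQL driver is installed before the role's MySQL tasks run.
mariadb_pre_req_packages:
  - debconf-utils
  - python3-pymysql
```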

Fix Cluster Setup on Ubuntu 14.04

After changing the previous cluster bootstrap process to use galera_new_cluster, this no longer works on Ubuntu 14.04, because galera_new_cluster requires systemd.

Adding users through mariadb_mysql_users variable fails because mariadb isn't running yet

Hello,

I'm using this role to install on CentOS 8 Stream and it works pretty well, thanks 😄

The problem arises when I set mariadb_mysql_users and do a clean install.

For example:

mariadb_mysql_users:
  - name: test
    password: test
    hosts:
      - "%"

The mysql_users task fails and complains about not being able to connect to the server. When I check the server I notice that the mariadb daemon is indeed not running yet.

If I run the playbook once before setting the variable then the user is added just fine.

What am I missing here?

Thanks in advance.
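One possible workaround, based on the observation above that a second run succeeds: apply the role twice in the same playbook and define the users only on the second pass (a sketch, not the role's documented usage):

```yaml
# site.yml (hypothetical): the first pass bootstraps the cluster without
# users; the second pass adds them once the daemon is running.
- hosts: galera
  roles:
    - role: mrlesmithjr.mariadb_galera_cluster

- hosts: galera
  vars:
    mariadb_mysql_users:
      - name: test
        password: test
        hosts:
          - "%"
  roles:
    - role: mrlesmithjr.mariadb_galera_cluster
```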

I have an error with task setup_cluster | joining galera cluster

fatal: [galera02]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to start service mysql: Job for mariadb.service failed because the control process exited with error code. See \"systemctl status mariadb.service\" and \"journalctl -xe\" for details.\n"}
fatal: [galera03]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to start service mysql: Job for mariadb.service failed because the control process exited with error code. See \"systemctl status mariadb.service\" and \"journalctl -xe\" for details.\n"}

DEPRECATION WARNING with mariadb_tls_files on ansible version 2.12

Describe the bug
If we set the mariadb_tls_files variable, these tasks will be executed:

  • setup_cluster | create TLS certificates directory
  • setup_cluster | copy TLS CA cert, server cert & private key

The when conditions on them are badly written, and we receive a DEPRECATION WARNING:

TASK [ansible-mariadb-galera-cluster : setup_cluster | create TLS certificates directory] ****************************************************************************************************
[DEPRECATION WARNING]: evaluating 'mariadb_tls_files' as a bare variable, this behaviour will go away and you might need to add |bool to the expression in the future. Also see 
CONDITIONAL_BARE_VARS configuration toggle. This feature will be removed in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.

TASK [ansible-mariadb-galera-cluster : setup_cluster | copy TLS CA cert, server cert & private key] ******************************************************************************************
[DEPRECATION WARNING]: evaluating 'mariadb_tls_files' as a bare variable, this behaviour will go away and you might need to add |bool to the expression in the future. Also see 
CONDITIONAL_BARE_VARS configuration toggle. This feature will be removed in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.

To Reproduce
Steps to reproduce the behaviour:

  1. Set the variable mariadb_tls_files, like in my example:
mariadb_tls_files:
  ca_cert:
    name: "ca.pem"
    content: "{{ lookup('file', 'files/STAR..ca') }}"
  server_key:
    name: "server-key.pem"
    content: "{{ lookup('file', 'STAR.key') }}"
  server_cert:
    name: "server-cert.pem"
    content: "{{ lookup('file', 'files/STAR.crt') }}"
  2. Run the playbook
  3. See the warning

Expected behavior
A better when condition on these tasks.
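A sketch of a condition that would silence the warning, assuming mariadb_tls_files is either unset or a dict of certificate entries (the task body is shown only for context; the path is illustrative):

```yaml
- name: setup_cluster | create TLS certificates directory
  file:
    path: /etc/mysql/ssl   # illustrative path
    state: directory
  when: mariadb_tls_files | default({}) | length > 0
```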

No package matching 'python-pymysql' is available

Describe the bug
At the step "mrlesmithjr.mariadb_galera_cluster : debian | installing pre-reqs", the installation fails because it tries to install the apt package python-pymysql which doesn't exist on Ubuntu 22.04 LTS.

To Reproduce
Steps to reproduce the behavior:

  1. Install the role mrlesmithjr.mariadb_galera_cluster on a Ubuntu 22.04 LTS host
  2. Wait until step "mrlesmithjr.mariadb_galera_cluster : debian | installing pre-reqs" is executed
  3. See error

Expected behavior
The python-pymysql package should not be installed on Ubuntu 22.04 hosts


Desktop (please complete the following information):

  • OS: Ubuntu 22.04 LTS

Unrecognized Option evs.delayed_margin

I tried to set the options on my Galera cluster, but when it tries to start MySQL it says "Unrecognized option evs.delayed_margin" in error.log. Can anyone help me with this? My wsrep_provider_version is 3.24(rf216443).

wsrep_provider_options = "evs.auto_evict = 1; evs.delayed_keep_period = PT45S; evs.delayed_margin = PT5S"

Originally posted by @mrlesmithjr in #9 (comment)

Some additions that should be made to galera.cnf

The official Galera deployment guidelines make a few recommendations that I'd love to see implemented into this playbook.

  1. Each datacenter should have a unique gmcast.segment ID so that the nodes know who is closest and who is in each DC (Helps with replication optimization). It would be great to be able to define this in the host or group inventory.

  2. Use the server private IP address in ist.recv_bind. This is optional, but it allows SSTs to complete faster as long as you have at least 2 servers in each location. Again, if we could configure this on a per-host basis in the inventory, that would be great.

  3. Some way to enable encryption between the nodes would be very desirable. Setting up SSL protection between the nodes is actually very simple in Galera (http://galeracluster.com/documentation-webpages/ssl.html) and it would be great if this was included as part of this playbook.

  4. Some way to define wsrep_node_address in the host vars. This setting is necessary in some environments where Galera can't properly guess the correct IP address to use due to NAT, multiple network cards, multiple nodes on a host, and some other situations.

This is a very nice piece of work. If I find some extra time, I will try to submit pull requests to implement these changes.
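A sketch of how per-host inventory variables could carry these settings, with the galera.cnf lines they would render to shown as comments; the variable names are proposals, not existing role variables:

```yaml
# host_vars/db1.yml (hypothetical)
galera_gmcast_segment: 1                # -> wsrep_provider_options: gmcast.segment=1
galera_ist_recv_bind: "10.0.1.11"       # -> wsrep_provider_options: ist.recv_bind=10.0.1.11
galera_wsrep_node_address: "10.0.1.11"  # -> wsrep_node_address=10.0.1.11
```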

[Help] ansible.vars.hostvars.HostVarsVars object has no attribute 'ansible_eth'

Deployment dropped by this error:

fatal: [controller01]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was:
{{ galera_cluster_bind_address }}: {{ hostvars[inventory_hostname]['ansible_' + galera_cluster_bind_interface]['ipv4']['address'] }}: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'ansible_eth-conf'
\n\nThe error appears to be in '/home/prd/roles/ansible-mariadb-galera-cluster/tasks/setup_cluster.yml': line 45, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.
\n\nThe offending line appears to be:\n\n  block:\n    - name: WSREP ist.recv_\n      ^ here\n"}

and I can't find which variable is still unconfigured. Could you help me?

  • eth-conf is my interface name

To Reproduce
Steps to reproduce the behavior:

  1. Deploy the playbook.yml

Server:

  • OS: Ubuntu 22.04
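The likely cause: Ansible replaces "-" with "_" when naming interface facts, so an interface called eth-conf is exposed as ansible_eth_conf, not ansible_eth-conf. A sketch of the fix:

```yaml
# Use the fact-safe spelling of the interface name:
galera_cluster_bind_interface: "eth_conf"
# or derive it, if the raw name must stay in one place:
galera_cluster_bind_interface: "{{ 'eth-conf' | replace('-', '_') }}"
```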

Don't print root password

Consider using no_log: True to prevent outputting the root password when setting the root password

mrlesmithjr.mariadb-galera-cluster : mysql_root_pw | setting
skipping: [app1.domain.com] => (item={u'question': u'mysql-server/root_password', u'value': u'supersecurepasswordhere'})
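A sketch of the suggested change, assuming the task uses the debconf module as the output implies; module arguments and the password variable name are illustrative:

```yaml
- name: mysql_root_pw | setting
  debconf:
    name: mysql-server
    question: mysql-server/root_password
    value: "{{ mariadb_mysql_root_password }}"  # variable name illustrative
    vtype: password
  no_log: true   # keeps the password out of the task output
```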

Remove deprecated/removed InnoDB variables - innodb_buffer_pool_instances

Is your feature request related to a problem? Please describe.
Some InnoDB variables were deprecated or removed.
I need to compile a full list of deprecated variables.

Describe the solution you'd like
Remove the variables from the role.

Describe alternatives you've considered
Only use the variables for versions up to the one where they were deprecated... but I think it's cumbersome.

Additional context
I'll soon prepare pull requests but first I need to compile a full list of deprecated variables.

Cannot be installed via Ansible Galaxy anymore

Describe the bug

Our Ansible CI was not able to download the role via Ansible Galaxy. It was still working yesterday.

To Reproduce

Steps to reproduce the behavior:

  1. Remove the directory ~/.ansible/roles/mrlesmithjr.mariadb-galera-cluster if it is there.
  2. Create a requirements.yml
---
roles:
  - name: mrlesmithjr.mariadb-galera-cluster
    src: mrlesmithjr.mariadb-galera-cluster
    version: v0.3.0

  3. Run ansible-galaxy install -r requirements.yml

This gives the following error:

Starting galaxy role install process
- downloading role 'mariadb-galera-cluster', owned by mrlesmithjr
[WARNING]: - mrlesmithjr.mariadb-galera-cluster was NOT installed successfully: - sorry, mrlesmithjr.mariadb-galera-cluster was not found on https://galaxy.ansible.com/api/.
ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.

Expected behavior

The role should install successfully.

Feature request: Allow arbitrary additions to wsrep_provider_options

Simply put - I need to configure auto eviction in our cluster, but as far as I can tell, it isn't possible to do this within this role. Because of the huge amount of options that can be added to wsrep_provider_options, I think it would be great to modify this role so that I, as an end user, can define 'extra_wsrep_provider_options' that are then tacked on to the end of the wsrep_provider_options line in the galera configuration file. For example:

extra_wsrep_provider_options:
  evs.auto_evict: 1
  evs.delayed_margin: 'PT5S'
  evs.delayed_keep_period: 'PT45S'

The above would append the end of the wsrep_provider_options line like follows:

wsrep_provider_options = "....auto generated options; evs.auto_evict = 1; evs.delayed_margin = PT5S; evs.delayed_keep_period = PT45S"

By adopting a structure similar to this, it would allow us to very tightly control the configuration of Galera without needing the individual options to be added to this role first.
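A sketch of how the proposal could look in the role, assuming the existing galera.cnf template builds wsrep_provider_options from a base string (extra_wsrep_provider_options is the proposed new variable; the template path is illustrative):

```yaml
# defaults/main.yml (proposed)
extra_wsrep_provider_options: {}

# In the galera.cnf Jinja2 template, appended to the auto-generated options:
# wsrep_provider_options = "...{% for k, v in extra_wsrep_provider_options.items() %}; {{ k }} = {{ v }}{% endfor %}"
```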

Users with table permissions fail to create

Describe the bug
When trying to create users with permissions over specific tables, the user creation task fails as the table hasn't yet been created.

In my testing, moving the user creation task to after the database creation task in main.yml fixes these issues.

To Reproduce
Steps to reproduce the behaviour:

  1. Get the playbook to create a user with table-specific permissions, e.g.
  - name: api
    hosts:
      - "%"
      - "127.0.0.1"
      - "::1"
      - "localhost"
    password: "{{ api_password }}"
    encrypted: no
    priv:
      "users.logins": "SELECT,INSERT,UPDATE,DELETE"

Expected behavior

The users and new tables should be created.

Desktop (please complete the following information):

  • Ansible controller OS: Linux Mint 20.2
  • Ansible Host OS: Centos 7

CentOS Stream hosts are not added to cluster

Describe the bug
After running the role, the nodes are not added to the cluster. No error messages are shown

To Reproduce
Steps to reproduce the behavior:
Run the role on CentOS Stream 8 with three nodes on the same network (x86).

vars used:
galera_cluster_nodes_group: "database"
mariadb_version: "10.5"
galera_cluster_bind_interface: "eth1"
galera_cluster_name: "db-cluster"
galera_reconfigure_galera: true
mariadb_databases:
  - name: monitoring

Expected behavior
The nodes should be added to the galera cluster

Screenshots
This is the output I get from this role (screenshot omitted). It also skips a lot of tasks (screenshot omitted).

On all nodes after deployment it shows the following:

MariaDB [(none)]> SHOW GLOBAL STATUS LIKE 'wsrep_%';
+-------------------------------+----------------------+
| Variable_name                 | Value                |
+-------------------------------+----------------------+
| wsrep_applier_thread_count    | 0                    |
| wsrep_cluster_capabilities    |                      |
| wsrep_cluster_conf_id         | 18446744073709551615 |
| wsrep_cluster_size            | 0                    |
| wsrep_cluster_state_uuid      |                      |
| wsrep_cluster_status          | Disconnected         |
| wsrep_connected               | OFF                  |
| wsrep_local_bf_aborts         | 0                    |
| wsrep_local_index             | 18446744073709551615 |
| wsrep_provider_capabilities   |                      |
| wsrep_provider_name           |                      |
| wsrep_provider_vendor         |                      |
| wsrep_provider_version        |                      |
| wsrep_ready                   | OFF                  |
| wsrep_rollbacker_thread_count | 0                    |
| wsrep_thread_count            | 0                    |
+-------------------------------+----------------------+

Additional context
I have tried enabling the reconfigure cluster variable. Produces the output shown above. The installs of mariadb and galera are NOT fresh. Fresh installs seem to work fine.

Missing package name : MySQL-python No match for argument: MySQL-python

TASK [mrlesmithjr.mariadb-galera-cluster : redhat | installing pre-reqs] ***************************************************************************************************************************************
fatal: [mysql01.example.lan]: FAILED! => {"changed": false, "failures": ["MySQL-python No match for argument: MySQL-python"], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
fatal: [mysql02.example.lan]: FAILED! => {"changed": false, "failures": ["MySQL-python No match for argument: MySQL-python"], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
fatal: [mysql03.example.lan]: FAILED! => {"changed": false, "failures": ["MySQL-python No match for argument: MySQL-python"], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}

MySQL-python should probably be python3-PyMySQL.

Ubuntu 20.04 Molecule tests fail

Describe the bug
Ubuntu 20.04 Molecule tests are failing and need to be figured out.

Expected behavior
Ubuntu 20.04 Molecule tests should work as expected.

  TASK [ansible-mariadb-galera-cluster : setup_cluster | joining galera cluster] ***
  skipping: [node1]
  fatal: [node2]: FAILED! => {"msg": "The conditional check '_mariadb_galera_cluster_joined.status.ActiveState == \"active\"' failed. The error was: error while evaluating conditional (_mariadb_galera_cluster_joined.status.ActiveState == \"active\"): 'dict object' has no attribute 'status'"}

  TASK [ansible-mariadb-galera-cluster : setup_cluster | sleep for 15 seconds to wait for node WSREP prepared state] ***
  ok: [node1]

  TASK [ansible-mariadb-galera-cluster : setup_cluster | configuring final galera config for first node] ***
  ok: [node1] => (item=etc/mysql/debian.cnf)
  ok: [node1] => (item=etc/mysql/my.cnf)
  changed: [node1] => (item=etc/mysql/conf.d/galera.cnf)

  TASK [ansible-mariadb-galera-cluster : setup_cluster | restarting first node with final galera config] ***
  fatal: [node1]: FAILED! => {"changed": false, "msg": "Unable to restart service mysql: Job for mariadb.service failed because the control process exited with error code.\nSee \"systemctl status mariadb.service\" and \"journalctl -xe\" for details.\n"}

invalid repositories - Red Hat 8

Hi Larry,
First, I want to thank you for your time and work.

Could you fix the RHEL 8 repo issue? The role did not find packages for Red Hat 8.

TASK [mrlesmithjr.mariadb-galera-cluster : redhat | adding mariadb repo] ***************************************************************************************************************************************
ok: [mysql03.example.lan]
ok: [mysql01.example.lan]
ok: [mysql02.example.lan]

TASK [mrlesmithjr.mariadb-galera-cluster : redhat | disable appstream mysql/mariadb modules] *******************************************************************************************************************
fatal: [mysql03.example.lan]: FAILED! => {"changed": false, "cmd": ["dnf", "-y", "module", "disable", "mysql", "mariadb"], "delta": "0:00:02.387188", "end": "2021-06-28 09:15:54.141276", "msg": "non-zero return code", "rc": 1, "start": "2021-06-28 09:15:51.754088", "stderr": "Errors during downloading metadata for repository 'mariadb':\n - Status code: 404 for https://yum.mariadb.org/10.5.11/redhat8-amd64/repodata/repomd.xml (IP: 142.4.219.197)\nError: Failed to download metadata for repo 'mariadb': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried", "stderr_lines": ["Errors during downloading metadata for repository 'mariadb':", " - Status code: 404 for https://yum.mariadb.org/10.5.11/redhat8-amd64/repodata/repomd.xml (IP: 142.4.219.197)", "Error: Failed to download metadata for repo 'mariadb': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried"], "stdout": "Updating Subscription Management repositories.\nMariaDB 312 B/s | 345 B 00:01 ", "stdout_lines": ["Updating Subscription Management repositories.", "MariaDB 312 B/s | 345 B 00:01 "]}
fatal: [mysql02.example.lan]: FAILED! => {"changed": false, "cmd": ["dnf", "-y", "module", "disable", "mysql", "mariadb"], "delta": "0:00:02.414772", "end": "2021-06-28 09:15:54.164538", "msg": "non-zero return code", "rc": 1, "start": "2021-06-28 09:15:51.749766", "stderr": "Errors during downloading metadata for repository 'mariadb':\n - Status code: 404 for https://yum.mariadb.org/10.5.11/redhat8-amd64/repodata/repomd.xml (IP: 142.4.219.197)\nError: Failed to download metadata for repo 'mariadb': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried", "stderr_lines": ["Errors during downloading metadata for repository 'mariadb':", " - Status code: 404 for https://yum.mariadb.org/10.5.11/redhat8-amd64/repodata/repomd.xml (IP: 142.4.219.197)", "Error: Failed to download metadata for repo 'mariadb': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried"], "stdout": "Updating Subscription Management repositories.\nMariaDB 299 B/s | 345 B 00:01 ", "stdout_lines": ["Updating Subscription Management repositories.", "MariaDB 299 B/s | 345 B 00:01 "]}
fatal: [mysql01.example.lan]: FAILED! => {"changed": false, "cmd": ["dnf", "-y", "module", "disable", "mysql", "mariadb"], "delta": "0:00:02.449515", "end": "2021-06-28 09:15:54.183524", "msg": "non-zero return code", "rc": 1, "start": "2021-06-28 09:15:51.734009", "stderr": "Errors during downloading metadata for repository 'mariadb':\n - Status code: 404 for https://yum.mariadb.org/10.5.11/redhat8-amd64/repodata/repomd.xml (IP: 142.4.217.28)\nError: Failed to download metadata for repo 'mariadb': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried", "stderr_lines": ["Errors during downloading metadata for repository 'mariadb':", " - Status code: 404 for https://yum.mariadb.org/10.5.11/redhat8-amd64/repodata/repomd.xml (IP: 142.4.217.28)", "Error: Failed to download metadata for repo 'mariadb': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried"], "stdout": "Updating Subscription Management repositories.\nMariaDB 287 B/s | 345 B 00:01 ", "stdout_lines": ["Updating Subscription Management repositories.", "MariaDB 287 B/s | 345 B 00:01 "]}

PLAY RECAP *****************************************************************************************************************************************************************************************************
mysql01.example.lan : ok=4 changed=0 unreachable=0 failed=1 skipped=9 rescued=0 ignored=0
mysql02.example.lan : ok=4 changed=0 unreachable=0 failed=1 skipped=9 rescued=0 ignored=0
mysql03.example.lan : ok=4 changed=0 unreachable=0 failed=1 skipped=9 rescued=0 ignored=0

This
https://yum.mariadb.org/10.5.11/redhat8-amd64/repodata/repomd.xml
should be
https://yum.mariadb.org/10.5.11/rhel8-amd64/

Thanks a lot.

updating_root_passwords

Is your feature request related to a problem? Please describe.
Actually, with this code there is no way to update the MySQL root password, because of the order of operations.

Describe the solution you'd like
The role currently:

  • first modifies /root/.my.cnf
  • then updates the MySQL root password

So it's impossible to change the root password with the Ansible role, as the role will fail to connect:

{"ansible_loop_var": "item", "changed": false, "item": "sql3", "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1045, u\"Access denied for user 'root'@'localhost' (using password: YES)\")"}

Describe alternatives you've considered
A very simple alternative is to:

  • add a tag updating_root_passwords to the tasks configure_root_access | updating root passwords and (optionally) configure_root_access | updating root passwords (allow from anywhere).
    When you need to update the MySQL root password, you then run the role twice:
  • first with --tags updating_root_passwords
  • second with no tags

Additional context
A better way could also be implemented, but it would need more work.

Fix for wait_for error 'sudo: a password is required'

In some stages of the installation process there is a wait_for set with a delegate_to: localhost. This is mainly done in setup_cluster.yml.

Like here:

- name: setup_cluster | sleep for 15 seconds to wait for nodes to fully shut down
  wait_for:
    timeout: 15
  delegate_to: localhost
  when: >
    not galera_cluster_configured.stat.exists

This returns errors for me:

TASK [mariadb-galera-cluster : setup_cluster | stopping mysql to (re)configure cluster (other nodes)] *********************************************
skipping: [db1]
FAILED - RETRYING: setup_cluster | stopping mysql to (re)configure cluster (other nodes) (60 retries left).
ok: [db2]

TASK [mariadb-galera-cluster : setup_cluster | stopping mysql to (re)configure cluster (first node)] **********************************************
skipping: [db2]
FAILED - RETRYING: setup_cluster | stopping mysql to (re)configure cluster (first node) (60 retries left).
ok: [db1]

TASK [mariadb-galera-cluster : setup_cluster | sleep for 15 seconds to wait for nodes to fully shut down] *****************************************
System info:
  Ansible 2.10.5; Darwin
---------------------------------------------------
MODULE FAILURE
See stdout/stderr for the exact error
sudo: a password is required
fatal: [db1]: FAILED! => {"changed": false, "module_stdout": "", "rc": 1}
---------------------------------------------------
MODULE FAILURE
See stdout/stderr for the exact error
sudo: a password is required
fatal: [db2]: FAILED! => {"changed": false, "module_stdout": "", "rc": 1}

I've managed to fix this by adding a become: false to the delegate_to steps, like so:

 - name: setup_cluster | sleep for 15 seconds to wait for nodes to fully shut down
   wait_for:
     timeout: 15
   delegate_to: localhost
+  become: false
   when: >
     not galera_cluster_configured.stat.exists

Cluster joining - serial?

- name: setup_cluster | joining galera cluster
  service:
    name: "mysql"
    state: "restarted"
  become: true
  when: >
    not galera_cluster_configured.stat.exists and
    inventory_hostname != galera_mysql_master_node

What do you think about adding serial: 1 in here so ansible will do this host by host instead of all joining the cluster at once?
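Since serial is a play-level keyword and cannot be set on a task inside a role, throttle (available since Ansible 2.9) is the closest per-task equivalent; a sketch:

```yaml
- name: setup_cluster | joining galera cluster
  service:
    name: "mysql"
    state: "restarted"
  become: true
  throttle: 1   # restart one node at a time so hosts join the cluster serially
  when: >
    not galera_cluster_configured.stat.exists and
    inventory_hostname != galera_mysql_master_node
```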

SST user - plugin line deletes the password

mariadb_sst_username: "mysql"
mariadb_sst_password: ""
mariadb_sst_user:
  - name: "{{ mariadb_sst_username }}"
    hosts:
      - "localhost"
    plugin: "unix_socket"
    password: "{{ mariadb_sst_password }}"
    priv: "*.*:RELOAD,PROCESS,LOCK TABLES,BINLOG MONITOR"

In the above because of the line:

plugin: "unix_socket"

the password is removed and the grant looks like:

GRANT RELOAD, PROCESS, LOCK TABLES, BINLOG MONITOR ON *.* TO `mariadb-sst`@`localhost` IDENTIFIED VIA unix_socket;

and the SST freezes.

If I remove the line the grant will look like:

GRANT RELOAD, PROCESS, LOCK TABLES, BINLOG MONITOR ON *.* TO `mariadb-sst`@`localhost` IDENTIFIED BY PASSWORD '*13D253680F898B77D9661FACDE29E71EA9A96DFF';

and all works well.

Is it ok to change the lines to:

#mariadb_auth_plugin: "unix_socket"
mariadb_sst_user:
  - name: "{{ mariadb_sst_username }}"
    hosts:
      - "localhost"
    plugin: "{{ mariadb_auth_plugin | default(omit) }}"
    password: "{{ mariadb_sst_password }}"
    priv: "*.*:RELOAD,PROCESS,LOCK TABLES,BINLOG MONITOR"

The problem is that the user must exist as a system user for unix_socket to work; otherwise it will not work with unix_socket and you need a password.

Thoughts?

Fix Ubuntu 16.04 Cluster Setup

Currently Ubuntu 16.04 does not install using the included repos, nor does it set up the cluster correctly. The Debian repo needs updating to install the latest 10.1 version, which now includes Galera.

Something appears to not be working in the IP definition and setup.

I am trying to upgrade an existing cluster using the latest version and am hitting a brick wall. When I run the playbook, I receive:

fatal: [app1.site.com]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'dict object' has no attribute u'ansible_enp3s0'\n\nThe error appears to have been in '/usr/local/etc/ansible/roles/mrlesmithjr.mariadb-galera-cluster/tasks/update_etc_hosts.yml': line 3, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# Needed for name resolution in the case of DNS issues\n- name: update_etc_hosts | Updating /etc/hosts For Name Resolution\n  ^ here\n"}

I have defined the settings as follows:

galera_cluster_bind_interface: 'enp3s0'
galera_cluster_bind_address: "{{ hostvars[inventory_hostname]['ansible_' + galera_cluster_bind_interface']['ipv4']['address'] }}"

And I know that this is the correct path, because a

  debug:
    var: hostvars[inventory_hostname]['ansible_enp3s0']['ipv4']['address']

correctly prints the IP address for each of the nodes. Any ideas?

Tag

Hello,

Is it possible for you to create a tag so we can pin it in requirements.yml? Your role is very interesting, but we cannot force a given version...

Thank you !

Florent,

Ubuntu 20.04.3 LTS: Apt Cache Update Failed

Hey everybody. First of all, thank you for your work!

I'm using this role to create a Galera cluster with two Ubuntu nodes that already have MariaDB Server 10.6.7 installed.
The problem I'm facing is when the playbook tries to execute this task:

TASK [/root/.ansible/roles/ansible_galera_dos : debian | adding mariadb repo]

But before this task completes, it gives me this error:


The full traceback is:
File "/tmp/ansible_apt_repository_payload_qa34_r21/ansible_apt_repository_payload.zip/ansible/modules/packaging/os/apt_repository.py", line 548, in main
File "/usr/lib/python3/dist-packages/apt/cache.py", line 591, in update
raise FetchFailedException(e)
fatal: [server2]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"codename": null,
"filename": null,
"install_python_apt": true,
"mode": null,
"repo": "deb [arch=amd64,x86_64,ppc64el] http://nyc2.mirrors.digitalocean.com/mariadb/repo/10.6.7/ubuntu focal main",
"state": "present",
"update_cache": true,
"validate_certs": true
}
},
"msg": "apt cache update failed"
}

From what I can understand, this is trying to update the apt cache on my Ubuntu nodes, but when I run apt update directly on the nodes, I find this at the bottom of the output:


N: Skipping acquire of configured file 'main/binary-i386/Packages' as repository 'http://nyc2.mirrors.digitalocean.com/mariadb/repo/10.6/ubuntu focal InRelease' doesn't support architecture 'i386'

I've been trying to fix this problem with Google's help, but nothing has worked so far.

Please help me resolve this; maybe I'm doing something wrong with the playbook, but I really don't know.
Thank you!!
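Not part of the original report, but a common workaround for the i386 warning is to restrict the repository entry to 64-bit architectures so apt stops requesting i386 indexes. A sketch of the equivalent `apt_repository` task, reusing the mirror URL from the error above:

```yaml
# Sketch only: restrict the MariaDB repo to amd64 so apt skips i386 indexes
- name: debian | adding mariadb repo (amd64 only)
  apt_repository:
    repo: "deb [arch=amd64] http://nyc2.mirrors.digitalocean.com/mariadb/repo/10.6/ubuntu focal main"
    state: present
    update_cache: true
```

Note that the `N:` line is only a warning; if `apt update` still fails afterwards, the real cause may be elsewhere (network, GPG keys, mirror availability).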

Add molecule tests

I looked into tests and found that we only do syntax checks. I would like to add standard molecule tests to cover the common test suite.
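A minimal molecule scenario for this role could look roughly like the following; the driver and base image here are assumptions, not settled choices:

```yaml
# molecule/default/molecule.yml — minimal sketch (driver and image are placeholders)
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: galera-node1
    image: ubuntu:20.04
provisioner:
  name: ansible
verifier:
  name: ansible
```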

New cluster setup fails on CentOS 8 when using mariabackup for sst

Describe the bug
When setting up a new cluster from scratch on CentOS 8 (probably all CentOS and all RedHat) the cluster fails if using mariabackup for sst.

To Reproduce
Starting a new cluster on CentOS 8

Error
failed: [cluster-machine-1 -> cluster-machine-1] (item=[{'name': 'mariadb-sst', 'plugin': 'unix_socket', 'password': '...', 'priv': '*.*:RELOAD,PROCESS,LOCK TABLES,BINLOG MONITOR'}, 'localhost']) => {"ansible_loop_var": "item", "changed": false, "item": [{"name": "mariadb-sst", "password": "...", "plugin": "unix_socket", "priv": "*.*:RELOAD,PROCESS,LOCK TABLES,BINLOG MONITOR"}, "localhost"], "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (2002, "Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)")"}

Additional context
Most probably this was introduced in b1c69f3#diff-3d0ff1709ca48add100327bb2a468e6c508fb92a159c64c4f99ad1df89d9bdde by moving the users task above the cluster setup. On CentOS (and RedHat) the service is not started after install.
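One possible fix, offered here as an untested sketch rather than a verified patch, is to make sure the service is running before the user-creation tasks execute on RedHat-family hosts:

```yaml
# Sketch: ensure mariadb is up before any mysql_user tasks run (RedHat family)
- name: redhat | ensuring mariadb is started before managing users
  service:
    name: mariadb
    state: started
    enabled: true
  when: ansible_os_family == "RedHat"
```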

role does not work on CentOS

It works on Ubuntu 16.04 but not on CentOS 7:

Here is my Ansible output :

PLAY [Provision DB Servers] ***********************************************************************************************************

TASK [Gathering Facts] ****************************************************************************************************************
ok: [db1.xXx.loc]
ok: [db2.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : unconfigure_cluster | unconfiguring galera cluster (Debian)] *******************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : unconfigure_cluster | unconfiguring galera cluster (RedHat)] *******************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : unconfigure_cluster | unconfiguring galera cluster] ****************************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : unconfigure_cluster | restarting mysql when reconfiguring galera cluster] ******************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : mysql_root_pw | setting] *******************************************************************
skipping: [db2.xXx.loc] => (item=(censored due to no_log))
skipping: [db1.xXx.loc] => (item=(censored due to no_log))
skipping: [db1.xXx.loc] => (item=(censored due to no_log))
skipping: [db2.xXx.loc] => (item=(censored due to no_log))

TASK [mrlesmithjr.mariadb-galera-cluster : debian | update package list] **************************************************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : debian | installing pre-reqs] **************************************************************
skipping: [db1.xXx.loc] => (item=[])
skipping: [db2.xXx.loc] => (item=[])

TASK [mrlesmithjr.mariadb-galera-cluster : debian | Adding MariaDB Repo Keys] *********************************************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : debian | Adding MariaDB Repo Keys] *********************************************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : debian | Pinning MariaDB Repo] *************************************************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : debian | adding mariadb repo] **************************************************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : debian | installing mariadb-galera packages] ***********************************************
skipping: [db1.xXx.loc] => (item=[])
skipping: [db2.xXx.loc] => (item=[])

TASK [mrlesmithjr.mariadb-galera-cluster : debian | configuring root my.cnf] **********************************************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : debian | adding debian-sys-maintenance permissions to mysql] *******************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : mysql_root_pw | setting] *******************************************************************
skipping: [db1.xXx.loc] => (item=(censored due to no_log))
skipping: [db2.xXx.loc] => (item=(censored due to no_log))
skipping: [db1.xXx.loc] => (item=(censored due to no_log))
skipping: [db2.xXx.loc] => (item=(censored due to no_log))

TASK [mrlesmithjr.mariadb-galera-cluster : redhat | adding mariadb repo] **************************************************************
ok: [db2.xXx.loc]
ok: [db1.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : redhat | installing pre-reqs] **************************************************************
ok: [db2.xXx.loc] => (item=[u'MySQL-python', u'socat'])
ok: [db1.xXx.loc] => (item=[u'MySQL-python', u'socat'])

TASK [mrlesmithjr.mariadb-galera-cluster : redhat | installing mariadb mysql] *********************************************************
ok: [db1.xXx.loc] => (item=[u'MariaDB-server', u'galera'])
ok: [db2.xXx.loc] => (item=[u'MariaDB-server', u'galera'])

TASK [mrlesmithjr.mariadb-galera-cluster : redhat | installing mariadb mysql] *********************************************************
skipping: [db1.xXx.loc] => (item=[])
skipping: [db2.xXx.loc] => (item=[])

TASK [mrlesmithjr.mariadb-galera-cluster : redhat | ensuring mariadb mysql is enabled on boot and started] ****************************
ok: [db1.xXx.loc]
ok: [db2.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | checking if galera cluster setup] ******************************************
ok: [db1.xXx.loc]
ok: [db2.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | configuring db server and galera] ******************************************
skipping: [db1.xXx.loc] => (item=etc/mysql/debian.cnf)
skipping: [db1.xXx.loc] => (item=etc/mysql/my.cnf)
skipping: [db2.xXx.loc] => (item=etc/mysql/debian.cnf)
skipping: [db1.xXx.loc] => (item=etc/mysql/conf.d/galera.cnf)
skipping: [db2.xXx.loc] => (item=etc/mysql/my.cnf)
skipping: [db2.xXx.loc] => (item=etc/mysql/conf.d/galera.cnf)

TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | configuring db server and galera] ******************************************
changed: [db1.xXx.loc] => (item=etc/my.cnf.d/server.cnf)
changed: [db2.xXx.loc] => (item=etc/my.cnf.d/server.cnf)

TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | stopping mysql] ************************************************************
changed: [db1.xXx.loc]
changed: [db2.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | killing lingering mysql processes] *****************************************
fatal: [db2.xXx.loc]: FAILED! => {"changed": true, "cmd": ["pkill", "mysqld"], "delta": "0:00:00.019258", "end": "2017-09-07 08:38:21.491789", "failed": true, "rc": 1, "start": "2017-09-07 08:38:21.472531", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring
fatal: [db1.xXx.loc]: FAILED! => {"changed": true, "cmd": ["pkill", "mysqld"], "delta": "0:00:00.019891", "end": "2017-09-07 08:38:21.460670", "failed": true, "rc": 1, "start": "2017-09-07 08:38:21.440779", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring

TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | configuring temp galera config] ********************************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | configuring temp galera config] ********************************************
skipping: [db2.xXx.loc]
changed: [db1.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | bootstrapping galera cluster] **********************************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | bootstrapping galera cluster] **********************************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | joining galera cluster] ****************************************************
skipping: [db1.xXx.loc]
fatal: [db2.xXx.loc]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to start service mysql: Job for mariadb.service failed because the control process exited with error code. See \"systemctl status mariadb.service\" and \"journalctl -xe\" for details.\n"}

TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | configuring galera on mysql_master] ****************************************
skipping: [db1.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | configuring galera on mysql_master] ****************************************
changed: [db1.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | restarting galera on mysql_master] *****************************************
skipping: [db1.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | marking galera cluster as configured] **************************************
changed: [db1.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | Restarting MySQL (Master)] *************************************************
skipping: [db1.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | Waiting For MySQL (Master)] ************************************************
skipping: [db1.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | Restarting MySQL (Non-Master)] *********************************************
skipping: [db1.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | Waiting For MySQL (Non-Master)] ********************************************
skipping: [db1.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : Create MySQL users] ************************************************************************

TASK [mrlesmithjr.mariadb-galera-cluster : galera_monitoring | configuring monitor script for galera] *********************************
skipping: [db1.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : galera_monitoring | restarting mysql on master] ********************************************
skipping: [db1.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : galera_monitoring | waiting for mysql to start on master] **********************************
skipping: [db1.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : galera_monitoring | restarting mysql on additional servers] ********************************
skipping: [db1.xXx.loc]

TASK [mrlesmithjr.mariadb-galera-cluster : configure_root_access | updating root passwords] *******************************************
failed: [db1.xXx.loc] (item=db1) => {"failed": true, "item": "db1", "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (2002, 'Can\\'t connect to local MySQL server through socket \\'/var/lib/mysql/mysql.sock\\' (2 \"No such file or directory\")')"}
failed: [db1.xXx.loc] (item=127.0.0.1) => {"failed": true, "item": "127.0.0.1", "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (2002, 'Can\\'t connect to local MySQL server through socket \\'/var/lib/mysql/mysql.sock\\' (2 \"No such file or directory\")')"}
failed: [db1.xXx.loc] (item=::1) => {"failed": true, "item": "::1", "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (2002, 'Can\\'t connect to local MySQL server through socket \\'/var/lib/mysql/mysql.sock\\' (2 \"No such file or directory\")')"}
failed: [db1.xXx.loc] (item=localhost) => {"failed": true, "item": "localhost", "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (2002, 'Can\\'t connect to local MySQL server through socket \\'/var/lib/mysql/mysql.sock\\' (2 \"No such file or directory\")')"}

NO MORE HOSTS LEFT ********************************************************************************************************************
	to retry, use: --limit @/Users/fjousseaume/Projets/XxX/XxX/ansible/provision_mysql.retry

PLAY RECAP ****************************************************************************************************************************
db1.xXx.loc              : ok=12   changed=6    unreachable=0    failed=1
db2.xXx.loc              : ok=9    changed=3    unreachable=0    failed=1

socat has to be installed for mariabackup to work

I'm not sure about other platforms, but on RedHat, socat is required when using mariabackup (recommended, and the default in this role when using TLS certs).

Would you accept a PR installing socat on RedHat (and maybe other platforms, I'm not sure) if and only if mariabackup is the selected SST method for Galera?

Thanks.
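A hedged sketch of what such a conditional install might look like; the `galera_sst_method` variable name follows the role's naming but should be verified against the current defaults:

```yaml
# Sketch: install socat only when mariabackup is the selected SST method
- name: redhat | installing socat for mariabackup SST
  package:
    name: socat
    state: present
  when:
    - ansible_os_family == "RedHat"
    - galera_sst_method | default('') == "mariabackup"  # variable name is an assumption
```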

Limited options and not all of them are being really used

Hi,

Acording to the defaults file

mariadb_mysql_settings:
  collation_server: 'latin1_swedish_ci'
  character_set_client: 'latin1'
  datadir: /var/lib/mysql
  expire_logs_days: 10
  innodb_buffer_pool_size: 128M
  # ON|OFF
  innodb_doublewrite: 'ON'
  innodb_flush_log_at_timeout: 1
  innodb_read_io_threads: 4
  innodb_write_io_threads: 4
  join_buffer_size: 1M
  # Default is 16M
  key_buffer_size: '{{ (ansible_memtotal_mb | int * mariadb_mysql_mem_multiplier) | round | int }}M'
  max_allowed_packet: 16M
  max_binlog_size: 100M
  max_connections: 150
  max_heap_table_size: 16M
  query_cache_limit: 1M
  query_cache_size: 16M
  thread_cache_size: 8
  thread_stack: 192K
  tmp_table_size: 16M

you can make some changes, but unfortunately there is no one-size-fits-all, and some of the settings are not actually being used in

[mysqld]
basedir = /usr
datadir = {{ mariadb_mysql_settings['datadir'] }}
expire_logs_days = {{ mariadb_mysql_settings['expire_logs_days'] }}
key_buffer = {{ mariadb_mysql_settings['key_buffer_size'] }}
lc-messages-dir = /usr/share/mysql
log_error = /var/log/mysql/error.log
max_allowed_packet = {{ mariadb_mysql_settings['max_allowed_packet'] }}
max_binlog_size = {{ mariadb_mysql_settings['max_binlog_size'] }}
myisam-recover = BACKUP
pid-file = /var/run/mysqld/mysqld.pid
port = {{ mariadb_mysql_port }}
query_cache_limit = {{ mariadb_mysql_settings['query_cache_limit'] }}
query_cache_size = {{ mariadb_mysql_settings['query_cache_size'] }}
skip-external-locking
socket = /var/run/mysqld/mysqld.sock
thread_cache_size = {{ mariadb_mysql_settings['query_cache_size'] }}
thread_stack = {{ mariadb_mysql_settings['thread_stack'] }}
tmpdir = /tmp
user = mysql

@mrlesmithjr do you have any plans/thoughts on enhancing this?
[For example: I have transactions bigger than the default ib_logfile size of 48MB.]
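Until more knobs are exposed, settings like the redo-log size could perhaps be injected via `mariadb_config_overrides`; the exact structure of that variable is an assumption here and the values are purely illustrative:

```yaml
# Sketch (structure and values are assumptions — check the role's docs):
mariadb_config_overrides:
  - section: mysqld
    options:
      innodb_log_file_size: 256M  # illustrative; size to your largest transactions
```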

debian | Python3 installing pre-reqs

If you are using Python 3 for Ansible, the following prerequisites are required:
- mysql-client
- python3-dev
- libmysqlclient-dev
- python3-mysqldb

With the current role you can add them as a pre_task in the playbook that uses this role:

pre_tasks:
  - name: Installing ansible pre-reqs for mariadb
    apt:
      name: "{{ item }}"
      state: present
    with_items:
      - python3-dev
      - mysql-client
      - libmysqlclient-dev
      - python3-mysqldb

python3-dev covers the case where the remote machine does not have the Python development headers installed.

Various improvements

I would like to add some features to the role:

  • support for Centos8
  • update default mariadb_version "10.1" to the latest "10.5"
  • do some performance tuning to MySQL server conf by default (or suggest their usage in mariadb_config_overrides)
  • add option to configure systemd timeout to start MariaDB service - this is necessary when DB sync on startup takes longer than default systemd timeout and creates restart loop
  • add option to configure systems oom score to prevent killing service (what is very common if system has no swap, which is by the way bad practice for DB server)
  • add option to configure TLS encrypted WSREP and SST communication

I have already developed most of the mentioned features; they only need some refactoring and documentation. Would you accept such PRs?
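For the systemd timeout and OOM-score items, the drop-in I have in mind looks roughly like this, expressed as an Ansible task; paths, values, and the handler name are illustrative:

```yaml
# Sketch: systemd drop-in raising start timeout and lowering OOM score
- name: configuring systemd override for mariadb (sketch)
  copy:
    dest: /etc/systemd/system/mariadb.service.d/override.conf
    content: |
      [Service]
      TimeoutStartSec=900
      OOMScoreAdjust=-600
  notify: systemd daemon-reload  # handler name is an assumption
```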

vars error

I am sorry to ask this basic question, but I didn't find the inventory file. After some tries, I created my inventory outside the project and copied the playbook outside the project as well, and then got this error:
TASK [ansible-mariadb-galera-cluster : setup_cluster | configuring settings for mariadb and galera] *******************************************************************************************************
changed: [hello] => (item=etc/mysql/debian.cnf)
changed: [hardening] => (item=etc/mysql/debian.cnf)
changed: [hello] => (item=etc/mysql/my.cnf)
changed: [hardening] => (item=etc/mysql/my.cnf)
failed: [hello] (item=etc/mysql/conf.d/galera.cnf) => {"ansible_loop_var": "item", "changed": false, "item": "etc/mysql/conf.d/galera.cnf", "msg": "AnsibleUndefinedVariable: {% set _galera_cluster_nodes = [] %}{% for host in groups[ galera_cluster_nodes_group ] %}{{ _galera_cluster_nodes.append( host ) }}{% endfor %}{{ _galera_cluster_nodes }}: 'dict object' has no attribute 'galera-cluster-nodes'"}
failed: [hardening] (item=etc/mysql/conf.d/galera.cnf) => {"ansible_loop_var": "item", "changed": false, "item": "etc/mysql/conf.d/galera.cnf", "msg": "AnsibleUndefinedVariable: {% set _galera_cluster_nodes = [] %}{% for host in groups[ galera_cluster_nodes_group ] %}{{ _galera_cluster_nodes.append( host ) }}{% endfor %}{{ _galera_cluster_nodes }}: 'dict object' has no attribute 'galera-cluster-nodes'"}
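The error means the inventory has no group matching `galera_cluster_nodes_group` (the default appears to be "galera-cluster-nodes", judging by the message). A minimal YAML inventory that would satisfy it, with placeholder hostnames taken from the output above:

```yaml
# inventory.yml — hostnames are placeholders
all:
  children:
    galera-cluster-nodes:
      hosts:
        hello:
        hardening:
```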

Need to add delay between "setup_cluster | restarting node to apply config changes (other nodes)"

Is your feature request related to a problem? Please describe.
When a configuration change is applied to our Galera cluster, the Ansible task setup_cluster | restarting node to apply config changes (other nodes) runs too quickly from one node to the next, so the cluster is not in a good state for a dozen seconds.

As an example, the restart happened at 5h28m14s for the second node and at 5h28m24s for the third node.
10 seconds between them is too short in our case, as the second node is not yet Synced in the Galera cluster.

The throttle: 1 is working as expected, but we need to add a wait of X seconds (or even better, wait for the server to rejoin the cluster pool, plus X seconds).

Describe the solution you'd like
I'm thinking of moving this specific task into an included tasks file and adding a wait_for delay to it.

What is your point of view?
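A possible shape for such an included tasks file, waiting for the node to report Synced before the loop moves on; the wsrep check below is a sketch, not the role's current code:

```yaml
# restart_node.yml — included per host under throttle: 1 (untested sketch)
- name: restarting node to apply config changes
  service:
    name: mysql
    state: restarted

- name: waiting until the node reports Synced
  command: mysql -NBe "SHOW STATUS LIKE 'wsrep_local_state_comment'"
  register: wsrep_state
  until: "'Synced' in wsrep_state.stdout"
  retries: 30
  delay: 10
```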

Bootstrapping galera cluster on Debian

Hi, just wanted to let you know that I used your role to deploy a Galera cluster on 3 Debian Jessie nodes, but I had to make a small modification to the file setup_cluster.yml in order to get it working.

The following part contains a condition that only allows the execution of galera_new_cluster on Ubuntu nodes:

- name: setup_cluster | bootstrapping galera cluster
  command: "/usr/bin/galera_new_cluster"
  when: >
        not galera_cluster_configured.stat.exists and
        inventory_hostname == mysql_master_node and
        (ansible_distribution == "Ubuntu" and
        ansible_distribution_version >= '16.04')

So on my Debian Jessie servers this failed.
Maybe change this part to

- name: setup_cluster | bootstrapping galera cluster
  command: "/usr/bin/galera_new_cluster"
  when: >
        not galera_cluster_configured.stat.exists and
        inventory_hostname == mysql_master_node and
        ((ansible_distribution == "Ubuntu" and
        ansible_distribution_version >= '16.04') or
        ansible_distribution == "Debian")

which did it for me.
Cheers

Can't find any matching row in the user table

Describe the bug
With a new version of Ansible and community.mysql I started getting the following error:

failed: [services-mariadb-cluster-01.staging.example.com] (item=ip-172-21-1-53) => {"ansible_loop_var": "item", "changed": false, "item": "ip-172-21-1-53", "msg": "(1133, \"Can't find any matching row in the user table\")"}
ok: [services-mariadb-cluster-01.staging.example.com] => (item=127.0.0.1)
ok: [services-mariadb-cluster-01.staging.example.com] => (item=::1)
ok: [services-mariadb-cluster-01.staging.example.com] => (item=localhost)

To Reproduce
community.mysql 3.5.1

Expected behavior
Either ignore the error, or clean up the root accounts and keep only the localhost one.

Additional context
To get past my reconfiguration I added:

ignore_errors: true

but I'm not sure if we can keep this or should come up with a better solution.
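If cleanup turns out to be the better route, something along these lines might work: removing the stale hostname-based root entry before the loop instead of ignoring the failure. This is an untested sketch:

```yaml
# Sketch: drop the hostname-based root account that no longer matches
- name: removing stale hostname-based root account
  community.mysql.mysql_user:
    name: root
    host: "{{ ansible_hostname }}"
    state: absent
  ignore_errors: true  # the account may already be gone
```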
