An Ansible role to install/configure a MariaDB-Galera Cluster

Collections:
- community.mysql

License: MIT
Author: Larry Smith Jr.
There's no guide for customizing variables for our environment. For example, I got an error about the network interface, and then I realized that I should go to main.yml
to replace the interface name (eth0) with mine (ens160, the default on Ubuntu).
Could you document the variables that need to be replaced? Thanks!
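In case it helps others hitting the same error: rather than editing main.yml inside the role, the interface can be overridden from inventory. A sketch, assuming the variable names match your copy of the role's defaults/main.yml:

```yaml
# group_vars/galera.yml -- override role defaults instead of editing the role
galera_cluster_bind_interface: "ens160"
galera_cluster_bind_address: "{{ hostvars[inventory_hostname]['ansible_' + galera_cluster_bind_interface]['ipv4']['address'] }}"
```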
Hi!
I'm new to Ansible, and I'm trying to install a Galera cluster in my test environment. But I get this error:
Traceback (most recent call last):
File "/usr/local/Cellar/ansible/6.1.0/libexec/lib/python3.10/site-packages/ansible/template/__init__.py", line 1096, in do_template
res = self.environment.concat(rf)
File "/usr/local/Cellar/ansible/6.1.0/libexec/lib/python3.10/site-packages/ansible/template/native_helpers.py", line 70, in ansible_eval_concat
head = list(islice(nodes, 2))
File "<template>", line 16, in root
File "/usr/local/Cellar/ansible/6.1.0/libexec/lib/python3.10/site-packages/jinja2/runtime.py", line 852, in _fail_with_undefined_error
raise self._undefined_exception(self._undefined_message)
jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'galera-cluster-nodes'
I tried to find out what the problem was, but I can't find this variable anywhere.
Ansible version:
ansible [core 2.13.1]
config file = None
configured module search path = ['/Users/imagyar/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/6.1.0/libexec/lib/python3.10/site-packages/ansible
ansible collection location = /Users/imagyar/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.5 (main, Jun 23 2022, 17:15:25) [Clang 13.1.6 (clang-1316.0.21.2.5)]
jinja version = 3.1.2
libyaml = True
Thanks!
When running a playbook (on Ubuntu 18 LTS) that sets the python_interpreter to python3 before the mariadb-galera-cluster role, I get this error:
TASK [mariadb-galera-cluster : debian | adding debian-sys-maintenance permissions to mysql] *****************************************************************
System info:
Ansible 2.10.3; Darwin
---------------------------------------------------
The PyMySQL (Python 2.7 and Python 3.X) or MySQL-python (Python 2.X) module
is required.
fatal: [xxx]: FAILED! => {"changed": false}
When adding python3-pymysql to mariadb_pre_req_packages in vars/Debian.yml, it seems to work properly.
Solution found in this ansible-role-mysql issue.
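The fix above can also be applied without patching the role; a sketch, assuming the role reads mariadb_pre_req_packages as a plain list variable (verify against your copy of vars/Debian.yml):

```yaml
# group_vars/galera.yml -- hypothetical override; keep the packages the role
# already lists in vars/Debian.yml and append the Python 3 driver
mariadb_pre_req_packages:
  - python3-pymysql
```

Note that overriding the variable replaces the role's list entirely, so the existing entries from vars/Debian.yml need to be repeated here.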
Describe the bug
These files must be in sync.
Currently, different versions are on RHEL (and derivatives) and Ubuntu (and derivatives).
Expected behavior
The server should have the same settings regardless of the OS.
Additional context
See #115
Will work on this later.
After changing the previous cluster bootstrap process to use galera_new_cluster, this no longer works on Ubuntu 14.04, as galera_new_cluster requires systemd.
Hello,
I'm using this role to install on CentOS 8 Stream and it works pretty well, thanks!
The problem arises when I set mariadb_mysql_users
and do a clean install.
For example:
mariadb_mysql_users:
  - name: test
    password: test
    hosts:
      - "%"
The mysql_users task fails and complains about not being able to connect to the server. When I check the server I notice that the mariadb daemon is indeed not running yet.
If I run the playbook once before setting the variable then the user is added just fine.
What am I missing here?
Thanks in advance.
fatal: [galera02]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to start service mysql: Job for mariadb.service failed because the control process exited with error code. See \"systemctl status mariadb.service\" and \"journalctl -xe\" for details.\n"}
fatal: [galera03]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to start service mysql: Job for mariadb.service failed because the control process exited with error code. See \"systemctl status mariadb.service\" and \"journalctl -xe\" for details.\n"}
ansible-playbook 2.6.3
I get this when trying to run the playbook:
[WARNING]: Ignoring invalid attribute: serial
Describe the bug
If we set the mariadb_tls_files variable, these tasks will be executed:
setup_cluster | create TLS certificates directory
setup_cluster | copy TLS CA cert, server cert & private key
The when conditions on them are badly written, and we receive a DEPRECATION WARNING on them:
TASK [ansible-mariadb-galera-cluster : setup_cluster | create TLS certificates directory] ****************************************************************************************************
[DEPRECATION WARNING]: evaluating 'mariadb_tls_files' as a bare variable, this behaviour will go away and you might need to add |bool to the expression in the future. Also see
CONDITIONAL_BARE_VARS configuration toggle. This feature will be removed in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
TASK [ansible-mariadb-galera-cluster : setup_cluster | copy TLS CA cert, server cert & private key] ******************************************************************************************
[DEPRECATION WARNING]: evaluating 'mariadb_tls_files' as a bare variable, this behaviour will go away and you might need to add |bool to the expression in the future. Also see
CONDITIONAL_BARE_VARS configuration toggle. This feature will be removed in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
To Reproduce
Steps to reproduce the behaviour:
Set mariadb_tls_files:
mariadb_tls_files:
  ca_cert:
    name: "ca.pem"
    content: "{{ lookup('file', 'files/STAR..ca') }}"
  server_key:
    name: "server-key.pem"
    content: "{{ lookup('file', 'STAR.key') }}"
  server_cert:
    name: "server-cert.pem"
    content: "{{ lookup('file', 'files/STAR.crt') }}"
Expected behavior
A better when condition on these tasks.
Describe the bug
At the step "mrlesmithjr.mariadb_galera_cluster : debian | installing pre-reqs", the installation fails because it tries to install the apt package python-pymysql which doesn't exist on Ubuntu 22.04 LTS.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
The python-pymysql package should not be installed on Ubuntu 22.04 hosts.
I tried to set the options on my Galera cluster, but when it tries to start MySQL it says "Unrecognized option evs.delayed_margin" in error.log. Can anyone help me with this? My wsrep_provider_version is 3.24(rf216443).
wsrep_provider_options = "evs.auto_evict = 1; evs.delayed_keep_period = PT45S; evs.delayed_margin = PT5S"
Originally posted by @mrlesmithjr in #9 (comment)
The official Galera deployment guidelines make a few recommendations that I'd love to see implemented into this playbook.
Each datacenter should have a unique gmcast.segment ID so that the nodes know who is closest and who is in each DC (Helps with replication optimization). It would be great to be able to define this in the host or group inventory.
Use the server private IP address in ist.recv_bind. This is optional, but it allows SSTs to complete faster as long as you have at least 2 servers in each location. Again, if we could configure this on a per-host basis in the inventory, that would be great.
Some way to enable encryption between the nodes would be very desirable. Setting up SSL protection between the nodes is actually very simple in Galera (http://galeracluster.com/documentation-webpages/ssl.html) and it would be great if this was included as part of this playbook.
Some way to define wsrep_node_address in the host vars. This setting is necessary in some environments where Galera can't properly guess the correct IP address to use due to NAT, multiple network cards, multiple nodes on a host, and some other situations.
This is a very nice piece of work. If I find some extra time, I will try to submit pull requests to implement these changes.
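A sketch of how the per-host settings above could look in inventory, assuming the role grew variables with these hypothetical names:

```yaml
# host_vars/db1.dc1.example.com.yml -- all variable names here are hypothetical
galera_gmcast_segment: 1                  # unique ID per datacenter
galera_ist_recv_bind: "10.0.1.11"         # server private IP for faster ISTs/SSTs
galera_wsrep_node_address: "10.0.1.11"    # explicit address for NAT / multi-NIC hosts
```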
Deployment failed with this error:
fatal: [controller01]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was:
{{ galera_cluster_bind_address }}: {{ hostvars[inventory_hostname]['ansible_' + galera_cluster_bind_interface]['ipv4']['address'] }}: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'ansible_eth-conf'
\n\nThe error appears to be in '/home/prd/roles/ansible-mariadb-galera-cluster/tasks/setup_cluster.yml': line 45, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.
\n\nThe offending line appears to be:\n\n block:\n - name: WSREP ist.recv_\n ^ here\n"}
and I can't find which variable is still unconfigured. Could you help me?
To Reproduce
Steps to reproduce the behavior:
Consider using no_log: True to prevent outputting the root password when setting the root password
mrlesmithjr.mariadb-galera-cluster : mysql_root_pw | setting
skipping: [app1.domain.com] => (item={u'question': u'mysql-server/root_password', u'value': u'supersecurepasswordhere'})
Is your feature request related to a problem? Please describe.
Some InnoDB variables were deprecated or removed.
I need to compile a full list of deprecated variables.
Describe the solution you'd like
Remove the variables from the role.
Describe alternatives you've considered
Only use the variables for versions up to the one where they were deprecated... but I think that's cumbersome.
Additional context
I'll soon prepare pull requests but first I need to compile a full list of deprecated variables.
Describe the bug
Our Ansible CI was not able to download the role via Ansible Galaxy.
It was working until yesterday.
To Reproduce
Steps to reproduce the behavior:
---
roles:
  - name: mrlesmithjr.mariadb-galera-cluster
    src: mrlesmithjr.mariadb-galera-cluster
    version: v0.3.0
ansible-galaxy install -r requirements.yml
Gives the following Error:
Starting galaxy role install process
- downloading role 'mariadb-galera-cluster', owned by mrlesmithjr
[WARNING]: - mrlesmithjr.mariadb-galera-cluster was NOT installed successfully: - sorry, mrlesmithjr.mariadb-galera-cluster was not found on https://galaxy.ansible.com/api/.
ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.
Expected behavior
The role should install successfully.
Simply put - I need to configure auto eviction in our cluster, but as far as I can tell, it isn't possible to do this within this role. Because of the huge amount of options that can be added to wsrep_provider_options, I think it would be great to modify this role so that I, as an end user, can define 'extra_wsrep_provider_options' that are then tacked on to the end of the wsrep_provider_options line in the galera configuration file. For example:
extra_wsrep_provider_options:
  evs.auto_evict: 1
  evs.delayed_margin: 'PT5S'
  evs.delayed_keep_period: 'PT45S'
The above would append to the end of the wsrep_provider_options line as follows:
wsrep_provider_options = "....auto generated options; evs.auto_evict = 1; evs.delayed_margin = PT5S; evs.delayed_keep_period = PT45S"
By adopting a structure similar to this, it would allow us to very tightly control the configuration of Galera without needing the individual options to be added to this role first.
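A minimal sketch of the template side of this proposal, assuming a dict variable named extra_wsrep_provider_options and using a placeholder for the options the role already generates:

```jinja
{# templates/etc/mysql/conf.d/galera.cnf.j2 (sketch; generated_options is a placeholder) #}
wsrep_provider_options = "{{ generated_options }}{% for key, value in extra_wsrep_provider_options | default({}) | dictsort %}; {{ key }} = {{ value }}{% endfor %}"
```

Using dictsort keeps the rendered line stable across runs, so the config file isn't marked changed on every play.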
Describe the bug
When trying to create users with permissions on specific tables, the user creation task fails because the table hasn't been created yet.
In my testing, moving the user creation task to after the database creation task in main.yml fixes the issue.
To Reproduce
Steps to reproduce the behaviour:
- name: api
  hosts:
    - "%"
    - "127.0.0.1"
    - "::1"
    - "localhost"
  password: "{{ api_password }}"
  encrypted: no
  priv:
    "users.logins": "SELECT,INSERT,UPDATE,DELETE"
Expected behavior
The users and new tables should be created.
Describe the bug
After running the role, the nodes are not added to the cluster. No error messages are shown.
To Reproduce
Steps to reproduce the behavior:
Run the role on CentOS Stream 8 with three nodes on the same network (x86).
vars used:
galera_cluster_nodes_group: "database"
mariadb_version: "10.5"
galera_cluster_bind_interface: "eth1"
galera_cluster_name: "db-cluster"
galera_reconfigure_galera: true
mariadb_databases:
Expected behavior
The nodes should be added to the galera cluster
Screenshots
This is the output I get from this role.
It also skips a lot of tasks.
On all nodes after deployment it shows the following:
MariaDB [(none)]> SHOW GLOBAL STATUS LIKE 'wsrep_%';
+-------------------------------+----------------------+
| Variable_name | Value |
+-------------------------------+----------------------+
| wsrep_applier_thread_count | 0 |
| wsrep_cluster_capabilities | |
| wsrep_cluster_conf_id | 18446744073709551615 |
| wsrep_cluster_size | 0 |
| wsrep_cluster_state_uuid | |
| wsrep_cluster_status | Disconnected |
| wsrep_connected | OFF |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 18446744073709551615 |
| wsrep_provider_capabilities | |
| wsrep_provider_name | |
| wsrep_provider_vendor | |
| wsrep_provider_version | |
| wsrep_ready | OFF |
| wsrep_rollbacker_thread_count | 0 |
| wsrep_thread_count | 0 |
+-------------------------------+----------------------+
Additional context
I have tried enabling the reconfigure cluster variable. Produces the output shown above. The installs of mariadb and galera are NOT fresh. Fresh installs seem to work fine.
TASK [mrlesmithjr.mariadb-galera-cluster : redhat | installing pre-reqs] ***************************************************************************************************************************************
fatal: [mysql01.example.lan]: FAILED! => {"changed": false, "failures": ["MySQL-python No match for argument: MySQL-python"], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
fatal: [mysql02.example.lan]: FAILED! => {"changed": false, "failures": ["MySQL-python No match for argument: MySQL-python"], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
fatal: [mysql03.example.lan]: FAILED! => {"changed": false, "failures": ["MySQL-python No match for argument: MySQL-python"], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
MySQL-python should probably be python3-PyMySQL.
Need to fix the Apt Repo Key for Ubuntu 14.04. The key is different for Ubuntu 16.04 and 14.04
Describe the bug
Ubuntu 20.04 Molecule tests are failing and need to be figured out.
Expected behavior
Ubuntu 20.04 Molecule tests should work as expected.
TASK [ansible-mariadb-galera-cluster : setup_cluster | joining galera cluster] ***
skipping: [node1]
fatal: [node2]: FAILED! => {"msg": "The conditional check '_mariadb_galera_cluster_joined.status.ActiveState == \"active\"' failed. The error was: error while evaluating conditional (_mariadb_galera_cluster_joined.status.ActiveState == \"active\"): 'dict object' has no attribute 'status'"}
TASK [ansible-mariadb-galera-cluster : setup_cluster | sleep for 15 seconds to wait for node WSREP prepared state] ***
ok: [node1]
TASK [ansible-mariadb-galera-cluster : setup_cluster | configuring final galera config for first node] ***
ok: [node1] => (item=etc/mysql/debian.cnf)
ok: [node1] => (item=etc/mysql/my.cnf)
changed: [node1] => (item=etc/mysql/conf.d/galera.cnf)
TASK [ansible-mariadb-galera-cluster : setup_cluster | restarting first node with final galera config] ***
fatal: [node1]: FAILED! => {"changed": false, "msg": "Unable to restart service mysql: Job for mariadb.service failed because the control process exited with error code.\nSee \"systemctl status mariadb.service\" and \"journalctl -xe\" for details.\n"}
Need to update the Vagrant included Galera role to include recent changes.
Hi Larry,
First of all, I want to say thanks a lot for your time and work.
Could you fix the RHEL 8 repo issue?
The role does not find packages for Red Hat 8.
TASK [mrlesmithjr.mariadb-galera-cluster : redhat | adding mariadb repo] ***************************************************************************************************************************************
ok: [mysql03.example.lan]
ok: [mysql01.example.lan]
ok: [mysql02.example.lan]
TASK [mrlesmithjr.mariadb-galera-cluster : redhat | disable appstream mysql/mariadb modules] *******************************************************************************************************************
fatal: [mysql03.example.lan]: FAILED! => {"changed": false, "cmd": ["dnf", "-y", "module", "disable", "mysql", "mariadb"], "delta": "0:00:02.387188", "end": "2021-06-28 09:15:54.141276", "msg": "non-zero return code", "rc": 1, "start": "2021-06-28 09:15:51.754088", "stderr": "Errors during downloading metadata for repository 'mariadb':\n - Status code: 404 for https://yum.mariadb.org/10.5.11/redhat8-amd64/repodata/repomd.xml (IP: 142.4.219.197)\nError: Failed to download metadata for repo 'mariadb': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried", "stderr_lines": ["Errors during downloading metadata for repository 'mariadb':", " - Status code: 404 for https://yum.mariadb.org/10.5.11/redhat8-amd64/repodata/repomd.xml (IP: 142.4.219.197)", "Error: Failed to download metadata for repo 'mariadb': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried"], "stdout": "Updating Subscription Management repositories.\nMariaDB 312 B/s | 345 B 00:01 ", "stdout_lines": ["Updating Subscription Management repositories.", "MariaDB 312 B/s | 345 B 00:01 "]}
fatal: [mysql02.example.lan]: FAILED! => {"changed": false, "cmd": ["dnf", "-y", "module", "disable", "mysql", "mariadb"], "delta": "0:00:02.414772", "end": "2021-06-28 09:15:54.164538", "msg": "non-zero return code", "rc": 1, "start": "2021-06-28 09:15:51.749766", "stderr": "Errors during downloading metadata for repository 'mariadb':\n - Status code: 404 for https://yum.mariadb.org/10.5.11/redhat8-amd64/repodata/repomd.xml (IP: 142.4.219.197)\nError: Failed to download metadata for repo 'mariadb': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried", "stderr_lines": ["Errors during downloading metadata for repository 'mariadb':", " - Status code: 404 for https://yum.mariadb.org/10.5.11/redhat8-amd64/repodata/repomd.xml (IP: 142.4.219.197)", "Error: Failed to download metadata for repo 'mariadb': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried"], "stdout": "Updating Subscription Management repositories.\nMariaDB 299 B/s | 345 B 00:01 ", "stdout_lines": ["Updating Subscription Management repositories.", "MariaDB 299 B/s | 345 B 00:01 "]}
fatal: [mysql01.example.lan]: FAILED! => {"changed": false, "cmd": ["dnf", "-y", "module", "disable", "mysql", "mariadb"], "delta": "0:00:02.449515", "end": "2021-06-28 09:15:54.183524", "msg": "non-zero return code", "rc": 1, "start": "2021-06-28 09:15:51.734009", "stderr": "Errors during downloading metadata for repository 'mariadb':\n - Status code: 404 for https://yum.mariadb.org/10.5.11/redhat8-amd64/repodata/repomd.xml (IP: 142.4.217.28)\nError: Failed to download metadata for repo 'mariadb': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried", "stderr_lines": ["Errors during downloading metadata for repository 'mariadb':", " - Status code: 404 for https://yum.mariadb.org/10.5.11/redhat8-amd64/repodata/repomd.xml (IP: 142.4.217.28)", "Error: Failed to download metadata for repo 'mariadb': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried"], "stdout": "Updating Subscription Management repositories.\nMariaDB 287 B/s | 345 B 00:01 ", "stdout_lines": ["Updating Subscription Management repositories.", "MariaDB 287 B/s | 345 B 00:01 "]}
PLAY RECAP *****************************************************************************************************************************************************************************************************
mysql01.example.lan : ok=4 changed=0 unreachable=0 failed=1 skipped=9 rescued=0 ignored=0
mysql02.example.lan : ok=4 changed=0 unreachable=0 failed=1 skipped=9 rescued=0 ignored=0
mysql03.example.lan : ok=4 changed=0 unreachable=0 failed=1 skipped=9 rescued=0 ignored=0
This
https://yum.mariadb.org/10.5.11/redhat8-amd64/repodata/repomd.xml
should be
https://yum.mariadb.org/10.5.11/rhel8-amd64/
Thanks a lot.
Is your feature request related to a problem? Please describe.
Currently, with this code, there is no way to update the MySQL root password, as the role first modifies /root/.my.cnf.
Describe the solution you'd like
Because the role updates /root/.my.cnf before changing the password, changing the root password with this role is impossible: the role then fails to connect:
{"ansible_loop_var": "item", "changed": false, "item": "sql3", "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1045, u\"Access denied for user 'root'@'localhost' (using password: YES)\")"}
Describe alternatives you've considered
A very simple alternative is to add a tag (e.g. updating_root_passwords) to the tasks "configure_root_access | updating root passwords" and (optionally) "configure_root_access | updating root passwords (allow from anywhere)", so they can be run explicitly with --tags updating_root_passwords.
Additional context
A better way can also be implemented but will need more work
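The tagging alternative described above might look like this in the role's tasks (a sketch; the task name exists in the role, the tag name is an assumption):

```yaml
- name: configure_root_access | updating root passwords
  # ... existing module arguments unchanged ...
  tags:
    - updating_root_passwords
```

The password change could then be run explicitly with ansible-playbook site.yml --tags updating_root_passwords.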
In some stages of the installation process there is a wait_for set with delegate_to: localhost. This is mainly done in setup_cluster.yml, like here:
- name: setup_cluster | sleep for 15 seconds to wait for nodes to fully shut down
  wait_for:
    timeout: 15
  delegate_to: localhost
  when: >
    not galera_cluster_configured.stat.exists
This returns errors for me:
TASK [mariadb-galera-cluster : setup_cluster | stopping mysql to (re)configure cluster (other nodes)] *********************************************
skipping: [db1]
FAILED - RETRYING: setup_cluster | stopping mysql to (re)configure cluster (other nodes) (60 retries left).
ok: [db2]
TASK [mariadb-galera-cluster : setup_cluster | stopping mysql to (re)configure cluster (first node)] **********************************************
skipping: [db2]
FAILED - RETRYING: setup_cluster | stopping mysql to (re)configure cluster (first node) (60 retries left).
ok: [db1]
TASK [mariadb-galera-cluster : setup_cluster | sleep for 15 seconds to wait for nodes to fully shut down] *****************************************
System info:
Ansible 2.10.5; Darwin
---------------------------------------------------
MODULE FAILURE
See stdout/stderr for the exact error
sudo: a password is required
fatal: [db1]: FAILED! => {"changed": false, "module_stdout": "", "rc": 1}
---------------------------------------------------
MODULE FAILURE
See stdout/stderr for the exact error
sudo: a password is required
fatal: [db2]: FAILED! => {"changed": false, "module_stdout": "", "rc": 1}
I've managed to fix this by adding become: false to the delegate_to steps, like so:
  - name: setup_cluster | sleep for 15 seconds to wait for nodes to fully shut down
    wait_for:
      timeout: 15
    delegate_to: localhost
+   become: false
    when: >
      not galera_cluster_configured.stat.exists
ansible-mariadb-galera-cluster/tasks/setup_cluster.yml
Lines 86 to 93 in 7cf2f08
What do you think about adding serial: 1 here so Ansible will do this host by host instead of all nodes joining the cluster at once?
We should add the following as well:
host: "{{ galera_cluster_bind_address }}"
Thanks
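For what it's worth, serial is only honored at play level; on a task, Ansible ignores it (an earlier report here shows "[WARNING]: Ignoring invalid attribute: serial"). For host-by-host execution of a single task, throttle (available since Ansible 2.9) is the task-level mechanism; a sketch:

```yaml
- name: setup_cluster | joining galera cluster
  # ... existing module arguments unchanged ...
  throttle: 1   # run on one host at a time instead of all nodes joining at once
```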
ansible-mariadb-galera-cluster/defaults/main.yml
Lines 163 to 171 in 9b188d6
In the above because of the line:
plugin: "unix_socket"
the password is removed and the grant looks like:
GRANT RELOAD, PROCESS, LOCK TABLES, BINLOG MONITOR ON *.* TO `mariadb-sst`@`localhost` IDENTIFIED VIA unix_socket;
and the SST freezes.
If I remove the line the grant will look like:
GRANT RELOAD, PROCESS, LOCK TABLES, BINLOG MONITOR ON *.* TO `mariadb-sst`@`localhost` IDENTIFIED BY PASSWORD '*13D253680F898B77D9661FACDE29E71EA9A96DFF';
and all works well.
Is it ok to change the lines to:
#mariadb_auth_plugin: "unix_socket"
mariadb_sst_user:
  - name: "{{ mariadb_sst_username }}"
    hosts:
      - "localhost"
    plugin: "{{ mariadb_auth_plugin | default(omit) }}"
    password: "{{ mariadb_sst_password }}"
    priv: "*.*:RELOAD,PROCESS,LOCK TABLES,BINLOG MONITOR"
The problem is that the user must exist as a system user for unix_socket to work; otherwise it will not work with unix_socket and you need a password.
Thoughts?
Currently Ubuntu 16.04 does not install using the included repos nor setup the cluster correctly. Need to update the Debian repo to install latest 10.1 version which includes Galera now.
I am trying to upgrade an existing cluster using the latest version and am hitting a brick wall. When I run the playbook, I receive:
fatal: [app1.site.com]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'dict object' has no attribute u'ansible_enp3s0'\n\nThe error appears to have been in '/usr/local/etc/ansible/roles/mrlesmithjr.mariadb-galera-cluster/tasks/update_etc_hosts.yml': line 3, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# Needed for name resolution in the case of DNS issues\n- name: update_etc_hosts | Updating /etc/hosts For Name Resolution\n ^ here\n"}
I have defined the settings as follows:
galera_cluster_bind_interface: 'enp3s0'
galera_cluster_bind_address: "{{ hostvars[inventory_hostname]['ansible_' + galera_cluster_bind_interface]['ipv4']['address'] }}"
And I know that this is the correct path, because a
debug:
var: hostvars[inventory_hostname]['ansible_enp3s0']['ipv4']['address']
correctly prints the IP address for each of the nodes. Any ideas?
Hello,
Is it possible for you to create a tag so we can pin it in requirements.yml? Your role is very interesting, but currently we cannot force a given version...
Thank you !
Florent,
If you want to use a different data directory, e.g. when using an EBS volume to store data, /templates/etc/mysql/my.cnf.j2 should use a variable for datadir.
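A sketch of that change, assuming a hypothetical role variable named mariadb_datadir:

```jinja
{# templates/etc/mysql/my.cnf.j2 (sketch; mariadb_datadir is a hypothetical variable) #}
[mysqld]
datadir = {{ mariadb_datadir | default('/var/lib/mysql') }}
```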
Hey everybody. First of all, thank you for your work!
I'm using this role to create a Galera cluster with two Ubuntu nodes that already have MariaDB Server 10.6.7 installed.
The problem I'm facing is at the moment when the playbook tries to execute this task:
TASK [/root/.ansible/roles/ansible_galera_dos : debian | adding mariadb repo]
But before this task completes, it gives me this error:
From what I can understand, it is trying to run an update on my Ubuntu nodes, but when I run 'apt update' directly on the nodes, I find the following text at the bottom of the output:
I've been trying to fix this problem with Google's help, but nothing has worked so far.
I'm asking for your help to resolve this; maybe I'm doing something wrong with the playbook, but I really don't know.
Thank you!!
I looked into the tests and found that we only do syntax checks. I would like to add standard Molecule tests to cover the common test suite.
Describe the bug
When setting up a new cluster from scratch on CentOS 8 (probably all CentOS and all RedHat) the cluster fails if using mariabackup for sst.
To Reproduce
Starting a new cluster on CentOS 8
Error
failed: [cluster-machine-1 -> cluster-machine-1] (item=[{'name': 'mariadb-sst', 'plugin': 'unix_socket', 'password': '..., 'priv': '.:RELOAD,PROCESS,LOCK TABLES,BINLOG MONITOR'}, 'localhost']) => {"ansible_loop_var": "item", "changed": false, "item": [{"name": "mariadb-sst", "password": "...", "plugin": "unix_socket", "priv": ".:RELOAD,PROCESS,LOCK TABLES,BINLOG MONITOR"}, "localhost"], "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (2002, "Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)")"}
Additional context
Most probably this was introduced in b1c69f3#diff-3d0ff1709ca48add100327bb2a468e6c508fb92a159c64c4f99ad1df89d9bdde by moving the users task above the cluster setup. But on CentOS (and RedHat) the service is not started after install.
It is OK on Ubuntu 16.04 but not on CentOS 7:
Here is my Ansible output :
PLAY [Provision DB Servers] ***********************************************************************************************************
TASK [Gathering Facts] ****************************************************************************************************************
ok: [db1.xXx.loc]
ok: [db2.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : unconfigure_cluster | unconfiguring galera cluster (Debian)] *******************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : unconfigure_cluster | unconfiguring galera cluster (RedHat)] *******************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : unconfigure_cluster | unconfiguring galera cluster] ****************************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : unconfigure_cluster | restarting mysql when reconfiguring galera cluster] ******************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : mysql_root_pw | setting] *******************************************************************
skipping: [db2.xXx.loc] => (item=(censored due to no_log))
skipping: [db1.xXx.loc] => (item=(censored due to no_log))
skipping: [db1.xXx.loc] => (item=(censored due to no_log))
skipping: [db2.xXx.loc] => (item=(censored due to no_log))
TASK [mrlesmithjr.mariadb-galera-cluster : debian | update package list] **************************************************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : debian | installing pre-reqs] **************************************************************
skipping: [db1.xXx.loc] => (item=[])
skipping: [db2.xXx.loc] => (item=[])
TASK [mrlesmithjr.mariadb-galera-cluster : debian | Adding MariaDB Repo Keys] *********************************************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : debian | Adding MariaDB Repo Keys] *********************************************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : debian | Pinning MariaDB Repo] *************************************************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : debian | adding mariadb repo] **************************************************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : debian | installing mariadb-galera packages] ***********************************************
skipping: [db1.xXx.loc] => (item=[])
skipping: [db2.xXx.loc] => (item=[])
TASK [mrlesmithjr.mariadb-galera-cluster : debian | configuring root my.cnf] **********************************************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : debian | adding debian-sys-maintenance permissions to mysql] *******************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : mysql_root_pw | setting] *******************************************************************
skipping: [db1.xXx.loc] => (item=(censored due to no_log))
skipping: [db2.xXx.loc] => (item=(censored due to no_log))
skipping: [db1.xXx.loc] => (item=(censored due to no_log))
skipping: [db2.xXx.loc] => (item=(censored due to no_log))
TASK [mrlesmithjr.mariadb-galera-cluster : redhat | adding mariadb repo] **************************************************************
ok: [db2.xXx.loc]
ok: [db1.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : redhat | installing pre-reqs] **************************************************************
ok: [db2.xXx.loc] => (item=[u'MySQL-python', u'socat'])
ok: [db1.xXx.loc] => (item=[u'MySQL-python', u'socat'])
TASK [mrlesmithjr.mariadb-galera-cluster : redhat | installing mariadb mysql] *********************************************************
ok: [db1.xXx.loc] => (item=[u'MariaDB-server', u'galera'])
ok: [db2.xXx.loc] => (item=[u'MariaDB-server', u'galera'])
TASK [mrlesmithjr.mariadb-galera-cluster : redhat | installing mariadb mysql] *********************************************************
skipping: [db1.xXx.loc] => (item=[])
skipping: [db2.xXx.loc] => (item=[])
TASK [mrlesmithjr.mariadb-galera-cluster : redhat | ensuring mariadb mysql is enabled on boot and started] ****************************
ok: [db1.xXx.loc]
ok: [db2.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | checking if galera cluster setup] ******************************************
ok: [db1.xXx.loc]
ok: [db2.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | configuring db server and galera] ******************************************
skipping: [db1.xXx.loc] => (item=etc/mysql/debian.cnf)
skipping: [db1.xXx.loc] => (item=etc/mysql/my.cnf)
skipping: [db2.xXx.loc] => (item=etc/mysql/debian.cnf)
skipping: [db1.xXx.loc] => (item=etc/mysql/conf.d/galera.cnf)
skipping: [db2.xXx.loc] => (item=etc/mysql/my.cnf)
skipping: [db2.xXx.loc] => (item=etc/mysql/conf.d/galera.cnf)
TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | configuring db server and galera] ******************************************
changed: [db1.xXx.loc] => (item=etc/my.cnf.d/server.cnf)
changed: [db2.xXx.loc] => (item=etc/my.cnf.d/server.cnf)
TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | stopping mysql] ************************************************************
changed: [db1.xXx.loc]
changed: [db2.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | killing lingering mysql processes] *****************************************
fatal: [db2.xXx.loc]: FAILED! => {"changed": true, "cmd": ["pkill", "mysqld"], "delta": "0:00:00.019258", "end": "2017-09-07 08:38:21.491789", "failed": true, "rc": 1, "start": "2017-09-07 08:38:21.472531", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring
fatal: [db1.xXx.loc]: FAILED! => {"changed": true, "cmd": ["pkill", "mysqld"], "delta": "0:00:00.019891", "end": "2017-09-07 08:38:21.460670", "failed": true, "rc": 1, "start": "2017-09-07 08:38:21.440779", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring
TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | configuring temp galera config] ********************************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | configuring temp galera config] ********************************************
skipping: [db2.xXx.loc]
changed: [db1.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | bootstrapping galera cluster] **********************************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | bootstrapping galera cluster] **********************************************
skipping: [db1.xXx.loc]
skipping: [db2.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | joining galera cluster] ****************************************************
skipping: [db1.xXx.loc]
fatal: [db2.xXx.loc]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to start service mysql: Job for mariadb.service failed because the control process exited with error code. See \"systemctl status mariadb.service\" and \"journalctl -xe\" for details.\n"}
TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | configuring galera on mysql_master] ****************************************
skipping: [db1.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | configuring galera on mysql_master] ****************************************
changed: [db1.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | restarting galera on mysql_master] *****************************************
skipping: [db1.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | marking galera cluster as configured] **************************************
changed: [db1.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | Restarting MySQL (Master)] *************************************************
skipping: [db1.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | Waiting For MySQL (Master)] ************************************************
skipping: [db1.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | Restarting MySQL (Non-Master)] *********************************************
skipping: [db1.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : setup_cluster | Waiting For MySQL (Non-Master)] ********************************************
skipping: [db1.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : Create MySQL users] ************************************************************************
TASK [mrlesmithjr.mariadb-galera-cluster : galera_monitoring | configuring monitor script for galera] *********************************
skipping: [db1.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : galera_monitoring | restarting mysql on master] ********************************************
skipping: [db1.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : galera_monitoring | waiting for mysql to start on master] **********************************
skipping: [db1.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : galera_monitoring | restarting mysql on additional servers] ********************************
skipping: [db1.xXx.loc]
TASK [mrlesmithjr.mariadb-galera-cluster : configure_root_access | updating root passwords] *******************************************
failed: [db1.xXx.loc] (item=db1) => {"failed": true, "item": "db1", "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (2002, 'Can\\'t connect to local MySQL server through socket \\'/var/lib/mysql/mysql.sock\\' (2 \"No such file or directory\")')"}
failed: [db1.xXx.loc] (item=127.0.0.1) => {"failed": true, "item": "127.0.0.1", "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (2002, 'Can\\'t connect to local MySQL server through socket \\'/var/lib/mysql/mysql.sock\\' (2 \"No such file or directory\")')"}
failed: [db1.xXx.loc] (item=::1) => {"failed": true, "item": "::1", "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (2002, 'Can\\'t connect to local MySQL server through socket \\'/var/lib/mysql/mysql.sock\\' (2 \"No such file or directory\")')"}
failed: [db1.xXx.loc] (item=localhost) => {"failed": true, "item": "localhost", "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (2002, 'Can\\'t connect to local MySQL server through socket \\'/var/lib/mysql/mysql.sock\\' (2 \"No such file or directory\")')"}
NO MORE HOSTS LEFT ********************************************************************************************************************
to retry, use: --limit @/Users/fjousseaume/Projets/XxX/XxX/ansible/provision_mysql.retry
PLAY RECAP ****************************************************************************************************************************
db1.xXx.loc : ok=12 changed=6 unreachable=0 failed=1
db2.xXx.loc : ok=9 changed=3 unreachable=0 failed=1
I'm not sure about other platforms, but on RedHat, socat is required when using mariabackup (the recommended SST method, and the default in this role when using TLS certs).
Would you accept a PR that installs socat on RedHat (and maybe other platforms, I'm not sure) if and only if mariabackup is the selected SST method for Galera?
Thanks.
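A minimal sketch of what such a conditional install could look like. The variable name `galera_sst_method` below is an assumption for illustration, not necessarily the variable this role actually uses:

```yaml
# Hypothetical sketch: install socat only when mariabackup is the SST method.
# The variable name galera_sst_method is an assumption, not confirmed
# against this role's defaults.
- name: redhat | installing socat for mariabackup SST
  ansible.builtin.yum:
    name: socat
    state: present
  when:
    - ansible_os_family == "RedHat"
    - galera_sst_method == "mariabackup"
```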
Hi,
According to the defaults file
ansible-mariadb-galera-cluster/defaults/main.yml
Lines 105 to 127 in 9622a6d
you can make some changes, but unfortunately there is no one-size-fits-all configuration, and some of those variables are not actually used in
ansible-mariadb-galera-cluster/templates/etc/mysql/my.cnf.j2
Lines 11 to 30 in 9622a6d
@mrlesmithjr do you have any plans/thoughts on enhancing this?
[For example: I have transactions bigger than the default ib_logfile size of 48MB.]
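As a sketch of what per-environment tuning could look like if the role exposed these settings as variables — the variable names below are hypothetical, not defined by the role today:

```yaml
# group_vars/galera-cluster-nodes.yml -- hypothetical override sketch.
# These variable names are assumptions for illustration; the role would
# need to template them into my.cnf.j2 for this to take effect.
mariadb_innodb_log_file_size: "256M"  # larger than the 48MB default ib_logfile
mariadb_innodb_buffer_pool_size: "1G"
```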
Ansible Galaxy releases not being created when tags are created.
If you are using Python 3 for Ansible, the following prerequisites were required:
- mysql-client
- python3-dev
- libmysqlclient-dev
- python3-mysqldb
With the current role you can add them as pre_tasks in the playbook that uses this role:
pre_tasks:
  - name: Installing ansible pre-reqs for mariadb
    apt:
      name: "{{ item }}"
      state: present
    with_items:
      - python3-dev
      - mysql-client
      - libmysqlclient-dev
      - python3-mysqldb
python3-dev is listed in case the remote machine does not have Python installed.
Would you mind adding hacktoberfest topic to this repository? https://hacktoberfest.digitalocean.com/details#details
I would like to add some features to the role:
Most of the mentioned features I have already developed; they only need some refactoring and documentation. Would you accept such PRs?
Please could you push the latest version to Ansible Galaxy so we can benefit from the latest features and bugfixes?
Thanks.
I am sorry to ask this basic question, but I didn't find the inventory file.
After some attempts, I created my inventory outside the project and copied the playbook outside the project as well,
and then I got this error:
TASK [ansible-mariadb-galera-cluster : setup_cluster | configuring settings for mariadb and galera] *******************************************************************************************************
changed: [hello] => (item=etc/mysql/debian.cnf)
changed: [hardening] => (item=etc/mysql/debian.cnf)
changed: [hello] => (item=etc/mysql/my.cnf)
changed: [hardening] => (item=etc/mysql/my.cnf)
failed: [hello] (item=etc/mysql/conf.d/galera.cnf) => {"ansible_loop_var": "item", "changed": false, "item": "etc/mysql/conf.d/galera.cnf", "msg": "AnsibleUndefinedVariable: {% set _galera_cluster_nodes = [] %}{% for host in groups[ galera_cluster_nodes_group ] %}{{ _galera_cluster_nodes.append( host ) }}{% endfor %}{{ _galera_cluster_nodes }}: 'dict object' has no attribute 'galera-cluster-nodes'"}
failed: [hardening] (item=etc/mysql/conf.d/galera.cnf) => {"ansible_loop_var": "item", "changed": false, "item": "etc/mysql/conf.d/galera.cnf", "msg": "AnsibleUndefinedVariable: {% set _galera_cluster_nodes = [] %}{% for host in groups[ galera_cluster_nodes_group ] %}{{ _galera_cluster_nodes.append( host ) }}{% endfor %}{{ _galera_cluster_nodes }}: 'dict object' has no attribute 'galera-cluster-nodes'"}
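The template iterates over `groups[galera_cluster_nodes_group]`, and the error means the inventory has no group matching that name (`galera-cluster-nodes` in the message above). A minimal sketch of an inventory that defines it — the hostnames are placeholders:

```ini
; Sketch of an INI inventory defining the group the role expects.
; Hostnames are placeholders; alternatively, set galera_cluster_nodes_group
; to the name of a group that already exists in your inventory.
[galera-cluster-nodes]
db1.example.com
db2.example.com
db3.example.com
```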
Is your feature request related to a problem? Please describe.
When a configuration change is applied to our Galera Cluster, the ansible task setup_cluster | restarting node to apply config changes (other nodes)
restarts the remaining nodes too quickly one after another, so the cluster is not in a good state for a dozen seconds.
For example, the restart happens at 5h28m14s on the second node and at 5h28m24s on the third node.
The 10 seconds between them is too short in our case, since the second node is not yet Synced in the Galera cluster.
The throttle: 1
is working as expected, but we need to add a wait of X seconds (or, even better, wait for the server to rejoin the cluster pool, plus X seconds).
Describe the solution you'd like
I'm thinking of moving this specific task into an include_tasks file and adding a wait_for delay to it.
What is your point of view?
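A rough sketch of the idea. The restart task name mirrors the one mentioned above, but the wsrep check is illustrative — a common way to verify Galera sync state, not code taken from this role:

```yaml
# Illustrative sketch: restart one node at a time, then wait until the node
# reports Synced before the next host is restarted. The wsrep query is a
# standard Galera health check, not this role's actual implementation.
- name: setup_cluster | restarting node to apply config changes (other nodes)
  ansible.builtin.service:
    name: mariadb
    state: restarted
  throttle: 1

- name: setup_cluster | waiting for node to rejoin the cluster
  ansible.builtin.command: >
    mysql -N -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
  register: _wsrep_state
  until: "'Synced' in _wsrep_state.stdout"
  retries: 30
  delay: 10
  changed_when: false
  throttle: 1
```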
Hi, just wanted to let you know that I just used your role to deploy a Galera cluster on 3 Debian Jessie nodes, but I had to make a small modification to the file setup_cluster.yml in order to get this working.
The following part contains a condition that only allows the execution of galera_new_cluster on Ubuntu nodes:
- name: setup_cluster | bootstrapping galera cluster
  command: "/usr/bin/galera_new_cluster"
  when: >
    not galera_cluster_configured.stat.exists and
    inventory_hostname == mysql_master_node and
    (ansible_distribution == "Ubuntu" and
    ansible_distribution_version >= '16.04')
So on my Debian Jessie servers this failed.
Maybe change this part to
- name: setup_cluster | bootstrapping galera cluster
  command: "/usr/bin/galera_new_cluster"
  when: >
    not galera_cluster_configured.stat.exists and
    inventory_hostname == mysql_master_node and
    ((ansible_distribution == "Ubuntu" and
    ansible_distribution_version >= '16.04') or
    ansible_distribution == "Debian")
which did it for me.
Cheers
In order to clean up the code, I suggest removing support for Debian-like distributions with ansible_distribution_version <= '14.04'.
Describe the bug
With new version of ansible and community mysql I started getting the following error:
failed: [services-mariadb-cluster-01.staging.example.com] (item=ip-172-21-1-53) => {"ansible_loop_var": "item", "changed": false, "item": "ip-172-21-1-53", "msg": "(1133, \"Can't find any matching row in the user table\")"}
ok: [services-mariadb-cluster-01.staging.example.com] => (item=127.0.0.1)
ok: [services-mariadb-cluster-01.staging.example.com] => (item=::1)
ok: [services-mariadb-cluster-01.staging.example.com] => (item=localhost)
To Reproduce
community.mysql 3.5.1
Expected behavior
Either ignore the error, or clean up the root accounts and keep only the localhost one.
Additional context
To get past my reconfiguration I added:
ignore_errors: true
but I'm not sure if we can keep this or should come up with a better solution.
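As a possibly narrower alternative to a blanket ignore_errors, only the specific MySQL error could be tolerated via failed_when. This is a sketch: the task shown is illustrative, and the variable names (mariadb_mysql_root_password, mysql_root_hosts) are assumptions, not the role's actual names:

```yaml
# Illustrative sketch: tolerate only MySQL error 1133 ("Can't find any
# matching row in the user table") instead of ignoring all failures.
# Variable names here are placeholders, not taken from the role.
- name: configure_root_access | updating root passwords
  community.mysql.mysql_user:
    name: root
    host: "{{ item }}"
    password: "{{ mariadb_mysql_root_password }}"
  loop: "{{ mysql_root_hosts }}"
  register: _root_pw
  failed_when: >-
    _root_pw.failed | default(false) and
    '1133' not in (_root_pw.msg | default(''))
```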