
ansible-swarm-playbook's Introduction

Ansible Swarm Playbook

Playbook for creating/managing a Docker Swarm cluster (requires Docker >= 1.12).

Companion files to the following post: https://thisendout.com/2016/09/13/deploying-docker-swarm-with-ansible/

Assumptions

This playbook assumes a running Docker daemon on the hosts and that the following Ansible inventory groups have been populated: manager and worker.
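For example, a minimal INI inventory satisfying that assumption might look like this (hostnames and addresses are placeholders):

    [manager]
    manager-1 ansible_host=10.0.0.10
    manager-2 ansible_host=10.0.0.11
    manager-3 ansible_host=10.0.0.12

    [worker]
    worker-1 ansible_host=10.0.0.20
    worker-2 ansible_host=10.0.0.21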

Variables

You can override the swarm_iface variable in Ansible to determine the listening interface for your swarm hosts.
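For example, to make the swarm listen on eth1 (a stand-in for whatever interface your hosts actually use), either of these standard Ansible mechanisms works:

    # one-off on the command line
    ansible-playbook -i hosts swarm.yml -e swarm_iface=eth1

    # or per group in the inventory
    [manager:vars]
    swarm_iface=eth1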

swarm.yml vs. swarm-facts.yml

The swarm.yml playbook uses a shell statement to determine cluster membership, whereas the swarm-facts.yml playbook uses the docker_info_facts module to inject the output of docker info as facts and then checks cluster membership against those.
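For reference, the shell-based check amounts to the following (this is the same command that appears registered as swarm_status in the issue reports below):

    - name: determine swarm status
      shell: docker info | egrep '^Swarm: ' | cut -d ' ' -f2
      register: swarm_status

Membership decisions are then made by comparing swarm_status.stdout against 'active' or 'inactive'.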

ansible-swarm-playbook's People

Contributors

castaf, markacomm, nextrevision, ztbrown


ansible-swarm-playbook's Issues

New Systems are not added in Swarm

Hey,
This playbook only works for new setups.

If I try to add one more system to an existing setup through this script, the system is not added to the swarm.
Please look into this.

Thanks

Issue in ansible playbook swarm.yml

TASK [add initialized host to swarm_manager_operational group]
fatal: [manager-1]: FAILED! => {"msg": "The conditional check 'bootstrap_first_node | changed' failed. The error was: template error while templating string: no filter named 'changed'. String: {% if bootstrap_first_node | changed %} True {% else %} False {% endif %}\n\nThe error appears to be in '/home/***********/vagrant/ansible-swarm-playbook/swarm.yml': line 72, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: add initialized host to swarm_manager_operational group\n ^ here\n"}
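The changed filter was deprecated and later removed from Ansible; rewriting the conditional with the equivalent is changed test should resolve this (only the when line matters; the rest of the task stays as it is in swarm.yml):

    - name: add initialized host to swarm_manager_operational group
      # ... task body unchanged ...
      when: bootstrap_first_node is changed   # was: bootstrap_first_node | changed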

docker_info_facts doesn't support docker==2.1.0 python

Newer Ansible (2.3) uses the docker Python module (pip install docker) instead of docker-py.

The fix is almost trivial:

--- library/docker_info_facts.py.bak    2017-03-01 15:46:36.629676252 +0100
+++ library/docker_info_facts   2017-03-01 15:54:48.506619896 +0100
@@ -29,7 +29,7 @@
 """
 
 try:
-    from docker import Client
+    from docker import APIClient
 except:
     docker_lib_missing = True
 else:
@@ -38,8 +38,7 @@
 
 def _get_docker_info():
     try:
-        cli = Client()
-        return cli.info(), False
+        return APIClient().info(), False
     except Exception as e:
         return {}, e.message
 
@@ -57,7 +56,7 @@
     info, err = _get_docker_info()
 
     if err:
-        module.fail_json(msg=err.message)
+        module.fail_json(msg=err)
 
     module.exit_json(
         changed=True,

I think this will break docker-py, but it may be useful to others as a starting point.
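If keeping docker-py compatibility matters, a guarded import along these lines (an untested sketch) should let the module work with both libraries, since the legacy Client exposes the same info() call:

    try:
        # docker SDK for Python >= 2.0 ("pip install docker")
        from docker import APIClient
    except ImportError:
        # fall back to legacy docker-py, whose Client has the same info() API
        from docker import Client as APIClient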

add_host only iterates with_items for the first register

If the Docker swarm state is inactive, the word 'active' is still contained in the output ('inactive' matches a substring search for 'active').

I take it back. The issue we ran into is that the first node in our manager group already had an active swarm state.

Since the add_host task only runs for the first host in the play, all hosts that with_items iterated over were seen as active, even though they were actually inactive.
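One way around add_host's run-once behavior (a sketch, not what the playbook currently does) is group_by, which executes for every host in the play and therefore groups each node by its own registered status:

    - name: group nodes by swarm status
      group_by:
        key: swarm_{{ swarm_status.stdout }}

Each host then lands in swarm_active or swarm_inactive based on its own facts, and the exact-value comparison avoids the 'inactive'-contains-'active' substring trap mentioned above.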

Inventory File and Swarm-iface Variable

Hey, what syntax do you expect for the inventory file, and how do I override the swarm_iface variable?
Please give some examples with some random hosts.

thanks

Alpine Linux nodes need tweak to playbook

I needed to change line 95 to use ['ansible_default_ipv4'] to get it to work with Alpine Linux Docker nodes -- I am new to Ansible, though, so this might be nonsense or unhelpful, hence no PR. :)
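For anyone hitting the same thing, the tweak described above looks roughly like this (the exact expression on line 95 of the playbook may differ; the key point is falling back to the default-route fact, which is populated on these Alpine nodes even though ansible_eth0 carries no ipv4 entry):

    # before: assumes the per-interface fact carries an ipv4 entry
    "{{ hostvars[item]['ansible_' + swarm_iface]['ipv4']['address'] }}"
    # after: use the default-route address instead
    "{{ hostvars[item]['ansible_default_ipv4']['address'] }}"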

...thanks for the blog entry and script, though. I hate standing up Docker Swarm nodes manually; it's so tedious. :) This is a huge time-saver!

Here is the debug output for hostvars[item] for the offending item (my other 4 nodes look similar). My ansible_eth0 has no ipv4 entry. Let me know if you'd like more info. SSH keys are all throwaways.

{
u'dot114': {
'omit': '__omit_place_holder__8419bb6096ef07d475d8c6206cb5b66a0958b6a9',
u'ansible_product_serial': u'b957bbd1-43f9-e83f-62e9-c06ead27ac33',
u'ansible_form_factor': u'Other',
u'ansible_product_version': u'4.6.1-xs121563',
u'ansible_fips': False,
u'ansible_service_mgr': u'sysvinit',
u'ansible_memory_mb': {
u'real': {
u'total': 1998,
u'free': 1729,
u'used': 269
},
u'swap': {
u'cached': 0,
u'total': 2559,
u'used': 0,
u'free': 2559
},
u'nocache': {
u'used': 91,
u'free': 1907
}
},
u'ansible_user_dir': u'/root',
u'ansible_userspace_bits': u'64',
u'ansible_all_ipv4_addresses': [],
u'ansible_ssh_host_key_ecdsa_public': u'AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN5JgXiDtWX2iOTyfwvXW94wQXzM7mFh5M25SJory1S8uigErOsBQ7WtUHE4Y4j4K90iM5tHzh7OMR6L4nUxR3I=',
'playbook_dir': u'/root',
u'ansible_distribution_version': u'3.5.0',
u'ansible_domain': u'',
u'ansible_virtualization_type': u'xen',
u'ansible_ssh_host_key_ed25519_public': u'AAAAC3NzaC1lZDI1NTE5AAAAIAdi/Vq3jIHH/sx8/4kUCfkJX5TyauyWWpc7nSaCf05a',
u'ansible_devices': {
u'sr0': {
u'scheduler_mode': u'cfq',
u'rotational': u'1',
u'vendor': u'QEMU',
u'sectors': u'2097151',
u'sas_device_handle': None,
u'sas_address': None,
u'host': u'',
u'sectorsize': u'512',
u'removable': u'1',
u'support_discard': u'0',
u'model': u'QEMU DVD-ROM',
u'size': u'1024.00 MB',
u'holders': [],
u'partitions': {}
},
u'xvda': {
u'scheduler_mode': u'',
u'rotational': u'0',
u'vendor': None,
u'sectors': u'20971520',
u'sas_device_handle': None,
u'sas_address': None,
u'host': u'',
u'sectorsize': u'512',
u'removable': u'0',
u'support_discard': u'0',
u'model': None,
u'size': u'10.00 GB',
u'holders': [],
u'partitions': {
u'xvda1': {
u'sectorsize': 512,
u'uuid': None,
u'sectors': u'204800',
u'start': u'2048',
u'holders': [],
u'size': u'100.00 MB'
},
u'xvda2': {
u'sectorsize': 512,
u'uuid': None,
u'sectors': u'5242880',
u'start': u'206848',
u'holders': [],
u'size': u'2.50 GB'
},
u'xvda3': {
u'sectorsize': 512,
u'uuid': None,
u'sectors': u'15521792',
u'start': u'5449728',
u'holders': [],
u'size': u'7.40 GB'
}
}
}
},
u'ansible_processor_cores': 2,
u'ansible_virtualization_role': u'guest',
'inventory_hostname': u'dot114',
u'ansible_dns': {
u'nameservers': [u'8.8.4.4'],
u'search': [u'xeond.msxpert.com']
},
u'ansible_processor_vcpus': 2,
u'swarm_status': {
u'changed': True,
u'end': u'2017-01-19 21:35:41.156510',
u'stdout': u'inactive',
u'cmd': u"docker info | egrep '^Swarm: ' | cut -d ' ' -f2",
u'rc': 0,
u'start': u'2017-01-19 21:35:41.138510',
u'stderr': u'WARNING: No swap limit support',
u'delta': u'0:00:00.018000',
'stdout_lines': [u'inactive'],
u'warnings': []
},
u'ansible_docker0': {
u'macaddress': u'02:42:67:37:29:45',
u'features': {},
u'interfaces': [],
u'mtu': 1500,
u'active': False,
u'promisc': False,
u'stp': False,
u'device': u'docker0',
u'type': u'bridge',
u'id': u'8000.024267372945'
},
u'ansible_bios_version': u'4.6.1-xs121563',
u'ansible_processor': [u'GenuineIntel', u'Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz', u'GenuineIntel', u'Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz'],
u'ansible_date_time': {
u'weekday_number': u'4',
u'iso8601_basic_short': u'20170119T213540',
u'tz': u'UTC',
u'weeknumber': u'03',
u'hour': u'21',
u'time': u'21:35:40',
u'tz_offset': u'+0000',
u'month': u'01',
u'epoch': u'1484861740',
u'iso8601_micro': u'2017-01-19T21:35:40.912680Z',
u'weekday': u'Thursday',
u'iso8601_basic': u'20170119T213540912555',
u'year': u'2017',
u'date': u'2017-01-19',
u'iso8601': u'2017-01-19T21:35:40Z',
u'day': u'19',
u'minute': u'35',
u'second': u'40'
},
u'ansible_lo': {
u'features': {},
u'mtu': 65536,
u'active': True,
u'promisc': False,
u'device': u'lo',
u'type': u'loopback'
},
u'ansible_memtotal_mb': 1998,
u'ansible_architecture': u'x86_64',
u'ansible_default_ipv4': {
u'interface': u'eth0',
u'gateway': u'10.0.0.1',
u'address': u'10.0.0.114'
},
u'ansible_swapfree_mb': 2559,
u'ansible_default_ipv6': {},
u'ansible_distribution_release': u'NA',
u'ansible_system_vendor': u'Xen',
u'ansible_os_family': u'Alpine',
'group_names': [u'swarm_worker_bootstrap', u'worker'],
u'ansible_cmdline': {
u'nomodeset': True,
u'BOOT_IMAGE': u'vmlinuz-grsec',
u'modules': u'sd-mod,usb-storage,ext4',
u'quiet': True,
u'initrd': u'initramfs-grsec',
u'root': u'UUID=f090ebcf-17b1-4ff3-9525-7e1cff922be6'
},
u'ansible_system_capabilities': [u'cap_chown', u'cap_dac_override', u'cap_dac_read_search', u'cap_fowner', u'cap_fsetid', u'cap_kill', u'cap_setgid', u'cap_setuid', u'cap_setpcap', u'cap_linux_immutable', u'cap_net_bind_service', u'cap_net_broadcast', u'cap_net_admin', u'cap_net_raw', u'cap_ipc_lock', u'cap_ipc_owner', u'cap_sys_module', u'cap_sys_rawio', u'cap_sys_chroot', u'cap_sys_ptrace', u'cap_sys_pacct', u'cap_sys_admin', u'cap_sys_boot', u'cap_sys_nice', u'cap_sys_resource', u'cap_sys_time', u'cap_sys_tty_config', u'cap_mknod', u'cap_lease', u'cap_audit_write', u'cap_audit_control', u'cap_setfcap', u'cap_mac_override', u'cap_mac_admin', u'cap_syslog', u'cap_wake_alarm', u'cap_block_suspend', u'cap_audit_read+ep'],
u'ansible_user_gid': 0,
u'ansible_selinux': False,
'ansible_version': {
'major': 2,
'full': '2.2.0.0',
'string': '2.2.0.0',
'minor': 2,
'revision': 0
},
u'ansible_userspace_architecture': u'x86_64',
u'ansible_bios_date': u'02/29/2016',
u'ansible_product_uuid': u'B957BBD1-43F9-E83F-62E9-C06EAD27AC33',
'inventory_file': '/etc/ansible/hosts',
u'ansible_system': u'Linux',
u'ansible_pkg_mgr': u'apk',
u'ansible_memfree_mb': 1729,
u'ansible_user_shell': u'/bin/ash',
u'ansible_user_uid': 0,
u'ansible_user_id': u'root',
u'ansible_distribution': u'Alpine',
u'ansible_env': {
u'USERNAME': u'root',
u'SUDO_COMMAND': u'/bin/sh -c echo BECOME-SUCCESS-ubecwnppoflkthuxcufpxthjvtgbofpi; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1484861740.52-249378026853971/setup.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1484861740.52-249378026853971/" > /dev/null 2>&1',
u'SUDO_GID': u'0',
u'SHELL': u'/bin/ash',
u'SHLVL': u'1',
u'SUDO_UID': u'0',
u'TERM': u'xterm',
u'PATH': u'/bin:/usr/bin:/sbin:/usr/sbin',
u'PWD': u'/root',
u'LOGNAME': u'root',
u'USER': u'root',
u'HOME': u'/root',
u'MAIL': u'/var/mail/root',
u'SUDO_USER': u'root'
},
u'ansible_distribution_major_version': u'NA',
u'module_setup': True,
u'ansible_processor_count': 2,
u'ansible_hostname': u'dot114',
'inventory_hostname_short': u'dot114',
u'ansible_swaptotal_mb': 2559,
u'ansible_host': u'10.0.0.114',
'groups': {
u'swarm_manager_operational': [u'dot110'],
'ungrouped': [],
'all': [u'dot113', u'dot114', u'dot110', u'dot111', u'dot112'],
u'swarm_manager_bootstrap': [u'dot111', u'dot112'],
u'worker': [u'dot113', u'dot114'],
u'swarm_worker_bootstrap': [u'dot113', u'dot114'],
u'manager': [u'dot110', u'dot111', u'dot112']
},
u'ansible_all_ipv6_addresses': [],
u'ansible_interfaces': [u'lo', u'docker0', u'eth0'],
u'ansible_uptime_seconds': 5471,
u'ansible_machine': u'x86_64',
u'ansible_kernel': u'4.4.41-0-grsec',
u'ansible_gather_subset': [u'hardware', u'network', u'virtual'],
u'ansible_user_gecos': u'root',
'inventory_dir': '/etc/ansible',
u'ansible_system_capabilities_enforced': u'True',
u'ansible_python': {
u'executable': u'/usr/bin/python',
u'version': {
u'micro': 13,
u'major': 2,
u'releaselevel': u'final',
u'serial': 0,
u'minor': 7
},
u'type': u'CPython',
u'has_sslcontext': True,
u'version_info': [2, 7, 13, u'final', 0]
},
u'ansible_processor_threads_per_core': 1,
u'ansible_fqdn': u'dot114',
u'ansible_mounts': [{
u'uuid': u'N/A',
u'size_total': 7755128832,
u'mount': u'/',
u'size_available': 6831783936,
u'fstype': u'ext4',
u'device': u'/dev/xvda3',
u'options': u'rw,relatime,data=ordered'
}, {
u'uuid': u'N/A',
u'size_total': 97335296,
u'mount': u'/boot',
u'size_available': 73120768,
u'fstype': u'ext4',
u'device': u'/dev/xvda1',
u'options': u'rw,relatime,data=ordered'
}, {
u'uuid': u'N/A',
u'size_total': 7755128832,
u'mount': u'/var/lib/docker/overlay',
u'size_available': 6831783936,
u'fstype': u'ext4',
u'device': u'/dev/xvda3',
u'options': u'rw,relatime,data=ordered'
}],
u'ansible_eth0': {
u'macaddress': u'f6:85:3f:86:b8:22',
u'features': {},
u'pciid': u'vif-0',
u'module': u'xen_netfront',
u'mtu': 1500,
u'active': True,
u'promisc': False,
u'device': u'eth0',
u'type': u'ether'
},
u'ansible_python_version': u'2.7.13',
u'ansible_product_name': u'HVM domU',
'ansible_check_mode': False,
u'ansible_ssh_host_key_dsa_public': u'AAAAB3NzaC1kc3MAAACBAIYgZ4/botA1gwJbsM/q6GpNGNKLcChgcqlcfzPPp1PB4p/SAOV9ZphzuLC94AQvjnMLEk2NyCdhfuNL0zcqbAZkDCXMMx10LNiSLYD84aaHUHKeg4neMDAoXlUUJ28AFknVKV+ZgFlSxFnfbiWWuIbG6htU3kUK6klZSed2F1TPAAAAFQCMEmdCRmC5h88j+vmyE1wgqUf6uwAAAIBk4c9w4BYP71ohJdKBoJpmJfaja4GOKo0W5RItQXaSqil9JVYa7QYu15XMrDrRVmaZeThi6crp22rW8BKEP9ZAR12FVvxPZg/gbELGpzMG30Px25W5vXkOEj0KyeeHri3fIqmAHm8J9yk4bftT+BiKfAGOzVSXI21bs6V0anSPEAAAAIBZWVeS5cIgOppKt8Q2fQ1O1LNENJe+h+ZjCMldOvbjQQP2xZ5GUBwWrxHNJaxPTy2ZR//yah01GqteSnBDgB+b8v0A5K/MYxeNKIVFdfn/BAesggct8eHHS9vQciJQtfueKOrz/XHD6T71UpIfVOw7AAx0cPfTgkU53Ja99/Yc/A==',
u'ansible_ssh_host_key_rsa_public': u'AAAAB3NzaC1yc2EAAAADAQABAAABAQC3+LDm6VawKuhHslZoeln/GclEM8jAqDnQ1HK/4Y0Wud9jewM83b325jdN7t3mCUUm9JaHAq4RTqTDWBY151QwSnF8TvuFGaqGY9csqT2JdSNGlIG8bnF6h4kC2GTjp7n3UOD1gOtKfwFI9q8KQUjP1+vKJJvpLd62rhap20iyEXeBz6RDJb5AjVY4i8w3oJuA8v/mHtaa+oBvN2cO5h13QKdIuSSo3ISLNwLMVs+RBjo9YNbxkizc8tlZbunKVP9Opxp3W8ic2AKBmv+XCbS/WQhJLERKBwirn+SgFwxqScWI5z1xmt93/tWeFq6UG/U+3P8xzq0Cd6nKcmUVsl5L',
u'ansible_nodename': u'dot114'
}
}

This node is already part of a swarm error

Please close as a non-issue: the error turned out to be caused by port 2377 not being open on the host.

When running swarm.yml after the first run, I am getting:

TASK [join manager nodes to cluster] ***********************************************************************************
fatal: [QACSDCKRMGR2.domain.com]: FAILED! => {"changed": true, "cmd": "docker swarm join --advertise-addr=ens160:2377 --token=SWMTKN-1-67782acawcmcfbkn9wbbskn60qsbz9dbahpj6fie2q80v8382y-ekqipeu0w4qieycwx5nngmbmk 10.130.10.24:2377\n", "delta": "0:00:00.044771", "end": "2022-05-24 15:17:19.163574", "msg": "non-zero return code", "rc": 1, "start": "2022-05-24 15:17:19.118803", "stderr": "Error response from daemon: This node is already part of a swarm. Use \"docker swarm leave\" to leave this swarm and join another one.", "stderr_lines": ["Error response from daemon: This node is already part of a swarm. Use \"docker swarm leave\" to leave this swarm and join another one."], "stdout": "", "stdout_lines": []}
fatal: [QACSDCKRMGR3.domain.com]: FAILED! => {"changed": true, "cmd": "docker swarm join --advertise-addr=ens160:2377 --token=SWMTKN-1-67782acawcmcfbkn9wbbskn60qsbz9dbahpj6fie2q80v8382y-ekqipeu0w4qieycwx5nngmbmk 10.130.10.24:2377\n", "delta": "0:00:00.055135", "end": "2022-05-24 15:17:19.206045", "msg": "non-zero return code", "rc": 1, "start": "2022-05-24 15:17:19.150910", "stderr": "Error response from daemon: This node is already part of a swarm. Use \"docker swarm leave\" to leave this swarm and join another one.", "stderr_lines": ["Error response from daemon: This node is already part of a swarm. Use \"docker swarm leave\" to leave this swarm and join another one."], "stdout": "", "stdout_lines": []}

How do I check whether the node is already part of the swarm before it is added?
Thanks
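One approach (a sketch reusing the shell check this repo already employs for status; note the exact string comparison, since 'inactive' contains the substring 'active'):

    - name: determine swarm status
      shell: docker info | egrep '^Swarm: ' | cut -d ' ' -f2
      register: swarm_status
      changed_when: false

    - name: join manager nodes to cluster
      shell: "docker swarm join ..."   # join command as in swarm.yml
      when: swarm_status.stdout != 'active'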

error checking ControlAvailable value

Hi,
Using when: "{{ hostvars[item]['docker_info']['Swarm']['ControlAvailable'] }} == 'true'"
always evaluates to true, even if the node is not a manager.
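The likely culprit is the nested moustaches: when is already a raw Jinja2 expression, so wrapping it in {{ }} renders it to a plain string first, and a non-empty string is truthy. ControlAvailable is already a boolean in the docker info output, so the bare expression should be enough (a sketch):

    when: hostvars[item]['docker_info']['Swarm']['ControlAvailable']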
