ansible-collections / ansible-consul

:satellite: Ansible role for Hashicorp Consul clusters

Home Page: https://galaxy.ansible.com/ansible-community/consul/

License: BSD 2-Clause "Simplified" License

Topics: consul, ansible, service-discovery, ansible-role, hashicorp, hacktoberfest


Consul


This Ansible role installs Consul, including establishing a filesystem structure and server or client agent configuration with support for some common operational features.

It can also bootstrap a development or evaluation cluster of 3 server agents running in a Vagrant and VirtualBox based environment. See README_VAGRANT.md and the associated Vagrantfile for more details.

Role Philosophy

“Another flaw in the human character is that everybody wants to build and nobody wants to do maintenance.”
― Kurt Vonnegut, Hocus Pocus

Please note that the original design goal of this role was the initial installation and bootstrapping of a Consul server cluster environment; it does not currently concern itself (all that much) with ongoing maintenance of a cluster.

Many users have said that the Vagrant based environment makes getting a working local Consul server cluster up and running an easy process — so this role treats that experience as its primary reason for existing.

If you get some mileage from it in other ways, then all the better!

Role migration and installation

This role was originally developed by Brian Shumate and was known on Ansible Galaxy as brianshumate.consul. Brian asked to be relieved of the maintenance burden, so Bas Meijer transferred the role to ansible-collections, where a team of volunteers maintains it. To install this role into your project, create a file requirements.yml in the roles/ subdirectory of your project with this content:

---
- src: https://github.com/ansible-collections/ansible-consul.git
  name: ansible-consul
  scm: git
  version: master

This repo has tagged releases that you can use to pin the version.
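For example, to pin to a tagged release instead of master (the tag below is hypothetical; check the repository's releases for real tags):

---
- src: https://github.com/ansible-collections/ansible-consul.git
  name: ansible-consul
  scm: git
  version: v2.5.4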

Ansible Tower installs the role automatically. If you use the CLI to control Ansible, install it like this:

ansible-galaxy install -p roles -r roles/requirements.yml

Requirements

This role requires a FreeBSD, Debian, or Red Hat Enterprise Linux distribution or Windows Server 2012 R2.

The role might work with other OS distributions and versions, but is known to function well with the following software versions:

  • Consul: 1.8.7
  • Ansible: 2.8.2
  • Alma Linux: 8, 9
  • Alpine Linux: 3.8
  • CentOS: 7, 8
  • Debian: 9
  • FreeBSD: 11
  • Mac OS X: 10.15 (Catalina)
  • RHEL: 7, 8
  • Rocky Linux: 8
  • OracleLinux: 7, 8
  • Ubuntu: 16.04
  • Windows: Server 2012 R2

Note that for the "local" installation mode (the default), this role downloads only one instance of the Consul archive on the control host, unzips it, and installs the resulting binary on all desired Consul hosts.

This requires that unzip be available on the Ansible control host; the role will fail if it doesn't detect unzip in the PATH.

Collection requirements for this role are listed in the requirements.yml file. It is your responsibility to make sure that you install these collections to ensure that the role runs properly. Usually, this can be done with:

ansible-galaxy collection install -r requirements.yml

Caveats

This role does not fully support the limit option (ansible -l) for limiting hosts, as this breaks populating required host variables. If you do use the limit option with this role, you may encounter template errors like:

Undefined is not JSON serializable.

Role Variables

The role uses variables defined in these 3 places:

  • Hosts inventory file (see examples/vagrant_hosts for an example)
  • vars/*.yml (primarily OS/distribution-specific variables)
  • defaults/main.yml (everything else)

⚠️ NOTE: The role relies on the Consul servers being members of the inventory host group named by consul_group_name, and it will not function properly otherwise. Alternatively, the Consul servers can be placed in the default host group [consul_instances] in the inventory, as shown in the examples below.
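For example, a minimal play that points the role at a differently named server group (the group name my_consul_servers is illustrative):

- hosts: my_consul_servers
  vars:
    consul_group_name: my_consul_servers
  roles:
    - ansible-consul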

Many role variables can also take their values from environment variables; those are noted in the description where appropriate.

consul_version

  • Version to install
  • Set to latest for the latest available version of Consul
  • Default value: 1.8.7

consul_architecture_map

  • Dictionary translating ansible_architecture values to Go architecture naming conventions
  • Default value: dict

consul_architecture

  • System architecture as determined by {{ consul_architecture_map[ansible_architecture] }}
  • Default value (determined at runtime): amd64, arm, or arm64

consul_os

  • Operating system name in lowercase representation
  • Default value: {{ ansible_os_family | lower }}

consul_install_dependencies

  • Install Python and package dependencies required for the role to function.
  • Default value: true

consul_zip_url

  • Consul archive file download URL
  • Default value: https://releases.hashicorp.com/consul/{{ consul_version }}/consul_{{ consul_version }}_{{ consul_os }}_{{ consul_architecture }}.zip

consul_checksum_file_url

  • Package SHA256 summaries file URL
  • Default value: https://releases.hashicorp.com/consul/{{ consul_version }}/{{ consul_version }}_SHA256SUMS

consul_bin_path

  • Binary installation path
  • Default Linux value: /usr/local/bin
  • Default Windows value: C:\ProgramData\consul\bin

consul_config_path

  • Base configuration file path
  • Default Linux value: /etc/consul
  • Default Windows value: C:\ProgramData\consul\config

consul_configd_path

  • Additional configuration directory
  • Default Linux value: {{ consul_config_path }}/consul.d
  • Default Windows value: C:\ProgramData\consul\config.d

consul_data_path

  • Data directory path as defined in data_dir or -data-dir
  • Default Linux value: /opt/consul
  • Default Windows value: C:\ProgramData\consul\data

consul_configure_syslogd

  • Enable configuration of rsyslogd or syslog-ng on Linux. If disabled, Consul will still log to syslog if consul_syslog_enable is true, but the syslog daemon won't be configured to write Consul logs to their own logfile.
    • Override with CONSUL_CONFIGURE_SYSLOGD environment variable
  • Default Linux value: false

consul_log_path

  • If consul_syslog_enable is false
    • Log path as defined in log_file or -log-file
  • If consul_syslog_enable is true
    • Log path for use in the rsyslogd configuration on Linux. Ignored if consul_configure_syslogd is false.
  • Default Linux value: /var/log/consul
    • Override with CONSUL_LOG_PATH environment variable
  • Default Windows value: C:\ProgramData\consul\log

consul_log_file

  • If consul_syslog_enable is false
    • Log file as defined in log_file or -log-file
  • If consul_syslog_enable is true
    • Log file for use in the rsyslogd configuration on Linux. Ignored if consul_configure_syslogd is false.
  • Override with CONSUL_LOG_FILE environment variable
  • Default Linux value: consul.log

consul_log_rotate_bytes

  • Log rotate bytes as defined in log_rotate_bytes or -log-rotate-bytes
    • Override with CONSUL_LOG_ROTATE_BYTES environment variable
  • Ignored if consul_syslog_enable is true
  • Default value: 0

consul_log_rotate_duration

  • Log rotate duration as defined in log_rotate_duration or -log-rotate-duration
    • Override with CONSUL_LOG_ROTATE_DURATION environment variable
  • Ignored if consul_syslog_enable is true
  • Default value: 24h

consul_log_rotate_max_files

  • Log rotate max files as defined in log_rotate_max_files or -log-rotate-max-files
    • Override with CONSUL_LOG_ROTATE_MAX_FILES environment variable
  • Ignored if consul_syslog_enable is true
  • Default value: 0

consul_syslog_facility

  • Syslog facility as defined in syslog_facility
    • Override with CONSUL_SYSLOG_FACILITY environment variable
  • Default Linux value: local0

syslog_user

  • Owner of rsyslogd process on Linux. consul_log_path's ownership is set to this user on Linux. Ignored if consul_configure_syslogd is false.
    • Override with SYSLOG_USER environment variable
  • Default Linux value: syslog

syslog_group

  • Group of user running rsyslogd process on Linux. consul_log_path's group ownership is set to this group on Linux. Ignored if consul_configure_syslogd is false.
    • Override with SYSLOG_GROUP environment variable
  • Default value: adm

consul_run_path

  • Run path for process identifier (PID) file
  • Default Linux value: /run/consul
  • Default Windows value: C:\ProgramData\consul

consul_user

  • OS user
  • Default Linux value: consul
  • Default Windows value: LocalSystem

consul_manage_user

  • Whether to create the user defined by consul_user or not
  • Default value: true

consul_group

  • OS group
  • Default value: bin

consul_manage_group

  • Whether to create the group defined by consul_group or not
  • Default value: true

consul_group_name

  • Inventory group name
    • Override with CONSUL_GROUP_NAME environment variable
  • Default value: consul_instances

consul_retry_interval

  • Interval for reconnection attempts to LAN servers
  • Default value: 30s

consul_retry_interval_wan

  • Interval for reconnection attempts to WAN servers
  • Default value: 30s

consul_retry_join_skip_hosts

  • If true, the retry_join config value won't be populated with the role's default server hosts. The value can be initialized using consul_join.
  • Default value: false

consul_retry_max

  • Max reconnection attempts to LAN servers before failing (0 = infinite)
  • Default value: 0

consul_retry_max_wan

  • Max reconnection attempts to WAN servers before failing (0 = infinite)
  • Default value: 0

consul_join

  • List of LAN servers, not managed by this role, to join (IPv4, IPv6, or DNS addresses)
  • Default value: []

consul_join_wan

  • List of WAN servers, not managed by this role, to join (IPv4, IPv6, or DNS addresses)
  • Default value: []
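For example, to have agents join externally managed servers (the addresses below are placeholders):

consul_join:
  - 10.0.0.11
  - consul-ext.example.com
consul_join_wan:
  - 203.0.113.10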

consul_servers

It's typically not necessary to manually alter this list.

  • List of server nodes
  • Default value: List of all nodes in consul_group_name with consul_node_role set to server or bootstrap

consul_bootstrap_expect

  • Boolean that adds a bootstrap_expect value to the Consul servers' config file
  • Default value: false

consul_bootstrap_expect_value

  • Integer defining the minimum number of Consul servers that must join the cluster before a leader is elected.
  • Default value: Calculated at runtime based on the number of nodes

consul_gather_server_facts

This feature makes it possible to gather the consul_advertise_address(_wan) from servers that are currently not targeted by the playbook.

To make this possible, the delegate_facts option is used; note that this option has been problematic.

  • Gather facts from servers that are not currently targeted
  • Default value: false

consul_datacenter

  • Datacenter label
    • Override with CONSUL_DATACENTER environment variable
  • Default value: dc1

consul_domain

  • Consul domain name as defined in domain or -domain
    • Override with CONSUL_DOMAIN environment variable
  • Default value: consul

consul_alt_domain

  • Consul domain name as defined in alt_domain or -alt-domain
    • Override with CONSUL_ALT_DOMAIN environment variable
  • Default value: Empty string

consul_node_meta

  • Consul node meta data (key-value)
  • Supported in Consul version 0.7.3 or later
  • Default value: {}
  • Example:
consul_node_meta:
    node_type: "my-custom-type"
    node_meta1: "metadata1"
    node_meta2: "metadata2"

consul_log_level

  • Log level as defined in log_level or -log-level
    • Override with CONSUL_LOG_LEVEL environment variable
  • Default value: INFO

consul_syslog_enable

  • Log to syslog as defined in enable_syslog or -syslog
    • Override with CONSUL_SYSLOG_ENABLE environment variable
  • Default Linux value: false
  • Default Windows value: false

consul_iface

  • Consul network interface
    • Override with CONSUL_IFACE environment variable
  • Default value: {{ ansible_default_ipv4.interface }}

consul_bind_address

  • Bind address
    • Override with CONSUL_BIND_ADDRESS environment variable
  • Default value: the default IPv4 address, or the address of the interface configured by consul_iface

consul_advertise_address

  • LAN advertise address
  • Default value: consul_bind_address

consul_advertise_address_wan

  • WAN advertise address
  • Default value: consul_bind_address

consul_translate_wan_address

  • Prefer a node's configured WAN address when serving DNS
  • Default value: false

consul_advertise_addresses

  • Advanced advertise addresses settings
  • Individual addresses can be overwritten using the consul_advertise_addresses_* variables
  • Default value:
    consul_advertise_addresses:
      serf_lan: "{{ consul_advertise_addresses_serf_lan | default(consul_advertise_address+':'+consul_ports.serf_lan) }}"
      serf_wan: "{{ consul_advertise_addresses_serf_wan | default(consul_advertise_address_wan+':'+consul_ports.serf_wan) }}"
      rpc: "{{ consul_advertise_addresses_rpc | default(consul_bind_address+':'+consul_ports.server) }}"

consul_client_address

  • Client address
  • Default value: 127.0.0.1

consul_addresses

  • Advanced address settings
  • Individual addresses can be overwritten using the consul_addresses_* variables
  • Default value:
    consul_addresses:
      dns: "{{ consul_addresses_dns | default(consul_client_address, true) }}"
      http: "{{ consul_addresses_http | default(consul_client_address, true) }}"
      https: "{{ consul_addresses_https | default(consul_client_address, true) }}"
      rpc: "{{ consul_addresses_rpc | default(consul_client_address, true) }}"
      grpc: "{{ consul_addresses_grpc | default(consul_client_address, true) }}"
      grpc_tls: "{{ consul_addresses_grpc_tls | default(consul_client_address, true) }}"

consul_ports

  • The official documentation on the Ports Used
  • The ports mapping is a nested dict object that allows setting the bind ports for the following keys:
    • dns - The DNS server, -1 to disable. Default 8600.
    • http - The HTTP API, -1 to disable. Default 8500.
    • https - The HTTPS API, -1 to disable. Default -1 (disabled).
    • rpc - The CLI RPC endpoint. Default 8400. This is deprecated in Consul 0.8 and later.
    • grpc - The gRPC endpoint, -1 to disable. Default -1 (disabled).
    • grpc_tls - The gRPC TLS endpoint, -1 to disable. Default -1 (disabled). This is available in Consul 1.14.0 and later.
    • serf_lan - The Serf LAN port. Default 8301.
    • serf_wan - The Serf WAN port. Default 8302.
    • server - Server RPC address. Default 8300.

  • Default values:
  consul_ports:
    dns: "{{ consul_ports_dns | default('8600', true) }}"
    http: "{{ consul_ports_http | default('8500', true) }}"
    https: "{{ consul_ports_https | default('-1', true) }}"
    rpc: "{{ consul_ports_rpc | default('8400', true) }}"
    serf_lan: "{{ consul_ports_serf_lan | default('8301', true) }}"
    serf_wan: "{{ consul_ports_serf_wan | default('8302', true) }}"
    server: "{{ consul_ports_server | default('8300', true) }}"
    grpc: "{{ consul_ports_grpc | default('-1', true) }}"
    grpc_tls: "{{ consul_ports_grpc_tls | default('-1', true) }}"

Notice that the dict object has to use precisely the names stated in the documentation, and all ports must be specified. Overriding one or more ports can be done using the consul_ports_* variables.
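For example, to enable the Consul HTTPS API you can override just that port (8501 is a common convention for Consul's HTTPS port, not a role default):

consul_ports_https: "8501"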

consul_node_name

  • Define a custom node name (should not include dots). See node_name.
    • The default value on Consul is the hostname of the server.
  • Default value: ''

consul_recursors

  • List of upstream DNS servers. See recursors.
    • Override with CONSUL_RECURSORS environment variable
  • Default value: Empty list

consul_iptables_enable

  • Whether to enable iptables rules for DNS forwarding to Consul
    • Override with CONSUL_IPTABLES_ENABLE environment variable
  • Default value: false

consul_acl_policy

  • Add basic ACL config file
    • Override with CONSUL_ACL_POLICY environment variable
  • Default value: false

consul_acl_enable

  • Enable ACLs
    • Override with CONSUL_ACL_ENABLE environment variable
  • Default value: false

consul_acl_ttl

  • TTL for ACLs
    • Override with CONSUL_ACL_TTL environment variable
  • Default value: 30s

consul_acl_token_persistence

  • Define if tokens set using the API will be persisted to disk or not
    • Override with CONSUL_ACL_TOKEN_PERSISTENCE environment variable
  • Default value: true

consul_acl_datacenter

  • ACL authoritative datacenter name
    • Override with CONSUL_ACL_DATACENTER environment variable
  • Default value: {{ consul_datacenter }} (dc1)

consul_acl_down_policy

  • Default ACL down policy
    • Override with CONSUL_ACL_DOWN_POLICY environment variable
  • Default value: allow

consul_acl_token

  • Default ACL token, only set if provided
    • Override with CONSUL_ACL_TOKEN environment variable
  • Default value: ''

consul_acl_agent_token

  • Used for clients and servers to perform internal operations to the service catalog. See: acl_agent_token
    • Override with CONSUL_ACL_AGENT_TOKEN environment variable
  • Default value: ''

consul_acl_agent_master_token

  • A special access token that has agent ACL policy write privileges on each agent where it is configured
    • Override with CONSUL_ACL_AGENT_MASTER_TOKEN environment variable
  • Default value: ''

consul_acl_default_policy

  • Default ACL policy
    • Override with CONSUL_ACL_DEFAULT_POLICY environment variable
  • Default value: allow

consul_acl_master_token

  • ACL master token
    • Override with CONSUL_ACL_MASTER_TOKEN environment variable
  • Default value: a generated UUID

consul_acl_master_token_display

  • Display generated ACL Master Token
    • Override with CONSUL_ACL_MASTER_TOKEN_DISPLAY environment variable
  • Default value: false

consul_acl_replication_enable

  • Enable ACL replication without a token (makes it possible to set the token through the API)
    • Override with CONSUL_ACL_REPLICATION_TOKEN_ENABLE environment variable
  • Default value: ''

consul_acl_replication_token

  • ACL replication token
    • Override with CONSUL_ACL_REPLICATION_TOKEN environment variable
  • Default value: SN4K3OILSN4K3OILSN4K3OILSN4K3OIL

consul_tls_enable

  • Enable TLS
    • Override with CONSUL_TLS_ENABLE environment variable
  • Default value: false

consul_tls_copy_keys

  • Enables or disables the management of the TLS files
    • Disable it if you enable TLS (consul_tls_enable) but want to manage the TLS files on your own
  • Default value: true

consul_tls_dir

  • Target directory for TLS files
    • Override with CONSUL_TLS_DIR environment variable
  • Default value: /etc/consul/ssl

consul_tls_ca_crt

  • CA certificate filename
    • Override with CONSUL_TLS_CA_CRT environment variable
  • Default value: ca.crt

consul_tls_server_crt

  • Server certificate
    • Override with CONSUL_TLS_SERVER_CRT environment variable
  • Default value: server.crt

consul_tls_server_key

  • Server key
    • Override with CONSUL_TLS_SERVER_KEY environment variable
  • Default value: server.key

consul_tls_files_remote_src

  • Copy from remote source if TLS files are already on host
  • Default value: false

consul_encrypt_enable

  • Enable Gossip Encryption
  • Default value: true

consul_encrypt_verify_incoming

  • Verify incoming Gossip connections
  • Default value: true

consul_encrypt_verify_outgoing

  • Verify outgoing Gossip connections
  • Default value: true

consul_disable_keyring_file

  • If set, the keyring will not be persisted to a file. Any installed keys will be lost on shutdown, and only the given -encrypt key will be available on startup.
  • Default value: false

consul_raw_key

  • Set the encryption key; it should be the same across a cluster. If not present, the key will be generated on and retrieved from the bootstrap server.
  • Default value: ''

consul_tls_verify_incoming

  • Verify incoming connections
    • Override with CONSUL_TLS_VERIFY_INCOMING environment variable
  • Default value: false

consul_tls_verify_outgoing

  • Verify outgoing connections
    • Override with CONSUL_TLS_VERIFY_OUTGOING environment variable
  • Default value: true

consul_tls_verify_incoming_rpc

  • Verify incoming connections on RPC endpoints (client certificates)
    • Override with CONSUL_TLS_VERIFY_INCOMING_RPC environment variable
  • Default value: false

consul_tls_verify_incoming_https

  • Verify incoming connections on HTTPS endpoints (client certificates)
    • Override with CONSUL_TLS_VERIFY_INCOMING_HTTPS environment variable
  • Default value: false

consul_tls_verify_server_hostname

  • Verify server hostname
    • Override with CONSUL_TLS_VERIFY_SERVER_HOSTNAME environment variable
  • Default value: false

consul_tls_min_version

  • Minimum acceptable TLS version
    • Can be overridden with CONSUL_TLS_MIN_VERSION environment variable
    • For Consul versions < 1.12.0, use the legacy format ('tls12', 'tls13', ...)
  • Default value: TLSv1_2

consul_tls_cipher_suites

  • Comma-separated list of supported cipher suites
  • Default value: ''

consul_tls_prefer_server_cipher_suites

  • Prefer the server's cipher suite order over the client's
  • Default value: false

auto_encrypt

  • Enable auto_encrypt
  • Default value:
auto_encrypt:
  enabled: false
  • Example:
auto_encrypt:
  enabled: true
  dns_san: ["consul.com"]
  ip_san: ["127.0.0.1"]

consul_force_install

  • If true, always install Consul. Otherwise, Consul will only be installed if it is absent from the host or if the installed version differs from consul_version.
  • The role does not handle the orchestration of a rolling update of servers followed by client nodes
  • Default value: false

consul_install_remotely

  • Whether to download the files for installation directly on the remote hosts
  • This is the only option on Windows as WinRM is somewhat limited in this scope
  • Default value: false

consul_install_from_repo

  • Boolean, whether to install consul from repository as opposed to installing the binary directly.
  • Supported distros: Alma Linux, Amazon Linux, CentOS, Debian, Fedora, Ubuntu, Red Hat, Rocky.
  • Default value: false

consul_ui

  • Enable the consul ui?
  • Default value: true

consul_ui_legacy

  • Enable legacy consul ui mode
  • Default value: false

consul_disable_update_check

  • Disable the consul update check?
  • Default value: false

consul_enable_script_checks

  • Enable script based checks?
  • Default value: false
  • This is discouraged in favor of consul_enable_local_script_checks.

consul_enable_local_script_checks

  • Enable locally defined script checks?
  • Default value: false

consul_raft_protocol

  • Raft protocol to use.
  • Default value:
    • Consul versions <= 0.7.0: 1
    • Consul versions > 0.7.0: 3

consul_node_role

  • The Consul role of the node, one of: bootstrap, server, or client
  • Default value: client

One server should be designated as the bootstrap server, and the other servers will connect to this server. You can also specify client as the role, and Consul will be configured as a client agent instead of a server.

There are two methods to set up a cluster: the first is to explicitly choose the bootstrap server, the other is to let the servers elect a leader among themselves.

Here is an example of how the hosts inventory could be defined for a simple cluster of 3 servers, the first one being the designated bootstrap / leader:

[consul_instances]
consul1.consul consul_node_role=bootstrap
consul2.consul consul_node_role=server
consul3.consul consul_node_role=server
consul4.local consul_node_role=client

Or you can use the simpler method of letting them do their election process:

[consul_instances]
consul1.consul consul_node_role=server consul_bootstrap_expect=true
consul2.consul consul_node_role=server consul_bootstrap_expect=true
consul3.consul consul_node_role=server consul_bootstrap_expect=true
consul4.local consul_node_role=client

Note that this second form is the preferred one, because it is simpler.

consul_autopilot_enable

Autopilot is a set of new features added in Consul 0.8 to allow for automatic operator-friendly management of Consul servers. It includes cleanup of dead servers, monitoring the state of the Raft cluster, and stable server introduction.

https://www.consul.io/docs/guides/autopilot.html

  • Enable Autopilot config (will be written to the bootstrap node)
    • Override with CONSUL_AUTOPILOT_ENABLE environment variable
  • Default value: false

consul_autopilot_cleanup_dead_servers

Dead servers will periodically be cleaned up and removed from the Raft peer set, to prevent them from interfering with the quorum size and leader elections. This cleanup will also happen whenever a new server is successfully added to the cluster.

  • Enable dead server cleanup (will be written to the bootstrap node)
    • Override with CONSUL_AUTOPILOT_CLEANUP_DEAD_SERVERS environment variable
  • Default value: false

consul_autopilot_last_contact_threshold

Used in the serf health check to determine node health.

  • Sets the threshold for time since last contact
    • Override with CONSUL_AUTOPILOT_LAST_CONTACT_THRESHOLD environment variable
  • Default value: 200ms

consul_autopilot_max_trailing_logs

  • Used in the serf health check to set a max-number of log entries nodes can trail the leader
    • Override with CONSUL_AUTOPILOT_MAX_TRAILING_LOGS environment variable
  • Default value: 250

consul_autopilot_server_stabilization_time

  • Time to allow a new node to stabilize
    • Override with CONSUL_AUTOPILOT_SERVER_STABILIZATION_TIME environment variable
  • Default value: 10s
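A sketch of enabling Autopilot with dead server cleanup via group_vars (these can also be driven by the environment variables noted above; the threshold values shown are the documented defaults):

consul_autopilot_enable: true
consul_autopilot_cleanup_dead_servers: true
consul_autopilot_last_contact_threshold: "200ms"
consul_autopilot_max_trailing_logs: 250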

consul_autopilot_redundancy_zone_tag

Consul Enterprise Only (requires that CONSUL_ENTERPRISE is set to true)

  • Override with CONSUL_AUTOPILOT_REDUNDANCY_ZONE_TAG environment variable
  • Default value: az

consul_autopilot_disable_upgrade_migration

Consul Enterprise Only (requires that CONSUL_ENTERPRISE is set to true)

  • Override with CONSUL_AUTOPILOT_DISABLE_UPGRADE_MIGRATION environment variable
  • Default value: false

consul_autopilot_upgrade_version_tag

Consul Enterprise Only (requires that CONSUL_ENTERPRISE is set to true)

  • Override with CONSUL_AUTOPILOT_UPGRADE_VERSION_TAG environment variable
  • Default value: ''

consul_debug

  • Enables the generation of additional config files in the Consul config directory for debugging purposes
  • Default value: false

consul_config_template_path

  • If the default config template does not suit your needs, you can replace it with your own.
  • Default value: templates/config.json.j2.

consul_rolling_restart

  • Restarts Consul nodes one by one to avoid service interruption on an existing cluster (Unix platforms only).
  • Default value: false

consul_rolling_restart_delay_sec

  • Adds a delay between node restarts (Linux platforms only).
  • Default value: 5
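For example, to roll through the cluster with a ten-second pause between restarts:

consul_rolling_restart: true
consul_rolling_restart_delay_sec: 10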

Custom Configuration Section

Consul loads configuration from files and directories in lexical order, merging later files on top of previously parsed ones. You may set custom configuration via consul_config_custom, which will be expanded into a file named config_z_custom.json within your consul_config_path; by default this is loaded after all other configuration.

An example usage for enabling telemetry:

  vars:
    consul_config_custom:
      telemetry:
        dogstatsd_addr: "localhost:8125"
        dogstatsd_tags:
          - "security"
          - "compliance"
        disable_hostname: true

Consul Snapshot Agent

The Consul snapshot agent takes backup snapshots on a set interval and stores them. This requires Consul Enterprise.

consul_snapshot

  • Boolean; if true, set up and start the snapshot agent (Enterprise only)
  • Default value: false

consul_snapshot_storage

  • Location snapshots will be stored. NOTE: path must end in snaps
  • Default value: {{ consul_config_path }}/snaps

consul_snapshot_interval

  • Default value: 1h

consul_snapshot_retain

  • Number of snapshots to retain

OS and Distribution Variables

The Consul binary works on most Linux platforms and is not distribution-specific. However, some distributions require installation of specific OS packages with different package names.

consul_centos_pkg

  • Consul package filename
  • Default value: {{ consul_version }}_linux_amd64.zip

consul_centos_url

  • Consul package download URL
  • Default value: {{ consul_zip_url }}

consul_centos_sha256

  • Consul download SHA256 summary
  • Default value: SHA256 summary

consul_centos_os_packages

  • List of OS packages to install
  • Default value: list

consul_debian_pkg

  • Consul package filename
  • Default value: {{ consul_version }}_linux_amd64.zip

consul_debian_url

  • Consul package download URL
  • Default value: {{ consul_zip_url }}

consul_debian_sha256

  • Consul download SHA256 summary
  • Default value: SHA256 SUM

consul_debian_os_packages

  • List of OS packages to install
  • Default value: list

consul_redhat_pkg

  • Consul package filename
  • Default value: {{ consul_version }}_linux_amd64.zip

consul_redhat_url

  • Consul package download URL
  • Default value: {{ consul_zip_url }}

consul_redhat_sha256

  • Consul download SHA256 summary
  • Default value: SHA256 summary

consul_redhat_os_packages

  • List of OS packages to install
  • Default value: list

consul_systemd_restart_sec

  • Integer value for systemd unit RestartSec option
  • Default value: 42

consul_systemd_limit_nofile

  • Integer value for systemd unit LimitNOFILE option
  • Default value: 65536

consul_systemd_restart

  • String value for systemd unit Restart option
  • Default value: on-failure

consul_ubuntu_pkg

  • Consul package filename
  • Default value: {{ consul_version }}_linux_amd64.zip

consul_ubuntu_url

  • Consul package download URL
  • Default value: {{ consul_zip_url }}

consul_ubuntu_sha256

  • Consul download SHA256 summary
  • Default value: SHA256 summary

consul_ubuntu_os_packages

  • List of OS packages to install
  • Default value: list

consul_windows_pkg

  • Consul package filename
  • Default value: {{ consul_version }}_windows_amd64.zip

consul_windows_url

  • Consul package download URL
  • Default value: {{ consul_zip_url }}

consul_windows_sha256

  • Consul download SHA256 summary
  • Default value: SHA256 summary

consul_windows_os_packages

  • List of OS packages to install
  • Default value: list

consul_performance

  • List of Consul performance tuning items
  • Default value: list

raft_multiplier

  • An integer multiplier used by Consul servers to scale key Raft timing parameters
leave_drain_time

  • Node leave drain time is the dwell time for a server to honor requests while gracefully leaving
  • Default value: 5s

rpc_hold_timeout

  • RPC hold timeout is the duration that a client or server will retry internal RPC requests during leader elections
  • Default value: 7s
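A sketch of tuning these together (assuming consul_performance is rendered verbatim into the agent's performance stanza; the values are illustrative):

consul_performance:
  raft_multiplier: 1
  leave_drain_time: "5s"
  rpc_hold_timeout: "7s"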

leave_on_terminate

  • If enabled, when the agent receives a TERM signal it will send a Leave message to the rest of the cluster and gracefully leave. On agents in client mode this defaults to true; for agents in server mode, it defaults to false.

consul_limits

  • Consul node limits (key-value)
  • Supported in Consul version 0.9.3 or later
  • Default value: {}
  • Example:
consul_limits:
    http_max_conns_per_client: 250
    rpc_max_conns_per_client: 150

Dependencies

Ansible requires GNU tar, and this role performs some local use of the unarchive module for efficiency, so ensure that your system has gtar and unzip installed and in the PATH. If you don't, this role will install unzip on the remote machines to unarchive the ZIP files.

If you're on a system with a different (i.e. BSD) tar, such as macOS, and you see odd errors during unarchive tasks, you could be missing gtar.

Installing Ansible on Windows requires the PowerShell Community Extensions. These are already installed on Windows Server 2012 R2 and onward. If you're attempting this role on Windows Server 2008 or earlier, you'll need to install the extensions first.

Example Playbook

Basic installation is possible using the included site.yml playbook:

ansible-playbook -i hosts site.yml

You can also pass variables in using the --extra-vars option to the ansible-playbook command:

ansible-playbook -i hosts site.yml --extra-vars "consul_datacenter=maui"

Be aware that for clustering, the included site.yml does the following:

  1. Executes consul role (installs Consul and bootstraps cluster)
  2. Reconfigures bootstrap node to run without bootstrap-expect setting
  3. Restarts bootstrap node

ACL Support

Basic support for ACLs is included in the role. You can set the environment variable CONSUL_ACL_ENABLE to true, and also set the CONSUL_ACL_DATACENTER environment variable to the correct value for your environment, prior to executing your playbook; for example:

CONSUL_ACL_ENABLE=true CONSUL_ACL_DATACENTER=maui \
CONSUL_ACL_MASTER_TOKEN_DISPLAY=true ansible-playbook -i uat_hosts aloha.yml

If you want the automatically generated ACL Master Token value emitted to standard out during the play, set the environment variable CONSUL_ACL_MASTER_TOKEN_DISPLAY to true as in the above example.

If you want to use existing tokens, set the environment variables CONSUL_ACL_MASTER_TOKEN and CONSUL_ACL_REPLICATION_TOKEN as well, for example:

CONSUL_ACL_ENABLE=true CONSUL_ACL_DATACENTER=stjohn \
CONSUL_ACL_MASTER_TOKEN=0815C55B-3AD2-4C1B-BE9B-715CAAE3A4B2 \
CONSUL_ACL_REPLICATION_TOKEN=C609E56E-DD0B-4B99-A0AD-B079252354A0 \
CONSUL_ACL_MASTER_TOKEN_DISPLAY=true ansible-playbook -i uat_hosts sail.yml

There are a number of Ansible ACL variables you can override to further refine your initial ACL setup. They are not all currently picked up from environment variables, but do have some sensible defaults.

Check defaults/main.yml to see how some of the defaults (i.e. tokens) are automatically generated.

Dnsmasq DNS Forwarding Support

The role now includes support for DNS forwarding with Dnsmasq.

Enable like this:

ansible-playbook -i hosts site.yml --extra-vars "consul_dnsmasq_enable=true"

Then, you can query any of the agents directly via DNS on port 53, for example:

dig @consul1.consul consul3.node.consul

; <<>> DiG 9.8.3-P1 <<>> @consul1.consul consul3.node.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29196
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;consul3.node.consul.   IN  A

;; ANSWER SECTION:
consul3.node.consul.  0 IN  A 10.1.42.230

;; Query time: 42 msec
;; SERVER: 10.1.42.210#53(10.1.42.210)
;; WHEN: Sun Aug  7 18:06:32 2016
;;

consul_delegate_datacenter_dns

  • Whether to delegate Consul datacenter DNS domain to Consul
  • Default value: false

consul_dnsmasq_enable

  • Whether to install and configure DNS API forwarding on port 53 using DNSMasq
    • Override with CONSUL_DNSMASQ_ENABLE environment variable
  • Default value: false

consul_dnsmasq_bind_interfaces

  • Setting this option to true prevents dnsmasq from binding to 0.0.0.0 by default, and instead instructs it to bind to the specific network interfaces that correspond to the consul_dnsmasq_listen_addresses option
  • Default value: false

consul_dnsmasq_consul_address

  • Address used by dnsmasq to query Consul
  • Default value: consul_addresses.dns
  • Defaults to 127.0.0.1 if Consul's DNS is bound to all interfaces (e.g. 0.0.0.0)

consul_dnsmasq_cache

  • dnsmasq cache-size
  • If smaller than 0, the default dnsmasq setting will be used.
  • Default value: -1

consul_dnsmasq_servers

  • Upstream DNS servers used by dnsmasq
  • Default value: 8.8.8.8 and 8.8.4.4

consul_dnsmasq_revservers

  • Reverse lookup subnets
  • Default value: []

consul_dnsmasq_no_poll

  • Do not poll /etc/resolv.conf
  • Default value: false

consul_dnsmasq_no_resolv

  • Ignore /etc/resolv.conf file
  • Default value: false

consul_dnsmasq_local_service

  • Only allow requests from local subnets
  • Default value: false

consul_dnsmasq_listen_addresses

  • Custom list of addresses to listen on.
  • Default value: []
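Putting a few of these together, a group_vars sketch (the upstream resolver addresses are placeholders):

consul_dnsmasq_enable: true
consul_dnsmasq_servers:
  - 1.1.1.1
  - 9.9.9.9
consul_dnsmasq_no_poll: true
consul_dnsmasq_listen_addresses:
  - 127.0.0.1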

consul_connect_enabled

  • Enable Consul Connect feature
  • Default value: false

consul_cleanup_ignore_files

  • List of files to ignore during cleanup steps
  • Default value: [{{ consul_configd_path }}/consul.env]

iptables DNS Forwarding Support

This role can also use iptables instead of Dnsmasq for forwarding DNS queries to Consul. You can enable it like this:

ansible-playbook -i hosts site.yml --extra-vars "consul_iptables_enable=true"

Note that iptables forwarding and dnsmasq forwarding cannot be used simultaneously; the role will stop with an error if such a configuration is specified.

TLS Support

You can enable TLS encryption by dropping a CA certificate, server certificate, and server key into the role's files directory.

By default these are named:

  • ca.crt (can be overridden by {{ consul_tls_ca_crt }})
  • server.crt (can be overridden by {{ consul_tls_server_crt }})
  • server.key (can be overridden by {{ consul_tls_server_key }})

Then either set the environment variable CONSUL_TLS_ENABLE=true or use the Ansible variable consul_tls_enable=true at role runtime.
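For example, via group_vars instead of environment variables (the filenames are illustrative and should match whatever you dropped into files/):

consul_tls_enable: true
consul_tls_ca_crt: my-ca.crt
consul_tls_server_crt: my-server.crt
consul_tls_server_key: my-server.key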

Service management Support

You can create a configuration file for Consul services. Add a list of service objects in consul_services.

| name | Required | Type | Default | Comment |
|------|----------|------|---------|---------|
| consul_services | False | List | [] | List of service objects (see below) |

Services object:

| name | Required | Type | Default | Comment |
|------|----------|------|---------|---------|
| name | True | string | | Name of the service |
| id | False | string | | ID of the service |
| tags | False | list | | List of string tags |
| address | False | string | | Service-specific IP address |
| meta | False | dict | | Dict of up to 64 key/value pairs with string semantics |
| port | False | int | | Port of the service |
| enable_tag_override | False | bool | | Enable/disable the anti-entropy feature for the service |
| kind | False | string | | Identifies the service as a Connect proxy instance |
| proxy | False | dict | | Proxy configuration |
| checks | False | list | | List of check configurations |
| connect | False | dict | | Connect object configuration |
| weights | False | dict | | Weight of the service in DNS SRV responses |
| token | False | string | | ACL token used to register this service |

Configuration example:

consul_services:
  - name: "openshift"
    tags: ['production']
  - name: "redis"
    id: "redis"
    tags: ['primary']
    address: ""
    meta:
      meta: "for my service"
    proxy:
      destination_service_name: "redis"
      destination_service_id: "redis1"
      local_service_address: "127.0.0.1"
      local_service_port: 9090
      config: {}
      upstreams:  []
    checks:
      - args: ["/home/consul/check.sh"]
        interval: "10s"

Then you can check that the service has been added to the catalog:

> consul catalog services
consul
openshift
redis

Note: to delete a service that has been added from this role, remove it from the consul_services list and apply the role again.

Vagrant and VirtualBox

See examples/README_VAGRANT.md for details on quick Vagrant deployments under VirtualBox for development, evaluation, testing, etc.

License

BSD

Author Information

Brian Shumate

Contributors

Special thanks to the folks listed in CONTRIBUTORS.md for their contributions to this project.

Contributions are welcome, provided that you can agree to the terms outlined in CONTRIBUTING.md.


ansible-consul's Issues

Key not read by servers that need it

Hi Brian,

Thanks for your work on this role!

When adding a server after initial setup, the key is not correctly read by the server.

I think it's a wrong check:
https://github.com/brianshumate/ansible-consul/blob/master/tasks/main.yml#L125

- name: Read key for servers that require it
  set_fact:
    consul_raw_key: "{{ lookup('file', '/tmp/consul_raw.key') }}"
  when: consul_raw_key is not defined and bootstrap_marker.stat.exists

Servers that require it don't have the bootstrap_marker file. So I think the check should be ... and not bootstrap_marker.stat.exists.

Happy to provide a PR if you can confirm.

Is there a way to use `consul keygen` & `encrypt`?

Hi, Brian;

I love this role and I've used it quite a bit both for learning and in production. I'm in the process of switching over to TLS but for the short term is there a way to use the encrypt value in the config?

My current config looks like this:

{
    "bootstrap": true,
    "server": true,
    "datacenter": "stuff",
    "data_dir": "/var/consul",
    "encrypt": "QyYc0lVJXcNl3idBPREjIYww==",
    "log_level": "INFO",
    "statsd_addr": "127.0.0.1:8125",
    "enable_syslog": true,
    "start_join": ["consul2.server.prod", "consul3.server.prod"]
}

Thanks in advance!

Condition "Fail if more then one bootstrap server is defined" fails

Env

My inventory:

[consulservers]
80.some.ip.1 consul_node_role=bootstrap
80.some.ip.2 consul_node_role=server

My group_vars:

consul_version: '0.8.4'
consul_ui: yes

consul_bootstrap_expect: yes
consul_dnsmasq_enable: yes

My playbook:

- name: "Setting up consul cluster"
  hosts: consulservers
  roles:
    - role: brianshumate.consul

I am using:

» ansible --version
ansible 2.3.1.0
config file = /Users/sobolev/Documents/configs/callocal_ansible/ansible.cfg
configured module search path = Default w/o overrides
python version = 2.7.13 (default, Dec 17 2016, 23:03:43) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)]

- brianshumate.consul, v1.24.1

Error

TASK [brianshumate.consul : Fail if more then one bootstrap server is defined] ***
fatal: [77.244.214.252]: FAILED! => {"failed": true, "msg": "The conditional check '_consul_bootstrap_servers | length > 1' failed. The error was: error while evaluating conditional (_consul_bootstrap_servers | length > 1): {% set __consul_bootstrap_servers = [] %}{% for server in _consul_lan_servers %}{% set _consul_node_role = hostvars[server]['consul_node_role'] | default('client', true) %}{% if _consul_node_role == 'bootstrap' %}{% if __consul_bootstrap_servers.append(server) %}{% endif %}{% endif %}{% endfor %}{{ __consul_bootstrap_servers }}: {% set __consul_lan_servers = [] %}{% for server in consul_servers %}{% set _consul_datacenter = hostvars[server]['consul_datacenter'] | default('dc1', true) %}{% if _consul_datacenter == consul_datacenter %}{% if __consul_lan_servers.append(server) %}{% endif %}{% endif %}{% endfor %}{{ __consul_lan_servers }}: {% set _consul_servers = [] %}{% for host in groups[consul_group_name] %}{% set _consul_node_role = hostvars[host]['consul_node_role'] | default('client', true) %}{% if ( _consul_node_role == 'server' or _consul_node_role == 'bootstrap') %}{% if _consul_servers.append(host) %}{% endif %}{% endif %}{% endfor %}{{ _consul_servers }}: 'dict object' has no attribute u'cluster_nodes'\n\nThe error appears to have been in '/usr/local/etc/ansible/roles/brianshumate.consul/tasks/asserts.yml': line 68, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Fail if more then one bootstrap server is defined\n ^ here\n"}
to retry, use: --limit @/Users/sobolev/Documents/configs/callocal_ansible/main.retry

I tried

  1. To change consul_node_role to every possible value.
  2. To disable consul_bootstrap_expect

No luck.

Fail if more than one bootstrap server is defined

So I'm trying to run this role, I haven't tried to override any vars or parameters this role provides, I'm just running it raw. However, when my playbook runs, I get this error.

fatal: [node_1]: FAILED! => {"failed": true, "msg": "The conditional check '_consul_bootstrap_servers | length > 1' failed. The error was: error while evaluating conditional (_consul_bootstrap_servers | length > 1): {% set __consul_bootstrap_servers = [] %}{% for server in _consul_lan_servers %}{% set _consul_node_role = hostvars[server]['consul_node_role'] | default('client', true) %}{% if _consul_node_role == 'bootstrap' %}{% if __consul_bootstrap_servers.append(server) %}{% endif %}{% endif %}{% endfor %}{{ __consul_bootstrap_servers }}: {% set __consul_lan_servers = [] %}{% for server in consul_servers %}{% set _consul_datacenter = hostvars[server]['consul_datacenter'] | default('dc1', true) %}{% if _consul_datacenter == consul_datacenter %}{% if __consul_lan_servers.append(server) %}{% endif %}{% endif %}{% endfor %}{{ __consul_lan_servers }}: {% set _consul_servers = [] %}{% for host in groups[consul_group_name] %}{% set _consul_node_role = hostvars[host]['consul_node_role'] | default('client', true) %}{% if ( _consul_node_role == 'server' or _consul_node_role == 'bootstrap') %}{% if _consul_servers.append(host) %}{% endif %}{% endif %}{% endfor %}{{ _consul_servers }}: 'dict object' has no attribute u'consul_instances'\n\nThe error appears to have been in '[REDACTED]/roles/brianshumate.consul/tasks/asserts.yml': line 68, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Fail if more than one bootstrap server is defined\n ^ here\n"}

Add checks to make role idempotent

As @kostyrevaa pointed out in #14, the role does not yet offer any idempotency, which is a touch sad. 😿 Let's identify and update areas where it needs to be checking and not altering anything existing.

Some good starting points were mentioned in #14.

Binary gets replaced each role run

Hello @brianshumate,

first thanks for creating this role. Makes the first hands on consul much easier!

We're using consul_install_remotely: true and I've noticed that the consul binary gets replaced each time the role gets applied to the servers.
This is a bit confusing for us, as the installation is already done, so why replace it each time?
Mainly these tasks report a change on each run even when consul is already up and running:

  • Read Consul package checksum
  • Unarchive Consul and install binary (replaced the binary)

Also curious that the task Cleanup in the end of install_remote.yml does not trigger either.

Would it be possible to alter the role so it performs the installation only if consul is absent?
Changing/replacing it when a version other than the installed one is given via consul_version might not be a good idea, as a manual upgrade seems the safer way.

Thank you very much!

Best regards

Jard

Fail if more then one bootstrap server is defined

I'm fairly new to ansible, and trying a simple ansible consul test install on VMs (3 * CentOS 7 on openstack)

The host launching the install has this:

$ cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)

$ ansible --version
ansible 2.3.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
python version = 2.7.5 (default, Sep 15 2016, 22:37:39) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)]

$ cat inventory.cfg
[consul]
192.168.1.200 consul_node_role=bootstrap
192.168.1.106 consul_node_role=server
192.168.1.103 consul_node_role=client

$ cat consul.yml

- hosts: consul
  roles:
    - ansible-consul

$ l -d roles/ansible-consul
drwxr-xr-x. 12 vlegoll users 4.0K Jun 23 14:18 roles/ansible-consul

This is a git clone of your repository, current master branch

$ ansible-playbook -i inventory.cfg consul.yml
[...]
TASK [ansible-consul : Fail if more then one bootstrap server is defined] ********************************************************************************************************************
fatal: [192.168.1.200]: FAILED! => {"failed": true, "msg": "The conditional check '_consul_bootstrap_servers | length > 1' failed. The error was: error while evaluating conditional (_consul_bootstrap_servers | length > 1): {% set __consul_bootstrap_servers = [] %}{% for server in _consul_lan_servers %}{% set _consul_node_role = hostvars[server]['consul_node_role'] | default('client', true) %}{% if _consul_node_role == 'bootstrap' %}{% if __consul_bootstrap_servers.append(server) %}{% endif %}{% endif %}{% endfor %}{{ __consul_bootstrap_servers }}: {% set __consul_lan_servers = [] %}{% for server in consul_servers %}{% set _consul_datacenter = hostvars[server]['consul_datacenter'] | default('dc1', true) %}{% if _consul_datacenter == consul_datacenter %}{% if __consul_lan_servers.append(server) %}{% endif %}{% endif %}{% endfor %}{{ __consul_lan_servers }}: {% set _consul_servers = [] %}{% for host in groups[consul_group_name] %}{% set _consul_node_role = hostvars[host]['consul_node_role'] | default('client', true) %}{% if ( _consul_node_role == 'server' or _consul_node_role == 'bootstrap') %}{% if _consul_servers.append(host) %}{% endif %}{% endif %}{% endfor %}{{ _consul_servers }}: 'dict object' has no attribute u'cluster_nodes'\n\nThe error appears to have been in '/home/vlegoll/ansible/roles/ansible-consul/tasks/asserts.yml': line 68, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Fail if more then one bootstrap server is defined\n ^ here\n"}

discussion: default acl_agent_token for servers

Hi

Consul servers are dropping “[WARN] agent: Node info update blocked by ACLs” messages without adding an "acl_agent_token".

See:
thread

ATM I work around the problem with "acl_agent_token": <acl_master_token>
I would like to discuss how to solve this within the role.

I see these ways:

  1. "acl_agent_token": <acl_master_token>
    Very simple to implement in the role. But are there any security issues?
{## Server ACLs ##}
{% if consul_acl_enable %}
 "acl_default_policy": "{{ consul_acl_default_policy }}",
 "acl_master_token": "{{ consul_acl_master_token }}",
 "acl_agent_token": "{{ consul_acl_master_token }}",
  2. Own token without restart
    Give/Gen an acl_agent_token for the servers and add it also to the configs.
    Start the servers
    Call the API with the master token and add the necessary acl rule.

  3. Own token with restart
    Start the server
    Call the API with the master token, add the necessary acl rule and store it locally
    Add acl_agent_token configs
    Restart servers

Maybe some one knows a better way?

bootstrap without bootstrap node

If I read the info about the -bootstrap-expect option in the consul docs, I'm under the impression this is the better choice for normal clusters (not for testing that is).

As this doesn't require a node to be in bootstrap mode all the time, or restart it in normal server mode after bootstrapping. (something that's currently not done by this role, but would be good practice if I understand correctly)

I'd like to add an option to choose between both bootstrapping techniques.

What's your opinion on this @brianshumate?

Consul encryption key is not retrieved from existing json config

I have 1 server and N clients, and I am running the playbook repeatedly. On subsequent runs the following error is manifested:

TASK [brianshumate.consul : Read key for servers that require it] **************
skipping: [10.180.45.22]
 [WARNING]: Unable to find '/tmp/consul_raw.key' in expected paths.

fatal: [10.180.45.18]: FAILED! => {"failed": true, "msg": "could not locate file in lookup: /tmp/consul_raw.key"}

It appears the playbook correctly detects an existing config file on the bootstrapping server (10.180.45.18), but it does not read it from that server. Instead it tries to read it from a host that is not designated as the bootstrapping server (10.180.45.22).

I suspect the culprit is that the playbook uses the run_once clause in a block where the block has a when condition. What happens is the run_once clause is applied first; it picks one host, which may or may not be a bootstrapping server, and then the when clause is applied. If that host is not a server, then there is no config to read, and later on the playbook fails with the above message.

I was able to get pass this error by removing the run_once clause from the tasks enclosed in block. I will create a PR shortly for review.

consul_bind_address invalid value

Hi,

When installing a cluster with vagrant each node with role=server has the error:

fatal: [node2]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'dict object' has no attribute 'consul_bind_address'\n\nThe error appears to have been in '/etc/ansible/roles/brianshumate.consul/tasks/config.yml': line 4, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Create configuration\n ^ here\n"}

The node with the bootstrap role is successfully configured.

Not compatible with Ansible 2.3.0.0 with python3 on Ubuntu 16.04

TASK [brianshumate.consul : Check Consul package checksum file] ****************
fatal: [4cas]: FAILED! => {"changed": false, "failed": true, "module_stderr": "/bin/sh: /usr/bin/python3: No such file or directory\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 0}

ansible --version
ansible 2.3.0.0

vars:
ansible_python_interpreter: "/usr/bin/python3"

Ubuntu Trusty (14.04) support

The galaxy page says that Trusty is supported, but Trusty still uses upstart, and upstart support has been removed from this role. Full support for systemd on Ubuntu started with version 15.04.

Failing at "Create configuration" task

I'm trying to run this role using everything as default.

TASK [ansible-consul : Create configuration] ******************************************************************************************************************************************************************
fatal: [52.90.156.175]: FAILED! => {"failed": true, "msg": "An unhandled exception occurred while running the lookup plugin 'template'. Error was a <class 'ansible.errors.AnsibleError'>, original message: {{ lookup('env','CONSUL_TLS_DIR') | default({{ consul_config_path }}/ssl, true) }}: template error while templating string: expected token ':', got '}'. String: {{ lookup('env','CONSUL_TLS_DIR') | default({{ consul_config_path }}/ssl, true) }}"}
fatal: [54.90.196.123]: FAILED! => {"failed": true, "msg": "An unhandled exception occurred while running the lookup plugin 'template'. Error was a <class 'ansible.errors.AnsibleError'>, original message: {{ lookup('env','CONSUL_TLS_DIR') | default({{ consul_config_path }}/ssl, true) }}: template error while templating string: expected token ':', got '}'. String: {{ lookup('env','CONSUL_TLS_DIR') | default({{ consul_config_path }}/ssl, true) }}"}
fatal: [54.164.166.113]: FAILED! => {"failed": true, "msg": "An unhandled exception occurred while running the lookup plugin 'template'. Error was a <class 'ansible.errors.AnsibleError'>, original message: {{ lookup('env','CONSUL_TLS_DIR') | default({{ consul_config_path }}/ssl, true) }}: template error while templating string: expected token ':', got '}'. String: {{ lookup('env','CONSUL_TLS_DIR') | default({{ consul_config_path }}/ssl, true) }}"}

Config files and dir structure

Currently the following config dir structure is used;

/etc/consul
└── .consul_bootstrapped
/etc/consul.d
├── bootstrap
│   └── config.json
├── client
│   └── config.json
└── server
    └── config.json

I would like to change this to a more common layout where the agent config resides in the consul dir and all files in consul.d are included automatically.

/etc/consul
├── {{ consul_node_role }}.json
└── .consul_bootstrapped
/etc/consul.d
└── random_config.json

In this example I only added the config for the specific role of the agent. Is there a reason why the role currently provides all 3 configs? I would provide bootstrap/server to all servers and only client to clients.

Happy to provide a PR.

Not disabling bootstrap mode

After a successful deployment, I still see:
"bootstrap": true,
in the config file of the server which had consul_node_role=bootstrap passed in the Ansible inventory file.

I'm not seeing that happening. Be aware that for clustering, the included site.yml does the following:

  • Executes the consul role (installs Consul and bootstraps the cluster)
  • Reconfigures the bootstrap node to run without the bootstrap-expect setting
  • Restarts the bootstrap node

Have I missed something?

CentOS 6 question

Could you please clarify why CentOS 6 is banned?

TASK [brianshumate.consul : Fail if not a new release of Red Hat / CentOS] *****
fatal: [consul1.local]: FAILED! => {"changed": false, "failed": true, "msg": "6.8 is not an acceptable version of CentOS for this role"}
fatal: [consul3.local]: FAILED! => {"changed": false, "failed": true, "msg": "6.8 is not an acceptable version of CentOS for this role"}
fatal: [consul2.local]: FAILED! => {"changed": false, "failed": true, "msg": "6.8 is not an acceptable version of CentOS for this role"}
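(Most likely the same situation as Trusty above: CentOS 6 predates systemd, and the role's Red Hat-family support targets systemd-based releases, i.e. 7 and later.)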

get_url failing on releases.hashicorp.com

OS: Ubuntu 14.04

Seems to be an issue with Python 2.7.6 and TLS 1.2-only servers?

TASK [brianshumate.consul : Get Consul package checksum file] ******************
fatal: [10.0.12.158]: FAILED! => {"changed": false, "failed": true, "msg": "Failed to validate the SSL certificate for releases.hashicorp.com:443. Make sure your managed systems have a valid CA certificate installed. You can use validate_certs=False if you do not need to confirm the servers identity but this is unsafe and not recommended. Paths checked for this platform: /etc/ssl/certs, /etc/ansible, /usr/local/etc/openssl"}

Question: Deploying Consul Using Ansible

Greetings!!

Trying to deploy using ansible-consul, but for some reason it gathers facts and then exits.

site.yml

- name: cluster1
  hosts: consul_instances
  any_errors_fatal: true
  become: true
  become_user: root

and my hosts file is:

hosts

[consul_instances]
server83.localnet.net ansible_ssh_port=22 ansible_ssh_user=root consul_node_role=server consul_bootstrap_expect=true
server84.localnet.net ansible_ssh_port=22 ansible_ssh_user=root consul_node_role=server consul_bootstrap_expect=true
server85.localnet.net ansible_ssh_port=22 ansible_ssh_user=root consul_node_role=server consul_bootstrap_expect=true
server86.localnet.net ansible_ssh_port=22 ansible_ssh_user=root consul_node_role=server consul_bootstrap_expect=true
server87.localnet.net ansible_ssh_port=22 ansible_ssh_user=root consul_node_role=server consul_bootstrap_expect=true

Now, when deploying this is what I am issuing:

ansible-playbook -i hosts site.yml --extra-vars "consul_acl_master_token=95FBC040-C484-XXXXXXXX" --extra-vars "consul_datacenter=dc1" --extra-vars "consul_default_port_dns=53"

Output:

PLAY [cluster1] **********************************************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************************************************************
ok: [server85.localnet.net]
ok: [server83.localnet.net]
ok: [server87.localnet.net]
ok: [server86.localnet.net]
ok: [server84.localnet.net]

PLAY RECAP ******************************************************************************************************************************************************************************************
server83.localnet.net  : ok=1    changed=0    unreachable=0    failed=0
server84.localnet.net  : ok=1    changed=0    unreachable=0    failed=0
server85.localnet.net  : ok=1    changed=0    unreachable=0    failed=0
server86.localnet.net  : ok=1    changed=0    unreachable=0    failed=0
server87.localnet.net  : ok=1    changed=0    unreachable=0    failed=0

Why is it not deploying? What am I doing wrong?

Thanks again!

Alex
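A likely cause: the site.yml above never applies the role. The play defines hosts and privilege escalation but no roles or tasks, so Ansible gathers facts and finishes. A minimal sketch that actually runs the role (assuming it is installed under the name ansible-consul):

- name: cluster1
  hosts: consul_instances
  any_errors_fatal: true
  become: true
  become_user: root
  roles:
    - { role: ansible-consul }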

Support Upgrading Consul

I noticed that when I change the consul_version variable and re-run the role on an existing Consul cluster, each node still reports the old version in consul members.

When I checked on the machines I noticed that the binaries were not upgraded to the specified Consul version.

The main issue I found is the Check for existing Consul binary task inside tasks/main.yml.

Following the upgrade guide at https://www.consul.io/docs/upgrading.html, I hacked together something that is not idempotent and uses a variable called consul_install_upgrade. It bypasses the stated check and runs additional tasks such as systemctl daemon-reload on Ubuntu.
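Roughly, the shape of that bypass looks like the sketch below. All names here (the binary path, the stat register, the handler) are illustrative, not the role's actual code:

# Hypothetical sketch of an upgrade-aware install, not the role's real tasks
- name: Check for existing Consul binary
  stat:
    path: /usr/local/bin/consul
  register: consul_binary_stat

- name: Install or upgrade Consul binary
  unarchive:
    src: "https://releases.hashicorp.com/consul/{{ consul_version }}/consul_{{ consul_version }}_linux_amd64.zip"
    dest: /usr/local/bin
    remote_src: true
  when: not consul_binary_stat.stat.exists or (consul_install_upgrade | default(false))
  notify: restart consul   # assumes a handler that reloads systemd and restarts the service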

I think further restructuring of this hack is required to ideally make the process idempotent.

So my question: is it planned to support upgrading Consul via this Ansible role?

Cannot rerun the role to setup additional nodes

I ran the role once to set up a node, then added a new node to the inventory and re-ran the playbook.

It failed because of the combination of:

  • the when clause skipped the node I had already set up on the first run (a skipped host still counts for run_once)
  • run_once ensured that the download tasks did not run on the new node
  • the cleanup action from the previous run had removed the downloaded file

Error:

TASK [ansible-consul : Install Consul] *******************************************************************************************************************************************************
skipping: [192.168.1.200]
fatal: [192.168.1.111]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to find '/home/vlegoll/dev/ansible/roles/ansible-consul-brianshumate/files/consul' in expected paths."}

At least this should be documented, IMHO.

Commenting out the first (successful) node in the inventory made the new node download and install OK...

Naming of repo should be brianshumate.consul

Hey Brian - the naming convention of your repo should be the same as on Ansible Galaxy to avoid issues: e.g. brianshumate.consul instead of ansible-consul.

The reason I say this is that if anyone installs your repo via the ansible-galaxy CLI instead of git clone, the example won't work (- { role: ansible-consul } is there instead of role: brianshumate.consul).

'consul_node_role' is undefined

TASK [brianshumate.consul : Create cluster groupings] **************************
fatal: [127.0.0.1]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'consul_node_role' is undefined\n\nThe error appears to have been in '/etc/ansible/roles/brianshumate.consul/tasks/main.yml': line 24, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Create cluster groupings\n  ^ here\n"}

Has it been changed to consul_node_name?

change start_join to retry_join

Hi
If there are no known drawbacks on your side, I would suggest changing "start_join" to "retry_join" in config.json.j2. I have problems deploying a new cluster without retry, because not all nodes start simultaneously.

Role does not work on second run

When I try to run the role a second time I get the following error.

Playbook:

- name: Install & Configure Consul Cluster
  hosts: consuls
  become: yes 
  roles:
    - role: consul
      consul_iface: eth0
      consul_group_name: consuls

Error:

 TASK [consul : Writing key locally to share with other servers that are new] ***
fatal: [consul-host-0]: FAILED! => {"failed": true, "msg": "Failed to get information on remote file (/tmp/consul_raw.key): MODULE FAILURE"}
fatal: [consul-host-2]: FAILED! => {"failed": true, "msg": "Failed to get information on remote file (/tmp/consul_raw.key): MODULE FAILURE"}
fatal: [consul-host-1]: FAILED! => {"failed": true, "msg": "Failed to get information on remote file (/tmp/consul_raw.key): MODULE FAILURE"}
  to retry, use: --limit @/var/lib/awx/projects/_6__consulting_bootcamp/ansible/playbooks/setup_consul.retry

I was able to resolve this by setting become: no on the local_action tasks.
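For anyone hitting the same error, the shape of the fix looks roughly like this (the task body is illustrative, not the role's exact code):

- name: Writing key locally to share with other servers that are new
  become: no   # the fix: don't try to sudo on the control machine
  local_action:
    module: copy
    content: "{{ consul_raw_key }}"
    dest: /tmp/consul_raw.key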

consul_servers only works if consul_node_role is a fact

I think I found the problem you were having with consul_node_role. When a default is used, the variable isn't available as a fact, but the loops and lookups used need it as one.

Simply setting it as a fact should solve the problem.
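Something along these lines (a minimal sketch; the 'client' fallback is an assumption, not necessarily the role's default):

- name: Make consul_node_role available as a fact for hostvars lookups
  set_fact:
    consul_node_role: "{{ consul_node_role | default('client') }}"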

I'll add a fix to the config layout PR

'dict object' has no attribute 'stdout'

I'm new to this project and just trying it out now. Errors come up on a few of my servers.

my inventory file:

[cluster_nodes]
adcb-zk-1.vm.elenet.me consul_node_role=bootstrap
adcb-zk-[2:3].vm.elenet.me consul_node_role=server
adcb-mesos-[18:23].vm.elenet.me consul_node_role=client

my playbook:

---
- name: Assemble Consul cluster
  hosts: cluster_nodes 
  remote_user: root 
  roles:
    - { role: brianshumate.consul }

error log:

TASK [brianshumate.consul : Save encryption key] *******************************
skipping: [adcb-mesos-18.vm.elenet.me] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}
fatal: [adcb-mesos-18.vm.elenet.me]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'dict object' has no attribute 'stdout'\n\nThe error appears to have been in '/etc/ansible/roles/brianshumate.consul/tasks/main.yml': line 89, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n      run_once: true\n    - name: Save encryption key\n      ^ here\n"}

the command I start it with:

ansible-playbook -i ./hosts consul.yml --extra-vars "consul_iface={{ansible_default_ipv4.alias}}" -v

any help would be appreciated :)

Changing cluster_nodes to consul_cluster_nodes

Hello Brian,

I'm currently using your roles for nomad and consul, and I find it weird that one role has cluster_nodes as a mandatory host group while the nomad role uses nomad_cluster_nodes.

This tiny issue makes things awkward when using both in the same playbook (the inventory is then explicit for nomad but not for consul).

Do you think it would be possible to change this? (I could make the PR if needed.)

Ubuntu 14.04 init file

I had some trouble getting the consul service to start up on a 14.04 server, but finally narrowed things down to some quotation problems in consul_debianinit.j2.

First, ${DAEMON_ARGS} apparently shouldn't be quoted. According to this post, bash and ksh on Linux will add in extra quotes that mess things up. So this section:

start-stop-daemon --start --quiet --pidfile "${PIDFILE}" --exec "${DAEMON}" --chuid "${USER}" --background --make-pidfile -- \
        "${DAEMON_ARGS}" \
        || return 2

should become this:

start-stop-daemon --start --quiet --pidfile "${PIDFILE}" --exec "${DAEMON}" --chuid "${USER}" --background --make-pidfile -- \
        ${DAEMON_ARGS} \
        || return 2

Second, the process check right after that section was missing a trailing quote mark on the pidfile and the user parameter. After I got the first problem fixed, the service would be started correctly but these checks wouldn't run. So this:

 if ! start-stop-daemon --quiet --stop --test --pidfile "${PIDFILE} --exec "${DAEMON}" --user "${USER}; then

should be this:

if ! start-stop-daemon --quiet --stop --test --pidfile "${PIDFILE}" --exec "${DAEMON}" --user "${USER}"; then

Last, thanks to all of this I found an edge case for the "Consul up?" playbook task where it incorrectly thought the process was running. Checking for the existence of the pidfile does work, except that in my case, the pidfile stuck around even after the consul process exited. Since the DAEMON_ARGS weren't being passed properly, I would just get a "==> Must specify data directory using -data-dir" error from consul, and it would immediately exit. I think the pidfile was created because the consul process did technically start properly, but thanks to the immediate exit, it was never removed with a "service consul stop". I don't really have a good way to improve that.

I did test my template changes and was able to successfully deploy to new 14.04 servers. Thanks for the good work, it's made my life easier as I start to learn and roll out consul.

Issues with install -> Get Consul package checksum file

- name: Get Consul package checksum file
  become: no
  connection: local
  get_url:
    url: "{{ consul_checksum_file_url }}"
    dest: "{{ role_path }}/files/consul_{{ consul_version }}_SHA256SUMS"
  run_once: true
  tags: installation
  when: consul_checksum.stat.exists == False

I realize this is executing on my local machine, and I can use wget and curl to successfully download the file.

If I run it as is, I get the error:
Failed to validate the SSL certificate for releases.hashicorp.com:443

If I run with validate_certs set to no, I get this error:
Request failed: <urlopen error EOF occurred in violation of protocol (_ssl.c:590)>

If I download the file manually and place it in the desired location, I get similar errors when it gets to downloading the Consul binary.

Any thoughts on a solution/workaround?

'dict object' has no attribute u'ansible_eth1'

Good afternoon, Brian!

I installed your role from Ansible Galaxy and I'm trying to install consul on a standalone server.

Here's basic information:

  • ansible 2.2.0.0
  • Installing on Ubuntu 16.04 (not officially supported)

Full error:

TASK [brianshumate.consul : Bootstrap configuration] ***************************
fatal: [consul]: FAILED! => {"changed": false, "failed": true, "msg": "AnsibleUndefinedVariable: {{ hostvars[inventory_hostname]['ansible_'+consul_iface]['ipv4']['address'] }}: 'dict object' has no attribute u'ansible_eth1'"}

The playbook:

  - hosts: "consul"
    gather_facts: true
    become: yes
    vars:
      consul_node_role: "bootstrap" 
    roles:
    - { role: brianshumate.consul, tags: ["consul"] }

ansible.cfg

[defaults]
host_key_checking = False

Thanks in advance!
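The error shows the role looking up ansible_eth1, i.e. a consul_iface default of eth1 (a Vagrant-style second NIC), which a standalone server typically doesn't have. Pointing consul_iface at a real interface should get past this; a sketch:

- hosts: "consul"
  gather_facts: true
  become: yes
  vars:
    consul_node_role: "bootstrap"
    # use whichever interface carries the default IPv4 route
    consul_iface: "{{ ansible_default_ipv4.interface }}"
  roles:
    - { role: brianshumate.consul, tags: ["consul"] }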

join advertise addresses

Hi
Is it possible to implement an option to choose whether start|retry_join in config.json.j2 uses the advertise addresses of the cluster partners?
The current implementation does not work between the servers when they are using advertise addresses, because the bind addresses are still used to join the cluster (LAN and WAN).

Present:

    {## LAN Join ##}
    "start_join": [
        {% for server in _consul_lan_servers %}
           "{{ hostvars[server]['consul_bind_address'] | ipwrap }}",
        {% endfor %} ],

Suggestion:

    {## LAN Join ##}
    "retry_join": [
        {% for server in _consul_lan_servers %}
            "{{ hostvars[server]['consul_advertise_address'] | ipwrap }}",
        {% endfor %} ],

The change should be enough, because the default of consul_advertise_address is already:
consul_advertise_address: "{{ consul_bind_address }}"

Same story for {## WAN Join ##} .

user 'consul' with ID 1042 created

The role is templated to use {{ consul_user }} everywhere, but one of the first things that tasks/main.yml does is to create a user with a hardcoded name (consul) and a hardcoded ID (1042). This smells like a bug to me. At the very least, the hardcoded name should probably be changed to "{{ consul_user }}". I also submit that the user ID should not be hardcoded, and that the consul user should be a system account, as follows:

- name: Add Consul user
  user:
    name: "{{ consul_user }}"
    comment: "Consul user"
    group: "{{ consul_group }}"
    system: yes

What are your thoughts?

Multi Datacenter Features?

This is just a discussion/thinking issue around what kinds of multi-datacenter support we might like to add, given the initial bits to differentiate between LAN and WAN servers recently added by @groggemans.

I'd like to get some ideas about any features which would be helpful to add.

If there are ideas for what would be handy and how we might implement it, let's coordinate here before hitting the YAML and so on. 😄

issue with downloading files locally

My control machine is a mac - downloading these files locally is causing errors with the get_url module.

fatal: [consul01]: FAILED! => {"changed": false, "failed": true, "msg": "Failed to validate the SSL certificate for releases.hashicorp.com:443. Make sure your managed systems have a valid CA certificate installed. You can use validate_certs=False if you do not need to confirm the servers identity but this is unsafe and not recommended. Paths checked for this platform: /etc/ssl/certs, /etc/ansible, /usr/local/etc/openssl"}

I have tried addressing the issue with what google returns, but haven't been able to get past this error. I will issue a PR which should let all the file transferring occur on the remote host - which lets me get past it.
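For reference, that approach amounts to an install-mode toggle; a sketch, assuming a consul_install_remotely variable like the one later versions of this role expose in defaults/main.yml:

- hosts: consul_instances
  become: true
  roles:
    - role: ansible-consul
      # download and unzip the archive on each target host instead of the
      # control machine, sidestepping the local Python/OpenSSL problem
      consul_install_remotely: true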

Starting Consul under well known ports

OT: thank you for the great role!

I'm using Red Hat and have to run Consul within the well-known port range (<1024). But there are two problems:

  • SELinux
  • Non-root users are not allowed to open well-known ports.

I still have no solution for SELinux because it's new to me. This could help with the root problem:

# TODO: allow well-known ports in SELinux

- name: Allow well known ports for consul
  capabilities:
    path: "{{ consul_binary }}"
    capability: CAP_NET_BIND_SERVICE=+eip
    state: present
  when: consul_ports.rpc | int < 1024

- name: Forbid well known ports for consul
  capabilities:
    path: "{{ consul_binary }}"
    capability: CAP_NET_BIND_SERVICE=-eip
    state: absent
  when: consul_ports.rpc | int >= 1024

Role is not idempotent

Strangely enough, nobody has pointed out yet that this role is not idempotent.
On every Ansible run it reports changes for:

TASK [brianshumate.consul : Bootstrap configuration
TASK [brianshumate.consul : Client configuration
TASK [brianshumate.consul : Server configuration
TASK [brianshumate.consul : systemd script

not to mention
TASK [brianshumate.consul : Reconfigure bootstrap node (systemd)

Issues with consul_iface

Hello!

Thank you for the role! One thing I don't understand is the consul_iface variable.
Several of my machines do not have the same interface name, so I have attempted to set consul_iface to "{{ ansible_default_ipv4.interface }}"

When the role runs, it then fails with the following error.

fatal: [10.11.1.46]: FAILED! => {
    "changed": false, 
    "failed": true, 
    "invocation": {
        "module_args": {
            "dest": "/etc/consul.d/client/config.json", 
            "group": "bin", 
            "owner": "consul", 
            "src": "config_client.json.j2"
        }, 
        "module_name": "template"
    }, 
    "msg": "AnsibleUndefinedVariable: 'dict object' has no attribute u'ansible_officelanbr0'"
}

It appears that https://github.com/brianshumate/ansible-consul/blob/master/templates/config_client.json.j2#L29 is properly trying to find the var (see below for the output of ansible -m setup).

       "ansible_officelanbr0": {                                                                                                                           
            "active": true,                                                                                                                                 
            "device": "officelanbr0",                                                                                                                       
            "features": {                                                                                                                                   
                "busy_poll": "off [fixed]",                                                                                                                 
                "fcoe_mtu": "off [fixed]",                                                                                                                  
                "generic_receive_offload": "on",                                                                                                            
                "generic_segmentation_offload": "on",                                                                                                       
                "highdma": "on",                                                                                                                            
                "hw_tc_offload": "off [fixed]",                                                                                                             
                "l2_fwd_offload": "off [fixed]",                                                                                                            
                "large_receive_offload": "off [fixed]",                                                                                                     
                "loopback": "off [fixed]",                                                                                                                  
                "netns_local": "on [fixed]",                                                                                                                
                "ntuple_filters": "off [fixed]",                                                                                                            
                "receive_hashing": "off [fixed]",                                                                                                           
                "rx_all": "off [fixed]",                                                                                                                    
                "rx_checksumming": "off [fixed]",                                                                                                           
                "rx_fcs": "off [fixed]",                                                                                                                    
                "rx_vlan_filter": "off [fixed]",                                                                                                            
                "rx_vlan_offload": "off [fixed]",                                                                                                           
                "rx_vlan_stag_filter": "off [fixed]",                                                                                                       
                "rx_vlan_stag_hw_parse": "off [fixed]",                                                                                                     
                "scatter_gather": "on",                                                                                                                     
                "tcp_segmentation_offload": "on",                                                                                                           
                "tx_checksum_fcoe_crc": "off [fixed]",                                                                                                      
                "tx_checksum_ip_generic": "on",                                                                                                             
                "tx_checksum_ipv4": "off [fixed]",                                                                                                          
                "tx_checksum_ipv6": "off [fixed]",                                                                                                          
                "tx_checksum_sctp": "off [fixed]",                                                                                                          
                "tx_checksumming": "on",                                                                                                                    
                "tx_fcoe_segmentation": "off [requested on]",                                                                                               
                "tx_gre_segmentation": "on",                                                                                                                
                "tx_gso_robust": "off [requested on]",                                                                                                      
                "tx_ipip_segmentation": "on",                                                                                                               
                "tx_lockless": "on [fixed]",                                                                                                                
                "tx_nocache_copy": "off",                                                                                                                   
                "tx_scatter_gather": "on",                                                                                                                  
                "tx_scatter_gather_fraglist": "on",                                                                                                         
                "tx_sit_segmentation": "on",                                                                                                                
                "tx_tcp6_segmentation": "on",                                                                                                               
                "tx_tcp_ecn_segmentation": "on",                                                                                                            
                "tx_tcp_segmentation": "on",                                                                                                                
                "tx_udp_tnl_segmentation": "on",                                                                                                            
                "tx_vlan_offload": "on",                                                                                                                    
                "tx_vlan_stag_hw_insert": "on",                                                                                                             
                "udp_fragmentation_offload": "on",                                                                                                          
                "vlan_challenged": "off [fixed]"                                                                                                            
            },                                                                                                                                              
            "id": "8000.10c37b699bd4",                                                                                                                      
            "interfaces": [                                                                                                                                 
                "vethO17AQ9",                                                                                                                               
                "enp10s0",                                                                                                                                  
                "veth42ECAF"                                                                                                                                
            ],                                                                                                                                              
            "ipv4": {                                                                                                                                       
                "address": "10.11.1.46",                                                                                                                    
                "broadcast": "10.11.1.255",                                                                                                                 
                "netmask": "255.255.255.0",                                                                                                                 
                "network": "10.11.1.0"                                                                                                                      
            },                                                                                                                                              
            "ipv6": [                                                                                                                                       
                {                                                                                                                                           
                    "address": "fe80::12c3:7bff:fe69:9bd4",                                                                                                 
                    "prefix": "64",                                                                                                                         
                    "scope": "link"                                                                                                                         
                }                                                                                                                                           
            ],                                                                                                                                              
            "macaddress": "<removed>",                                                                                                              
            "mtu": 1500,                                                                                                                                    
            "promisc": false,                                                                                                                               
            "stp": false,                                                                                                                                   
            "type": "bridge"                                                                                                                                
        }

Deploy only consul clients

Hi

Please correct me if I am wrong:
ATM the role is only usable when all Consul servers and clients are part of the host group "cluster_nodes/consul_instances (latest)". Therefore I have to be a server admin even when I only want to deploy a client.
-> I cannot use the role out of the box as a Consul client admin.

My quick and dirty solution was:

Playbook:

- hosts: "consulclients.com"
  remote_user: centos
  become: true
  become_user: root
  become_method: sudo
  gather_facts: yes
  roles:
    - consul  
  vars:
    consul_node_role: "client"
    consul_acl_agent_token: "MyClientToken"
    consul_tls_enable: "true"
    consul_join_servers:
      - "1.2.3.4"
      - "1.2.3.5"
      - "1.2.3.6"
    ...

config.json.j2:


    {## LAN Join ##}
    "start_join": [
        {% for server in consul_join_servers %}
            "{{ server }}",
        {% endfor %}
        {% for server in _consul_lan_servers %}
            "{{ hostvars[server]['consul_bind_address'] | ipwrap }}",
        {% endfor %} ],

Is it possible to add a "clients-only mode" in one of the next releases?

How To Set a recursor

Greetings!!

Can anyone give me a hint on how to use a recursor ? I tried to set recursor list using:

--extra-vars "consul_recursors=192.168.96.245,192.168.96.212,192.168.96.234" 

However, it's not working, because it is getting parsed into config.json in the following manner:

"recursors": [
"1",
"9",
"2",
"."
...

Any hints?

Thanks again!!
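A likely cause: key=value --extra-vars always arrive as strings, so the template iterates over the characters of "192.168.96.245,..." one by one. Passing the variable as JSON makes it a real list:

ansible-playbook -i hosts site.yml --extra-vars '{"consul_recursors": ["192.168.96.245", "192.168.96.212", "192.168.96.234"]}'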

Building consul_tls_dir doesn't work

defaults/main.yml:
consul_tls_dir: "{{ lookup('env','CONSUL_TLS_DIR') | default('{{ consul_config_path }}/ssl', true) }}"

The '{{ consul_config_path }}/ssl' passed to default() is never interpolated because of the single quotes.
My workaround, because I was not able to solve the problem in one line within 5 minutes:

consul_tls_dir_default: "{{consul_config_path}}/ssl"
consul_tls_dir: "{{ lookup('env','CONSUL_TLS_DIR') | default(consul_tls_dir_default, true) }}"

value for bootstrap_expect - autoscaling

Hi

ATM the config looks like this:
"bootstrap_expect": {{ _consul_lan_servers | length }},

I think there is no need to set this value to the current number of server instances.
At some point I will try to implement autoscaling for the servers, so I need a fixed value for bootstrap_expect (3 or 5) that does not change when server instances are added or removed - at least I think so.
There is no benefit in setting "bootstrap_expect": "6" when the only useful values are 5 (or 3).

Could you implement an integer key which overrides bootstrap_expect if it is set?
Something like this:
"bootstrap_expect": {{ consul_static_bootstrap | default(_consul_lan_servers | length) }},

Maybe I am wrong and this role will only be used to deploy the first "core" of servers, with scaling servers deployed from a customized role.
Is anyone here using automatic or manual scaling of an existing Consul server environment (up and down)?

Lower minimal Debian version

Is it ok if I lower the minimal Debian version from 8.5 to 8.0? The latest Raspbian is 8.0, and it installs without issues after circumventing the Debian version assert.
