
redhatgov / redhatgov.workshops


This is a collection of Ansible-deployed workshop environments. Use it in combination with the student workbook content, from the repo at https://github.com/RedHatGov/redhatgov.github.io

Home Page: http://redhatgov.io


redhatgov.workshops's Issues

Ansible Tower AWS Workshop | Update key creation with mode to avoid Ansible warning

When provisioning the workshop, the defaults for file creation have changed, resulting in a warning:

TASK [admin_server_prep : copy template to working directory] ************************************************************************************
[WARNING]: File '/redhatgov.workshops/ansible_tower_aws/.redhatgov/aws_credentials_vault.yml.pre' created with
default permissions '600'. The previous default was '666'. Specify 'mode' to avoid this warning.
changed: [localhost]

TASK [admin_server_prep : copy password file to working directory] *******************************************************************************
[WARNING]: File '/redhatgov.workshops/ansible_tower_aws/.redhatgov/workshop-password' created with default
permissions '600'. The previous default was '666'. Specify 'mode' to avoid this warning.
changed: [localhost]


TASK [admin_server_prep : stage group_vars/all.yml for remote host] ******************************************************************************
[WARNING]: File '/redhatgov.workshops/ansible_tower_aws/.redhatgov/all.yml' created with default permissions
'600'. The previous default was '666'. Specify 'mode' to avoid this warning.
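
A minimal sketch of the requested fix, assuming the tasks use the template/copy modules (the src values below are illustrative, not the role's actual file names):

- name: copy template to working directory
  template:
    src: aws_credentials_vault.yml.pre.j2      # illustrative src; use the role's actual template
    dest: .redhatgov/aws_credentials_vault.yml.pre
    mode: "0600"

- name: copy password file to working directory
  copy:
    src: workshop-password                     # illustrative src; use the role's actual file
    dest: .redhatgov/workshop-password
    mode: "0600"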

Ansible Tower AWS Workshop | cannot be deployed in us-east-1e without changing instance types

When attempting to deploy the workshop in us-east-1, provisioning fails if the availability zone is set to us-east-1e and the instance types are left at their defaults:

fatal: [localhost]: FAILED! => {"changed": false, "msg": "Instance creation failed => Unsupported: Your requested instance type (t3.small) is not supported in your requested Availability Zone (us-east-1e). Please retry your request by not specifying an Availability Zone or choosing us-east-1a, us-east-1b, us-east-1c, us-east-1d, us-east-1f."}
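
A possible workaround, as a sketch only (the variable names below are illustrative and may not match what the workshop's group_vars actually define), is to pin the deployment to a supported zone, or to fall back to an instance family that us-east-1e supports:

# group_vars/all.yml (illustrative names)
region: us-east-1
availability_zone: us-east-1a   # any zone other than us-east-1e, which lacks t3 instances
tower_instance_type: t3.small   # or t2.small if the zone cannot be pinned
node_instance_type: t3.small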

Web terminal appears to wrap after 80 characters

When trying out the new web terminal introduced in PR#132, we see wrapping after about 67 characters when pasting CLI commands from the lab instructions. Adding the length of the prompt, that works out to roughly 80 characters.

I was not able to find a width setting, or anything else pertinent to the terminal width. The result is the same in Firefox and Chrome.

The wrapped text overwrites the same line, so it's difficult to see what you have pasted. It is worse when you need to edit a pasted line to insert your GitHub ID.

SCL repos not available

The install went fine on east2, but in exercise 1 (installing Maven) I got this:

[ec2-user@ip-10-0-2-190 ~]$ ansible web -m package -a "name=rh-maven35 state=present" -b
east2.node.0.redhatgov.io | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"msg": "No package matching 'rh-maven35' found available, installed or updated",
"rc": 126,
"results": [
"No package matching 'rh-maven35' found available, installed or updated"
]
}

But during the initial install I'm sure I saw some Maven packages being installed; I found all of this in /var/log/messages on node0:

[root@ip-10-0-2-211 ~]# grep -i maven /var/log/messages

Oct 23 19:07:24 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-javapackages-tools-5.0.0-2.4.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-runtime-1-1.2.el7.x86_64
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: 1:rh-maven35-maven-resolver-api-1.0.3-5.2.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: 1:rh-maven35-maven-resolver-spi-1.0.3-5.2.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: 1:rh-maven35-maven-resolver-util-1.0.3-5.2.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-plexus-utils-3.0.24-3.2.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-maven-wagon-provider-api-2.10-3.2.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-guava-18.0-10.2.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-plexus-cipher-1.7-12.2.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-hawtjni-runtime-1.15-1.2.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: 1:rh-maven35-apache-commons-io-2.5-2.2.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-plexus-classworlds-2.5.2-7.2.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-slf4j-1.7.25-1.3.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-httpcomponents-core-4.4.6-3.2.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-apache-commons-logging-1.2-10.2.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-plexus-containers-component-annotations-1.7.1-2.2.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-apache-commons-lang-2.6-19.2.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-atinject-1-24.20100611svn86.2.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-maven-shared-utils-3.1.0-4.2.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-jansi-native-1.7-1.2.el7.x86_64
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-plexus-sec-dispatcher-1.4-22.2.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: 1:rh-maven35-maven-resolver-impl-1.0.3-5.2.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-plexus-interpolation-1.22-7.2.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-apache-commons-cli-1.4-1.2.el7.noarch
Oct 23 19:07:26 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-aopalliance-1.0-14.2.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-google-guice-4.1-6.1.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-apache-commons-lang3-3.5-3.2.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-jansi-1.16-1.2.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-maven-wagon-file-2.10-3.2.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-jcl-over-slf4j-1.7.25-1.3.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: 1:rh-maven35-maven-resolver-transport-wagon-1.0.3-5.2.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: 1:rh-maven35-maven-resolver-connector-basic-1.0.3-5.2.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-glassfish-el-api-3.0.1-0.4.b08.2.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-publicsuffix-list-20170424-1.2.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-apache-commons-codec-1.10-4.2.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-httpcomponents-client-4.5.3-3.2.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-jsoup-1.10.3-1.2.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-maven-wagon-http-shared-2.10-3.2.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-maven-wagon-http-2.10-3.2.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-jboss-interceptors-1.2-api-1.0.0-6.2.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-cdi-api-1.2-4.2.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: 1:rh-maven35-sisu-inject-0.3.3-1.2.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: 1:rh-maven35-sisu-plexus-0.3.3-1.2.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: 1:rh-maven35-maven-lib-3.5.0-4.3.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: 1:rh-maven35-maven-3.5.0-4.3.el7.noarch
Oct 23 19:07:27 ip-10-0-2-211 yum[10400]: Installed: rh-maven35-1-1.2.el7.x86_64

Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-1-1.2.el7.x86_64
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: 1:rh-maven35-maven-3.5.0-4.3.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: 1:rh-maven35-maven-lib-3.5.0-4.3.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-maven-wagon-http-2.10-3.2.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-httpcomponents-client-4.5.3-3.2.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-maven-wagon-http-shared-2.10-3.2.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: 1:rh-maven35-sisu-plexus-0.3.3-1.2.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: 1:rh-maven35-maven-resolver-transport-wagon-1.0.3-5.2.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-google-guice-4.1-6.1.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: 1:rh-maven35-maven-resolver-impl-1.0.3-5.2.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: 1:rh-maven35-maven-resolver-connector-basic-1.0.3-5.2.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-plexus-sec-dispatcher-1.4-22.2.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-jansi-1.16-1.2.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-maven-wagon-file-2.10-3.2.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-maven-wagon-provider-api-2.10-3.2.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-jansi-native-1.7-1.2.el7.x86_64
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: 1:rh-maven35-maven-resolver-spi-1.0.3-5.2.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: 1:rh-maven35-maven-resolver-util-1.0.3-5.2.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: 1:rh-maven35-sisu-inject-0.3.3-1.2.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-cdi-api-1.2-4.2.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-maven-shared-utils-3.1.0-4.2.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-jcl-over-slf4j-1.7.25-1.3.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-slf4j-1.7.25-1.3.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: 1:rh-maven35-apache-commons-io-2.5-2.2.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-glassfish-el-api-3.0.1-0.4.b08.2.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-atinject-1-24.20100611svn86.2.el7.noarch
Oct 23 19:11:43 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-jboss-interceptors-1.2-api-1.0.0-6.2.el7.noarch
Oct 23 19:11:44 ip-10-0-2-211 yum[10860]: Erased: 1:rh-maven35-maven-resolver-api-1.0.3-5.2.el7.noarch
Oct 23 19:11:44 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-hawtjni-runtime-1.15-1.2.el7.noarch
Oct 23 19:11:44 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-plexus-utils-3.0.24-3.2.el7.noarch
Oct 23 19:11:44 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-apache-commons-lang-2.6-19.2.el7.noarch
Oct 23 19:11:44 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-plexus-cipher-1.7-12.2.el7.noarch
Oct 23 19:11:44 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-aopalliance-1.0-14.2.el7.noarch
Oct 23 19:11:44 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-guava-18.0-10.2.el7.noarch
Oct 23 19:11:44 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-plexus-classworlds-2.5.2-7.2.el7.noarch
Oct 23 19:11:44 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-plexus-containers-component-annotations-1.7.1-2.2.el7.noarch
Oct 23 19:11:44 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-jsoup-1.10.3-1.2.el7.noarch
Oct 23 19:11:44 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-apache-commons-codec-1.10-4.2.el7.noarch
Oct 23 19:11:44 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-apache-commons-logging-1.2-10.2.el7.noarch
Oct 23 19:11:44 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-httpcomponents-core-4.4.6-3.2.el7.noarch
Oct 23 19:11:44 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-publicsuffix-list-20170424-1.2.el7.noarch
Oct 23 19:11:44 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-apache-commons-cli-1.4-1.2.el7.noarch
Oct 23 19:11:44 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-apache-commons-lang3-3.5-3.2.el7.noarch
Oct 23 19:11:44 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-plexus-interpolation-1.22-7.2.el7.noarch
Oct 23 19:11:44 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-runtime-1-1.2.el7.x86_64
Oct 23 19:11:44 ip-10-0-2-211 yum[10860]: Erased: rh-maven35-javapackages-tools-5.0.0-2.4.el7.noarch

So Maven was installed and then erased?

And looking at the available repos, there is no SCL repo enabled:

[root@ip-10-0-2-211 ~]# yum repolist
Loaded plugins: amazon-id, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
repo id repo name status
rhel-7-server-rhui-rh-common-rpms/7Server/x86_64 Red Hat Enterprise Linux 7 Server - RH Common from RHUI ( 239
rhel-7-server-rhui-rpms/7Server/x86_64 Red Hat Enterprise Linux 7 Server from RHUI (RPMs) 26,500
rhui-REGION-jbcs-rhui-rhel-7-rpms/7Server/x86_64 JBoss JBCS for RHUI RHEL 7 (RPMs) 208
rhui-REGION-jbeap-7.1-rhui-rhel-7-rpms/7Server/x86_64 JBoss EAP 7.1 for RHUI RHEL 7 (RPMs) 1,783
rhui-client-config-server-7/x86_64 Custom Repositories - Red Hat Update Infrastructure 3 Cli 3
repolist: 28,733
It looks like an update may have changed some repo info; note the files with the .rpmsave extension:

[root@ip-10-0-2-211 yum.repos.d]# ls -l
total 56
-rw-r--r--. 1 root root 358 Oct 23 19:09 redhat.repo
-rw-r--r--. 1 root root 4745 Oct 11 12:04 redhat-rhui-beta.repo.disabled
-rw-r--r--. 1 root root 657 Dec 21 2017 redhat-rhui-client-config-jbeap-7.1.repo
-rw-r--r--. 1 root root 504 Oct 23 19:17 redhat-rhui-client-config.repo
-rw-r--r--. 1 root root 607 Oct 23 18:50 redhat-rhui-client-config.repo.rpmsave
-rw-r--r--. 1 root root 2990 Dec 21 2017 redhat-rhui-jbeap-7.1.repo
-rw-r--r--. 1 root root 10142 Oct 23 19:52 redhat-rhui.repo
-rw-r--r--. 1 root root 8679 Oct 23 18:55 redhat-rhui.repo.rpmsave
-rw-r--r--. 1 root root 80 Oct 23 18:50 rhui-load-balancers.conf.rpmsave

from redhat-rhui.repo.rpmsave

[rhui-REGION-rhel-server-rhscl]
name=Red Hat Enterprise Linux Server 7 RHSCL (RPMs)
mirrorlist=https://rhui2-cds01.REGION.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/$releasever/$basearch/rhscl/1/os
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
sslverify=1
sslclientkey=/etc/pki/rhui/content-rhel7.key
sslclientcert=/etc/pki/rhui/product/content-rhel7.crt
sslcacert=/etc/pki/rhui/cdn.redhat.com-chain.crt

from redhat-rhui.repo

[rhel-server-rhui-rhscl-7-rpms]
name=Red Hat Software Collections RPMs for Red Hat Enterprise Linux 7 Server from RHUI
mirrorlist=https://rhui3.REGION.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/$releasever/$basearch/rhscl/1/os
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
sslverify=1
sslcacert=/etc/pki/rhui/cdn.redhat.com-chain.crt
sslclientcert=/etc/pki/rhui/product/content-rhel7.crt
sslclientkey=/etc/pki/rhui/content-rhel7.key

I edited redhat-rhui.repo to enable the SCL repo.
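
For reference, the same edit could be scripted with Ansible's ini_file module (a sketch, using the repo file and section id shown above):

- name: Enable the RHSCL repo from RHUI
  ini_file:
    path: /etc/yum.repos.d/redhat-rhui.repo
    section: rhel-server-rhui-rhscl-7-rpms
    option: enabled
    value: "1"
  become: yes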

[root@ip-10-0-2-211 yum.repos.d]# yum repolist
Loaded plugins: amazon-id, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
repo id repo name status
rhel-7-server-rhui-rh-common-rpms/7Server/x86_64 Red Hat Enterprise Linux 7 Server - RH Common from RHUI ( 239
rhel-7-server-rhui-rpms/7Server/x86_64 Red Hat Enterprise Linux 7 Server from RHUI (RPMs) 26,500
rhel-server-rhui-rhscl-7-rpms/7Server/x86_64 Red Hat Software Collections RPMs for Red Hat Enterprise 11,415
rhui-REGION-jbcs-rhui-rhel-7-rpms/7Server/x86_64 JBoss JBCS for RHUI RHEL 7 (RPMs) 208
rhui-REGION-jbeap-7.1-rhui-rhel-7-rpms/7Server/x86_64 JBoss EAP 7.1 for RHUI RHEL 7 (RPMs) 1,783
rhui-client-config-server-7/x86_64 Custom Repositories - Red Hat Update Infrastructure 3 Cli 3
repolist: 40,148

Then, when I reran the Maven install, it worked:

[ec2-user@ip-10-0-2-190 ~]$ ansible web -m package -a "name=rh-maven35 state=present" -b
east2.node.0.redhatgov.io | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"changes": {
"installed": [
"rh-maven35"
]
},
"msg": "",
"rc": 0,
"results": [
"Loaded plugins: amazon-id, product-id, search-disabled-repos, subscription-\n : manager\nThis system is not registered with an

So somehow the repo info is getting changed.

RHEL 8 AWS Workshop | Change to use of aws.create role breaks deployment in region ap-southeast-2

The following change removed previous commits that had added support for deploying the RHEL 8 workshop into the AWS region ap-southeast-2. The previous commits can be seen here:

3164301#diff-d1d8121193f65fab18245bee99530f6533cd7e9744306eb42e93a7424e4314ae

These changes were originally added in PR 162 and were lost by the change in this commit.

I may need some advice on fixing this, @ajacocks? We may need to go back and diff the now-deleted aws.create role in this workshop against the version it refers to now. Separately, I will make the code changes needed to run this week's workshop in ap-southeast-2 in a branch on my own fork.

RHEL 8 workshop | Integrate container into systemd fails

In Ex 1.8, Section 5 (Use skopeo and podman to integrate the container into systemd), when you get to Step 3 (Integrate container into systemd), the command to enable and start container-web.service fails.

[ec2-user@ip-10-0-2-184 ~]$ sudo systemctl enable --now container-web.service
Created symlink /etc/systemd/system/multi-user.target.wants/container-web.service → /etc/systemd/system/container-web.service.
Created symlink /etc/systemd/system/default.target.wants/container-web.service → /etc/systemd/system/container-web.service.
Job for container-web.service failed because the control process exited with error code.
See "systemctl status container-web.service" and "journalctl -xe" for details.

It appears that because fapolicyd is enabled by the OSPP profile for OpenSCAP, running containers on the system is blocked.

If we turn off fapolicyd and try again, the container starts.

[ec2-user@ip-10-0-2-184 ~]$ sudo systemctl status fapolicyd
● fapolicyd.service - File Access Policy Daemon
   Loaded: loaded (/usr/lib/systemd/system/fapolicyd.service; enabled; vend>
   Active: active (running) since Thu 2021-03-04 16:07:02 UTC; 38min ago
 Main PID: 41340 (fapolicyd)
    Tasks: 4 (limit: 10899)
   Memory: 76.1M
   CGroup: /system.slice/fapolicyd.service
           └─41340 /usr/sbin/fapolicyd
[ec2-user@ip-10-0-2-184 ~]$ sudo systemctl stop fapolicyd
[ec2-user@ip-10-0-2-184 ~]$ sudo systemctl enable --now container-web.service
[ec2-user@ip-10-0-2-184 ~]$ sudo systemctl status container-web.service 
● container-web.service - Podman container-web.service
   Loaded: loaded (/etc/systemd/system/container-web.service; enabled; vend>
   Active: active (running) since Thu 2021-03-04 16:46:09 UTC; 15s ago
     Docs: man:podman-generate-systemd(1)
  Process: 46448 ExecStart=/usr/bin/podman start web (code=exited, status=0>
 Main PID: 46544 (conmon)
    Tasks: 2 (limit: 10899)
   Memory: 3.6M
   CGroup: /system.slice/container-web.service
           └─46544 /usr/bin/conmon --api-version 1 -c 168b5c70e24f064894b7

Separately, the container-web.service shows the following:

Error: unable to start container "168b5c70e24f064894b7f2c7b869bcd81c6ba28e31a9182dfa87ae0220e5dabc": /usr/bin/runc: error while loading shared libraries: libpthread.so.0: cannot open shared object file: Operation not permitted: OCI permission denied

We found this open Bugzilla issue, Bug 1907870 - cannot run podman in 8.3 (https://bugzilla.redhat.com/show_bug.cgi?id=1907870), which seems to reflect the current status.

We have a workshop with a customer next week. We can explain the issue during the workshop, but if there is any suggested workaround other than that, please let me know.
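
If stopping fapolicyd is acceptable for the workshop, a sketch of automating the temporary workaround described above (it trades the OSPP file-access control for working containers, so treat it as workshop-only):

- name: Temporarily stop and disable fapolicyd so the container service can start
  systemd:
    name: fapolicyd
    state: stopped
    enabled: no
  become: yes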

Wetty install no longer working

The Wetty install no longer works properly. The role appears to run its commands successfully, but in reality the commands are failing. Specifically, the global install is what fails, which cascades into further failures and a fatal failure of the role.

As a workaround, so I could use this for a workshop, I added ignore_errors to the following tasks within the wetty role:
name: Patch hterm_all.js to handle iOS spaces
name: Verify that wetty is listening

Additionally, I changed the sshd config to allow password-based logins, letting students use a standard SSH client. Obviously these are temporary fixes until I or someone else can figure out how to get Wetty working properly.
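
A sketch of the sshd change mentioned above, in case it is useful to others (it assumes students then log in over standard SSH with the workshop password):

- name: Allow password-based SSH logins as a temporary fallback to Wetty
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?PasswordAuthentication'
    line: PasswordAuthentication yes
  become: yes

- name: Restart sshd to pick up the config change
  service:
    name: sshd
    state: restarted
  become: yes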

Install Let's Encrypt! TLS certs for HTTPS endpoints

This addresses two problems we encounter during workshop delivery:

  1. Newer browsers may reject self-signed certs.
  2. Managed browser policies are increasingly prevalent, and students with corporate laptops are blocked from accessing workshops that use self-signed certs.
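
One possible implementation, as a sketch only (the domain and e-mail are placeholders, and the host would need port 80/443 reachable for the HTTP challenge):

- name: Request a Let's Encrypt certificate with certbot (placeholder domain and e-mail)
  command: >
    certbot certonly --standalone --non-interactive --agree-tos
    -m admin@example.com -d "{{ workshop_prefix }}.example.redhatgov.io"
  become: yes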

Move ansible roles to new roles directory in the repo

I'm planning to move all the generally useful roles to a directory called 'roles' in this repo. After reading up on Git submodules and all the issues with them, that seems to be the best path.

If we decide to split them off into their own repo later, it's fairly easy to do. Please comment soon, as I plan to make the changes this week.

hostvars[inventory_hostname]['ec2_tag_Index'] not working

The expression hostvars[inventory_hostname]['ec2_tag_Index'], used in many places, is not working. It appears this fact is no longer present in hostvars:

TASK [wetty : Copy cert.pem to Wetty dir] ***************************************************************************************************************************************
fatal: [redhatgovbr.0.redhatbr.io]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'ec2_tag_Index'\n\nThe error appears to have been in '/home/rsoares/workshops/RedHatGov/redhatgov.workshops/roles/wetty/tasks/main.yml': line 57, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n#-------------------------------------------------------\n- name: Copy cert.pem to Wetty dir\n  ^ here\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: 'dict object' has no attribute 'ec2_tag_Index'"}
	to retry, use: --limit @/home/rsoares/.ansible-retry/2_configure.retry
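
If the inventory has moved from the old ec2.py dynamic inventory (which exposed ec2_tag_* host variables) to the aws_ec2 inventory plugin, tags are exposed as a tags dictionary instead. A hedged diagnostic sketch to see what is actually available for the host:

- name: Show the tags available for this host (diagnostic only)
  debug:
    msg: "{{ hostvars[inventory_hostname]['tags'] | default('no tags variable for this host') }}"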

Workshop won't deploy with Terraform >= 0.10.0

When you try to deploy the workshop with Terraform 0.10.0 or newer, you get the following error, indicating that the required provider plugins are missing:

TASK [terraform.infra.aws : Terraform apply (build/change infrastructure)] *****
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["terraform", "apply"], "delta": "0:00:00.020314", "end": "2017-09-27 16:09:08.015834", "failed": true, "rc": 1, "start": "2017-09-27 16:09:07.995520", "stderr": "\u001b[31merror satisfying plugin requirements\u001b[0m\u001b[0m", "stderr_lines": ["\u001b[31merror satisfying plugin requirements\u001b[0m\u001b[0m"], "stdout": "\u001b[0m\u001b[1m\u001b[33mPlugin reinitialization required. Please run \"terraform init\".\u001b[0m\n\u001b[33mReason: Could not satisfy plugin requirements.\n\nPlugins are external binaries that Terraform uses to access and manipulate\nresources. The configuration provided requires plugins which can't be located,\ndon't satisfy the version constraints, or are otherwise incompatible.\n\n\u001b[0m\u001b[31m2 error(s) occurred:\n\n* provider.aws: no suitable version installed\n  version requirements: \"(any version)\"\n  versions installed: none\n* provider.template: no suitable version installed\n  version requirements: \"(any version)\"\n  versions installed: none\n\n\u001b[0m\u001b[33mTerraform automatically discovers provider requirements from your\nconfiguration, including providers used in child modules. To see the\nrequirements and constraints from each module, run \"terraform providers\".\n\u001b[0m", "stdout_lines": ["\u001b[0m\u001b[1m\u001b[33mPlugin reinitialization required. Please run \"terraform init\".\u001b[0m", "\u001b[33mReason: Could not satisfy plugin requirements.", "", "Plugins are external binaries that Terraform uses to access and manipulate", "resources. The configuration provided requires plugins which can't be located,", "don't satisfy the version constraints, or are otherwise incompatible.", "", "\u001b[0m\u001b[31m2 error(s) occurred:", "", "* provider.aws: no suitable version installed", "  version requirements: \"(any version)\"", "  versions installed: none", "* provider.template: no suitable version installed", "  version requirements: \"(any version)\"", "  versions installed: none", "", "\u001b[0m\u001b[33mTerraform automatically discovers provider requirements from your", "configuration, including providers used in child modules. To see the", "requirements and constraints from each module, run \"terraform providers\".", "\u001b[0m"]}
        to retry, use: --limit @/home/ajacocks/.ansible-retry/site.retry

The issue is that the default plugin storage directory changed in Terraform 0.10. Reference this upstream Terraform issue: 15705.
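
A sketch of one way to address this in the role: run terraform init before the apply so the provider plugins are installed into the new plugin directory (the chdir variable name is illustrative):

- name: Initialize Terraform providers (required for Terraform >= 0.10)
  command: terraform init -input=false
  args:
    chdir: "{{ terraform_working_dir }}"   # illustrative; point at the directory that holds the .tf files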

2_load.yml playbook execution failing for rhel_aws workshop due to undefined variable

When running the 2_load.yml playbook, execution fails twice when it encounters the undefined variable beta. The complete files are attached; to get around the issue, I modified two files:

../ansible_tower_aws/roles/ansible.tower/tasks/nodes_setup.yml

BEFORE

- name: Enable Ansible repo
  rhsm_repository:
    name: "{{ ansible_repo }}"
  when: not beta|bool

AFTER

- name: Enable Ansible repo
  rhsm_repository:
    name: "{{ ansible_repo }}"

../ansible_tower_aws/roles/subscription_manager/tasks/subscribe.yml

BEFORE

- name: RHEL 8 tasks
  import_tasks: repos-rhel8.yml
  when: not cloud_access and ansible_distribution == 'RedHat' and ansible_distribution_major_version == '8' and not beta|bool

AFTER

- name: RHEL 8 tasks
  import_tasks: repos-rhel8.yml
  when: not cloud_access and ansible_distribution == 'RedHat' and ansible_distribution_major_version == '8'

load.error.txt
load-2.error.txt
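
An alternative to deleting the conditionals would be to give beta a default, so the when: clauses still evaluate when the variable is not defined, e.g.:

- name: Enable Ansible repo
  rhsm_repository:
    name: "{{ ansible_repo }}"
  when: not (beta | default(false) | bool)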

wetty build in west 1 fails

TASK [wetty : Build wetty] *****************************************************
fatal: [ames.tower.0.redhatgov.io]: FAILED! => {"changed": true, "cmd": "yarn && yarn build", "delta": "0:00:01.370765", "end": "2019-10-22 19:56:58.845790", "msg": "non-zero return code", "rc": 1, "start": "2019-10-22 19:56:57.475025", "stderr": "error An unexpected error occurred: "https://registry.yarnpkg.com/es-abstract/-/es-abstract-1.14.0.tgz: Request failed \"404 Not Found\"".", "stderr_lines": ["error An unexpected error occurred: "https://registry.yarnpkg.com/es-abstract/-/es-abstract-1.14.0.tgz: Request failed \"404 Not Found\""."], "stdout": "yarn install v1.19.1\n[1/5] Validating package.json...\n[2/5] Resolving packages...\n[3/5] Fetching packages...\ninfo If you think this is a bug, please open a bug report with the information provided in "/home/ec2-user/wetty/yarn-error.log".\ninfo Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.", "stdout_lines": ["yarn install v1.19.1", "[1/5] Validating package.json...", "[2/5] Resolving packages...", "[3/5] Fetching packages...", "info If you think this is a bug, please open a bug report with the information provided in "/home/ec2-user/wetty/yarn-error.log".", "info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command."]}
fatal: [ames.tower.1.redhatgov.io]: FAILED! => {"changed": true, "cmd": "yarn && yarn build", "delta": "0:00:01.338989", "end": "2019-10-22 19:56:59.018340", "msg": "non-zero return code", "rc": 1, "start": "2019-10-22 19:56:57.679351", "stderr": "error An unexpected error occurred: "https://registry.yarnpkg.com/es-abstract/-/es-abstract-1.14.0.tgz: Request failed \"404 Not Found\"".", "stderr_lines": ["error An unexpected error occurred: "https://registry.yarnpkg.com/es-abstract/-/es-abstract-1.14.0.tgz: Request failed \"404 Not Found\""."], "stdout": "yarn install v1.19.1\n[1/5] Validating package.json...\n[2/5] Resolving packages...\n[3/5] Fetching packages...\ninfo If you think this is a bug, please open a bug report with the information provided in "/home/ec2-user/wetty/yarn-error.log".\ninfo Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.", "stdout_lines": ["yarn install v1.19.1", "[1/5] Validating package.json...", "[2/5] Resolving packages...", "[3/5] Fetching packages...", "info If you think this is a bug, please open a bug report with the information provided in "/home/ec2-user/wetty/yarn-error.log".", "info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command."]}

PLAY RECAP *********************************************************************
ames.node.0.redhatgov.io : ok=2 changed=1 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0
ames.node.1.redhatgov.io : ok=2 changed=1 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0
ames.tower.0.redhatgov.io : ok=46 changed=4 unreachable=0 failed=1 skipped=34 rescued=0 ignored=0
ames.tower.1.redhatgov.io : ok=46 changed=4 unreachable=0 failed=1 skipped=34 rescued=0 ignored=0
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

I have tried the same commands from my laptop and they work fine, but they don't work on a Tower VM.

It fails in the Build wetty task:

- hosts: localhost
  tasks:
    - name: Build wetty
      shell: yarn && yarn build
      args:
        chdir: "wetty"
        creates: "dist/client"

[root@ip-10-0-2-91 ~]# cd /home/ec2-user/wetty/
[root@ip-10-0-2-91 wetty]# ls
bin Dockerfile-ssh LICENSE src webpack.config.babel.js
docker-compose.yml docs package.json terminal.png yarn-error.log
Dockerfile index.js README.md tsconfig.json yarn.lock
[root@ip-10-0-2-91 wetty]# yarn
yarn install v1.19.1
[1/5] Validating package.json...
[2/5] Resolving packages...
[3/5] Fetching packages...
error An unexpected error occurred: "https://registry.yarnpkg.com/es-abstract/-/es-abstract-1.14.0.tgz: Request failed "404 Not Found"".
info If you think this is a bug, please open a bug report with the information provided in "/home/ec2-user/wetty/yarn-error.log".
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
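
If the 404 comes from a stale cached registry entry on the Tower VM, one thing worth trying (a guess, not a confirmed fix) is clearing the yarn cache before retrying the build:

- name: Clear the yarn cache and retry the wetty build
  shell: yarn cache clean && yarn && yarn build
  args:
    chdir: /home/ec2-user/wetty
  become: yes
  become_user: ec2-user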

terraform failing when vault.yml is encrypted

Running ansible-playbook 1_provision.yml produces the following excerpt of an error (the actual error output is very long).

Error: Error applying plan:\n\n4 error(s) occurred:\n\n* aws_route53_record.master: 1 error(s) occurred:\n\n* aws_route53_record.master: NoSuchHostedZone: No hosted zone found with ID:  Z24HHVIM122O

While troubleshooting, I decided NOT to encrypt the group_vars/all/vault.yml file, and the Create Infrastructure Using Terraform task then completed without errors.
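
If the vault is to stay encrypted, one thing to check (an assumption, not a confirmed root cause) is that the provisioning run is given the vault password, so the Route 53 zone variable is decrypted before it reaches Terraform:

ansible-playbook 1_provision.yml --ask-vault-pass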

openshift_terraform RHSM error handling

The registration logic needs to be enhanced to handle different error conditions better. For example, the error below is ignored when it should fail the play, because the activation key did not have a subscription attached:

TASK [openshift.prereq : Register host via Activation key] ***********************************
fatal: [node.0.dca-ocp101.redhatgov.io]: FAILED! => {"changed": false, "cmd": "/sbin/subscription-manager register --org 11294799 --activationkey VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "msg": "None of the subscriptions on the activation key were available for attaching.", "rc": 70, "stderr": "None of the subscriptions on the activation key were available for attaching.\n", "stderr_lines": ["None of the subscriptions on the activation key were available for attaching."], "stdout": "", "stdout_lines": []}
...ignoring
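
A sketch of the kind of tightening being asked for (variable names are illustrative): register the command result and replace the blanket ignore_errors with an explicit failure condition, so a non-zero return such as rc 70 fails the play:

- name: Register host via Activation key
  command: /sbin/subscription-manager register --org {{ rhsm_org_id }} --activationkey {{ rhsm_activation_key }}
  register: rhsm_register
  no_log: true
  failed_when: rhsm_register.rc != 0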

ansible_tower_aws Mac OS pre-reqs

It is worth noting that, at least for macOS (and possibly RHEL, per the ec2 module requirements), "boto3" is also a required package that must be installed. For macOS, simply adding the command below to the existing instructions resolves the issue.

sudo pip install boto3

Don't use underscore "_" in the workshop name.

Ran into this today with a workshop: there's a known issue with Apache and underscores in the workshop-name portion of the machine name. If you try to hit the web page, you get a 400 error; if you use the public-facing IP address, it works fine.

This line:

workshop_prefix : defaults to "tower", set to the name of your workshop

should say:

workshop_prefix : defaults to "tower", set to the name of your workshop - NO underscores!

I have changed the file and submitted it.
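
Beyond the documentation change, a pre-flight check could catch this before provisioning; a sketch:

- name: Fail early if workshop_prefix contains an underscore
  assert:
    that:
      - "'_' not in workshop_prefix"
    fail_msg: "workshop_prefix must not contain underscores; Apache rejects such hostnames with a 400 error"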

student_cli playbooks/roles not working...

I was able to provision the student_cli workshop (after some hacks not documented in the README :-)), but I'm not able to run the 2_configure.yml playbook.

I'm getting the following error:

TASK [subscription_manager : Add DNS entries for subscription] *************************************************************************************************
task path: /home/rsoares/workshops/RedHatGov/redhatgov.workshops/student_cli/roles/subscription_manager/tasks/main.yml:7
fatal: [redhatgovbr.0.redhatbr.io]: FAILED! => {
    "msg": "The conditional check 'prep' failed. The error was: error while evaluating conditional (prep): 'prep' is undefined\n\nThe error appears to have been in '/home/rsoares/workshops/RedHatGov/redhatgov.workshops/student_cli/roles/subscription_manager/tasks/main.yml': line 7, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n#---------------------------------------------------\n- name: Add DNS entries for subscription\n  ^ here\n"
}
	to retry, use: --limit @/home/rsoares/.ansible-retry/2_configure.retry

PLAY RECAP *****************************************************************************************************************************************************
redhatgovbr.0.redhatbr.io  : ok=1    changed=0    unreachable=0    failed=1

It appears some tasks in the subscription_manager role reference a variable named prep, which is not defined anywhere.

see: https://github.com/RedHatGov/redhatgov.workshops/blob/master/student_cli/roles/subscription_manager/tasks/main.yml#L20

Can someone guide me here? Is this workshop (student_cli) still valid, or should I work with another one?
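
Until the role is fixed, a possible interim workaround (an assumption about the intended behavior, so verify against the role) is to pass the variable explicitly on the command line:

ansible-playbook 2_configure.yml -e prep=false

(or -e prep=true, depending on which branch of the role the workshop actually needs).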

ansible_tower replaced by ansible_tower_aws

Since ansible_tower has been replaced by ansible_tower_aws, I recommend removing the old directory, as it is out of date and missing variables used by the latest workshop.

yum update failing for RHEL workshop when RHEL version is 8.3

It appears that the xorgxrdp package from the EPEL repository is not compatible with RHEL 8.3.

ansible-playbook 2_load.yml

...
TASK [Gathering Facts] *************************************************************************************************************
ok: [asnellrhel8ws.node.0.rhnaps.io]

TASK [upgrade : Upgrade all packages to latest] ************************************************************************************
changed: [asnellrhel8ws.node.0.rhnaps.io]

TASK [upgrade : Check on Package Updates ...] **************************************************************************************
FAILED - RETRYING: Check on Package Updates ... (180 retries left).
fatal: [asnellrhel8ws.node.0.rhnaps.io]: FAILED! => {"ansible_job_id": "715022773479.129659", "attempts": 2, "changed": false, "failures": [], "finished": 1, "msg": "Depsolve Error occured: \n Problem 1: package xorgxrdp-0.2.14-2.el8.x86_64 requires xorg-x11-server-Xorg(x86-64) = 1.20.6, but none of the providers can be installed\n - package gdm-1:3.28.3-34.el8.x86_64 conflicts with xorg-x11-server-Xorg < 1.20.8-4 provided by xorg-x11-server-Xorg-1.20.6-3.el8.x86_64\n - cannot install the best update candidate for package xorgxrdp-0.2.14-2.el8.x86_64\n - cannot install the best update candidate for package gdm-1:3.28.3-29.el8.x86_64\n Problem 2: problem with installed package xorgxrdp-0.2.14-2.el8.x86_64\n - package xorgxrdp-0.2.14-2.el8.x86_64 requires xorg-x11-server-Xorg(x86-64) = 1.20.6, but none of the providers can be installed\n - cannot install both xorg-x11-server-Xorg-1.20.8-6.el8.x86_64 and xorg-x11-server-Xorg-1.20.6-3.el8.x86_64\n - cannot install the best update candidate for package xorg-x11-server-Xorg-1.20.6-3.el8.x86_64", "rc": 1, "results": []}

PLAY RECAP *************************************************************************************************************************
asnellrhel8ws.node.0.rhnaps.io : ok=67 changed=25 unreachable=0 failed=1 skipped=63 rescued=0 ignored=0
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Logging in to the provisioned host:

[root@ip-10-0-2-175 ~]# yum list xorg-x11-server-Xorg
Updating Subscription Management repositories.
Last metadata expiration check: 0:27:32 ago on Sat 14 Nov 2020 12:42:22 AM UTC.
Installed Packages
xorg-x11-server-Xorg.x86_64 1.20.6-3.el8 @rhel-8-for-x86_64-appstream-rpms
Available Packages
xorg-x11-server-Xorg.x86_64 1.20.8-6.el8 rhel-8-for-x86_64-appstream-rpms
[root@ip-10-0-2-175 ~]# yum update
Updating Subscription Management repositories.
Last metadata expiration check: 0:30:56 ago on Sat 14 Nov 2020 12:42:22 AM UTC.
Error:
Problem 1: package xorgxrdp-0.2.14-2.el8.x86_64 requires xorg-x11-server-Xorg(x86-64) = 1.20.6, but none of the providers can be installed

  • package gdm-1:3.28.3-34.el8.x86_64 conflicts with xorg-x11-server-Xorg < 1.20.8-4 provided by xorg-x11-server-Xorg-1.20.6-3.el8.x86_64
  • cannot install the best update candidate for package xorgxrdp-0.2.14-2.el8.x86_64
  • cannot install the best update candidate for package gdm-1:3.28.3-29.el8.x86_64
    Problem 2: problem with installed package xorgxrdp-0.2.14-2.el8.x86_64
  • package xorgxrdp-0.2.14-2.el8.x86_64 requires xorg-x11-server-Xorg(x86-64) = 1.20.6, but none of the providers can be installed
  • cannot install both xorg-x11-server-Xorg-1.20.8-6.el8.x86_64 and xorg-x11-server-Xorg-1.20.6-3.el8.x86_64
  • cannot install the best update candidate for package xorg-x11-server-Xorg-1.20.6-3.el8.x86_64
    (try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
    [root@ip-10-0-2-175 ~]# yum list xorgxrdp
    Updating Subscription Management repositories.
    Last metadata expiration check: 0:31:06 ago on Sat 14 Nov 2020 12:42:22 AM UTC.
    Installed Packages
    xorgxrdp.x86_64 0.2.14-2.el8 @epel
    [root@ip-10-0-2-175 ~]# cat /etc/redhat-release
    Red Hat Enterprise Linux release 8.3 (Ootpa)

As a workaround, I modified the 'upgrade' role to add a task that runs the following (admittedly a bad temporary workaround):

yum update --allowerasing

Change made to file: redhatgov.workshops/roles/upgrade/tasks/main.yml

BEFORE:


# tasks file for upgrade

- name: Upgrade all packages to latest
  package:
    name: "*"
    state: latest
  async: 10800
  poll: 0
  register: package_update_status

AFTER


# tasks file for upgrade

- name: Upgrade packages and reboot
  block:
    - name: Workaround for 8.3
      command: yum update -y --allowerasing

    - name: Upgrade all packages to latest
      package:
        name: "*"
        state: latest
      async: 10800
      poll: 0
      register: package_update_status

...

    - name: Rebooting to apply kernel updates (<2.7)
      shell: /sbin/shutdown -r +1
      when: ansible_version.full < "2.7" and job_result is changed

    - name: Force reboot for 8.3
      reboot:

    - name: Wait for system to become reachable again
      wait_for_connection:
        timeout: 900
        delay: 120
        sleep: 5

when: upgrade == 1

when: ansible_version.full < "2.7" and job_result is changed

This removes the xorgxrdp package; hopefully this doesn't affect the workshop.

I suppose an alternative would be to pin the RHEL version to 8.2 using something like:

subscription-manager release --set=8.2
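
If pinning the release is preferred, the same thing could be expressed in the role with the rhsm_release module (a sketch):

- name: Pin the RHEL release to 8.2 so the EPEL xorgxrdp package stays installable
  rhsm_release:
    release: "8.2"
  become: yes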

The play should check for an existing vault_cluster_name and inform the user

TASK [aws.infra : Create infrastructure using Terraform] **********************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "command": "/usr/local/bin/terraform apply -no-color -input=false -auto-approve=true -lock=true /tmp/tmpj4IPYh.tfplan", "msg": "Failure when executing Terraform command. Exited 1.\nstdout: aws_vpc.main: Creating...\n  arn:                              \"\" => \"<computed>\"\n  assign_generated_ipv6_cidr_block: \"\" => \"false\"\n  cidr_block:                       \"\" => \"10.0.0.0/16\"\n  default_network_acl_id:           \"\" => \"<computed>\"\n  default_route_table_id:           \"\" => \"<computed>\"\n  default_security_group_id:        \"\" => \"<computed>\"\n  dhcp_options_id:                  \"\" => \"<computed>\"\n  enable_classiclink:               \"\" => \"<computed>\"\n  enable_classiclink_dns_support:   \"\" => \"<computed>\"\n  enable_dns_hostnames:             \"\" => \"true\"\n  enable_dns_support:               \"\" => \"true\"\n  instance_tenancy:                 \"\" => \"default\"\n  ipv6_association_id:              \"\" => \"<computed>\"\n  ipv6_cidr_block:                  \"\" => \"<computed>\"\n  main_route_table_id:              \"\" => \"<computed>\"\n  owner_id:                         \"\" => \"<computed>\"\n  tags.%:                           \"\" => \"1\"\n  tags.Name:                        \"\" => \"tosc.redhatgov.io-openshift-vpc\"\naws_vpc.main: Creation complete after 4s (ID: vpc-0d5afb662794c4c7a)\naws_internet_gateway.public: Creating...\n  owner_id:  \"\" => \"<computed>\"\n  tags.%:    \"0\" => \"1\"\n  tags.Name: \"\" => \"tosc.redhatgov.io-openshift-igw\"\n  vpc_id:    \"\" => \"vpc-0d5afb662794c4c7a\"\naws_subnet.public: Creating...\n  arn:                             \"\" => \"<computed>\"\n  assign_ipv6_address_on_creation: \"\" => \"false\"\n  availability_zone:               \"\" => \"<computed>\"\n  availability_zone_id:            \"\" => \"<computed>\"\n  cidr_block:                      \"\" => \"10.0.2.0/24\"\n  ipv6_cidr_block:                 \"\" => \"<computed>\"\n  ipv6_cidr_block_association_id:  \"\" => \"<computed>\"\n  map_public_ip_on_launch:         \"\" => \"true\"\n  owner_id:                        \"\" => \"<computed>\"\n  tags.%:                          \"\" => \"1\"\n  tags.Name:                       \"\" => \"tosc.redhatgov.io-openshift-public-subnet\"\n  vpc_id:                          \"\" => \"vpc-0d5afb662794c4c7a\"\naws_security_group.ose-sg: Creating...\n  arn:                                  \"\" => \"<computed>\"\n  description:                          \"\" => \"Managed by Terraform\"\n  egress.#:                             \"\" => \"1\"\n  egress.482069346.cidr_blocks.#:       \"\" => \"1\"\n  egress.482069346.cidr_blocks.0:       \"\" => \"0.0.0.0/0\"\n  egress.482069346.description:         \"\" => \"\"\n  egress.482069346.from_port:           \"\" => \"0\"\n  egress.482069346.ipv6_cidr_blocks.#:  \"\" => \"0\"\n  egress.482069346.prefix_list_ids.#:   \"\" => \"0\"\n  egress.482069346.protocol:            \"\" => \"-1\"\n  egress.482069346.security_groups.#:   \"\" => \"0\"\n  egress.482069346.self:                \"\" => \"false\"\n  egress.482069346.to_port:             \"\" => \"0\"\n  ingress.#:                            \"\" => \"1\"\n  ingress.482069346.cidr_blocks.#:      \"\" => \"1\"\n  ingress.482069346.cidr_blocks.0:      \"\" => \"0.0.0.0/0\"\n  ingress.482069346.description:        \"\" => \"\"\n  ingress.482069346.from_port:          \"\" => \"0\"\n  ingress.482069346.ipv6_cidr_blocks.#: \"\" => \"0\"\n  
ingress.482069346.prefix_list_ids.#:  \"\" => \"0\"\n  ingress.482069346.protocol:           \"\" => \"-1\"\n  ingress.482069346.security_groups.#:  \"\" => \"0\"\n  ingress.482069346.self:               \"\" => \"false\"\n  ingress.482069346.to_port:            \"\" => \"0\"\n  name:                                 \"\" => \"ose-sg\"\n  owner_id:                             \"\" => \"<computed>\"\n  revoke_rules_on_delete:               \"\" => \"false\"\n  vpc_id:                               \"\" => \"vpc-0d5afb662794c4c7a\"\naws_internet_gateway.public: Creation complete after 2s (ID: igw-0791896b958d7777c)\naws_route_table.public: Creating...\n  owner_id:                                   \"\" => \"<computed>\"\n  propagating_vgws.#:                         \"\" => \"<computed>\"\n  route.#:                                    \"\" => \"1\"\n  route.3091792144.cidr_block:                \"\" => \"0.0.0.0/0\"\n  route.3091792144.egress_only_gateway_id:    \"\" => \"\"\n  route.3091792144.gateway_id:                \"\" => \"igw-0791896b958d7777c\"\n  route.3091792144.instance_id:               \"\" => \"\"\n  route.3091792144.ipv6_cidr_block:           \"\" => \"\"\n  route.3091792144.nat_gateway_id:            \"\" => \"\"\n  route.3091792144.network_interface_id:      \"\" => \"\"\n  route.3091792144.transit_gateway_id:        \"\" => \"\"\n  route.3091792144.vpc_peering_connection_id: \"\" => \"\"\n  tags.%:                                     \"\" => \"1\"\n  tags.Name:                                  \"\" => \"tosc.redhatgov.io-openshift-public-rt\"\n  vpc_id:                                     \"\" => \"vpc-0d5afb662794c4c7a\"\naws_subnet.public: Creation complete after 2s (ID: subnet-021e55765b854f643)\naws_security_group.ose-sg: Creation complete after 3s (ID: sg-05ac93271edb3f733)\naws_instance.ose-master: Creating...\n  ami:                                               \"\" => \"ami-000db10762d0c4c05\"\n  arn:                                               \"\" => \"<computed>\"\n  associate_public_ip_address:                       \"\" => \"<computed>\"\n  availability_zone:                                 \"\" => \"<computed>\"\n  cpu_core_count:                                    \"\" => \"<computed>\"\n  cpu_threads_per_core:                              \"\" => \"<computed>\"\n  ebs_block_device.#:                                \"\" => \"1\"\n  ebs_block_device.3905984573.delete_on_termination: \"\" => \"true\"\n  ebs_block_device.3905984573.device_name:           \"\" => \"/dev/xvdb\"\n  ebs_block_device.3905984573.encrypted:             \"\" => \"<computed>\"\n  ebs_block_device.3905984573.iops:                  \"\" => \"\"\n  ebs_block_device.3905984573.kms_key_id:            \"\" => \"<computed>\"\n  ebs_block_device.3905984573.snapshot_id:           \"\" => \"<computed>\"\n  ebs_block_device.3905984573.volume_id:             \"\" => \"<computed>\"\n  ebs_block_device.3905984573.volume_size:           \"\" => \"50\"\n  ebs_block_device.3905984573.volume_type:           \"\" => \"gp2\"\n  ephemeral_block_device.#:                          \"\" => \"<computed>\"\n  get_password_data:                                 \"\" => \"false\"\n  host_id:                                           \"\" => \"<computed>\"\n  instance_state:                                    \"\" => \"<computed>\"\n  instance_type:                                     \"\" => \"m4.2xlarge\"\n  ipv6_address_count:                                \"\" => \"<computed>\"\n  ipv6_addresses.#:           
                       \"\" => \"<computed>\"\n  key_name:                                          \"\" => \"tosc\"\n  network_interface.#:                               \"\" => \"<computed>\"\n  network_interface_id:                              \"\" => \"<computed>\"\n  password_data:                                     \"\" => \"<computed>\"\n  placement_group:                                   \"\" => \"<computed>\"\n  primary_network_interface_id:                      \"\" => \"<computed>\"\n  private_dns:                                       \"\" => \"<computed>\"\n  private_ip:                                        \"\" => \"<computed>\"\n  public_dns:                                        \"\" => \"<computed>\"\n  public_ip:                                         \"\" => \"<computed>\"\n  root_block_device.#:                               \"\" => \"1\"\n  root_block_device.0.delete_on_termination:         \"\" => \"true\"\n  root_block_device.0.encrypted:                     \"\" => \"<computed>\"\n  root_block_device.0.kms_key_id:                    \"\" => \"<computed>\"\n  root_block_device.0.volume_id:                     \"\" => \"<computed>\"\n  root_block_device.0.volume_size:                   \"\" => \"72\"\n  root_block_device.0.volume_type:                   \"\" => \"gp2\"\n  security_groups.#:                                 \"\" => \"1\"\n  security_groups.1234085284:                        \"\" => \"sg-05ac93271edb3f733\"\n  source_dest_check:                                 \"\" => \"true\"\n  subnet_id:                                         \"\" => \"subnet-021e55765b854f643\"\n  tags.%:                                            \"\" => \"4\"\n  tags.Name:                                         \"\" => \"master\"\n  tags.kubernetes.io/cluster/tosc:                   \"\" => \"bdsa\"\n  tags.role:                                         \"\" => \"masters\"\n  tags.sshUser:                                      \"\" => \"ec2-user\"\n  tenancy:                                           \"\" => \"<computed>\"\n  user_data:                                         \"\" => \"a003292bc87636bf04c9acb8373c2781cc48a55a\"\n  volume_tags.%:                                     \"\" => \"1\"\n  volume_tags.kubernetes.io/cluster/tosc:            \"\" => \"bdsa\"\n  vpc_security_group_ids.#:                          \"\" => \"<computed>\"\naws_instance.ose-node[0]: Creating...\n  ami:                                               \"\" => \"ami-000db10762d0c4c05\"\n  arn:                                               \"\" => \"<computed>\"\n  associate_public_ip_address:                       \"\" => \"<computed>\"\n  availability_zone:                                 \"\" => \"<computed>\"\n  cpu_core_count:                                    \"\" => \"<computed>\"\n  cpu_threads_per_core:                              \"\" => \"<computed>\"\n  ebs_block_device.#:                                \"\" => \"1\"\n  ebs_block_device.3905984573.delete_on_termination: \"\" => \"true\"\n  ebs_block_device.3905984573.device_name:           \"\" => \"/dev/xvdb\"\n  ebs_block_device.3905984573.encrypted:             \"\" => \"<computed>\"\n  ebs_block_device.3905984573.iops:                  \"\" => \"\"\n  ebs_block_device.3905984573.kms_key_id:            \"\" => \"<computed>\"\n  ebs_block_device.3905984573.snapshot_id:           \"\" => \"<computed>\"\n  ebs_block_device.3905984573.volume_id:             \"\" => \"<computed>\"\n  ebs_block_device.3905984573.volume_size:           
\"\" => \"50\"\n  ebs_block_device.3905984573.volume_type:           \"\" => \"gp2\"\n  ephemeral_block_device.#:                          \"\" => \"<computed>\"\n  get_password_data:                                 \"\" => \"false\"\n  host_id:                                           \"\" => \"<computed>\"\n  instance_state:                                    \"\" => \"<computed>\"\n  instance_type:                                     \"\" => \"m4.xlarge\"\n  ipv6_address_count:                                \"\" => \"<computed>\"\n  ipv6_addresses.#:                                  \"\" => \"<computed>\"\n  key_name:                                          \"\" => \"tosc\"\n  network_interface.#:                               \"\" => \"<computed>\"\n  network_interface_id:                              \"\" => \"<computed>\"\n  password_data:                                     \"\" => \"<computed>\"\n  placement_group:                                   \"\" => \"<computed>\"\n  primary_network_interface_id:                      \"\" => \"<computed>\"\n  private_dns:                                       \"\" => \"<computed>\"\n  private_ip:                                        \"\" => \"<computed>\"\n  public_dns:                                        \"\" => \"<computed>\"\n  public_ip:                                         \"\" => \"<computed>\"\n  root_block_device.#:                               \"\" => \"1\"\n  root_block_device.0.delete_on_termination:         \"\" => \"true\"\n  root_block_device.0.encrypted:                     \"\" => \"<computed>\"\n  root_block_device.0.kms_key_id:                    \"\" => \"<computed>\"\n  root_block_device.0.volume_id:                     \"\" => \"<computed>\"\n  root_block_device.0.volume_size:                   \"\" => \"72\"\n  root_block_device.0.volume_type:                   \"\" => \"gp2\"\n  security_groups.#:                                 \"\" => \"1\"\n  security_groups.1234085284:                        \"\" => \"sg-05ac93271edb3f733\"\n  source_dest_check:                                 \"\" => \"true\"\n  subnet_id:                                         \"\" => \"subnet-021e55765b854f643\"\n  tags.%:                                            \"\" => \"4\"\n  tags.Name:                                         \"\" => \"node-0\"\n  tags.kubernetes.io/cluster/tosc:                   \"\" => \"bdsa\"\n  tags.role:                                         \"\" => \"nodes\"\n  tags.sshUser:                                      \"\" => \"ec2-user\"\n  tenancy:                                           \"\" => \"<computed>\"\n  user_data:                                         \"\" => \"a003292bc87636bf04c9acb8373c2781cc48a55a\"\n  volume_tags.%:                                     \"\" => \"1\"\n  volume_tags.kubernetes.io/cluster/tosc:            \"\" => \"bdsa\"\n  vpc_security_group_ids.#:                          \"\" => \"<computed>\"\naws_instance.ose-node[1]: Creating...\n  ami:                                               \"\" => \"ami-000db10762d0c4c05\"\n  arn:                                               \"\" => \"<computed>\"\n  associate_public_ip_address:                       \"\" => \"<computed>\"\n  availability_zone:                                 \"\" => \"<computed>\"\n  cpu_core_count:                                    \"\" => \"<computed>\"\n  cpu_threads_per_core:                              \"\" => \"<computed>\"\n  ebs_block_device.#:                                \"\" => \"1\"\n  
ebs_block_device.3905984573.delete_on_termination: \"\" => \"true\"\n  ebs_block_device.3905984573.device_name:           \"\" => \"/dev/xvdb\"\n  ebs_block_device.3905984573.encrypted:             \"\" => \"<computed>\"\n  ebs_block_device.3905984573.iops:                  \"\" => \"\"\n  ebs_block_device.3905984573.kms_key_id:            \"\" => \"<computed>\"\n  ebs_block_device.3905984573.snapshot_id:           \"\" => \"<computed>\"\n  ebs_block_device.3905984573.volume_id:             \"\" => \"<computed>\"\n  ebs_block_device.3905984573.volume_size:           \"\" => \"50\"\n  ebs_block_device.3905984573.volume_type:           \"\" => \"gp2\"\n  ephemeral_block_device.#:                          \"\" => \"<computed>\"\n  get_password_data:                                 \"\" => \"false\"\n  host_id:                                           \"\" => \"<computed>\"\n  instance_state:                                    \"\" => \"<computed>\"\n  instance_type:                                     \"\" => \"m4.xlarge\"\n  ipv6_address_count:                                \"\" => \"<computed>\"\n  ipv6_addresses.#:                                  \"\" => \"<computed>\"\n  key_name:                                          \"\" => \"tosc\"\n  network_interface.#:                               \"\" => \"<computed>\"\n  network_interface_id:                              \"\" => \"<computed>\"\n  password_data:                                     \"\" => \"<computed>\"\n  placement_group:                                   \"\" => \"<computed>\"\n  primary_network_interface_id:                      \"\" => \"<computed>\"\n  private_dns:                                       \"\" => \"<computed>\"\n  private_ip:                                        \"\" => \"<computed>\"\n  public_dns:                                        \"\" => \"<computed>\"\n  public_ip:                                         \"\" => \"<computed>\"\n  root_block_device.#:                               \"\" => \"1\"\n  root_block_device.0.delete_on_termination:         \"\" => \"true\"\n  root_block_device.0.encrypted:                     \"\" => \"<computed>\"\n  root_block_device.0.kms_key_id:                    \"\" => \"<computed>\"\n  root_block_device.0.volume_id:                     \"\" => \"<computed>\"\n  root_block_device.0.volume_size:                   \"\" => \"72\"\n  root_block_device.0.volume_type:                   \"\" => \"gp2\"\n  security_groups.#:                                 \"\" => \"1\"\n  security_groups.1234085284:                        \"\" => \"sg-05ac93271edb3f733\"\n  source_dest_check:                                 \"\" => \"true\"\n  subnet_id:                                         \"\" => \"subnet-021e55765b854f643\"\n  tags.%:                                            \"\" => \"4\"\n  tags.Name:                                         \"\" => \"node-1\"\n  tags.kubernetes.io/cluster/tosc:                   \"\" => \"bdsa\"\n  tags.role:                                         \"\" => \"nodes\"\n  tags.sshUser:                                      \"\" => \"ec2-user\"\n  tenancy:                                           \"\" => \"<computed>\"\n  user_data:                                         \"\" => \"a003292bc87636bf04c9acb8373c2781cc48a55a\"\n  volume_tags.%:                                     \"\" => \"1\"\n  volume_tags.kubernetes.io/cluster/tosc:            \"\" => \"bdsa\"\n  vpc_security_group_ids.#:                          \"\" => 
\"<computed>\"\naws_route_table.public: Creation complete after 2s (ID: rtb-0eef265b807d7bf71)\naws_route_table_association.public: Creating...\n  route_table_id: \"\" => \"rtb-0eef265b807d7bf71\"\n  subnet_id:      \"\" => \"subnet-021e55765b854f643\"\naws_route_table_association.public: Creation complete after 0s (ID: rtbassoc-06b81a45a85b1a04d)\naws_instance.ose-node.0: Still creating... (10s elapsed)\naws_instance.ose-master: Still creating... (10s elapsed)\naws_instance.ose-node.1: Still creating... (10s elapsed)\naws_instance.ose-master: Provisioning with 'file'...\naws_instance.ose-master: Still creating... (20s elapsed)\naws_instance.ose-node.0: Still creating... (20s elapsed)\naws_instance.ose-node.1: Still creating... (20s elapsed)\naws_instance.ose-node[1]: Provisioning with 'file'...\naws_instance.ose-node[0]: Provisioning with 'file'...\naws_instance.ose-master: Still creating... (30s elapsed)\naws_instance.ose-node.0: Still creating... (30s elapsed)\naws_instance.ose-node.1: Still creating... (30s elapsed)\naws_instance.ose-master: Still creating... (40s elapsed)\naws_instance.ose-node.0: Still creating... (40s elapsed)\naws_instance.ose-node.1: Still creating... (40s elapsed)\naws_instance.ose-node.0: Still creating... (50s elapsed)\naws_instance.ose-master: Still creating... (50s elapsed)\naws_instance.ose-node.1: Still creating... (50s elapsed)\naws_instance.ose-node.0: Still creating... (1m0s elapsed)\naws_instance.ose-master: Still creating... (1m0s elapsed)\naws_instance.ose-node.1: Still creating... (1m0s elapsed)\naws_instance.ose-master: Still creating... (1m10s elapsed)\naws_instance.ose-node.0: Still creating... (1m10s elapsed)\naws_instance.ose-node.1: Still creating... (1m10s elapsed)\naws_instance.ose-node.0: Still creating... (1m20s elapsed)\naws_instance.ose-master: Still creating... (1m20s elapsed)\naws_instance.ose-node.1: Still creating... (1m20s elapsed)\naws_instance.ose-master: Still creating... (1m30s elapsed)\naws_instance.ose-node.0: Still creating... (1m30s elapsed)\naws_instance.ose-node.1: Still creating... (1m30s elapsed)\naws_instance.ose-master: Still creating... (1m40s elapsed)\naws_instance.ose-node.0: Still creating... (1m40s elapsed)\naws_instance.ose-node.1: Still creating... (1m40s elapsed)\naws_instance.ose-master: Still creating... (1m50s elapsed)\naws_instance.ose-node.0: Still creating... (1m50s elapsed)\naws_instance.ose-node.1: Still creating... (1m50s elapsed)\naws_instance.ose-node.0: Still creating... (2m0s elapsed)\naws_instance.ose-master: Still creating... (2m0s elapsed)\naws_instance.ose-node.1: Still creating... (2m0s elapsed)\naws_instance.ose-node.0: Still creating... (2m10s elapsed)\naws_instance.ose-master: Still creating... (2m10s elapsed)\naws_instance.ose-node.1: Still creating... (2m10s elapsed)\naws_instance.ose-master: Still creating... (2m20s elapsed)\naws_instance.ose-node.0: Still creating... (2m20s elapsed)\naws_instance.ose-node.1: Still creating... (2m20s elapsed)\naws_instance.ose-node.0: Still creating... (2m30s elapsed)\naws_instance.ose-node.1: Still creating... (2m30s elapsed)\naws_instance.ose-master: Still creating... (2m30s elapsed)\naws_instance.ose-node.1: Still creating... (2m40s elapsed)\naws_instance.ose-node.0: Still creating... (2m40s elapsed)\naws_instance.ose-master: Still creating... (2m40s elapsed)\naws_instance.ose-node.1: Still creating... (2m50s elapsed)\naws_instance.ose-node.0: Still creating... (2m50s elapsed)\naws_instance.ose-master: Still creating... 
(2m50s elapsed)\naws_instance.ose-node.1: Still creating... (3m0s elapsed)\naws_instance.ose-master: Still creating... (3m0s elapsed)\naws_instance.ose-node.0: Still creating... (3m0s elapsed)\naws_instance.ose-node.1: Still creating... (3m10s elapsed)\naws_instance.ose-node.0: Still creating... (3m10s elapsed)\naws_instance.ose-master: Still creating... (3m10s elapsed)\naws_instance.ose-node.1: Still creating... (3m20s elapsed)\naws_instance.ose-master: Still creating... (3m20s elapsed)\naws_instance.ose-node.0: Still creating... (3m20s elapsed)\naws_instance.ose-node.1: Still creating... (3m30s elapsed)\naws_instance.ose-master: Still creating... (3m30s elapsed)\naws_instance.ose-node.0: Still creating... (3m30s elapsed)\naws_instance.ose-node.0: Still creating... (3m40s elapsed)\naws_instance.ose-master: Still creating... (3m40s elapsed)\naws_instance.ose-node.1: Still creating... (3m40s elapsed)\naws_instance.ose-node.1: Still creating... (3m50s elapsed)\naws_instance.ose-node.0: Still creating... (3m50s elapsed)\naws_instance.ose-master: Still creating... (3m50s elapsed)\naws_instance.ose-node.0: Still creating... (4m0s elapsed)\naws_instance.ose-master: Still creating... (4m0s elapsed)\naws_instance.ose-node.1: Still creating... (4m0s elapsed)\naws_instance.ose-node.0: Still creating... (4m10s elapsed)\naws_instance.ose-master: Still creating... (4m10s elapsed)\naws_instance.ose-node.1: Still creating... (4m10s elapsed)\naws_instance.ose-node.0: Still creating... (4m20s elapsed)\naws_instance.ose-node.1: Still creating... (4m20s elapsed)\naws_instance.ose-master: Still creating... (4m20s elapsed)\naws_instance.ose-master: Still creating... (4m30s elapsed)\naws_instance.ose-node.1: Still creating... (4m30s elapsed)\naws_instance.ose-node.0: Still creating... (4m30s elapsed)\naws_instance.ose-master: Still creating... (4m40s elapsed)\naws_instance.ose-node.1: Still creating... (4m40s elapsed)\naws_instance.ose-node.0: Still creating... (4m40s elapsed)\naws_instance.ose-node.1: Still creating... (4m50s elapsed)\naws_instance.ose-node.0: Still creating... (4m50s elapsed)\naws_instance.ose-master: Still creating... (4m50s elapsed)\naws_instance.ose-master: Still creating... (5m0s elapsed)\naws_instance.ose-node.0: Still creating... (5m0s elapsed)\naws_instance.ose-node.1: Still creating... (5m0s elapsed)\naws_instance.ose-node.1: Still creating... (5m10s elapsed)\naws_instance.ose-master: Still creating... (5m10s elapsed)\naws_instance.ose-node.0: Still creating... (5m10s elapsed)\naws_instance.ose-node.1: Still creating... (5m20s elapsed)\naws_instance.ose-node.0: Still creating... (5m20s elapsed)\naws_instance.ose-master: Still creating... (5m20s elapsed)\n\nstderr: \nError: Error applying plan:\n\n3 error(s) occurred:\n\n* aws_instance.ose-node[1]: timeout - last error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain\n* aws_instance.ose-node[0]: timeout - last error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain\n* aws_instance.ose-master: timeout - last error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain\n\nTerraform does not automatically rollback in the face of errors.\nInstead, your Terraform state file has been partially updated with\nany resources that successfully completed. Please address the error\nabove and apply again to incrementally change your infrastructure.\n\n\n"}

Node count appears to be broken

Hey guys, the instructions for the terraform OCP creation state that you can control the number of nodes by editing the [0:1] block in the inventory file prior to provisioning. However, in the

redhatgov.workshops/openshift_terraform/roles/terraform.infra.aws/defaults/main.yml

file, there's an ec2_node_count field hard-coded to 3 nodes. Can that be changed to reflect the desired number of nodes via a variable?
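
Until the role is reworked, one workaround is to override that default, since role defaults have the lowest variable precedence in Ansible. A minimal sketch, reusing the variable name from the defaults file above (exactly where the override belongs in this workshop is an assumption):

# In the workshop's group_vars, or passed with -e on the ansible-playbook command line;
# overrides the hard-coded value in roles/terraform.infra.aws/defaults/main.yml.
ec2_node_count: 2    # example value only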

Documentation updates for rhel_aws workshop

  1. In README.md, there are instructions in some of the OS-specific sections that refer to the Ansible Tower workshop, such as:

(ansible) $ cd ~/src/redhatgov.workshops/ansible_tower_aws/
(ansible) $ cp group_vars/all_example.yml group_vars/all.yml
(ansible) $ vim group_vars/all.yml # fill in all the required fields
(ansible) $ ansible-playbook 1_provision.yml
(ansible) $ ansible-playbook 2_preload.yml
(ansible) $ ssh -i $(ls -1 .redhatgov/*-key | head -1) ec2-user@$(egrep '^workshop_prefix' group_vars/all.yml | awk -F'"' '{ print $2 }').admin.redhatgov.io
(admin) $ cd src/ansible_tower_aws
(admin) $ ansible-playbook 3_load.yml

  • The instructions that refer to cd'ing to ansible_tower_aws should instead refer to rhel_aws.
  • After running the 1_provision.yml playbook, the 2_load.yml playbook should be run. In some sections, the instructions indicate that 2_preload.yml should be run, and there is also mention of running 3_load.yml; neither of those playbooks exists in the rhel_aws directory. (A corrected sequence is sketched at the end of this issue.)
  2. When running the 1_provision.yml playbook, an instruction is printed that says to run 2_preload.yml. As with README.md, this should say 2_load.yml. Here is the output snippet:

TASK [explain how to login] *****************************************************************************************************
ok: [localhost] => {
"msg": "Please run 'ansible-playbook 2_preload.yml' to load the admin server."
}

PLAY RECAP **********************************************************************************************************************
localhost : ok=34 changed=13 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
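
Putting the corrections above together, the rhel_aws snippet would read roughly as follows (whether the ssh/admin-host and 3_load.yml steps have any rhel_aws equivalent is left open, since those playbooks do not exist there):

(ansible) $ cd ~/src/redhatgov.workshops/rhel_aws/
(ansible) $ cp group_vars/all_example.yml group_vars/all.yml
(ansible) $ vim group_vars/all.yml # fill in all the required fields
(ansible) $ ansible-playbook 1_provision.yml
(ansible) $ ansible-playbook 2_load.yml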

AWS RHEL workshop provisioning failing at TASK [acme.sh to issue certs with session token]

Workshop provisioner failing at the mentioned step. Tested via native Fedora/RHEL deployment by my colleague Mark, and in container format by me. Logs attached appear to show a Create account key error.

Is this a transient issue or has something changed at the LetsEncrypt end? I see our code hasn't changed at all in 2022, and I deployed this workshop successfully earlier this month.

Documentation error for RHEL workshop

Workshop: RHEL workshop (rhel_aws)

After provisioning the RHEL workshop, the stdout from 1_provision.yml indicates you should run the '2_preload.yml' playbook, but the playbook is actually named '2_load.yml'.

Output snippet from

ansible-playbook 1_provision.yml

...
TASK [aws.create : Create DNS records for Windows] *********************************************************************************

TASK [explain how to login] ********************************************************************************************************
ok: [localhost] => {
"msg": "Please run 'ansible-playbook 2_preload.yml' to load the admin server."
}

PLAY RECAP *************************************************************************************************************************
localhost : ok=34 changed=13 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0

ls 2*

2_load.yml
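
Assuming the message comes from an ordinary debug task in 1_provision.yml (the task name matches the output above; the exact layout of the playbook is an assumption), the fix would be a one-word change along these lines:

- name: explain how to login      # task name taken from the output above
  debug:
    msg: "Please run 'ansible-playbook 2_load.yml' to load the admin server."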

2_load.yml playbook for rhel_aws workshop consistently hangs on task TASK [graphical : Install GUI components and Xrdp build reqs on RHEL 8]

When running the 2_load.yml playbook, it consistently hangs when getting to the task

TASK [graphical : Install GUI components and Xrdp build reqs on RHEL 8]

I've waited at least 15 minutes (~30 minutes on one occasion). If I just abort and re-run the playbook, it finishes without issues.

I'm on an Apple:

Catalina 10.15.6
Python 3.8.2
Ansible 2.10.1

hang-error.txt
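
If the hang turns out to be a long dnf transaction tying up the SSH session rather than a real failure, one possible mitigation (a guess, not a confirmed fix) is to run that task asynchronously so Ansible polls for completion instead of holding the connection open:

- name: Install GUI components and Xrdp build reqs on RHEL 8
  dnf:
    name: "@Server with GUI"   # assumed package group; substitute whatever the graphical role really installs
    state: present
  async: 3600                  # allow up to an hour for the transaction
  poll: 30                     # check on it every 30 seconds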

bash-completion package not installed on RHEL 8 hosts

I noticed that bash-completion, which is super useful for attendees, is not installed on the RHEL hosts in the RHEL 8 workshop.

I would be happy to add this function myself but would like some guidance on where you suggest the code be located. I know that most of the work is done by the roles in the redhatgov.workshops/roles directory, but am not sure which would be most suitable to add it to, as I do not want to break any other workshops, or non-RHEL 8 deployments.

The installation is as simple as adding the bash-completion package from the RHEL 8 baseos repo.
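
For reference, the task itself would be a one-liner; the only open question is which role should own it. A sketch, with the when conditions shown as one way to keep it scoped to RHEL 8 and avoid affecting other workshops:

- name: Install bash-completion on RHEL 8 hosts
  become: yes
  package:
    name: bash-completion
    state: present
  when:
    - ansible_distribution == "RedHat"
    - ansible_distribution_major_version == "8"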

ansible_tower - exercise2.0-7 failing

Hello,

I have provisioned the ansible_tower workshop and am trying to complete Exercise 2.0 - Step 7, but the playbook is failing with the following message:

TASK [packages_el : Install the Tower RPM.] **********************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "It appears that a space separated string of packages was passed in as an argument. To operate
 on several packages, pass a comma separated string of packages or a list of packages."}
        to retry, use: --limit @/tmp/ansible-tower-setup-3.3.1-1/install.retry

Full setup.sh log file:
setup-2018-11-07-16:08:54.log
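
The error message itself points at the shape of the fix: the packages_el role in the Tower installer is apparently handing the yum module a space-separated string, which newer Ansible rejects. Illustratively, with placeholder package names rather than the role's real list:

# fails on newer Ansible: space-separated string
- name: Install the Tower RPM.
  yum:
    name: "pkg-one pkg-two"    # placeholder names
    state: present

# accepted: a YAML list (a comma-separated string also works)
- name: Install the Tower RPM.
  yum:
    name:
      - pkg-one                # placeholder names
      - pkg-two
    state: present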

Additional Info:

I am seeing this same failure across multiple tower nodes in the workshop I provisioned. I also see this failure whether I run commands manually or use the exercise2.0-* bash scripts listed in the ~/walkthrough directory.

I didn't encounter any issues running the 1.x exercises.

Not sure if the following is relevant, but my workshop was provisioned using the following settings in my group_vars/all.yml file:

# us-east-1
ami_id:                         "ami-a4791ede" # RHEL 7.4 with JBoss EAP 7.1

# subscription_manager     |      Red Hat Subscription via Cloud Access
cloud_access:                      true

Please let me know if there's other information I can supply about the environment in question.

Thank you!
Jason

Ansible Tower AWS Workshop | Fix firewall service list that breaks deployment

In 3_load.yml, the following task fails because 'httpd' is not a valid firewalld service name; the correct service names are 'http' and 'https'. See the corrected task after the snippet below.

- name: Open ports 80,443 on nodes
  become: yes
  remote_user: ec2-user
  hosts:
    - rhel_nodes
  gather_facts: yes
  tasks:
    - name: open firewalld ports
      firewalld:
        service: "{{ item }}"
        permanent: yes
        state: enabled
      with_items:
      - 'httpd'
      - 'https'
    - name: restart service
      service:
        name: firewalld
        state: reloaded
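
With the service names corrected, the firewalld task would read (only the with_items entries change):

    - name: open firewalld ports
      firewalld:
        service: "{{ item }}"
        permanent: yes
        state: enabled
      with_items:
      - 'http'
      - 'https'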
