
dev-install

dev-install installs TripleO standalone on a remote system for development use.

Host configuration

dev-install requires that:

  • an appropriate OS has already been installed
  • EPEL repositories are not deployed, as they are incompatible with OpenStack repositories
  • the machine running dev-install can SSH to the standalone host as either root or a user with passwordless sudo access
  • the machine running dev-install has Ansible installed, along with dependencies such as python3-netaddr

You need to deploy the right RHEL version depending on which OSP version you want:

| OSP version | RHEL version |
|-------------|--------------|
| 16.2        | 8.4          |
| 17.1*       | 9.2          |

*Current default in dev-install.

There is no need to do any other configuration prior to running dev-install. When deploying TripleO from upstream, you need to deploy on CentOS Stream. If the installed CentOS is not Stream, dev-install will migrate it.

Local pre-requisites

dev-install requires up-to-date versions of ansible and make, both of which must be installed manually before invoking dev-install.

If installing OSP 16.2 with official RHEL 8.4 cloud images, the cloud-init service must be disabled before deployment as per THIS

At present the deployment depends on a valid DHCP source for the external interface (br-ex) as per THIS

All other requirements should be configured automatically by ansible. Note that dev-install requires root access (or passwordless sudo) on the machine it is invoked from, in addition to the remote host, in order to install certificate management tools (simpleca).

Running dev-install

dev-install is invoked using its Makefile. The simplest invocation is:

$ make config host=<standalone host>
$ make osp_full

make config initialises two local state files:

  • inventory - this is an ansible inventory file, initialised such that standalone is an alias for your target host.
  • local-overrides.yaml - this is an ansible vars file containing configuration which overrides the defaults in playbooks/vars/defaults.yaml.

Both of these files can be safely modified.
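
For example, a minimal local-overrides.yaml might override just a couple of the defaults. The variable names below appear elsewhere in this document; the values are purely illustrative:

```yaml
# Illustrative values only; see playbooks/vars/defaults.yaml for the full
# list of variables that can be overridden.
local_cloudname: standalone   # name used for the generated clouds.yaml entries
ssl_enabled: true             # enable SSL on public endpoints
```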

make osp_full performs the actual installation. On an example system with 12 cores and 192GB RAM running in a Red Hat data centre, this takes approximately 65 minutes to execute.

If you work with multiple OpenStack clouds and want to maintain a separate local-overrides file per cloud, you can create local-overrides-<name>.yaml and use it when deploying with make osp_full overrides=local-overrides-<name>.yaml

Accessing OpenStack from your workstation

By default, dev-install configures OpenStack to use the default public IP of the host. To access this you just need a correct clouds.yaml, which dev-install configures with:

make local_os_client

This will configure your local clouds.yaml with two entries:

  • standalone - The admin user
  • standalone_openshift - The appropriately configured non-admin openshift user

You can change the name of these entries by editing local-overrides.yaml and setting local_cloudname to something else.
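
The generated entries follow the standard clouds.yaml layout. A rough sketch for orientation only; the actual values (endpoint URL, credentials) are filled in by make local_os_client:

```yaml
# Sketch of the generated clouds.yaml (structure assumed from the standard
# OpenStack clouds.yaml format; values shown are placeholders).
clouds:
  standalone:                     # the admin user
    auth:
      auth_url: <public endpoint>
      username: admin
      password: <generated>
      project_name: admin
  standalone_openshift:           # the non-admin openshift user
    auth:
      auth_url: <public endpoint>
      username: <openshift user>
      password: <generated>
```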

Validating the installation

dev-install provides an additional playbook to validate the fresh deployment. This can be run with:

make prepare_stack_testconfig

This can be used to configure some helpful defaults for validating your cluster, namely:

  • Configure SSH access
  • Configure routers and security groups to allow external network connectivity for created guests
  • Upload a Cirros image

Network configuration

dev-install will create a new OVS bridge called br-ex and move the host's external interface on to that bridge. This bridge is used to provide the external provider network if external_fip_pool_start and external_fip_pool_end are defined in local-overrides.yaml.
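
For example, to enable the external provider network, define the floating IP pool in local-overrides.yaml. The variable names come from the paragraph above; the addresses are assumptions, so pick a free range on your external network:

```yaml
external_fip_pool_start: 10.0.0.100
external_fip_pool_end: 10.0.0.150
```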

In addition it will create OVS bridges called br-ctlplane and br-hostonly. The former is used internally by OSP. The latter is a second provider network which is only routable from the host.

Note that we don't enable DHCP on provider networks by default, and it is not recommended to enable DHCP on the external network at all. To enable DHCP on the hostonly network after installation, run:

OS_CLOUD=standalone openstack subnet set --dhcp hostonly-subnet

make local_os_client will also write an sshuttle script to scripts/sshuttle-standalone.sh which will route to the hostonly provider network over SSH.

Configuration

dev-install is configured by overriding variables in local-overrides.yaml. See the default variable definitions for what can be overridden.

Sizing

When idle, a standalone deployment uses approximately:

  • 16GB RAM
  • 15G on /
  • 3.5G on /home
  • 3.6G on /var/lib/cinder
  • 3.6G on /var/lib/nova

There is no need to mount /var/lib/cinder and /var/lib/nova separately if / is large enough for your workload.

Advanced features

NFV enablement

This section contains configuration procedures for single root input/output virtualization (SR-IOV) for network functions virtualization infrastructure (NFVi) in your Standalone OpenStack deployment. Unfortunately, most of these parameters have no default values, nor can they be determined automatically in a Standalone environment.

SR-IOV Variables

To understand how the SR-IOV configuration works, please have a look at this upstream guide.

| Name | Default Value | Description |
|------|---------------|-------------|
| sriov_services | ['OS::TripleO::Services::NeutronSriovAgent', 'OS::TripleO::Services::BootParams'] | List of TripleO services to add to the default Standalone role |
| sriov_interface | [undefined] | Name of the SR-IOV capable interface. Must be enabled in BIOS, e.g. ens1f0 |
| sriov_nic_numvfs | [undefined] | Number of Virtual Functions that the NIC can handle |
| sriov_nova_pci_passthrough | [undefined] | List of PCI Passthrough whitelist parameters. Guidelines to configure it. |

Note: when SR-IOV is enabled, a dedicated provider network will be created and bound to the SR-IOV interface.
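
A sketch of the SR-IOV variables in local-overrides.yaml. The variable names come from the table above; the interface name, VF count, and whitelist entry shape are assumptions to be adapted to your hardware and the linked guidelines:

```yaml
sriov_interface: ens1f0      # SR-IOV capable NIC, enabled in BIOS (assumed name)
sriov_nic_numvfs: 8          # assumed VF count for this NIC
sriov_nova_pci_passthrough:
  - devname: ens1f0                    # whitelist entry format per the upstream guidelines
    physical_network: hostonly-sriov   # physnet name used elsewhere in this document
```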

Kernel Variables

It is possible to configure the Kernel to boot with specific arguments:

| Name | Default Value | Description |
|------|---------------|-------------|
| kernel_services | ['TripleO::Services::BootParams'] | List of TripleO services to add to the default Standalone role |
| kernel_args | [undefined] | Kernel arguments to configure when booting the machine |
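
For example, to boot with the IOMMU enabled (a common prerequisite for SR-IOV; the value also appears in a configuration quoted later in this document):

```yaml
kernel_args: "intel_iommu=on"
```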

DPDK Variables

It is possible to configure the deployment to be ready for DPDK:

| Name | Default Value | Description |
|------|---------------|-------------|
| dpdk_services | ['OS::TripleO::Services::ComputeNeutronOvsDpdk'] | List of TripleO services to add to the default Standalone role |
| dpdk_interface | [undefined] | Name of the DPDK capable interface. Must be enabled in BIOS, e.g. ens1f0 |
| tuned_isolated_cores | [undefined] | List of logical CPU ids which need to be isolated from the host processes. This input is provided to the tuned profile cpu-partitioning to configure systemd and repin interrupts (IRQ repinning) |

When deploying DPDK, it is suggested to configure these options:

extra_heat_params:
  # A list or range of host CPU cores to which processes for pinned instance
  # CPUs (PCPUs) can be scheduled:
  NovaComputeCpuDedicatedSet: ['6-47']
  # Reserved RAM for host processes:
  NovaReservedHostMemory: 4096
  # Determine the host CPUs that unpinned instances can be scheduled to:
  NovaComputeCpuSharedSet: [0,1,2,3]
  # Sets the amount of hugepage memory to assign per NUMA node:
  OvsDpdkSocketMemory: "2048,2048"
  # A list or range of CPU cores for PMD threads to be pinned to:
  OvsPmdCoreList: "4,5"

SSL for public endpoints

This section contains configuration procedures for enabling SSL on OpenStack public endpoints.

| Name | Default Value | Description |
|------|---------------|-------------|
| ssl_enabled | false | Whether or not to enable SSL for public endpoints |
| ssl_ca_cert | [undefined] | CA certificate. If undefined, a self-signed certificate will be generated and deployed |
| ssl_ca_key | [undefined] | CA key. If undefined, it will be generated and used to sign the SSL certificate |
| ssl_key | [undefined] | SSL key. If undefined, it will be generated and deployed |
| ssl_cert | [undefined] | SSL certificate. If undefined, a self-signed certificate will be generated and deployed |
| ssl_ca_cert_path | /etc/pki/ca-trust/source/anchors/simpleca.crt | Path to the CA certificate |
| update_local_pki | false | Whether or not to update the local PKI with the CA certificate |
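
A minimal sketch enabling SSL with generated self-signed certificates and trusting them locally, using only the variables from the table above:

```yaml
ssl_enabled: true
update_local_pki: true
# ssl_ca_cert, ssl_ca_key, ssl_key and ssl_cert are left undefined so
# self-signed certificates are generated and deployed automatically.
```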

Multi-node deployments

It is possible to deploy Edge-style environments where multiple AZs are configured.

Deploy the Central site

Deploy a regular cloud with dev-install, and make sure you set these parameters:

  • dcn_az: has to be central.
  • tunnel_remote_ips: list of known public IPs of the AZ nodes.
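
A sketch of the central site's local-overrides.yaml using the two parameters above; the public IPs are assumptions standing in for your AZ nodes' addresses:

```yaml
dcn_az: central
tunnel_remote_ips:
  - 198.51.100.11   # public IP of the az0 node (assumed)
  - 198.51.100.12   # public IP of the az1 node (assumed)
```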

Once this is done, you need to collect the content of /home/stack/exported-data into a local directory on the host where dev-install is executed.

Deploy the "AZ" sites

Before deploying OSP, you need to scp the content of exported-data to /opt/exported-data on the remote hosts. Once this is done, you can deploy the AZ sites with a regular dev-install config, except that you'll need to set these parameters:

  • dcn_az: must contain "az" in the string (e.g. az0, az1)
  • local_ip: choose an available IP in the control plane subnet (e.g. 192.168.24.10)
  • control_plane_ip: as with local_ip, pick one that is available (e.g. 192.168.24.11)
  • hostonly_gateway: if using provider networks, you'll need to select an available IP (e.g. 192.168.25.2)
  • tunnel_remote_ips: the list of known public IPs that will be used to establish the VXLAN tunnels.
  • hostname: make sure the central and AZ nodes don't both use the default hostname (standalone), so set it at least on the compute, e.g. compute1.
  • octavia_enabled: set to false.
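
Putting the parameters above together, an AZ site's local-overrides.yaml might look like the following. The control plane and gateway IPs reuse the example values above; the central site's public IP is an assumption:

```yaml
dcn_az: az0
local_ip: 192.168.24.10
control_plane_ip: 192.168.24.11
hostonly_gateway: 192.168.25.2
tunnel_remote_ips:
  - 198.51.100.10   # public IP of the central site (assumed)
hostname: compute1
octavia_enabled: false
```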

Notes:

  • Control plane IPs (192.168.24.x) are arbitrary; if in doubt, just use the example ones.
  • The control plane bridges are connected via VXLAN tunnels, which is why the control plane IPs selected for AZ nodes must not already be in use on the Central site.
  • If you deploy the clouds in OpenStack, you need to make sure that the security groups allow VXLAN (udp/4789).
  • If the public IPs aren't predictable, you'll need to manually change the MTU on br-ctlplane and br-hostonly on the central site and the AZ sites where needed. You can do this by editing the os-net-config configuration file and running os-net-config to apply it.

After the installation you can "join" AZs to form a regular multinode cloud. E.g.:

openstack aggregate remove host az0 compute1.shiftstack
openstack aggregate add host central compute1.shiftstack

Then, if you're using OVN (you probably are), execute this on the compute nodes:

ovs-vsctl set Open_vSwitch . external-ids:ovn-cms-options="enable-chassis-as-gw,availability-zones=central"

Post Deployment Stack Updates

It is possible to perform stack updates on an ephemeral standalone stack.

Copy the generated tripleo_deploy.sh from your deployment user's home directory (e.g. /home/stack/tripleo_deploy.sh) to tripleo_update.sh and add the --force-stack-update parameter. This allows you to modify the stack configuration without redeploying the entire cloud, which can save considerable time.
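
A sketch of that copy-and-edit step. The script name and the --force-stack-update flag come from the text above, but this demo operates on a stand-in file in a temp directory rather than a real /home/stack/tripleo_deploy.sh:

```shell
# Illustration only: simulate the generated deploy script locally.
demo=$(mktemp -d)
printf 'sudo openstack tripleo deploy --templates --standalone --yes\n' \
  > "$demo/tripleo_deploy.sh"

# Create tripleo_update.sh with --force-stack-update inserted into the
# deploy command line:
sed 's/tripleo deploy /tripleo deploy --force-stack-update /' \
  "$demo/tripleo_deploy.sh" > "$demo/tripleo_update.sh"
chmod +x "$demo/tripleo_update.sh"
```

On a real host you would run the same sed/chmod against /home/stack/tripleo_deploy.sh and then execute tripleo_update.sh to apply the stack update.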

Post install script

It is possible to run an arbitrary script after installation via the post_install parameter:

post_install: |
  export OS_CLOUD=standalone
  openstack flavor set --property hw:mem_page_size=large m1.small

And then run make post_install.

Tools

You can find tools for working with DSAL machines in the tools/dsal directory.


dev-install's Issues

deployment using rhsm will fail if using a bulky account

This might not be exclusively a dev-install issue but I find when I use the employee sku which has eleventyfivehundred repos, the build will fail with this:

(Running it again seems to eventually succeed. Perhaps this task should have a success/retry loop?)

TASK [redhat-subscription : Configure repository subscriptions] **************************************************************************************************************************************************************************************************************************************************************
task path: /home/greg/.ansible/roles/redhat-subscription/tasks/portal.yml:34
Using module file /home/greg/.ansible/roles/redhat-subscription/library/rhsm_repository_conf.py
Pipelining is enabled.
<10.10.0.100> ESTABLISH SSH CONNECTION FOR USER: stack
<10.10.0.100> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="stack"' -o ConnectTimeout=10 -o ControlPath=/home/greg/.ansible/cp/129b39cb34 10.10.0.100 '/bin/sh -c '"'"'sudo -H -S -n  -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-evwttpxhvzosmiecwmcfpnpwyxfewghu ; /usr/libexec/platform-python'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<10.10.0.100> (1, b'', b'Traceback (most recent call last):\n  File "<stdin>", line 102, in <module>\n  File "<stdin>", line 94, in _ansiballz_main\n  File "<stdin>", line 40, in invoke_module\n  File "/usr/lib64/python3.6/runpy.py", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code\n    exec(code, run_globals)\n  File "/tmp/ansible_rhsm_repository_conf_payload_66e14_5h/ansible_rhsm_repository_conf_payload.zip/ansible/modules/rhsm_repository_conf.py", line 294, in <module>\n  File "/tmp/ansible_rhsm_repository_conf_payload_66e14_5h/ansible_rhsm_repository_conf_payload.zip/ansible/modules/rhsm_repository_conf.py", line 290, in main\n  File "/tmp/ansible_rhsm_repository_conf_payload_66e14_5h/ansible_rhsm_repository_conf_payload.zip/ansible/modules/rhsm_repository_conf.py", line 195, in repository_modify\n  File "/tmp/ansible_rhsm_repository_conf_payload_66e14_5h/ansible_rhsm_repository_conf_payload.zip/ansible/modules/rhsm_repository_conf.py", line 188, in get_repository_list\nUnboundLocalError: local variable \'repo\' referenced before assignment\n')
<10.10.0.100> Failed to connect to the host via ssh: Traceback (most recent call last):
  File "<stdin>", line 102, in <module>
  File "<stdin>", line 94, in _ansiballz_main
  File "<stdin>", line 40, in invoke_module
  File "/usr/lib64/python3.6/runpy.py", line 205, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_rhsm_repository_conf_payload_66e14_5h/ansible_rhsm_repository_conf_payload.zip/ansible/modules/rhsm_repository_conf.py", line 294, in <module>
  File "/tmp/ansible_rhsm_repository_conf_payload_66e14_5h/ansible_rhsm_repository_conf_payload.zip/ansible/modules/rhsm_repository_conf.py", line 290, in main
  File "/tmp/ansible_rhsm_repository_conf_payload_66e14_5h/ansible_rhsm_repository_conf_payload.zip/ansible/modules/rhsm_repository_conf.py", line 195, in repository_modify
  File "/tmp/ansible_rhsm_repository_conf_payload_66e14_5h/ansible_rhsm_repository_conf_payload.zip/ansible/modules/rhsm_repository_conf.py", line 188, in get_repository_list
UnboundLocalError: local variable 'repo' referenced before assignment
The full traceback is:
Traceback (most recent call last):
  File "<stdin>", line 102, in <module>
  File "<stdin>", line 94, in _ansiballz_main
  File "<stdin>", line 40, in invoke_module
  File "/usr/lib64/python3.6/runpy.py", line 205, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_rhsm_repository_conf_payload_66e14_5h/ansible_rhsm_repository_conf_payload.zip/ansible/modules/rhsm_repository_conf.py", line 294, in <module>
  File "/tmp/ansible_rhsm_repository_conf_payload_66e14_5h/ansible_rhsm_repository_conf_payload.zip/ansible/modules/rhsm_repository_conf.py", line 290, in main
  File "/tmp/ansible_rhsm_repository_conf_payload_66e14_5h/ansible_rhsm_repository_conf_payload.zip/ansible/modules/rhsm_repository_conf.py", line 195, in repository_modify
  File "/tmp/ansible_rhsm_repository_conf_payload_66e14_5h/ansible_rhsm_repository_conf_payload.zip/ansible/modules/rhsm_repository_conf.py", line 188, in get_repository_list
UnboundLocalError: local variable 'repo' referenced before assignment
fatal: [standalone]: FAILED! => {
    "changed": false,
    "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 102, in <module>\n  File \"<stdin>\", line 94, in _ansiballz_main\n  File \"<stdin>\", line 40, in invoke_module\n  File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_rhsm_repository_conf_payload_66e14_5h/ansible_rhsm_repository_conf_payload.zip/ansible/modules/rhsm_repository_conf.py\", line 294, in <module>\n  File \"/tmp/ansible_rhsm_repository_conf_payload_66e14_5h/ansible_rhsm_repository_conf_payload.zip/ansible/modules/rhsm_repository_conf.py\", line 290, in main\n  File \"/tmp/ansible_rhsm_repository_conf_payload_66e14_5h/ansible_rhsm_repository_conf_payload.zip/ansible/modules/rhsm_repository_conf.py\", line 195, in repository_modify\n  File \"/tmp/ansible_rhsm_repository_conf_payload_66e14_5h/ansible_rhsm_repository_conf_payload.zip/ansible/modules/rhsm_repository_conf.py\", line 188, in get_repository_list\nUnboundLocalError: local variable 'repo' referenced before assignment\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}

PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************************************************************************
standalone                 : ok=6    changed=2    unreachable=0    failed=1    skipped=5    rescued=0    ignored=0   

make: *** [Makefile:61: prepare_host] Error 2

osp-17-1 not able to complete install_stack

make install_stack is failing at the stage of "TASK [tripleo.operator.tripleo_deploy : Standalone deploy]"
fatal: [standalone]: FAILED! => {"ansible_job_id": "j813975028024.20727", "changed": false, "cmd": "sudo openstack tripleo deploy --templates $DEPLOY_TEMPLATES --standalone --yes --output-dir $DEPLOY_OUTPUT_DIR --stack $DEPLOY_STACK --standalone-role $DEPLOY_STANDALONE_ROLE --timeout $DEPLOY_TIMEOUT_ARG -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml -e /home/stack/containers-prepare-parameters.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/external-network-vip.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/octavia.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-sriov.yaml -e /home/stack/standalone_parameters.yaml -r $DEPLOY_ROLES_FILE -n $DEPLOY_NETWORKS_FILE --deployment-user $DEPLOY_DEPLOYMENT_USER --local-ip $DEPLOY_LOCAL_IP --control-virtual-ip $DEPLOY_CONTROL_VIP --public-virtual-ip $DEPLOY_PUBLIC_VIP --keep-running >/home/stack/standalone_deploy.log 2>&1", "delta": "0:13:48.191389", "end": "2023-08-30 05:41:21.054633", "finished": 1, "msg": "non-zero return code", "rc": 1, "results_file": "/home/stack/.ansible_async/j813975028024.20727", "start": "2023-08-30 05:27:32.863244", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

PLAY RECAP **********************************************************************************************************************************************
standalone : ok=44 changed=10 unreachable=0 failed=1 skipped=46 rescued=0 ignored=0

make: *** [Makefile:70: install_stack] Error 2

attaching the standalone_deploy.log
standalone_deploy.log

newbie question: can you please document the pre-req pkgs?

Hello!

I've been interested in using ir standalone for a long time, but never managed to have it working.
I was trying it on 16, so maybe that was my problem. ;)

Can you elaborate a little bit more on what I need in order to invoke dev-install? Since the doc is
saying all I need to do is a "make" and the underlying makefile uses ansible, I suspect I
need makefile and ansible packages installed, ideally on a Centos 8.2 system, right? Anything else?

Thanks!

-- flaviof

overcloud-resource-registry-puppet.yaml : 'dict object' has no attribute 'name'

While deploying on a Centos 8 system.

Following the instructions, but it is still not working.

Does the host we are deploying into have to have a DNS name?

$ cat inventory.yaml
all:
  hosts:
    standalone:
      ansible_host: 10.18.57.27
      ansible_user: root

Lenovo ~/bigvm/dev-install.git on bigvm.wip.2*
$ time make osp_full
...
TASK [tripleo.operator.tripleo_deploy : Standalone deploy] ***********************************************************************************************************
fatal: [standalone]: FAILED! => {"ansible_job_id": "260165930829.8223", "changed": false, "cmd": "sudo openstack tripleo deploy  --templates $DEPLOY_TEMPLATES --standalone  --yes --output-dir $DEPLOY_OUTPUT_DIR  --stack $DEPLOY_STACK --standalone-role $DEPLOY_STANDALONE_ROLE --timeout $DEPLOY_TIMEOUT_ARG -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml -e /home/stack/containers-prepare-parameters.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/octavia.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /home/stack/standalone_parameters.yaml -r $DEPLOY_ROLES_FILE      --deployment-user $DEPLOY_DEPLOYMENT_USER  --local-ip $DEPLOY_LOCAL_IP --control-virtual-ip $DEPLOY_CONTROL_VIP     --keep-running    >/home/stack/standalone_deploy.log 2>&1", "delta": "0:01:09.302738", "end": "2021-05-24 10:12:51.288898", "finished": 1, "msg": "non-zero return code", "rc": 1, "start": "2021-05-24 10:11:41.986160", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

PLAY RECAP ***********************************************************************************************************************************************************
standalone                 : ok=25   changed=8    unreachable=0    failed=1    skipped=27   rescued=0    ignored=0

Makefile:69: recipe for target 'install_stack' failed
** Handling template files **
jinja2 rendering normal template overcloud-resource-registry-puppet.j2.yaml
rendering j2 template to file: /home/stack/tripleo-heat-installer-templates/./overcloud-resource-registry-puppet.yaml
Error rendering template /home/stack/tripleo-heat-installer-templates/./overcloud-resource-registry-puppet.yaml : 'dict object' has no attribute 'name'
Traceback (most recent call last):
  File "/usr/share/openstack-tripleo-heat-templates/tools/process-templates.py", line 98, in _j2_render_to_file
    r_template = template.render(**j2_data)
  File "/usr/lib/python3.6/site-packages/jinja2/asyncsupport.py", line 76, in render
    return original_render(self, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/jinja2/environment.py", line 1008, in render
    return self.environment.handle_exception(exc_info, True)
  File "/usr/lib/python3.6/site-packages/jinja2/environment.py", line 780, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/lib/python3.6/site-packages/jinja2/_compat.py", line 37, in reraise
    raise value.with_traceback(tb)
  File "<template>", line 14, in top-level template code
  File "/usr/lib/python3.6/site-packages/jinja2/environment.py", line 430, in getattr
    return getattr(obj, attribute)
jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'name'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/share/openstack-tripleo-heat-templates/tools/process-templates.py", line 413, in <module>
    network_data_path, (not opts.safe), opts.dry_run)
  File "/usr/share/openstack-tripleo-heat-templates/tools/process-templates.py", line 317, in process_templates
    overwrite, dry_run)
  File "/usr/share/openstack-tripleo-heat-templates/tools/process-templates.py", line 103, in _j2_render_to_file
    raise Exception(error_msg)
Exception: Error rendering template /home/stack/tripleo-heat-installer-templates/./overcloud-resource-registry-puppet.yaml : 'dict object' has no attribute 'name'
Problems generating templates.
Not cleaning working directory /home/stack/tripleo-heat-installer-templates
Not cleaning ansible directory /home/stack/standalone-ansible-1mo7pteb
Install artifact is located at /home/stack/standalone-install-20210524101147.tar.bzip2

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Deployment Failed!

ERROR: Heat log files: /var/log/heat-launcher/undercloud_deploy-2jbepa30

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Exception: Problems generating templates.
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/tripleoclient/v1/tripleo_deploy.py", line 1443, in take_action
    self._standalone_deploy(parsed_args)
  File "/usr/lib/python3.6/site-packages/tripleoclient/v1/tripleo_deploy.py", line 1276, in _standalone_deploy
    parsed_args)
  File "/usr/lib/python3.6/site-packages/tripleoclient/v1/tripleo_deploy.py", line 754, in _deploy_tripleo_heat_templates
    roles_file_path, networks_file_path, parsed_args)
  File "/usr/lib/python3.6/site-packages/tripleoclient/v1/tripleo_deploy.py", line 614, in _setup_heat_environments
    output_dir=self.tht_render)
  File "/usr/lib/python3.6/site-packages/tripleoclient/utils.py", line 1796, in jinja_render_files
    raise exceptions.DeploymentError(msg)
tripleoclient.exceptions.DeploymentError: Problems generating templates.
None
Problems generating templates.
Could not clean up: 'ClientManager' object has no attribute 'sdk_connection'
[root@standalone stack]#

Step "Install container-tools module" fails with 404

Note that the initial warning is unrelated.

TASK [Install container-tools module] **********************************************************************************
[WARNING]: Consider using the dnf module rather than running 'dnf'.  If you need to use command because dnf is
insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid
of this message.
fatal: [standalone]: FAILED! => {"changed": true, "cmd": "dnf module disable -y container-tools:rhel8\ndnf module enable -y container-tools:\"3.0\"\n", "delta": "0:00:08.245101", "end": "2021-11-18 09:24:27.818698", "msg": "non-zero return code", "rc": 1, "start": "2021-11-18 09:24:19.573597", "stderr": "Errors during downloading metadata for repository 'rhel-8-for-x86_64-highavailability-eus-rpms':\n  - Status code: 404 for https://cdn.redhat.com/content/eus/rhel8/8/x86_64/highavailability/os/repodata/repomd.xml (IP: 2.16.30.83)\nError: Failed to download metadata for repo 'rhel-8-for-x86_64-highavailability-eus-rpms': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried\nErrors during downloading metadata for repository 'rhel-8-for-x86_64-highavailability-eus-rpms':\n  - Status code: 404 for https://cdn.redhat.com/content/eus/rhel8/8/x86_64/highavailability/os/repodata/repomd.xml (IP: 2.16.30.83)\nError: Failed to download metadata for repo 'rhel-8-for-x86_64-highavailability-eus-rpms': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried", "stderr_lines": ["Errors during downloading metadata for repository 'rhel-8-for-x86_64-highavailability-eus-rpms':", "  - Status code: 404 for https://cdn.redhat.com/content/eus/rhel8/8/x86_64/highavailability/os/repodata/repomd.xml (IP: 2.16.30.83)", "Error: Failed to download metadata for repo 'rhel-8-for-x86_64-highavailability-eus-rpms': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried", "Errors during downloading metadata for repository 'rhel-8-for-x86_64-highavailability-eus-rpms':", "  - Status code: 404 for https://cdn.redhat.com/content/eus/rhel8/8/x86_64/highavailability/os/repodata/repomd.xml (IP: 2.16.30.83)", "Error: Failed to download metadata for repo 'rhel-8-for-x86_64-highavailability-eus-rpms': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried"], "stdout": 
"Updating Subscription Management repositories.\nRed Hat Enterprise Linux 8 for x86_64 - High Av  11  B/s |  10  B     00:00    \nUpdating Subscription Management repositories.\nRed Hat Enterprise Linux 8 for x86_64 - High Av  11  B/s |  10  B     00:00    ", "stdout_lines": ["Updating Subscription Management repositories.", "Red Hat Enterprise Linux 8 for x86_64 - High Av  11  B/s |  10  B     00:00    ", "Updating Subscription Management repositories.", "Red Hat Enterprise Linux 8 for x86_64 - High Av  11  B/s |  10  B     00:00    "]}

For better readability, here's the same command run from a root shell directly:

# dnf module disable -y container-tools:rhel8
Updating Subscription Management repositories.
Red Hat Enterprise Linux 8 for x86_64 - High Availability - Extended Update Support (RP 7.9  B/s |  10  B     00:01
Errors during downloading metadata for repository 'rhel-8-for-x86_64-highavailability-eus-rpms':
  - Status code: 404 for https://cdn.redhat.com/content/eus/rhel8/8/x86_64/highavailability/os/repodata/repomd.xml (IP: 95.101.44.251)
Error: Failed to download metadata for repo 'rhel-8-for-x86_64-highavailability-eus-rpms': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried

sriov_interface not binding not able to start instances

#vi:syntax=yaml
parameter_defaults:
  CloudName: standalone.shiftstack
  ContainerCli: podman
  Debug: true
  DeploymentUser: stack
  DnsServers:
  - 10.122.3.10
  # needed for vip & pacemaker
  KernelIpNonLocalBind: 1
  # For OSP16
  ExtraSysctlSettings:
    net.ipv6.conf.all.forwarding:
      value: 1
  # For OSP17 and beyond
  KernelIpv6ConfAllForwarding: 1
  DockerInsecureRegistryAddress:
  - 192.168.24.2:8787
  # domain name used by the host
  CloudDomain: shiftstack
  NeutronDnsDomain: shiftstack
  NeutronBridgeMappings: external:br-ex,hostonly:br-hostonly
  NeutronFlatNetworks: external,hostonly,hostonly-sriov
  NeutronGlobalPhysnetMtu: 1400
  OVNCMSOptions: "enable-chassis-as-gw"
  NeutronPhysicalDevMappings: "hostonly:ens3f0"
  NovaPCIPassthrough:
  - vendor_id: "8086"
    product_id: "158b"
    address: "0000:11:0*.*"
    physical_network: "hostonly"
  KernelArgs: "intel_iommu=on"
  StandaloneEnableRoutedNetworks: false
  StandaloneHomeDir: "/home/stack"
  InterfaceLocalMtu: 1500
  SELinuxMode: permissive
  # For OSP17+:
  StandaloneNetworkConfigTemplate: "/home/stack/dev-install_net_config.yaml"
  # For OSP16:
  StandaloneNetConfigOverride:
    network_config:
    - {'type': 'ovs_bridge', 'name': 'br-ex', 'use_dhcp': True, 'mtu': 1600, 'ovs_extra': ['br-set-external-id br-ex bridge-id br-ex'], 'members': [{'type': 'interface', 'name': 'ens3f0', 'primary': True, 'mtu': 1600}]}
    - {'type': 'ovs_bridge', 'name': 'br-ctlplane', 'use_dhcp': False, 'ovs_extra': ['br-set-external-id br-ctlplane bridge-id br-ctlplane'], 'addresses': [{'ip_netmask': '192.168.24.1/24'}], 'members': [{'type': 'interface', 'name': 'dummy0', 'nm_controlled': True, 'mtu': 1600}]}
    - {'type': 'ovs_bridge', 'name': 'br-hostonly', 'use_dhcp': False, 'ovs_extra': ['br-set-external-id br-hostonly bridge-id br-hostonly'], 'addresses': [{'ip_netmask': '192.168.25.1/32'}, {'ip_netmask': '2001:db8::1/64'}], 'routes': [{'destination': '192.168.25.0/24', 'nexthop': '192.168.25.1'}], 'members': [{'type': 'interface', 'name': 'dummy1', 'nm_controlled': True, 'mtu': 1600}]}
    - {'type': 'sriov_pf', 'name': 'ens3f0', 'numvfs': 63, 'mtu': 9000, 'use_dhcp': False, 'defroute': False, 'nm_controlled': True, 'hotplug': True, 'promisc': False, 'addresses': [{'ip_netmask': '192.168.26.1/24'}]}
  OctaviaGenerateCerts: true
  OctaviaCaKeyPassphrase: "secrete"
  OctaviaAmphoraSshKeyFile: "/home/stack/octavia.pub"
  OctaviaAmphoraImageFilename: "/home/stack/amphora.qcow2"
  BarbicanSimpleCryptoGlobalDefault: true
  CinderApiPolicies:
    cinder-vol-state-set:
      key: "volume_extension:volume_admin_actions:reset_status"
      value: "rule:admin_or_owner"
    cinder-vol-force-detach:
      key: "volume_extension:volume_admin_actions:force_detach"
      value: "rule:admin_or_owner"
  # We never want the node to reboot during tripleo deploy, but defer to later
  KernelArgsDeferReboot: true
  StandaloneExtraGroupVars:
    tripleo_kernel_defer_reboot: true
  # Configure Nova to use preallocated raw disks for consistent storage
  # performance.
  NovaComputeUseCowImages: false
  NovaComputeLibvirtPreAllocateImages: space
  NtpServer: 10.122.3.10
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      [email protected]: 'xxxxx'
  ContainerImageRegistryLogin: true

The SR-IOV interface is not mapped in nova.conf on the nova compute service.
Also, how can we add multiple SR-IOV interfaces?
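On the multiple-interfaces question: NeutronPhysicalDevMappings takes a comma-separated list and NovaPCIPassthrough takes a list of entries, so a sketch for a second SR-IOV interface (the second device name, PCI address, and physnet below are illustrative placeholders, not values from this deployment) could look like:

```yaml
NeutronPhysicalDevMappings: "hostonly:ens3f0,hostonly2:ens3f1"
NovaPCIPassthrough:
  - vendor_id: "8086"
    product_id: "158b"
    address: "0000:11:0*.*"
    physical_network: "hostonly"
  - vendor_id: "8086"
    product_id: "158b"
    address: "0000:12:0*.*"   # placeholder address for the second PF
    physical_network: "hostonly2"
```

The additional physnet would also need to appear in NeutronFlatNetworks (or an equivalent network type mapping) for Neutron to accept networks on it.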

stack-update fails looking for undercloud log file

After deploying a cluster, if an update is attempted by copying "tripleo_deploy.sh" to "tripleo_update.sh" and adding "--force-stack-update", the process fails.
The command being executed ends up being:

sudo openstack tripleo deploy --templates /usr/share/openstack-tripleo-heat-templates --standalone --yes --output-dir /home/stack --stack standalone --standalone-role Standalone --timeout 90 -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml -e /home/stack/containers-prepare-parameters.yaml -e /home/stack/standalone_parameters.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/octavia.yaml -r /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml --deployment-user stack --local-ip 10.8.1.135 --control-virtual-ip 10.8.1.135 --keep-running --force-stack-update

But this fails with:
[EXPERIMENTAL] The tripleo deploy interface is an experimental interface. It may change in the next release.
The heat stack standalone action is UPDATE
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 461, in fire_timers
    timer()
  File "/usr/lib/python3.6/site-packages/eventlet/hubs/timer.py", line 59, in __call__
    cb(*args, **kw)
  File "/usr/lib/python3.6/site-packages/eventlet/greenthread.py", line 221, in main
    result = function(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/eventlet/wsgi.py", line 818, in process_request
    proto.__init__(conn_state, self)
  File "/usr/lib/python3.6/site-packages/eventlet/wsgi.py", line 357, in __init__
    self.handle()
  File "/usr/lib/python3.6/site-packages/eventlet/wsgi.py", line 390, in handle
    self.handle_one_request()
  File "/usr/lib/python3.6/site-packages/eventlet/wsgi.py", line 466, in handle_one_request
    self.handle_one_response()
  File "/usr/lib/python3.6/site-packages/eventlet/wsgi.py", line 649, in handle_one_response
    'wall_seconds': finish - start,
  File "/usr/lib64/python3.6/logging/__init__.py", line 1636, in info
    self.log(INFO, msg, *args, **kwargs)
  File "/usr/lib64/python3.6/logging/__init__.py", line 1674, in log
    self.logger.log(level, msg, *args, **kwargs)
  File "/usr/lib64/python3.6/logging/__init__.py", line 1374, in log
    self._log(level, msg, args, **kwargs)
  File "/usr/lib64/python3.6/logging/__init__.py", line 1444, in _log
    self.handle(record)
  File "/usr/lib64/python3.6/logging/__init__.py", line 1454, in handle
    self.callHandlers(record)
  File "/usr/lib64/python3.6/logging/__init__.py", line 1516, in callHandlers
    hdlr.handle(record)
  File "/usr/lib64/python3.6/logging/__init__.py", line 865, in handle
    self.emit(record)
  File "/usr/lib64/python3.6/logging/handlers.py", line 482, in emit
    logging.FileHandler.emit(self, record)
  File "/usr/lib64/python3.6/logging/__init__.py", line 1071, in emit
    self.stream = self._open()
  File "/usr/lib64/python3.6/logging/__init__.py", line 1061, in _open
    return open(self.baseFilename, self.mode, encoding=self.encoding)
FileNotFoundError: [Errno 2] No such file or directory: '/var/log/heat-launcher/undercloud_deploy-0qlmw33u/heat.log'

Keep having to restart standalone_hostonly_snat.service

For some reason, in several deployments, even though the standalone_hostonly_snat.service status shows that it ran and added the required iptables rule to the nat table, the rule is not there (checked with "iptables --list -t nat").
The solution is easy: running "sudo systemctl restart standalone_hostonly_snat.service" re-adds the rule.

Probably something else is clearing the iptables rules.

/dev/sdX can't be used on RHEL9 for disks params

This affects ephemeral_storage_devices and ceph_devices parameters in dev-install.

If you (like me) used to provide /dev/sdX device names for ephemeral and/or Ceph storage: it was never recommended to use non-persistent device names (though it worked fine until now), but RHEL9 behaves differently from RHEL8 here, and such names can no longer be used reliably.

I think ultimately we'll require the user to provide a path under /dev/disk/by-path/, or the WWID-based name under /dev/disk/by-id/.
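A local-overrides sketch using persistent names might look like the following (the identifiers below are placeholders; list the real ones on your host with "ls -l /dev/disk/by-id/" and "ls -l /dev/disk/by-path/"):

```yaml
# Placeholders only; substitute the by-id/by-path names of your own disks.
ceph_devices:
  - /dev/disk/by-id/wwn-0x5000c500a1b2c3d4
ephemeral_storage_devices:
  - /dev/disk/by-path/pci-0000:3b:00.0-nvme-1
```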

fallocate failed: No space left on device

make osp_full fails with:

TASK [Create a backing file for ceph] *******************************************************************************************************************************************************************************
fatal: [standalone]: FAILED! => {"changed": true, "cmd": ["/usr/bin/fallocate", "-l", "100G", "/var/lib/ceph-osd.img"], "delta": "0:00:00.003927", "end": "2021-11-19 22:49:11.211600", "msg": "non-zero return code", "rc": 1, "start": "2021-11-19 22:49:11.207673", "stderr": "fallocate: fallocate failed: No space left on device", "stderr_lines": ["fallocate: fallocate failed: No space left on device"], "stdout": "", "stdout_lines": []}

After that, with no modification, running make osp_full again gets a bit further, then fails with:

TASK [Create volume group for ceph] *********************************************************************************************************************************************************************************
fatal: [standalone]: FAILED! => {"changed": false, "err": "  Cannot use /dev/loop1: device is too small (pv_min_size)\n", "msg": "Creating physical volume '/dev/loop1' failed", "rc": 5}

This is the current output of df -h:

# df -h
Filesystem                  Size  Used Avail Use% Mounted on
devtmpfs                     63G     0   63G   0% /dev
tmpfs                        63G     0   63G   0% /dev/shm
tmpfs                        63G  9.0M   63G   1% /run
tmpfs                        63G     0   63G   0% /sys/fs/cgroup
/dev/mapper/rhel_osp1-root   70G   67G  3.5G  96% /
/dev/mapper/rhel_osp1-home  856G  6.0G  850G   1% /home
/dev/nvme0n1p2             1014M  290M  725M  29% /boot
/dev/nvme0n1p1              599M  5.8M  594M   1% /boot/efi
tmpfs                        13G     0   13G   0% /run/user/1000

RHEL was installed with automatic partitioning on a 1TB disk, using up all the available space.
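The failing task creates the backing file at /var/lib/ceph-osd.img, and /var/lib sits on the 70G root volume with only 3.5G free, while /home has 850G available, so fallocate hits ENOSPC. A pre-flight check along these lines (the 100G threshold is taken from the failing command) would catch this before the deploy starts:

```shell
# Fail early if the filesystem holding the ceph backing file lacks ~100G free
need_gb=100
avail_gb=$(df --output=avail -BG /var/lib | tail -1 | tr -dc '0-9')
if [ "$avail_gb" -lt "$need_gb" ]; then
  echo "only ${avail_gb}G free under /var/lib; fallocate -l ${need_gb}G will fail"
fi
```

Alternatively, repartitioning so that / (or at least /var/lib) gets the bulk of the 1TB disk avoids the problem entirely.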

ansible lint error in our CI

 + ansible-lint -v --force-color -x 602 playbooks/certs.yaml playbooks/destroy.yaml playbooks/install_stack.yaml playbooks/local_os_client.yaml playbooks/local_requirements.yaml playbooks/network.yaml playbooks/post_install.yaml playbooks/prepare_host.yaml playbooks/prepare_stack.yaml playbooks/prepare_stack_testconfig.yaml
Traceback (most recent call last):
  File "/usr/local/bin/ansible-lint", line 5, in <module>
    from ansiblelint.__main__ import main
  File "/usr/local/lib/python3.8/site-packages/ansiblelint/__main__.py", line 37, in <module>
    from ansiblelint.generate_docs import rules_as_rich, rules_as_rst
  File "/usr/local/lib/python3.8/site-packages/ansiblelint/generate_docs.py", line 6, in <module>
    from rich.console import render_group
ImportError: cannot import name 'render_group' from 'rich.console' (/usr/local/lib/python3.8/site-packages/rich/console.py)

https://github.com/shiftstack/dev-install/runs/4938208378?check_suite_focus=true

rhsm install fails with eus repositories

Tried a build today and got stuck with the following error on the install dnf-utils task:

[root@rhos-17-1 ~]# dnf install dnf-utils
Updating Subscription Management repositories.
Red Hat Enterprise Linux 9 for x86_64 - AppStream - Extended Update Support (RPMs)                                                                                                                            19  B/s |  10  B     00:00    
Errors during downloading metadata for repository 'rhel-9-for-x86_64-appstream-eus-rpms':
  - Status code: 404 for https://cdn.redhat.com/content/eus/rhel9/9/x86_64/appstream/os/repodata/repomd.xml (IP: 23.60.144.251)
Error: Failed to download metadata for repo 'rhel-9-for-x86_64-appstream-eus-rpms': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried

Probably related to this: https://access.redhat.com/solutions/7016067

Removing the EUS repos and using the non-EUS variants via local-overrides solves it:

rhsm_repos:
  - rhel-9-for-x86_64-baseos-rpms
  - rhel-9-for-x86_64-appstream-rpms
  - rhel-9-for-x86_64-highavailability-rpms
  - openstack-17.1-for-rhel-9-x86_64-rpms
  - fast-datapath-for-rhel-9-x86_64-rpms
  - rhceph-6-tools-for-rhel-9-x86_64-rpms

Validate that ansible_host == standalone_host

Many of us juggle between different hosts and local-overrides files.
Sometimes we want to deploy onto a different host but forget to update the inventory.

We could have a validation that compares the ansible_host and standalone_host vars and asks the user to confirm when the values differ.
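A sketch of such a validation as an Ansible task (the variable names are taken from the issue; the pause module is just one way to ask for confirmation):

```yaml
- name: Confirm when the inventory host and standalone_host differ
  ansible.builtin.pause:
    prompt: >-
      ansible_host ({{ ansible_host }}) does not match
      standalone_host ({{ standalone_host }}).
      Press Enter to continue anyway, or Ctrl-C then A to abort.
  when: ansible_host != standalone_host
```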

download.devel.redhat.com: Name or service not known

make osp_full fails with:

TASK [install rhos-release] **************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
fatal: [standalone]: FAILED! => {"changed": false, "msg": "Failure downloading http://download.devel.redhat.com/rcm-guest/puddles/OpenStack/rhos-release/rhos-release-latest.noarch.rpm, Request failed: <urlopen error [Errno -2] Name or service not known>", "results": []}

Also, I can't reach that file locally:

$ wget 'http://download.devel.redhat.com/rcm-guest/puddles/OpenStack/rhos-release/rhos-release-latest.noarch.rpm'
--2021-11-17 11:25:13--  http://download.devel.redhat.com/rcm-guest/puddles/OpenStack/rhos-release/rhos-release-latest.noarch.rpm
Resolving download.devel.redhat.com (download.devel.redhat.com)... failed: Name or service not known.
wget: unable to resolve host address ‘download.devel.redhat.com’

nmstate_ifs requires from_yaml

This appears to still be needed:

The host running dev-install is Fedora 40; the target host from the inventory is RHEL 9.2.

ansible-9.5.1-1.fc40.noarch

If from_yaml is not called, this fails to parse.

--- playbooks/network.yaml	2024-05-26 14:31:34.662552380 -0400
+++ ../../dev-install/playbooks/network.yaml	2024-05-25 18:36:17.919422280 -0400
@@ -113,7 +113,7 @@
         stdin: "{{ network_state | to_nice_json }}"
       vars:
         network_state:
-          interfaces: "{{ nmstate_ifs }}"
+          interfaces: "{{ nmstate_ifs | from_yaml }}"
           # add saved static routes
           routes:
             config: "{{ nmstate_routes }}"

build on centos-stream nova_wait_for_compute_service container failure (stage5)

Fresh build on CentOS Stream today using CentOS-Stream-GenericCloud-9-20230508.0.x86_64.qcow2.

Stage 5 is pretty late in the deploy, so I am curious what might be up.

2023-05-16 09:36:59.049850 | 52540099-d186-cfdd-423f-0000000055d9 |       TASK | Create containers managed by Podman for /var/lib/tripleo-config/container-startup-config/step_5
2023-05-16 09:47:06.112097 |                                      |    WARNING | ERROR: Can't run container nova_wait_for_compute_service
stderr: time="2023-05-16T13:37:01Z" level=info msg="podman filtering at log level info"
time="2023-05-16T13:37:01Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
time="2023-05-16T13:37:01Z" level=info msg="Setting parallel job count to 25"
time="2023-05-16T13:37:01Z" level=info msg="Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host"
time="2023-05-16T13:37:01Z" level=info msg="User mount overriding libpod mount at \"/etc/hosts\""
time="2023-05-16T13:37:01Z" level=info msg="Running conmon under slice machine.slice and unitName libpod-conmon-9d63dcda30c82cd018bd00cdb5ec65bb6c01ffed46ee535f09c71b486a06645c.scope"
time="2023-05-16T13:37:01Z" level=info msg="Got Conmon PID as 113598"
+ sudo -E kolla_set_configs
time="2023-05-16T13:37:01Z" level=info msg="Received shutdown.Stop(), terminating!" PID=113578
sudo: unable to send audit message: Operation not permitted
INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
INFO:__main__:Validating config file
INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
INFO:__main__:Copying service configuration files
INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/libvirt/passwd.db to /etc/libvirt/passwd.db
INFO:__main__:Deleting /etc/libvirt/qemu.conf
INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/libvirt/qemu.conf to /etc/libvirt/qemu.conf
INFO:__main__:Deleting /etc/libvirt/virtlogd.conf
INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/libvirt/virtlogd.conf to /etc/libvirt/virtlogd.conf
INFO:__main__:Deleting /etc/libvirt/virtnodedevd.conf
INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/libvirt/virtnodedevd.conf to /etc/libvirt/virtnodedevd.conf
INFO:__main__:Deleting /etc/libvirt/virtproxyd.conf
INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/libvirt/virtproxyd.conf to /etc/libvirt/virtproxyd.conf
INFO:__main__:Deleting /etc/libvirt/virtqemud.conf
INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/libvirt/virtqemud.conf to /etc/libvirt/virtqemud.conf
INFO:__main__:Deleting /etc/libvirt/virtsecretd.conf
INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/libvirt/virtsecretd.conf to /etc/libvirt/virtsecretd.conf
INFO:__main__:Deleting /etc/libvirt/virtstoraged.conf
INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/libvirt/virtstoraged.conf to /etc/libvirt/virtstoraged.conf
INFO:__main__:Deleting /etc/nova/migration/authorized_keys
INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/nova/migration/authorized_keys to /etc/nova/migration/authorized_keys
INFO:__main__:Deleting /etc/nova/migration/identity
INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/nova/migration/identity to /etc/nova/migration/identity
INFO:__main__:Creating directory /etc/nova/provider_config
INFO:__main__:Deleting /etc/nova/nova.conf
INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/nova/nova.conf to /etc/nova/nova.conf
INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/nova/policy.yaml to /etc/nova/policy.yaml
INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/nova/secret.xml to /etc/nova/secret.xml
INFO:__main__:Deleting /etc/sasl2/libvirt.conf
INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/sasl2/libvirt.conf to /etc/sasl2/libvirt.conf
INFO:__main__:Deleting /etc/ssh/ssh_config
INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/ssh/ssh_config to /etc/ssh/ssh_config
INFO:__main__:Deleting /etc/ssh/sshd_config
INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/ssh/sshd_config to /etc/ssh/sshd_config
INFO:__main__:Deleting /etc/login.defs
INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/login.defs to /etc/login.defs
INFO:__main__:Deleting /var/lib/nova/.ssh/config
INFO:__main__:Copying /var/lib/kolla/config_files/src/var/lib/nova/.ssh/config to /var/lib/nova/.ssh/config
INFO:__main__:Writing out command to execute
INFO:__main__:Setting permission for /var/log/nova
INFO:__main__:Setting permission for /var/log/nova/nova-manage.log
INFO:__main__:Setting permission for /var/log/nova/nova-scheduler.log
INFO:__main__:Setting permission for /var/log/nova/nova-conductor.log
INFO:__main__:Setting permission for /var/log/nova/nova-novncproxy.log
INFO:__main__:Setting permission for /var/log/nova/nova-metadata-api.log
INFO:__main__:Setting permission for /var/log/nova/nova-api.log
++ cat /run_command
+ CMD='/container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_compute_service.py'
+ ARGS=
+ [[ ! -n '' ]]
+ . kolla_extend_start
+ echo 'Running command: '\''/container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_compute_service.py'\'''
+ umask 0022
+ exec /container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_compute_service.py
+ command -v python3
+ python3 /container-config-scripts/nova_wait_for_compute_service.py
2023-05-16 09:47:06.112659 | 52540099-d186-cfdd-423f-0000000055d9 |      FATAL | Create containers managed by Podman for /var/lib/tripleo-config/container-startup-config/step_5 | rdo-centos9 | error={"changed": false, "msg": "Failed containers: nova_wait_for_compute_service"}
2023-05-16 09:47:06.112900 | 52540099-d186-cfdd-423f-0000000055d9 |     TIMING | tripleo_container_manage : Create containers managed by Podman for /var/lib/tripleo-config/container-startup-config/step_5 | rdo-centos9 | 0:27:48.100990 | 607.06s
