Content was moved to https://github.com/tungstenfabric/tf-ansible-deployer
juniper / contrail-ansible-deployer
Ansible deployment for contrail
License: Apache License 2.0
Hi,
Version docker-ce-selinux-17.03.1.ce is apparently obsoleted by docker-ce-cli-1:18.09.0-3.el7.x86_64.
When running the configure_instances.yml playbook, it fails during this task:
https://github.com/Juniper/contrail-ansible-deployer/blob/master/playbooks/roles/docker/tasks/RedHat.yml
"\n\nTransaction check error:\n file /usr/bin/docker from install of docker-ce-cli-1:18.09.0-3.el7.x86_64 conflicts with file from package docker-ce-18.03.1.ce-1.el7.centos.x86_64\n file /usr/share/bash-completion/completions/docker from install of docker-ce-cli-1:18.09.0-3.el7.x86_64 conflicts with file from package docker-ce-18.03.1.ce-1.el7.centos.x86_64\n
Changing the docker version in the RedHat docker role from "docker-ce-18.03.1.ce" to "docker-ce-18.09.0" fixes the problem.
Dear all,
I have a question about Contrail: if I install all the Contrail packages in Kubernetes, how will the vRouter agent communicate with the compute nodes? As far as I know, the vRouter agent should run on every compute node, so how is that achieved when Contrail is installed on Kubernetes?
Thanks
Salim
Hello,
I am not sure whether the TOR agent is supported; I cannot find any example for it in the instances YAML file.
Thanks.
I noticed that the setup guide on https://github.com/tungstenfabric/website/wiki/Tungsten-Fabric:-10-minute-deployment-with-k8s-on-AWS refers to ceb29b0 as being a "known working" commit.
Indeed, I ran into issues when choosing master instead, so it seems that in order to use these playbooks without running into problems, one must know which commit to use.
This is not a big deal for someone who comes across these playbooks through this walkthrough, but it is hardly helpful for those who find them through other means. To cover the bases, we should ensure master is always in a working state.
Is there a current effort around ensuring this? Are there specific tasks that the community can help with to move towards this goal?
Since at least today, the configure_instances.yml playbook is broken. The last working deployment for me was on Friday, two days ago, with the same procedure (scripted deployment).
[root@controller-1 contrail-ansible-deployer]# ansible --version
ansible 2.4.2.0
config file = /root/contrail-ansible-deployer/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Apr 11 2018, 07:36:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
[root@controller-1 contrail-ansible-deployer]# ansible-playbook -i inventory/ playbooks/provision_instances.yml
...
PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=0
[root@controller-1 contrail-ansible-deployer]# ansible-playbook -e orchestrator=openstack -i inventory/ playbooks/configure_instances.yml
...
TASK [create_kolla_playbooks : Clone openstack git repo] *********************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "'no' is not a valid boolean. Valid booleans include: 0, 'on', 'f', 'false', 1, 'no', 'n', '1', '0', 't', 'y', 'off', 'yes', 'true'"}
...ignoring
...
PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************
10.10.10.11 : ok=38 changed=26 unreachable=0 failed=0
10.10.10.12 : ok=40 changed=28 unreachable=0 failed=0
10.10.10.13 : ok=40 changed=28 unreachable=0 failed=0
localhost : ok=12 changed=7 unreachable=0 failed=0
[root@controller-1 contrail-ansible-deployer]# ansible-playbook -i inventory/ playbooks/install_openstack.yml
[WARNING]: Found both group and host with same name: localhost
[DEPRECATION WARNING]: The use of 'include' for tasks has been deprecated. Use 'import_tasks' for static inclusions or 'include_tasks' for dynamic inclusions. This feature will be removed in a future release. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
[DEPRECATION WARNING]: include is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled
by setting deprecation_warnings=False in ansible.cfg.
[DEPRECATION WARNING]: 'include' for playbook includes. You should use 'import_playbook' instead. This feature will be removed in version 2.8. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ERROR! Unable to retrieve file contents
Could not find or access '/root/contrail-kolla-ansible/ansible/site.yml'
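For context, the "not a valid boolean" message above comes from Ansible's module argument conversion; the paradox of 'no' being rejected while also listed as valid usually means the actual value only looks like 'no' (for example, it carries extra quotes from Jinja templating). A minimal sketch of the conversion, assuming the logic roughly follows module_utils (details vary by Ansible version):

```python
# Rough sketch of the string-to-boolean conversion Ansible applies to
# module arguments (modeled on module_utils' boolean(); an approximation,
# not the exact upstream code).
BOOLEANS_TRUE = frozenset(["y", "yes", "on", "1", "true", "t", True])
BOOLEANS_FALSE = frozenset(["n", "no", "off", "0", "false", "f", False])

def to_bool(value):
    if isinstance(value, bool):
        return value
    key = value.lower().strip() if isinstance(value, str) else value
    if key in BOOLEANS_TRUE:
        return True
    if key in BOOLEANS_FALSE:
        return False
    # This is the branch that produced the error above: the offending value
    # was not the bare string 'no' but something that merely prints like it,
    # e.g. a Jinja-rendered value with embedded quotes.
    raise TypeError("%r is not a valid boolean" % (value,))
```

So a value like `"'no'"` (quotes included) falls through both sets and raises, even though its printed form appears in the "valid booleans" list.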
I'm trying to use the r5.0 branch of this repo to deploy in All_In_One mode.
Here is the log output:
TASK [nova : Waiting for nova-compute service up] ***************************************************************************************************
FAILED - RETRYING: Waiting for nova-compute service up (20 retries left).
FAILED - RETRYING: Waiting for nova-compute service up (19 retries left).
FAILED - RETRYING: Waiting for nova-compute service up (18 retries left).
FAILED - RETRYING: Waiting for nova-compute service up (17 retries left).
FAILED - RETRYING: Waiting for nova-compute service up (16 retries left).
FAILED - RETRYING: Waiting for nova-compute service up (15 retries left).
FAILED - RETRYING: Waiting for nova-compute service up (14 retries left).
FAILED - RETRYING: Waiting for nova-compute service up (13 retries left).
FAILED - RETRYING: Waiting for nova-compute service up (12 retries left).
FAILED - RETRYING: Waiting for nova-compute service up (11 retries left).
FAILED - RETRYING: Waiting for nova-compute service up (10 retries left).
FAILED - RETRYING: Waiting for nova-compute service up (9 retries left).
FAILED - RETRYING: Waiting for nova-compute service up (8 retries left).
FAILED - RETRYING: Waiting for nova-compute service up (7 retries left).
FAILED - RETRYING: Waiting for nova-compute service up (6 retries left).
FAILED - RETRYING: Waiting for nova-compute service up (5 retries left).
FAILED - RETRYING: Waiting for nova-compute service up (4 retries left).
FAILED - RETRYING: Waiting for nova-compute service up (3 retries left).
FAILED - RETRYING: Waiting for nova-compute service up (2 retries left).
FAILED - RETRYING: Waiting for nova-compute service up (1 retries left).
fatal: [172.20.1.160 -> 172.20.1.160]: FAILED! => {"attempts": 20, "changed": false, "cmd": ["docker", "exec", "kolla_toolbox", "openstack", "--os-interface", "internal", "--os-auth-url", "http://172.20.1.160:35357/v3", "--os-identity-api-version", "3", "--os-project-domain-name", "default", "--os-tenant-name", "admin", "--os-username", "admin", "--os-password", "contrail123", "--os-user-domain-name", "default", "compute", "service", "list", "-f", "json", "--service", "nova-compute"], "delta": "0:00:00.881186", "end": "2018-11-29 03:52:56.652387", "msg": "non-zero return code", "rc": 1, "start": "2018-11-29 03:52:55.771201", "stderr": "Traceback (most recent call last):\n File \"/opt/ansible/bin/openstack\", line 7, in <module>\n from openstackclient.shell import main\n File \"/opt/ansible/lib/python2.7/site-packages/openstackclient/shell.py\", line 29, in <module>\n from openstackclient.common import clientmanager\n File \"/opt/ansible/lib/python2.7/site-packages/openstackclient/common/clientmanager.py\", line 153, in <module>\n 'openstack.cli.base',\n File \"/opt/ansible/lib/python2.7/site-packages/openstackclient/common/clientmanager.py\", line 124, in get_plugin_modules\n __import__(ep.module_name)\n File \"/opt/ansible/lib/python2.7/site-packages/openstackclient/network/client.py\", line 17, in <module>\n from openstack import profile\nImportError: cannot import name profile", "stderr_lines": ["Traceback (most recent call last):", " File \"/opt/ansible/bin/openstack\", line 7, in <module>", " from openstackclient.shell import main", " File \"/opt/ansible/lib/python2.7/site-packages/openstackclient/shell.py\", line 29, in <module>", " from openstackclient.common import clientmanager", " File \"/opt/ansible/lib/python2.7/site-packages/openstackclient/common/clientmanager.py\", line 153, in <module>", " 'openstack.cli.base',", " File \"/opt/ansible/lib/python2.7/site-packages/openstackclient/common/clientmanager.py\", line 124, in get_plugin_modules", " 
__import__(ep.module_name)", " File \"/opt/ansible/lib/python2.7/site-packages/openstackclient/network/client.py\", line 17, in <module>", " from openstack import profile", "ImportError: cannot import name profile"], "stdout": "", "stdout_lines": []}
NO MORE HOSTS LEFT **********************************************************************************************************************************
to retry, use: --limit @/root/contrail-ansible-deployer/playbooks/install_openstack.retry
PLAY RECAP ******************************************************************************************************************************************
172.20.1.160 : ok=186 changed=26 unreachable=0 failed=1
localhost : ok=32 changed=2 unreachable=0 failed=0
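The ImportError above points at a version mismatch: newer openstacksdk releases dropped the legacy `profile` module while the installed python-openstackclient still imports it (the exact version boundary is an assumption here, not stated in the log). A small probe to check which side of that boundary an environment is on:

```python
import importlib

def has_sdk_profile():
    """Return True if the legacy openstack.profile module is importable.

    Older openstacksdk releases shipped it; newer ones removed it, which
    is what makes 'from openstack import profile' blow up in the
    traceback above.
    """
    try:
        importlib.import_module("openstack.profile")
        return True
    except ImportError:
        return False

print(has_sdk_profile())
```

If this prints False, the kolla_toolbox image has an openstacksdk too new for its openstackclient, and pinning compatible versions of the two packages is the likely fix.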
Hi.
I installed all-in-one following this link:
https://www.juniper.net/documentation/en_US/contrail19/topics/concept/install-contrail-ocata-kolla-50.html
It failed at this task:
TASK [install_contrail : start redis] **************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: SyntaxError: invalid syntax
fatal: [10.19.147.141]: FAILED! => {"changed": false, "module_stderr": "Shared connection to 10.19.147.141 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File "/root/.ansible/tmp/ansible-tmp-1576487203.84-86711495152857/AnsiballZ__docker_service.py", line 102, in \r\n _ansiballz_main()\r\n File "/root/.ansible/tmp/ansible-tmp-1576487203.84-86711495152857/AnsiballZ__docker_service.py", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File "/root/.ansible/tmp/ansible-tmp-1576487203.84-86711495152857/AnsiballZ__docker_service.py", line 40, in invoke_module\r\n runpy.run_module(mod_name='ansible.modules.cloud.docker._docker_service', init_globals=None, run_name='main', alter_sys=True)\r\n File "/usr/lib64/python2.7/runpy.py", line 176, in run_module\r\n fname, loader, pkg_name)\r\n File "/usr/lib64/python2.7/runpy.py", line 82, in _run_module_code\r\n mod_name, mod_fname, mod_loader, pkg_name)\r\n File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code\r\n exec code in run_globals\r\n File "/tmp/ansible_docker_service_payload_eB44gD/ansible_docker_service_payload.zip/ansible/modules/cloud/docker/_docker_service.py", line 483, in \r\n File "/usr/lib/python2.7/site-packages/compose/cli/command.py", line 12, in \r\n from .. 
import config\r\n File "/usr/lib/python2.7/site-packages/compose/config/init.py", line 6, in \r\n from .config import ConfigurationError\r\n File "/usr/lib/python2.7/site-packages/compose/config/config.py", line 50, in \r\n from .validation import match_named_volumes\r\n File "/usr/lib/python2.7/site-packages/compose/config/validation.py", line 12, in \r\n from jsonschema import Draft4Validator\r\n File "/usr/lib/python2.7/site-packages/jsonschema/init.py", line 33, in \r\n import importlib_metadata as metadata\r\n File "/usr/lib/python2.7/site-packages/importlib_metadata/init.py", line 9, in \r\n import zipp\r\n File "/usr/lib/python2.7/site-packages/zipp.py", line 12, in \r\n import more_itertools\r\n File "/usr/lib/python2.7/site-packages/more_itertools/init.py", line 1, in \r\n from .more import * # noqa\r\n File "/usr/lib/python2.7/site-packages/more_itertools/more.py", line 460\r\n yield from iterable\r\n ^\r\nSyntaxError: invalid syntax\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
This happens when I run ansible-playbook -e orchestrator=openstack -i inventory/ playbooks/install_contrail.yml.
Where am I going wrong?
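The SyntaxError in the traceback is not about the playbook at all: `yield from` is Python-3-only syntax, and recent more-itertools releases use it, so importing them under the system Python 2.7 fails at import time. The usual remedy (an assumption, not stated in this thread) is pinning `more-itertools<6` on Python 2 hosts. The failing construct, which is perfectly valid on Python 3:

```python
# Valid on Python 3; under Python 2.7 this 'yield from' line is the exact
# SyntaxError shown above for more_itertools/more.py.
def chained(*iterables):
    for iterable in iterables:
        yield from iterable

print(list(chained([1, 2], [3])))  # [1, 2, 3]
```

Because the failure happens while the module is being compiled, it surfaces no matter which more_itertools function the caller actually uses.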
instances.yaml:
provider_config:
  bms:
    ssh_pwd: root
    ssh_user: root
    ntpserver: 1.cn.pool.ntp.org
    domainsuffix: local
instances:
  bms1:
    provider: bms
    ip: 10.19.147.141
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      analytics_alarm:
      analytics_snmp:
      webui:
      vrouter:
      openstack:
      openstack_compute:
contrail_configuration:
  RABBITMQ_NODE_PORT: 5673
  AUTH_MODE: keystone
  KEYSTONE_AUTH_URL_VERSION: /v3
kolla_config:
  kolla_globals:
    enable_haproxy: no
  kolla_passwords:
    keystone_admin_password: 1qazXSW@
TASK [contrail_vrouter : Set 'platform=dpdk' in agent.conf for vif to work for dpdk compute] *************************************************************************************************************************************************
TASK [contrail : create contrail vrouter for Win32NT] ****************************************************************************************************************************************************************************************
TASK [contrail : create contrail tor agents] *************************************************************************************************************************************************************************************************
TASK [contrail : Pluginize legacy compute] ***************************************************************************************************************************************************************************************************
TASK [contrail : create contrail vcenter-plugin] *********************************************************************************************************************************************************************************************
TASK [contrail : create contrail vcenter-manager] ********************************************************************************************************************************************************************************************
TASK [contrail : create win cnm plugin] ******************************************************************************************************************************************************************************************************
TASK [contrail : create tsn haproxy] *********************************************************************************************************************************************************************************************************
to retry, use: --limit @/root/contrail-ansible-deployer/playbooks/install_contrail.retry
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
192.168.10.4 : ok=30 changed=9 unreachable=0 failed=1
192.168.10.5 : ok=19 changed=6 unreachable=0 failed=0
192.168.10.6 : ok=19 changed=6 unreachable=0 failed=0
localhost : ok=59 changed=1 unreachable=0 failed=0
TASK [install_contrail : create /etc/contrail/redis] ****************************************************************************
task path: /root/contrail-ansible-deployer/playbooks/roles/install_contrail/tasks/create_redis.yml:2
fatal: [192.168.0.xx]: FAILED! => { "msg": "The conditional check 'roles[instance_name].webui is defined or roles[instance_name].analytics is defined' failed. The error was: error while evaluating conditional (roles[instance_name].webui is defined or roles[instance_name].analytics is defined): 'dict object' has no attribute u'bms1'\n\nThe error appears to have been in '/root/contrail-ansible-deployer/playbooks/roles/install_contrail/tasks/create_redis.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: create /etc/contrail/redis\n ^ here\n"
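For what it's worth, the failure mode behind that message is a plain dictionary lookup: under Ansible 2.5 the structure the conditional is evaluated against apparently no longer contains the instance key, so `roles[instance_name]` raises before `is defined` can short-circuit. A minimal illustration (the variable names mirror the conditional; the empty dict is an assumption about what 2.5 sees):

```python
roles = {}               # assumption: what the 2.5 evaluation sees
instance_name = "bms1"   # key taken from instances.yaml
try:
    roles[instance_name]["webui"]
except KeyError as exc:
    # Jinja surfaces this lookup failure as
    # "'dict object' has no attribute u'bms1'"
    message = "'dict object' has no attribute %r" % (exc.args[0],)
print(message)
```

This matches the resolution below: the lookup semantics changed between Ansible 2.4 and 2.5, not the playbook itself.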
[user@somewhere]# ansible --version
ansible 2.4.2.0
  config file = /home/user/contrail-ansible-deployer/ansible.cfg
  configured module search path = [u'/home/user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /bin/ansible
  python version = 2.7.5 (default, Jul 13 2018, 13:06:57) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
Resolution: this is caused by code incompatible with Ansible 2.5.1.0. Please stick to ansible 2.4.2.0 to avoid this issue for now, until we fix the code to work with the latest version of Ansible.
Uhm, what version do you recommend now?
P.S. Docker, K8s, Kolla, hundreds of states, plus Ansible bugs: this is gonna be fun.
Hello,
I am trying to deploy an environment with the remote compute feature. Here is my instances file:
remote_locations:
  pop2:
    BGP_ASN: 12345
    SUBCLUSTER: pop2
    XMPP_SERVER_PORT: 5269
    BGP_PORT: 179
    CONTROL_INTROSPECT_LISTEN_PORT: 8083
provider_config:
  bms:
instances:
  bms_contr:
    ip: 10.10.0.100
    provider: bms
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      analytics_alarm:
      webui:
      openstack_control:
      openstack_network:
  remoteCp_node:
    ip: 10.10.0.150
    provider: bms
    roles:
      control_only:
        location: pop2
        PHYSICAL_INTERFACE: em2
        DEFAULT_LOCAL_IP: 10.10.0.150
        DEFAULT_IFACE: em2
  bms_compute:
    ip: 10.40.0.100
    provider: bms
    roles:
      vrouter:
        CONTROL_NODES: 10.10.0.150
        VROUTER_GATEWAY: 10.40.0.1
        location: pop2
        PHYSICAL_INTERFACE: "em1"
      openstack_compute:
        network_interface: "em1"
      openstack_storage:
global_configuration:
  CONTAINER_REGISTRY: opencontrailnightly
contrail_configuration:
  CONTRAIL_VERSION: latest
  CLOUD_ORCHESTRATOR: openstack
  RABBITMQ_NODE_PORT: 5673
  AUTH_MODE: keystone
  KEYSTONE_AUTH_URL_VERSION: /v3
  KEYSTONE_AUTH_ADMIN_PASSWORD: contrail123
  UPGRADE_KERNEL: false
  AAA_MODE: rbac
  METADATA_PROXY_SECRET: contrail123
  CONTROLLER_NODES: 10.10.0.100
  CONTROL_DATA_NET_LIST: 10.10.0.0/24,10.40.0.0/24
  CONFIGDB_NODES: 10.10.0.100
kolla_config:
  kolla_globals:
    network_interface: "em2"
    kolla_external_vip_interface: "em2"
    kolla_external_vip_address: "10.10.0.100"
    kolla_internal_vip_address: "10.10.0.100"
    kolla_internal_vip_interface: "em2"
    enable_haproxy: "no"
    enable_ironic: "no"
    enable_swift: "yes"
    enable_cinder: "yes"
    enable_cinder_backend_lvm: "yes"
    enable_ceph: "no"
    cinder_backup_driver: "swift"
    horizon_keystone_multidomain: true
After the Ansible deployment, I get the following error in almost every log file:
SANDESH: [DROP: WrongClientSMState] NodeStatusUVE: data = << name = process_status = [ << module_id = contrail-control-nodemgr instance_id = 0 state = Non-Functional connection_infos = [ << type = Collector name = server_addrs = [ 10.10.0.100:8086, ] status = Initializing description = Idle to Connect on EvIdleHoldTimerExpired >>, ] description = Collector connection down >>, ] >>
Collector log file:
SANDESH: Send FAILED: 1566470335049259 [SYS_NOTICE]: NodeStatusUVE: data= [ name = process_status= [ [ [ module_id = contrail-collector instance_id = 0 state = Non-Functional connection_infos= [ [ [ type = Collector name = server_addrs= [ [ (_iter6) = 10.10.0.100:8086, ] ] status = Initializing description = Connect : EvTcpConnected ], [ type = Database name = Cassandra server_addrs= [ [ (_iter6) = 10.10.0.100:9041, ] ] status = Up description = Established Cassandra connection ], [ type = Database name = RabbitMQ server_addrs= [ [ (_iter6) = 10.10.0.100:5673, ] ] status = Up description = RabbitMQ connection established ], [ type = Database name = :Global server_addrs= [ [ (_iter6) = 10.10.0.100:9042, ] ] status = Up description = ], [ type = Redis-UVE name = From server_addrs= [ [ (_iter6) = 127.0.0.1:6379, ] ] status = Up description = ], [ type = Redis-UVE name = To server_addrs= [ [ (_iter6) = 127.0.0.1:6379, ] ] status = Up description = ], [ type = KafkaPub name = 10.10.0.100:9092 server_addrs= [ [ (*_iter6) = 0.0.0.0:0, ] ] status = Down description = ], ] ] description = Collector, KafkaPub:10.10.0.100:9092 connection down ], ] ] ]
contrail-status output on the main control node:
== Contrail control ==
control: active
nodemgr: active
named: active
dns: active
== Contrail analytics-alarm ==
nodemgr: active
kafka: active
alarm-gen: active
== Contrail database ==
nodemgr: active
query-engine: active
cassandra: active
== Contrail analytics ==
nodemgr: active
api: active
collector: active
== Contrail config-database ==
nodemgr: active
zookeeper: active
rabbitmq: active
cassandra: active
== Contrail webui ==
web: active
job: active
== Contrail device-manager ==
== Contrail config ==
svc-monitor: active
nodemgr: active
device-manager: active
api: active
schema: active
I simulate the provider network in GNS3, so it is not a host connectivity issue. The OpenStack control node sees the remote compute node.
When I deploy the environment without analytics_alarm (and thus without Kafka), I get the following in the collector log file (in the other log files the error is the same: Collector connection down):
[Thread 140371426252544, Pid 1]: SANDESH: Send FAILED: 1566481747362411 [SYS_NOTICE]: NodeStatusUVE: data= [ name = process_status= [ [ [ module_id = contrail-collector instance_id = 0 state = Non-Functional connection_infos= [ [ [ type = Collector name = server_addrs= [ [ (_iter6) = 10.10.0.100:8086, ] ] status = Initializing description = Connect : EvTcpConnected ], [ type = Database name = Cassandra server_addrs= [ [ (_iter6) = 10.10.0.100:9041, ] ] status = Up description = Established Cassandra connection ], [ type = Database name = RabbitMQ server_addrs= [ [ (_iter6) = 10.10.0.100:5673, ] ] status = Up description = RabbitMQ connection established ], [ type = Database name = :Global server_addrs= [ [ (_iter6) = 10.10.0.100:9042, ] ] status = Up description = ], [ type = Redis-UVE name = From server_addrs= [ [ (_iter6) = 127.0.0.1:6379, ] ] status = Up description = Redis(From) handling the auth callback ], [ type = Redis-UVE name = To server_addrs= [ [ (_iter6) = 127.0.0.1:6379, ] ] status = Up description = Redis(To) connecting to CallbackProcess ], ] ] description = Collector connection down ], ] ] ]
Thank you in advance,
Wojtek
The "Click On Deployment of Contrail and Kubernetes in AWS" wiki page is outdated; the referenced CloudFormation template points to non-existent CentOS AMIs.
I prepared a version of this kind of AWS deployment that doesn't depend on a deployer server; after deploying the CF template, it can be deployed from localhost. Happy to contribute it.
I'm trying to set up multidomain auth with domain choices in the Horizon UI (branch stable-queens).
When setting the following in config/instances.yaml:
kolla_globals:
  horizon_keystone_multidomain: true
  horizon_keystone_domain_choices:
    Default: Default
    domain: domain
This results in an error when the Horizon docker container launches.
Resulting variable in /etc/kolla/globals:
horizon_keystone_domain_choices: {u'Default': u'Default', u'domain': u'domain'}
Resulting local_settings in /etc/kolla/horizon/local_settings:
OPENSTACK_KEYSTONE_DOMAIN_CHOICES = (
(''u'Default'', 'u'default''),
('u'domain'', 'u'domain''),
)
This causes horizon docker to restart infinitely.
When changing the following in contrail-kolla-ansible/ansible/roles/horizon/templates/local_settings.j2:
OPENSTACK_KEYSTONE_DOMAIN_CHOICES = (
{% for key, value in horizon_keystone_domain_choices.items() %}
('{{ key }}', '{{ value }}'),
{% endfor %}
)
with:
OPENSTACK_KEYSTONE_DOMAIN_CHOICES = (
{% for key, value in horizon_keystone_domain_choices.items() %}
({{ key }}, {{ value }}),
{% endfor %}
)
the problem is solved.
Also, when setting the domains directly in contrail-kolla-ansible/ansible/roles/horizon/defaults/main.yml, without modifying the local_settings template, it works fine:
horizon_keystone_domain_choices:
  Default: default
  domain: domain
It seems the way contrail-ansible-deployer generates the kolla_globals variable file is the problem. Does anyone have a clue how to avoid it?
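The doubled quotes suggest the dict values arrive at the template already quoted: the Python-2 reprs like u'Default' survive the round-trip through /etc/kolla/globals, and the template's own quoting then wraps them a second time. A sketch of that interaction, with the value taken from the broken output above (the mechanism is an inference, not confirmed in the thread):

```python
# The value reaching the template already carries its own quoting:
value = "u'Default'"

# The template line ('{{ key }}', '{{ value }}') wraps it again,
# producing the broken tuple syntax seen in local_settings:
quoted = "('%s', '%s')," % (value, value)
assert quoted == "('u'Default'', 'u'Default''),"

# The reporter's fix drops the template quotes, so the embedded
# quoting is used as-is and the tuple is valid Python 2 source:
unquoted = "(%s, %s)," % (value, value)
assert unquoted == "(u'Default', u'Default'),"
```

That also explains why setting the choices in the role defaults works: those values never pass through the globals file, so they reach the template unquoted.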
config/instances.yaml
provider_config:
  bms:
    ssh_pwd: contrail123
    ssh_user: root
    domainsuffix: local
    ntpserver: de.pool.ntp.org
instances:
  bms1:
    provider: bms
    ip: 10.10.10.11
    roles:
      openstack:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
  bms2:
    provider: bms
    ip: 10.10.10.12
    roles:
      openstack_compute:
      vrouter:
        VROUTER_GATEWAY: 10.10.10.1
  bms3:
    provider: bms
    ip: 10.10.10.13
    roles:
      openstack_compute:
      vrouter:
        VROUTER_GATEWAY: 10.10.10.1
contrail_configuration:
  RABBITMQ_NODE_PORT: 5673
  KEYSTONE_AUTH_URL_VERSION: /v3
  AUTH_MODE: keystone
kolla_config:
  customize:
    nova.conf: |
      [libvirt]
      virt_type=qemu
      cpu_mode=none
  kolla_globals:
    kolla_internal_vip_address: 10.10.10.11
    kolla_external_vip_address: 10.10.10.11
    enable_haproxy: "no"
    enable_ironic: "no"
    enable_swift: "no"
Docker Repository and Version: opencontrailnightly / master-115
Issue:
[root@controller-1 /]# service-instance list
Traceback (most recent call last):
File "/usr/bin/service-instance", line 253, in <module>
main()
File "/usr/bin/service-instance", line 248, in main
si = ServiceInstanceCmd(args_str)
File "/usr/bin/service-instance", line 52, in __init__
self._novaclient_init()
File "/usr/bin/service-instance", line 145, in _novaclient_init
auth_url='http://' + self._args.ifmap_server_ip + ':5000/v2.0')
AttributeError: 'Namespace' object has no attribute 'ifmap_server_ip'
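The AttributeError above is a standard argparse failure mode: the script builds the auth URL from `args.ifmap_server_ip`, but no such argument is registered on this code path, so the Namespace simply lacks the attribute. A minimal reproduction (the attribute name comes from the traceback; everything else is illustrative):

```python
import argparse

parser = argparse.ArgumentParser()
# --ifmap_server_ip is never added, mirroring the code path in
# /usr/bin/service-instance that hits the error
args = parser.parse_args([])

try:
    auth_url = "http://" + args.ifmap_server_ip + ":5000/v2.0"
except AttributeError as exc:
    print(exc)  # 'Namespace' object has no attribute 'ifmap_server_ip'
```

So the fix belongs in the CLI script itself: it needs to define the argument (or read the server IP from elsewhere) before constructing the auth URL.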
In this wiki page, the link about how to use the Kolla toolbox has a broken anchor.
It is currently: https://github.com/Juniper/contrail-ansible-deployer/wiki/Provisioning-F.A.Q#how-to-use-the-kolla_toolbox-container-to-run-openstack-cli-commands
Trying to configure All-in-One with two interfaces (management and control/data network):
---mgmt network-------em1[Server]p6p1-----control/data network---- [Router]
10.8.128.x/25 .92 .92 10.92.100.x/24 .1
Here is instances.yaml file:
provider_config:
  bms:
    ssh_pwd: Juniper!1
    ssh_user: root
    ssh_public_key:
    ssh_private_key:
    ntpserver: 10.8.4.105
    nameserver: 10.8.4.105
    domainsuffix: cse-wf.jnpr.net
instances:
  bms1:
    provider: bms
    roles:
      openstack:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
      vrouter:
        PHYSICAL_INTERFACE: p6p1
        VROUTER_GATEWAY: 10.92.100.1
      openstack_compute:
      kubemanager:
      k8s_master:
      k8s_node:
    ip: 10.8.128.92
contrail_configuration:
  CONTAINER_REGISTRY: opencontrailnightly
  CONTRAIL_VERSION: ocata-master-58
  UPGRADE_KERNEL: true
  RABBITMQ_NODE_PORT: 5673
  AUTH_MODE: keystone
  KEYSTONE_AUTH_URL_VERSION: /v3
  KEYSTONE_AUTH_ADMIN_PASSWORD: contrail123
  CLOUD_ORCHESTRATOR: openstack
  VROUTER_GATEWAY: 10.92.100.1
  PHYSICAL_INTERFACE: em1
  CONTROL_DATA_NET_LIST: 10.92.100.0/24
  AAA_MODE: cloud-admin
  KUBERNETES_CLUSTER_PROJECT: {}
kolla_config:
  customize:
    nova.conf: |
      [libvirt]
      virt_type=qemu
      cpu_mode=none
  kolla_globals:
    network_interface: "em1"
    kolla_internal_vip_address: "10.92.100.92"
    kolla_external_vip_address: "10.8.128.92"
    enable_haproxy: "no"
The issue is that the install_contrail playbook errors out at the task below.
TASK [Contrail default swift container and Temp-URL-Key creation] *************************************************************************
fatal: [10.8.128.92]: FAILED! => {"changed": true, "cmd": "docker exec kolla_toolbox bash -c 'source /var/lib/kolla/config_files/admin-openrc.sh; swift post -r '.r:*' contrail_container; swift post -m \"Temp-URL-Key:mykey\"'", "delta": "0:01:04.557185", "end": "2018-04-12 15:26:29.732715", "msg": "non-zero return code", "rc": 1, "start": "2018-04-12 15:25:25.175530", "stderr": "HTTPConnectionPool(host='10.8.128.92', port=8080): Max retries exceeded with url: /v1/AUTH_df269fdcb0dd469983dc5fd42dff04ad/contrail_container (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))\nHTTPConnectionPool(host='10.8.128.92', port=8080): Max retries exceeded with url: /v1/AUTH_df269fdcb0dd469983dc5fd42dff04ad (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))", "stderr_lines": ["HTTPConnectionPool(host='10.8.128.92', port=8080): Max retries exceeded with url: /v1/AUTH_df269fdcb0dd469983dc5fd42dff04ad/contrail_container (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))", "HTTPConnectionPool(host='10.8.128.92', port=8080): Max retries exceeded with url: /v1/AUTH_df269fdcb0dd469983dc5fd42dff04ad (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))"], "stdout": "", "stdout_lines": []}
to retry, use: --limit @/root/contrail-ansible-deployer/playbooks/install_contrail.retry
PLAY RECAP ********************************************************************************************************************************
10.8.128.92 : ok=373 changed=32 unreachable=0 failed=1
localhost : ok=9 changed=4 unreachable=0 failed=0
This is the last task in the /root/contrail-kolla-ansible/ansible/post-deploy-contrail.yaml file.
TASK [mariadb : Waiting for MariaDB service to be ready] *********************************************************************************************************************************
FAILED - RETRYING: Waiting for MariaDB service to be ready (10 retries left).
FAILED - RETRYING: Waiting for MariaDB service to be ready (9 retries left).
FAILED - RETRYING: Waiting for MariaDB service to be ready (8 retries left).
FAILED - RETRYING: Waiting for MariaDB service to be ready (7 retries left).
FAILED - RETRYING: Waiting for MariaDB service to be ready (6 retries left).
FAILED - RETRYING: Waiting for MariaDB service to be ready (5 retries left).
FAILED - RETRYING: Waiting for MariaDB service to be ready (4 retries left).
FAILED - RETRYING: Waiting for MariaDB service to be ready (3 retries left).
FAILED - RETRYING: Waiting for MariaDB service to be ready (2 retries left).
FAILED - RETRYING: Waiting for MariaDB service to be ready (1 retries left).
fatal: [192.168.6.13]: FAILED! => {"attempts": 10, "changed": false, "elapsed": 60, "msg": "Timeout when waiting for search string MariaDB in 192.168.6.13:3306"}
to retry, use: --limit @/root/contrail-ansible-deployer/playbooks/install_contrail.retry
PLAY RECAP *******************************************************************************************************************************************************************************
192.168.6.12 : ok=55 changed=21 unreachable=0 failed=0
192.168.6.13 : ok=118 changed=55 unreachable=0 failed=1
localhost : ok=7 changed=2 unreachable=0 failed=0
[root@controller5 db]# docker logs --tail 50 --follow --timestamps mariadb
2018-05-09T16:16:49.792265592Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_waits_summary_by_host_by_event_name.frm
2018-05-09T16:16:49.792271446Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_waits_summary_by_user_by_event_name.frm
2018-05-09T16:16:49.792279855Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_waits_summary_by_account_by_event_name.frm
2018-05-09T16:16:49.792286029Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_waits_summary_by_thread_by_event_name.frm
2018-05-09T16:16:49.792291929Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_waits_summary_global_by_event_name.frm
2018-05-09T16:16:49.792297786Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/file_instances.frm
2018-05-09T16:16:49.792303566Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/file_summary_by_event_name.frm
2018-05-09T16:16:49.792322838Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/file_summary_by_instance.frm
2018-05-09T16:16:49.792328799Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/socket_instances.frm
2018-05-09T16:16:49.792334582Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/socket_summary_by_instance.frm
2018-05-09T16:16:49.792340462Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/socket_summary_by_event_name.frm
2018-05-09T16:16:49.792346235Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/host_cache.frm
2018-05-09T16:16:49.792351929Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/mutex_instances.frm
2018-05-09T16:16:49.792357692Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/objects_summary_global_by_type.frm
2018-05-09T16:16:49.792363482Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/performance_timers.frm
2018-05-09T16:16:49.792369255Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/rwlock_instances.frm
2018-05-09T16:16:49.792374965Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/setup_actors.frm
2018-05-09T16:16:49.792380682Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/setup_consumers.frm
2018-05-09T16:16:49.792386412Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/setup_instruments.frm
2018-05-09T16:16:49.792392750Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/setup_objects.frm
2018-05-09T16:16:49.792398764Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/setup_timers.frm
2018-05-09T16:16:49.792404584Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/table_io_waits_summary_by_index_usage.frm
2018-05-09T16:16:49.792410574Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/table_io_waits_summary_by_table.frm
2018-05-09T16:16:49.792417319Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/table_lock_waits_summary_by_table.frm
2018-05-09T16:16:49.792423249Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/threads.frm
2018-05-09T16:16:49.792428935Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_stages_current.frm
2018-05-09T16:16:49.792434772Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_stages_history.frm
2018-05-09T16:16:49.792440539Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_stages_history_long.frm
2018-05-09T16:16:49.792446292Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_stages_summary_by_thread_by_event_name.frm
2018-05-09T16:16:49.792452202Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_stages_summary_by_host_by_event_name.frm
2018-05-09T16:16:49.792458026Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_stages_summary_by_user_by_event_name.frm
2018-05-09T16:16:49.792467704Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_stages_summary_by_account_by_event_name.frm
2018-05-09T16:16:49.792473801Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_stages_summary_global_by_event_name.frm
2018-05-09T16:16:49.792479714Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_statements_current.frm
2018-05-09T16:16:49.792485498Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_statements_history.frm
2018-05-09T16:16:49.792491318Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_statements_history_long.frm
2018-05-09T16:16:49.792497121Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_statements_summary_by_thread_by_event_name.frm
2018-05-09T16:16:49.792503011Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_statements_summary_by_host_by_event_name.frm
2018-05-09T16:16:49.792508941Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_statements_summary_by_user_by_event_name.frm
2018-05-09T16:16:49.792514852Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_statements_summary_by_account_by_event_name.frm
2018-05-09T16:16:49.792520782Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_statements_summary_global_by_event_name.frm
2018-05-09T16:16:49.792527837Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/hosts.frm
2018-05-09T16:16:49.792533703Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/users.frm
2018-05-09T16:16:49.792539477Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/accounts.frm
2018-05-09T16:16:49.792545177Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/events_statements_summary_by_digest.frm
2018-05-09T16:16:49.792551033Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/session_connect_attrs.frm
2018-05-09T16:16:49.792556920Z INFO:main:Setting permission for /var/lib/mysql/performance_schema/session_account_connect_attrs.frm
2018-05-09T16:16:49.808016993Z Running command: '/usr/bin/mysqld_safe'
2018-05-09T16:16:49.978048481Z 180509 16:16:49 mysqld_safe Logging to '/var/log/kolla/mariadb/mariadb.log'.
2018-05-09T16:16:50.028770833Z 180509 16:16:50 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql/
Guys!
How do I deploy OpenStack on CentOS with this deployer?
I would appreciate everyone's help. Sincere greetings!
I tried to add a new compute node to my existing OpenStack + Tungsten cluster, but it failed. The IP of my new compute node is 10.0.141.60. Below is my instances.yaml:
provider_config:
  bms:
    domainsuffix: local
    ssh_user: root
    ssh_pwd: P@ssw0rd
    ntpserver: 1.id.pool.ntp.org
instances:
  tungsten:
    ip: 10.0.141.53
    provider: bms
    roles:
      config:
      config_database:
      control:
      analytics_database:
      analytics:
      webui:
  openstack:
    ip: 10.0.141.58
    provider: bms
    roles:
      openstack:
  compute:
    ip: 10.0.141.59
    provider: bms
    roles:
      openstack_compute:
      vrouter:
  compute2:
    ip: 10.0.141.60
    provider: bms
    roles:
      openstack_compute:
      vrouter:
global_configuration:
  CONTAINER_REGISTRY: opencontrailnightly
  REGISTRY_PRIVATE_INSECURE: True
contrail_configuration:
  UPGRADE_KERNEL: true
  CONTRAIL_CONTAINER_TAG: latest
  CONTRAIL_VERSION: latest
  CLOUD_ORCHESTRATOR: openstack
  CONFIG_NODEMGR__DEFAULTS__minimum_diskGB: 2
  DATABASE_NODEMGR__DEFAULTS__minimum_diskGB: 2
  CONFIG_DATABASE_NODEMGR__DEFAULTS__minimum_diskGB: 2
  ENCAP_PRIORITY: "VXLAN,MPLSoUDP,MPLSoGRE"
kolla_config:
  kolla_globals:
    enable_haproxy: no
    enable_ironic: no
    enable_swift: no
  kolla_passwords:
    keystone_admin_password: c0ntrail123
When I execute "ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/configure_instances.yml", I get this error:
TASK [instance : Build hosts file with domain suffix when provided] **************************************************************************************************************************
ok: [10.0.141.53] => (item=10.0.141.53)
ok: [10.0.141.58] => (item=10.0.141.53)
ok: [10.0.141.60] => (item=10.0.141.53)
ok: [10.0.141.53] => (item=10.0.141.58)
ok: [10.0.141.58] => (item=10.0.141.58)
ok: [10.0.141.60] => (item=10.0.141.58)
ok: [10.0.141.53] => (item=10.0.141.60)
fatal: [10.0.141.53]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'ansible_hostname'\n\nThe error appears to have been in '/root/contrail-ansible-deployer/playbooks/roles/instance/tasks/install_software_Linux.yml': line 95, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: "Build hosts file with domain suffix when provided"\n ^ here\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: 'dict object' has no attribute 'ansible_hostname'"}
ok: [10.0.141.58] => (item=10.0.141.60)
fatal: [10.0.141.58]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'ansible_hostname'\n\nThe error appears to have been in '/root/contrail-ansible-deployer/playbooks/roles/instance/tasks/install_software_Linux.yml': line 95, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: "Build hosts file with domain suffix when provided"\n ^ here\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: 'dict object' has no attribute 'ansible_hostname'"}
ok: [10.0.141.60] => (item=10.0.141.60)
fatal: [10.0.141.60]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'ansible_hostname'\n\nThe error appears to have been in '/root/contrail-ansible-deployer/playbooks/roles/instance/tasks/install_software_Linux.yml': line 95, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: "Build hosts file with domain suffix when provided"\n ^ here\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: 'dict object' has no attribute 'ansible_hostname'"}
to retry, use: --limit @/root/contrail-ansible-deployer/playbooks/configure_instances.retry
PLAY RECAP ***********************************************************************************************************************************************************************************
10.0.141.53 : ok=24 changed=5 unreachable=0 failed=1
10.0.141.58 : ok=24 changed=6 unreachable=0 failed=1
10.0.141.59 : ok=0 changed=0 unreachable=1 failed=0
10.0.141.60 : ok=26 changed=5 unreachable=0 failed=1
localhost : ok=41 changed=4 unreachable=0 failed=0
Node 10.0.141.59 is unreachable because I shut it down on purpose.
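A hedged guess at the 'ansible_hostname' failure above: the "Build hosts file" task loops over hostvars for every instance in instances.yaml, and no facts (including ansible_hostname) were ever gathered for the powered-off 10.0.141.59 node, so the lookup comes back undefined. A minimal workaround sketch, assuming you want to keep the node powered off for now:

```yaml
# Workaround sketch (assumption: facts are missing only for the down node):
# comment the node out of instances.yaml until it is reachable again,
# then re-add it and rerun configure_instances.yml.
instances:
  # compute:                # 10.0.141.59 is down on purpose
  #   ip: 10.0.141.59
  #   provider: bms
  #   roles:
  #     openstack_compute:
  #     vrouter:
  compute2:
    ip: 10.0.141.60
    provider: bms
    roles:
      openstack_compute:
      vrouter:
```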
Following the example in gce1.md to the letter, the configure_instances playbook fails:
TASK [Gathering Facts] ***********************************************************************************************************************************************************
fatal: [35.185.237.69]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: [email protected]: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n", "unreachable": true}
fatal: [35.233.220.159]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: [email protected]: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n", "unreachable": true}
fatal: [35.233.202.87]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: [email protected]: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n", "unreachable": true}
fatal: [35.230.33.116]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: [email protected]: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n", "unreachable": true}
fatal: [35.227.157.13]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: [email protected]: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n", "unreachable": true}
fatal: [35.199.160.152]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: [email protected]: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n", "unreachable": true}
provider_config:
  gce:
    service_account_email: [omitted]
    credentials_file: [omitted]/projects/contrail-ansible-deployer/vigilant-design-134823-6eb1d1d5e0eb.json
    project_id: [omitted]
    ssh_user: centos
    machine_type: n1-standard-4
    image: centos-7
    network: microservice-vn
    subnetwork: microservice-sn
    zone: us-west1-a
    disk_size: 50
instances:
  master1:
    provider: gce
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
      k8s_master:
  master2:
    provider: gce
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
      k8s_master:
  master3:
    provider: gce
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
      k8s_master:
  node1:
    provider: gce
    roles:
      vrouter:
      k8s_node:
  node2:
    provider: gce
    roles:
      vrouter:
      k8s_node:
  node3:
    provider: gce
    roles:
      vrouter:
      k8s_node:
[root@ctfdeploy contrail-ansible-deployer]# ansible-playbook -i inventory/ -e orchestrator=openstack -e ansible_sudo_pass=abc@123 playbooks/install_openstack.yml
TASK [memcached : Copying over config.json files for services] *************************************************
failed: [10.49.252.201] (item=memcached) => {"changed": false, "item": "memcached", "msg": "AnsibleUndefinedVariable: 'dict object' has no attribute 'ipv4'"}
The problem involves the ens160 and vhost0 network interfaces.
[root@opcontroller ~]# ip a
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:39:e8:7f brd ff:ff:ff:ff:ff:ff
inet6 fe80::20c:29ff:fe39:e87f/64 scope link
valid_lft forever preferred_lft forever
(... output omitted ...)
9: vhost0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
link/ether 00:0c:29:39:e8:7f brd ff:ff:ff:ff:ff:ff
inet 10.49.252.201/24 brd 10.49.252.255 scope global vhost0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe39:e87f/64 scope link
valid_lft forever preferred_lft forever
Before deployment, the IP is on the ens160 interface. After installation, contrail moves the IP to the vhost0 interface. Why can't the deployer find an IPv4 address on ens160 when expanding the cluster, and how can I expand it?
Hello,
When I ran "ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/install_openstack.yml", I got the error below.
TASK [keystone : Creating admin project, user, role, service, and endpoint] ****************************
[WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}.
Found: {{ (keystone_bootstrap.stdout | from_json).changed }}
fatal: [172.27.116.135]: FAILED! => {"msg": "The conditional check '{{ (keystone_bootstrap.stdout | from_json).changed }}' failed. The error was: Expecting ',' delimiter: line 1 column 133 (char 132)"}
contrail-kolla-ansible/ansible/roles/keystone/tasks/register.yml
- name: Creating admin project, user, role, service, and endpoint
  command: docker exec keystone kolla_keystone_bootstrap {{ openstack_auth.username }} {{ openstack_auth.password }} {{ openstack_auth.project_name }} admin {{ keystone_admin_url }} {{ keystone_internal_url }} {{ keystone_public_url }} {{ openstack_region_name }}
  register: keystone_bootstrap
  changed_when: "{{ (keystone_bootstrap.stdout | from_json).changed }}"
  failed_when: "{{ (keystone_bootstrap.stdout | from_json).failed }}"
  run_once: True
It seems that Ansible no longer allows Jinja2 templating delimiters in conditional parameters. I installed ansible-2.4.2.0.
Thank you.
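For the Jinja2 warning above, the usual fix is to drop the "{{ }}" delimiters so changed_when/failed_when are bare expressions (Ansible templates conditionals itself). A sketch of the corrected task, keeping the original variables; note the "Expecting ',' delimiter" part of the error suggests the bootstrap stdout also contains non-JSON output, which this change alone may not cure:

```yaml
# Sketch: bare expressions instead of "{{ ... }}" in the conditionals.
- name: Creating admin project, user, role, service, and endpoint
  command: docker exec keystone kolla_keystone_bootstrap {{ openstack_auth.username }} {{ openstack_auth.password }} {{ openstack_auth.project_name }} admin {{ keystone_admin_url }} {{ keystone_internal_url }} {{ keystone_public_url }} {{ openstack_region_name }}
  register: keystone_bootstrap
  changed_when: (keystone_bootstrap.stdout | from_json).changed
  failed_when: (keystone_bootstrap.stdout | from_json).failed
  run_once: True
```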
Please remove all internal references from the wiki.
For example: ci-repo.englab.juniper.net:5000
I've noticed a problem in a wiki page I want to fix.
I do not have the rights to edit the page. So therefore it's not really acting as a wiki.
If I fork it, the wiki isn't cloned.
Why not just move it into the content of the repo itself? That way everyone can contribute.
I followed the steps of this tutorial (https://docs.openstack.org/kolla-ansible/4.0.2/external-ceph-guide.html)
and modified contrail-ansible-deployer to add the Ceph enable settings to globals.yml, but it doesn't seem to work. How do I integrate openstack-contrail with an existing Ceph cluster?
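Per that kolla-ansible guide, external Ceph is enabled per service rather than with a single switch. A hedged sketch of the relevant kolla_globals entries (flag names taken from the kolla-ansible 4.x external-ceph guide; verify against your kolla version), nested the way this deployer expects:

```yaml
# Sketch, assuming the flags from the kolla-ansible external-ceph guide:
kolla_config:
  kolla_globals:
    enable_ceph: "no"            # do not deploy kolla's own Ceph
    glance_backend_ceph: "yes"   # Glance images on the external cluster
    cinder_backend_ceph: "yes"   # Cinder volumes on the external cluster
```

The guide also requires dropping a ceph.conf and the client keyrings into kolla's node_custom_config directories for glance/cinder/nova; without those files the flags alone will not work.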
Hi,
I am trying a single-node installation (IP 192.168.56.50) with a basic YAML file. The Ansible run failed with the error below:
TASK [install_contrail : create contrail config database compose file] **********************************
fatal: [192.168.56.50]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'cassandra_seeds' is undefined"}
to retry, use: --limit @/home/stack/contrail-ansible-deployer/playbooks/install_contrail.retry
PLAY RECAP **********************************************************************************************
192.168.56.50 : ok=410 changed=33 unreachable=0 failed=1
localhost : ok=7 changed=2 unreachable=0 failed=0
Here is the config/instances.yaml file :
provider_config:
  bms:
    ssh_pwd:
    ssh_user: root
    ntpserver: 2.be.pool.ntp.org
    domainsuffix: lab
instances:
  bms1:
    provider: bms
    ip: 192.168.56.50
contrail_configuration:
  RABBITMQ_NODE_PORT: 5673
  AUTH_MODE: keystone
  KEYSTONE_AUTH_URL_VERSION: /v3
kolla_config:
  kolla_globals:
    enable_haproxy: no
  kolla_passwords:
    keystone_admin_password:
Any solution for this issue?
Thank you.
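A hedged guess: cassandra_seeds is derived from the hosts that carry the config_database role, and the instances.yaml above declares bms1 with no roles: section at all. Some revisions of the deployer treat a missing roles: as "all roles", but declaring them explicitly is a reasonable first try:

```yaml
# Sketch: explicit roles so the deployer can derive cassandra_seeds
# from the config_database host list (single all-in-one node assumed).
instances:
  bms1:
    provider: bms
    ip: 192.168.56.50
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
      vrouter:
      openstack:
      openstack_compute:
```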
I'm trying to get the deployment running on hosts in a network that doesn't allow direct connection to the internet, following the guidance on https://github.com/Juniper/contrail-ansible-deployer/wiki/Contrail-with-Kolla-Ocata. This requires setting of http_proxy and/or https_proxy, but I can't figure out how to pass environment variables to the playbook. I've tried various places but without success thus far.
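Exporting http_proxy in your shell only affects the control node; remote tasks do not inherit it. Ansible's play-level `environment:` keyword is the supported way to push proxy variables into remote tasks. A sketch with a placeholder proxy URL, assuming you edit the play headers in the deployer's playbooks:

```yaml
# Sketch: play-level environment (proxy URL is a placeholder).
# Note that dockerd itself ignores these variables and needs its own
# proxy configuration (e.g. a systemd drop-in) on each host.
- hosts: container_hosts
  environment:
    http_proxy: "http://proxy.example.com:3128"
    https_proxy: "http://proxy.example.com:3128"
    no_proxy: "localhost,127.0.0.1"
  tasks:
    - name: confirm remote tasks see the proxy
      command: printenv http_proxy
```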
Hello,
Could you share the instructions for deploying Contrail and integrating it with Kubernetes?
As per your examples, do we have to install the Docker engine before deploying?
Hi, is there any tutorial for integrating OpenContrail with an existing OpenStack? I want to convert my existing OpenStack controller to OpenContrail, but I can't find a good tutorial for it. Thanks in advance.
Repository: opencontrailnightly
Version: at least master-115 and master-117
[root@controller-1 ~]# cat contrail-ansible-deployer/config/instances.yaml
provider_config:
  bms:
    ssh_pwd: contrail123
    ssh_user: root
    domainsuffix: local
    ntpserver: de.pool.ntp.org
instances:
  bms1:
    provider: bms
    ip: 10.10.10.11
    roles:
      openstack:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
      vrouter:
        VROUTER_GATEWAY: 10.10.10.1
  bms2:
    provider: bms
    ip: 10.10.10.12
    roles:
      openstack_compute:
      vrouter:
        VROUTER_GATEWAY: 10.10.10.1
  bms3:
    provider: bms
    ip: 10.10.10.13
    roles:
      openstack_compute:
      vrouter:
        VROUTER_GATEWAY: 10.10.10.1
contrail_configuration:
  #CONTAINER_REGISTRY: opencontrailnightly
  #CONTRAIL_VERSION: master-112
  #CONTRAIL_VERSION: master-115
  RABBITMQ_NODE_PORT: 5673
  KEYSTONE_AUTH_URL_VERSION: /v3
  AUTH_MODE: keystone
  #UPGRADE_KERNEL: true
kolla_config:
  customize:
    nova.conf: |
      [libvirt]
      virt_type=qemu
      cpu_mode=none
  kolla_globals:
    kolla_internal_vip_address: 10.10.10.11
    kolla_external_vip_address: 10.10.10.11
    enable_haproxy: "no"
    enable_ironic: "no"
    enable_swift: "no"
When trying to deploy a separate vRouter (a kind of (physical) network gateway), the docker container restarts/crashes endlessly.
terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::program_options::invalid_config_file_syntax> >'
what(): the options configuration file contains an invalid line '10.10.10.1'
Lines 3 ("10.10.10.1/24") and 7 ("10.10.10.1") of the "[VIRTUAL-HOST-INTERFACE]" section of the (derived?) configuration in /etc/contrail/contrail-vrouter-agent.conf seem to be the issue:
[VIRTUAL-HOST-INTERFACE]
name=vhost0
ip=10.10.10.11/24
10.10.10.1/24
physical_interface=vxlan0
gateway=10.10.10.1
compute_node_address=10.10.10.11/24
10.10.10.1
However, the problem does not occur when the openstack_compute role is deployed as well; the faulty entries are missing then and the container starts properly.
I'm not sure whether this belongs to the deployer or to the container repo/team.
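A hedged reading of the broken config above: the stray "10.10.10.1/24" and "10.10.10.1" lines look like a host-level value being rendered verbatim into [VIRTUAL-HOST-INTERFACE]. It may be worth checking that VROUTER_GATEWAY is indented under the vrouter role rather than sitting at the host or roles level, e.g.:

```yaml
# Sketch: role-scoped VROUTER_GATEWAY (host names/IPs from the issue).
instances:
  bms1:
    provider: bms
    ip: 10.10.10.11
    roles:
      vrouter:
        VROUTER_GATEWAY: 10.10.10.1   # nested under vrouter, not beside it
```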
Running simple installation on a single host:
ansible-playbook -i inventory playbooks/deploy.yml
[root@contrail contrail-ansible-deployer]# more inventory/hosts
localhost:
  hosts:
    localhost:
      config_file: ../config/instances.yaml
      connection: local
      ansible_connection: local
      python_interpreter: python
      ansible_python_interpreter: python
container_hosts:
  hosts:
    192.168.1.27:
      ansible_ssh_pass: jun123
Installation fails at this step:
TASK [create_containers : start redis] ***************************************************************************************************************************
fatal: [192.168.1.27]: FAILED! => {"changed": false, "msg": "Error connecting: Error while fetching server API version: ('Connection aborted.', error(2, 'No such file or directory'))"}
to retry, use: --limit @/home/juniper/contrail-ansible-deployer/playbooks/deploy.retry
PLAY RECAP *******************************************************************************************************************************************************
192.168.1.27 : ok=19 changed=6 unreachable=0 failed=1
config/instances.yaml
provider_config:
  bms:
    ssh_pwd: contrail123
    ssh_user: root
    domainsuffix: local
    ntpserver: de.pool.ntp.org
instances:
  bms1:
    provider: bms
    ip: 10.10.10.11
    roles:
      openstack:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
  bms2:
    provider: bms
    ip: 10.10.10.12
    roles:
      openstack_compute:
      vrouter:
        VROUTER_GATEWAY: 10.10.10.1
  bms3:
    provider: bms
    ip: 10.10.10.13
    roles:
      openstack_compute:
      vrouter:
        VROUTER_GATEWAY: 10.10.10.1
contrail_configuration:
  RABBITMQ_NODE_PORT: 5673
  KEYSTONE_AUTH_URL_VERSION: /v3
  AUTH_MODE: keystone
kolla_config:
  customize:
    nova.conf: |
      [libvirt]
      virt_type=qemu
      cpu_mode=none
  kolla_globals:
    kolla_internal_vip_address: 10.10.10.11
    kolla_external_vip_address: 10.10.10.11
    enable_haproxy: "no"
    enable_ironic: "no"
    enable_swift: "no"
Docker Repository and Version: opencontrailnightly / master-115
Issue:
When trying to start/spin up the analyzer image (fresh installation, user admin), I get the following error:
05/24/2018 12:21:24 PM [contrail-svc-monitor]: __default__ [SYS_ERR]: SvcMonitorLog: nova error The request you have made requires authentication. (HTTP 401) (Request-ID: req-902fd9cc-4cc6-448a-a948-1f4f55b34d64)
05/24/2018 12:21:24 PM [contrail-svc-monitor]: __default__ [SYS_ERR]: SvcMonitorLog: Flavor not found default-domain:analyzer-template
m1.medium flavor is available:
[root@controller-1 ~]# openstack flavor list
+--------------------------------------+-----------+------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+-----------+------+------+-----------+-------+-----------+
| 245db933-cab7-4eb7-95fa-7aed9d59f5f1 | m1.tiny | 512 | 1 | 0 | 1 | True |
| a3a6bd2a-1e3b-4c66-9ea9-e0add09964af | m1.small | 2048 | 20 | 0 | 1 | True |
| d8b5fd1e-69a9-4030-8e70-d5e6d1f577a9 | m1.medium | 4096 | 40 | 0 | 2 | True |
+--------------------------------------+-----------+------+------+-----------+-------+-----------+
analyzer-vm-console-qcow2 image is available:
[root@controller-1 ~]# openstack image list
+--------------------------------------+----------+--------+
| ID | Name | Status |
+--------------------------------------+----------+--------+
| add2e564-8e63-4851-a047-7a16fbeab015 | analyzer | active |
| 839ce2de-d201-4c68-95ff-f789ebf90788 | cirros2 | active |
+--------------------------------------+----------+--------+
/etc/contrail/contrail-keystone-auth.conf:
[KEYSTONE]
#memcache_servers=127.0.0.1:11211
admin_password = contrail123
admin_tenant_name = admin
admin_user = admin
auth_host = 10.10.10.11
auth_port = 35357
auth_protocol = http
auth_url = http://10.10.10.11:35357/v3
auth_type = password
user_domain_name = Default
project_domain_name = Default
region_name = RegionOne
/etc/contrail/contrail-svc-monitor.conf
[DEFAULTS]
api_server_ip=10.10.10.11
api_server_port=8082
log_file=/var/log/contrail/contrail-svc-monitor.log
log_level=SYS_NOTICE
log_local=1
cassandra_server_list=10.10.10.11:9161
zk_server_ip=10.10.10.11:2181
rabbit_server=10.10.10.11:5673
rabbit_vhost=/
rabbit_user=guest
rabbit_password=guest
rabbit_use_ssl=False
collectors=10.10.10.11:8086
[SECURITY]
use_certs=False
keyfile=/etc/contrail/ssl/private/server-privkey.pem
certfile=/etc/contrail/ssl/certs/server.pem
ca_certs=/etc/contrail/ssl/certs/ca-cert.pem
[SCHEDULER]
# Analytics server list used to get vrouter status and schedule service instance
analytics_server_list=10.10.10.11:8081
aaa_mode = no-auth
[SANDESH]
introspect_ssl_enable=False
sandesh_ssl_enable=False
Btw: any attempt at spinning up v1 service templates fails as well.
Folks,
I have completed the deployment of Contrail with OpenStack Ocata and Kolla Ansible without errors, but when I try to create a network with OpenStack I get the following response.
openstack network create testvn
Error while executing command: HttpException: 500, {"NeutronError": {"message": "Request Failed: internal server error while processing your request.", "type": "HTTPInternalServerError", "detail": ""}}
Has anybody experienced a similar issue?
I really appreciate your time and help.
Best Regards
I am trying to follow this example on this wiki page.
It fails at the first ansible playbook.
TASK [Gathering Facts] ****************************************************************************************************************
fatal: [10.87.64.23]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 10.87.64.23 port 22: Connection timed out\r\n", "unreachable": true}
to retry, use: --limit @/home/opentelco/contrail-ansible-deployer/playbooks/provision_instances.retry
Am I supposed to change one of the IP addresses in that YAML from the previous step? The guide says that the scripts set up OpenStack for me on a single bare-metal server.
I interpreted that as meaning that I don't have to spin up the VMs myself, so there are no other IP addresses I could possibly put into that YAML.
How is this wiki example supposed to be run?
When contrail-ansible-deployer is used for deploying, it expects CONTAINER_REGISTRY at the global level, outside 'contrail_configuration', when executing step 3.4 in this wiki: https://github.com/Juniper/contrail-ansible-deployer/wiki/Contrail-all-in-one-setup-with-Openstack-Kolla-Ocata
CONTAINER_REGISTRY: "10.84.33.3:6666"
contrail_configuration:
  CONTRAIL_VERSION: "10000-1-centos7-ocata"
When actually deploying the Contrail containers using step 5, it expects it under contrail_configuration, like this:
contrail_configuration:
  CONTAINER_REGISTRY: "10.84.33.3:6666"
If you have an insecure registry configured, look into /root/contrail-ansible-deployer/playbooks/roles/configure_container_hosts/tasks/insecure_registry.yaml:
it expects CONTAINER_REGISTRY at the top level, not under 'contrail_configuration'.
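Until the two code paths agree, a harmless workaround is to declare the registry at both levels so configure_container_hosts and the container deployment each find what they expect:

```yaml
# Workaround sketch: registry declared at both levels
# (values taken from the issue above).
CONTAINER_REGISTRY: "10.84.33.3:6666"
contrail_configuration:
  CONTAINER_REGISTRY: "10.84.33.3:6666"
  CONTRAIL_VERSION: "10000-1-centos7-ocata"
```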
Hello, while executing the
ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/install_contrail.yml
step, I see an installation failure at the following task:
TASK [swift : Creating swift rings] ******************************************************************************************************************************
fatal: [192.168.3.45 -> localhost]: FAILED! => {"changed": true, "cmd": ["/bin/swift_ring_builder.sh"], "delta": "0:00:00.023809", "end": "2018-04-07 17:13:05.898269", "msg": "non-zero return code", "rc": 127, "start": "2018-04-07 17:13:05.874460", "stderr": "/bin/swift_ring_builder.sh: line 8: docker: command not found\n/bin/swift_ring_builder.sh: line 17: docker: command not found\n/bin/swift_ring_builder.sh: line 17: docker: command not found\n/bin/swift_ring_builder.sh: line 17: docker: command not found\n/bin/swift_ring_builder.sh: line 27: docker: command not found\n/bin/swift_ring_builder.sh: line 36: docker: command not found\n/bin/swift_ring_builder.sh: line 36: docker: command not found\n/bin/swift_ring_builder.sh: line 36: docker: command not found\n/bin/swift_ring_builder.sh: line 46: docker: command not found\n/bin/swift_ring_builder.sh: line 55: docker: command not found\n/bin/swift_ring_builder.sh: line 55: docker: command not found\n/bin/swift_ring_builder.sh: line 55: docker: command not found\n/bin/swift_ring_builder.sh: line 65: docker: command not found\n/bin/swift_ring_builder.sh: line 65: docker: command not found\n/bin/swift_ring_builder.sh: line 65: docker: command not found", "stderr_lines": ["/bin/swift_ring_builder.sh: line 8: docker: command not found", "/bin/swift_ring_builder.sh: line 17: docker: command not found", "/bin/swift_ring_builder.sh: line 17: docker: command not found", "/bin/swift_ring_builder.sh: line 17: docker: command not found", "/bin/swift_ring_builder.sh: line 27: docker: command not found", "/bin/swift_ring_builder.sh: line 36: docker: command not found", "/bin/swift_ring_builder.sh: line 36: docker: command not found", "/bin/swift_ring_builder.sh: line 36: docker: command not found", "/bin/swift_ring_builder.sh: line 46: docker: command not found", "/bin/swift_ring_builder.sh: line 55: docker: command not found", "/bin/swift_ring_builder.sh: line 55: docker: command not found", "/bin/swift_ring_builder.sh: line 55: 
docker: command not found", "/bin/swift_ring_builder.sh: line 65: docker: command not found", "/bin/swift_ring_builder.sh: line 65: docker: command not found", "/bin/swift_ring_builder.sh: line 65: docker: command not found"], "stdout": "", "stdout_lines": []}
PLAY RECAP *******************************************************************************************************************************************************
192.168.3.45 : ok=174 changed=76 unreachable=0 failed=1
localhost : ok=7 changed=2 unreachable=0 failed=0
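Note the task header says "192.168.3.45 -> localhost": swift_ring_builder.sh runs on the deploy node and shells out to docker there, so docker must be installed on the machine running Ansible. If Swift is not actually needed, a simpler route is to disable it in instances.yaml (a hedged sketch, following the kolla_globals convention used elsewhere in these configs):

```yaml
# Sketch: skip Swift entirely so the ring builder never runs.
kolla_config:
  kolla_globals:
    enable_swift: "no"
```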
Hello,
When I ran "ansible-playbook -i inventory/ playbooks/install_openstack.yml" in the second step, I got the following error.
RUNNING HANDLER [ironic-notification-manager : Restart ironic-notification-manager container] *******************************************************
fatal: [192.168.22.3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\n File "/tmp/ansible_xo9rcy/ansible_module_kolla_d
ocker.py", line 801, in main\n result = bool(getattr(dw, module.params.get(\'action\'))())\n File "/tmp/ansible_xo9rcy/ansible_module_kolla
_docker.py", line 598, in recreate_or_restart_container\n self.start_container()\n File "/tmp/ansible_xo9rcy/ansible_module_kolla_docker.py"
, line 610, in start_container\n self.pull_image()\n File "/tmp/ansible_xo9rcy/ansible_module_kolla_docker.py", line 452, in pull_image\n
repository=image, tag=tag, stream=True\n File "/usr/lib/python2.7/site-packages/docker/api/image.py", line 175, in pull\n self._raise_for_st
atus(response)\n File "/usr/lib/python2.7/site-packages/docker/client.py", line 174, in _raise_for_status\n raise errors.APIError(e, response
, explanation=explanation)\nAPIError: 500 Server Error: Internal Server Error ("{"message":"Get https://172.17.0.3:5000/v1/_ping: http: server g
ave HTTP response to HTTPS client"}")\n'"}
to retry, use: --limit @/root/contrail-ansible-deployer/playbooks/install_openstack.retry
PLAY RECAP ******************************************************************************************************************************************
192.168.22.3 : ok=259 changed=18 unreachable=0 failed=1
localhost : ok=7 changed=2 unreachable=0 failed=0
Content of config/instances.yaml:
provider_config:
  bms:
    ssh_user: root
    ntpserver: 210.173.160.27
    domainsuffix: localdomain
    ssh_public_key: ~/.ssh/id_rsa.pub
    ssh_private_key: ~/.ssh/id_rsa
instances:
  bms1:
    provider: bms
    ip: <basehost_ip>
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
      vrouter:
      openstack:
      openstack_compute:
global_configuration:
  CONTAINER_REGISTRY: <basehost_ip>:5000
  REGISTRY_PRIVATE_INSECURE: True
contrail_configuration:
  UPGRADE_KERNEL: True
  CONTRAIL_VERSION: ocata-5.0-20180219195722
  CLOUD_ORCHESTRATOR: openstack
  RABBITMQ_NODE_PORT: 5673
  AUTH_MODE: keystone
  KEYSTONE_AUTH_URL_VERSION: /v3
kolla_config:
  kolla_globals:
    enable_haproxy: no
  kolla_passwords:
    keystone_admin_password: mypass
When I tried to access the URL (https://172.17.0.3:5000/v1/_ping) manually using wget, I got the following OpenSSL error.
[root@noded1 contrail-ansible-deployer]# wget https://172.17.0.3:5000/v1/_ping
--2018-06-12 10:59:39-- https://172.17.0.3:5000/v1/_ping
Connecting to 172.17.0.3:5000... connected.
OpenSSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
Unable to establish SSL connection.
[root@noded1 contrail-ansible-deployer]#
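The "server gave HTTP response to HTTPS client" message means docker is attempting TLS against a plain-HTTP registry. REGISTRY_PRIVATE_INSECURE: True is meant to handle this, but if it did not propagate, the manual fix is to mark the registry insecure in /etc/docker/daemon.json on every node (the address below is the one from the error) and restart docker:

```json
{
  "insecure-registries": ["172.17.0.3:5000"]
}
```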
Can I deploy Tungsten Fabric without OpenStack or Kubernetes, just Tungsten Fabric alone? If so, where can I find an example of how to do this? Thanks in advance.
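To the question above: in some versions the deployer appears to accept CLOUD_ORCHESTRATOR: none alongside openstack and kubernetes. A minimal hedged sketch (IP and password are placeholders; run configure_instances.yml and install_contrail.yml with -e orchestrator=none):

```yaml
# Sketch: Tungsten Fabric control plane only, no orchestrator.
provider_config:
  bms:
    ssh_user: root
    ssh_pwd: <password>
instances:
  bms1:
    provider: bms
    ip: <node_ip>
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
contrail_configuration:
  CLOUD_ORCHESTRATOR: none
```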