centos-paas-sig / linchpin

ansible based multicloud orchestrator

Home Page: http://linchpin.readthedocs.io

License: GNU General Public License v3.0

Python 81.56% Shell 13.34% Makefile 0.14% Groovy 1.42% Dockerfile 1.65% JavaScript 0.01% Ruby 0.01% ASL 0.33% Jinja 1.56%
ansible ansible-playbooks topology inventory-files inventory linchpin cloud cli provisioning hybrid-cloud

linchpin's Introduction

LinchPin

LinchPin provides a command-line interface and Python API for provisioning and managing resources across multiple infrastructures. Multiple infrastructure resource types can be defined in a single topology file.

LinchPin can also generate inventory files for use with additional Ansible playbooks. These are applied using an inventory layout file.

Contributing and Community Participation

Developers are encouraged to contribute to the LinchPin project. Please visit Contributing for details.

Installation

Installation documentation

QuickStart

The Getting Started Guide provides a quick introduction to LinchPin from the command line.

Using LinchPin

Check out the Getting Started Guide for a quick guide on how to use LinchPin from the Command-Line.

For API documentation, check out the Python API Reference.

linchpin's People

Contributors

14rcole, abraverm, alexandergharibian, amrita42, arilivigni, ashcrow, dannyb48, dollierp, ggallen, greg-hellings, herlo, jaypoulz, jcpowermac, joejstuart, johnbieren, jpryor-rh, jrobertson855, junqizhang0, lukas-bednar, machacekondra, mcornea, p3ck, petr-balogh, ryankwilliams, samvarankashyap, seandst, skatlapa, wainersm, waynesun09, yprokule


linchpin's Issues

(2) linchpin should allow for become option to pass to ansible

In the current invocation of linchpin, sudo is not an option; code in invoke_playbooks.py disables become. Instead, an option should be provided to linchpin rise to enable sudo.

It was discussed and agreed that the functionality should follow the form ansible has already provided. See http://docs.ansible.com/ansible/become.html for information on ansible's become functionality.

Thus, the usage may look something like this:

linchpin rise --become

This would default to become_method=sudo, become_user=root. But this could also be overridden as such:

linchpin rise --become --become_user='charlie' --become_method='su'

This should also be available as a config option, possibly in the PinFile or linchpin_config.yml, though that detail has yet to be determined.
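
For illustration, a minimal sketch of how a Click-based CLI could surface these flags and forward them as extra vars to the playbook run; the option names and extra-var keys here are assumptions, not the shipped interface:

import click


@click.command()
@click.option('--become', is_flag=True, default=False,
              help='Run provisioning tasks with privilege escalation.')
@click.option('--become-user', default='root',
              help='User to become (default: root).')
@click.option('--become-method', default='sudo',
              help='Privilege escalation method (default: sudo).')
def rise(become, become_user, become_method):
    # Collect the flags into extra vars a playbook runner could consume.
    extra_vars = {
        'ansible_become': become,
        'ansible_become_user': become_user,
        'ansible_become_method': become_method,
    }
    click.echo('would invoke playbooks with: {}'.format(extra_vars))


if __name__ == '__main__':
    rise()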

Override name of resource_group_name and res_name on the command line

To avoid collisions when provisioning into an OpenStack tenant, for example, we may want to provide extra-vars for resource_group_name and res_name so we can run multiple provisions in the same tenant.

I realize you can put multiple resources in a topology, but you may not want to tie up all of those resources at once, and there is still a chance of collisions.

requirements.txt is not installable

It is not possible to run a basic pip install of the requirements.txt file as it exists in the repository at present.

Steps to reproduce:

  1. Clone linch-pin
  2. Create and activate new virtualenv
  3. Execute "pip install -r requirements.txt" from the linch-pin directory

Expected result:
Pip should install all necessary packages

Actual result:
Pip exits with the following error message
Collecting python-openstackclient==0.8.6 (from -r requirements.txt (line 18))
Could not find a version that satisfies the requirement python-openstackclient==0.8.6 (from -r requirements.txt (line 18)) (from versions: 0.2.0, 0.2.1, 0.2.2, 0.3.0, 0.3.1, 0.4.0, 0.4.1, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.1.0, 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.6.0, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.9.0, 2.0.0, 2.1.0, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 3.0.0, 3.0.1, 3.1.0, 3.2.0, 3.3.0)
No matching distribution found for python-openstackclient==0.8.6 (from -r requirements.txt (line 18))

Incorrect default inventory_layout_path in inventory gen role when the layout is not defined

The following error occurs during inventory generation:

TASK [inventory_gen : include the layout file for inventory generation] ********
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "file": "/root/linch-pin/root/linch-pin/inventory_layouts/openshift-3node-cluster.yml", "msg": "Source file not found."}

This is due to a set_fact that is not updated with the updated config file value:
provision/roles/inventory_gen/tasks/main.yml:15: inventory_layouts_path: "{{ playbook_dir | dirname }}/{{ inventory_layouts_path }}"
This will be fixed in an upcoming PR.

Regression introduced in schema_check.py

https://github.com/CentOS-PaaS-SIG/linch-pin/blob/master/library/schema_check.py#L82

In previous versions of schema_check.py, this check either did not happen, or it didn't require the resource_group_vars object.

This commit introduced an apparent regression:

6b72cb4

When you create a topology similar to this:


topology_name: "duffy_single" # topology name
resource_groups:
  -
    resource_group_name: "duffy_single"
    res_group_type: "duffy"
    res_defs:
      -
        res_name: "duffy_nodes"
        res_type: "duffy"
        version: 7
        arch: "x86_64"
        count: 1
    assoc_creds: "duffy_creds"

And run the latest version of linch-pin against it as such:

ansible-playbook -vvv /Projects/linch-pin/provision/site.yml -e "state=present schema='/Projects/linch-pin/ex_schemas/schema_v2.json' topology='~/Projects/paas-ci/config/topologies/duffy-single.yml'"

The result is:

..snip..
TASK [common : schema check for given topology file] ***************************
task path: /home/sig-paas/Projects/linch-pin/provision/roles/common/tasks/main.yml:3
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: sig-paas
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1473372193.46-270600084285347 `" && echo ansible-tmp-1473372193.46-270600084285347="` echo $HOME/.ansible/tmp/ansible-tmp-1473372193.46-270600084285347 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmp7KFPLd TO /home/sig-paas/.ansible/tmp/ansible-tmp-1473372193.46-270600084285347/schema_check
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /home/sig-paas/sandbox/linch-pin-testing/bin/python /home/sig-paas/.ansible/tmp/ansible-tmp-1473372193.46-270600084285347/schema_check; rm -rf "/home/sig-paas/.ansible/tmp/ansible-tmp-1473372193.46-270600084285347/" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_nlBZs1/ansible_module_schema_check.py", line 125, in
main()
File "/tmp/ansible_nlBZs1/ansible_module_schema_check.py", line 114, in main
validate_values(module,data_file_path)
File "/tmp/ansible_nlBZs1/ansible_module_schema_check.py", line 95, in validate_values
status = validate_grp_names(data)
File "/tmp/ansible_nlBZs1/ansible_module_schema_check.py", line 82, in validate_grp_names
res_grp_vars = data['resource_group_vars']
KeyError: 'resource_group_vars'

fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "schema_check"}, "module_stderr": "Traceback (most recent call last):\n File "/tmp/ansible_nlBZs1/ansible_module_schema_check.py", line 125, in \n main()\n File "/tmp/ansible_nlBZs1/ansible_module_schema_check.py", line 114, in main\n validate_values(module,data_file_path)\n File "/tmp/ansible_nlBZs1/ansible_module_schema_check.py", line 95, in validate_values\n status = validate_grp_names(data)\n File "/tmp/ansible_nlBZs1/ansible_module_schema_check.py", line 82, in validate_grp_names\n res_grp_vars = data['resource_group_vars']\nKeyError: 'resource_group_vars'\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}

The schema_check.py library should treat the resource_group_vars object as optional, not require it.
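
A minimal sketch of the suggested behavior, assuming a dict-shaped topology: read resource_group_vars with dict.get() and a default instead of indexing it directly (the surrounding validation logic is a guess at the real module's structure):

# Hypothetical sketch of validate_grp_names() with resource_group_vars optional.
def validate_grp_names(data):
    resource_groups = data.get('resource_groups', [])
    # .get() returns an empty list instead of raising KeyError when the
    # topology defines no resource_group_vars at all.
    res_grp_vars = data.get('resource_group_vars', [])
    grp_names = set(grp.get('resource_group_name') for grp in resource_groups)
    var_names = set(var.get('resource_group_name') for var in res_grp_vars)
    # Any vars entry must refer to a defined resource group; no vars is fine.
    return var_names.issubset(grp_names)


if __name__ == '__main__':
    topology = {
        'topology_name': 'duffy_single',
        'resource_groups': [{'resource_group_name': 'duffy_single'}],
        # note: no resource_group_vars key
    }
    print(validate_grp_names(topology))  # True instead of a KeyError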

Outputs from linch-pin need better sanitization

As discovered earlier today, certain bits of data were pushed into this repository as static output from a run on an internal OpenStack server. The password and other sensitive data have since been changed.

To prevent this in the future, an audit of outputs and code should be performed to ensure better security. A good approach may be to write unit tests that ensure outputs are returned as expected, with redacted information represented appropriately.

linchpin rise should not require inventory layout

When creating a PinFile, one should be able to list a Pin target without a declared layout.

Currently the following fails:

---
libvirt-test:
  topology: simple-libvirt-cluster.yml

But

---
libvirt-test:
  topology: simple-libvirt-cluster.yml
  layout: openshift-3node-cluster.yml

succeeds.

The former needs to be allowed. In many cases, a resulting inventory is not desired.

linchpin rise target should be an argument instead of an option

Currently, when running linchpin rise, if a particular target is requested, a --target flag must be applied. It makes more sense to just have it be an argument. If no argument is provided, linchpin rise will just attempt to start each target in the PinFile.

Consider the following Pinfile:

---
ae2e-test:
  topology: simple-ae2e-cluster.yml
  layout: openshift-3node-cluster.yml

e2e:
  topology: simple-e2e-os-cluster.yml
  layout: openshift-3node-cluster.yml
Requesting linchpin rise to bring up the e2e target should be simple:

$ linchpin rise e2e

If both targets are desired:

$ linchpin rise

Essentially, eliminate the --target option as it's superfluous.

When generating inventories, if the outputs are empty, do not generate an inventory

It appears that currently, linch-pin attempts to generate inventories for anything that appears in the output file from a provisioning run. This should ignore any that do not have provisioned systems.

Given an output file like this:

aws_ec2_res: []
duffy_res:
-   changed: true
    hosts:
    - n63.crusty.ci.centos.org
    - n21.dusty.ci.centos.org
    - n11.dusty.ci.centos.org
    ssid: 5adc53b2
gcloud_gce_res: []
os_server_res: []

Inventories should not be generated for anything but duffy_res. However, the current output looks like this:

TASK [inventory_gen : Generate inventories] ************************************
task path: /home/herlo/Projects/centos/git/linch-pin/provision/roles/inventory_gen/tasks/main.yml:38
failed: [localhost] (item=[u'aws', u'/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory']) => {"failed": true, "invocation": {"module_args": {"dest": "/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory.aws", "src": "templates/aws_inventory_formatter.j2"}, "module_name": "template"}, "item": ["aws", "/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory"], "msg": "IndexError: pop from empty list"}
failed: [localhost] (item=[u'gcloud', u'/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory']) => {"failed": true, "invocation": {"module_args": {"dest": "/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory.gcloud", "src": "templates/gcloud_inventory_formatter.j2"}, "module_name": "template"}, "item": ["gcloud", "/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory"], "msg": "AnsibleError: Unexpected templating type error occurred on ({{ topology_outputs | gcloud_inv(inventory_layout) }}\n): get_host_ips() takes exactly 2 arguments (3 given)"}
failed: [localhost] (item=[u'openstack', u'/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory']) => {"failed": true, "invocation": {"module_args": {"dest": "/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory.openstack", "src": "templates/openstack_inventory_formatter.j2"}, "module_name": "template"}, "item": ["openstack", "/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory"], "msg": "IndexError: pop from empty list"}
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: herlo
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1476912333.25-275184830504074 `" && echo ansible-tmp-1476912333.25-275184830504074="` echo $HOME/.ansible/tmp/ansible-tmp-1476912333.25-275184830504074 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpSH7nrQ TO /home/herlo/.ansible/tmp/ansible-tmp-1476912333.25-275184830504074/stat
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/herlo/.ansible/tmp/ansible-tmp-1476912333.25-275184830504074/stat && sleep 0'
<127.0.0.1> PUT /tmp/tmp64ApRO TO /home/herlo/.ansible/tmp/ansible-tmp-1476912333.25-275184830504074/file
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/herlo/.ansible/tmp/ansible-tmp-1476912333.25-275184830504074/file && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/herlo/.ansible/tmp/ansible-tmp-1476912333.25-275184830504074/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => (item=[u'duffy', u'/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory']) => {"changed": false, "diff": {"after": {"path": "/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory.duffy"}, "before": {"path": "/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory.duffy"}}, "gid": 1000, "group": "herlo", "invocation": {"module_args": {"backup": null, "content": null, "delimiter": null, "dest": "/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory.duffy", "diff_peek": null, "directory_mode": null, "follow": true, "force": false, "group": null, "mode": null, "original_basename": "duffy_inventory_formatter.j2", "owner": null, "path": "/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory.duffy", "recurse": false, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "state": null, "validate": null}}, "item": ["duffy", "/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory"], "mode": "0664", "owner": "herlo", "path": "/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory.duffy", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 1675, "state": "file", "uid": 1000}
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1476912333.57-148841616150752 `" && echo ansible-tmp-1476912333.57-148841616150752="` echo $HOME/.ansible/tmp/ansible-tmp-1476912333.57-148841616150752 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmp0Axs7O TO /home/herlo/.ansible/tmp/ansible-tmp-1476912333.57-148841616150752/stat
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/herlo/.ansible/tmp/ansible-tmp-1476912333.57-148841616150752/stat && sleep 0'
<127.0.0.1> PUT /tmp/tmpdGIiSi TO /home/herlo/.ansible/tmp/ansible-tmp-1476912333.57-148841616150752/file
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/herlo/.ansible/tmp/ansible-tmp-1476912333.57-148841616150752/file && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/herlo/.ansible/tmp/ansible-tmp-1476912333.57-148841616150752/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => (item=[u'generic', u'/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory']) => {"changed": false, "diff": {"after": {"path": "/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory.generic"}, "before": {"path": "/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory.generic"}}, "gid": 1000, "group": "herlo", "invocation": {"module_args": {"backup": null, "content": null, "delimiter": null, "dest": "/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory.generic", "diff_peek": null, "directory_mode": null, "follow": true, "force": false, "group": null, "mode": null, "original_basename": "generic_inventory_formatter.j2", "owner": null, "path": "/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory.generic", "recurse": false, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "state": null, "validate": null}}, "item": ["generic", "/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory"], "mode": "0664", "owner": "herlo", "path": "/home/herlo/Projects/centos/git/linch-pin/provision/../inventories/duffy_3node_cluster.inventory.generic", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 1675, "state": "file", "uid": 1000}

The inventories generated here are correct, but errors occur when generation is attempted for outputs that have no provisioned nodes (e.g., an empty list).
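
One way to express the desired behavior, as a sketch: drop every resource type whose output list is empty before handing the data to the inventory templates. The output structure is taken from the example above; the helper name is hypothetical.

# Hypothetical helper: keep only resource types that actually provisioned hosts.
def filter_provisioned(topology_outputs):
    return dict((res_type, entries)
                for res_type, entries in topology_outputs.items()
                if entries)  # empty lists ([]) are skipped


if __name__ == '__main__':
    outputs = {
        'aws_ec2_res': [],
        'duffy_res': [{'changed': True,
                       'hosts': ['n63.crusty.ci.centos.org'],
                       'ssid': '5adc53b2'}],
        'gcloud_gce_res': [],
        'os_server_res': [],
    }
    # Only duffy_res survives, so only a duffy inventory would be generated.
    print(filter_provisioned(outputs))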

Duffy inventory generation raising error while provisioning openstack resources

I tried provisioning OpenStack servers using linchpin. It raises an error for the duffy inventory, which it is not supposed to touch. The provisioning and output generation proceed as expected, but the run still ends with an error.
@herlo Could you look into it and fix it as soon as possible?

TASK [output_writer : generate outputs] ****************************************
changed: [localhost] => {"changed": true, "checksum": "3f393dbc43e85fab8124e11e2c74648bb2974cd0", "dest": "sample_outputs/ex_os_server2_out.yaml", "gid": 0, "group": "root", "md5sum": "9070b0281b3e7f964ffc3c8314b4568b", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 14521, "src": "/root/.ansible/tmp/ansible-tmp-1470839880.73-29245812992533/source", "state": "file", "uid": 0}

TASK [output_writer : include] *************************************************
included: /root/linch-pin/provision/roles/output_writer/tasks/duffy_inventory.yml for localhost

TASK [output_writer : Updating topology_outputs for duffy] *********************
ok: [localhost] => {"ansible_facts": {"duffy_topo_outputs": {"duffy_res": []}}, "changed": false}

TASK [output_writer : generate duffy inventory] ********************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "IndexError: list index out of range"}
to retry, use: --limit @provision/site.retry

PLAY RECAP *********************************************************************
localhost : ok=39 changed=3 unreachable=0 failed=1

Consistent way to install linch-pin on different platforms

We already have a Docker image as one solution, but if someone wants to install linch-pin by some means other than pip or rpm, the dependencies don't install in order.

We currently have install.sh, but there is a more Pythonic way to do the install and to do it in the right order.

This should be verified on Fedora, CentOS, RHEL, and Ubuntu (if possible).

Duplicate InventoryFilters paths cause confusion

It appears there are duplicate InventoryFilters folders in the linch-pin project, one at the root and one under provision/.

It is suggested that we remove provision/InventoryFilters and move any extra files from it into the InventoryFilters folder at the root of the tree.

Installing linchpin Python package is noisy

Python packages should create at most a small number of high level directories inside of site-packages when they get installed. Currently, running an install of the pip packaging for linch-pin results in the creation of 14 top level directories and one top level file (linchpin.py) which gets built into linchpin.pyc and linchpin.pyo by various install options.

A fifteenth directory, the .egg-info directory, is also created.

All of these directories should, in best practices, be placed under a linchpin/ directory to contain all of the "mess". The easiest way to do this would be to put all the installed directories under a top level linchpin/ directory in the source tree. Alternatively, there is likely a way to just change the setup.py configuration to do so.
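
A rough sketch of the second approach (adjusting the packaging rather than moving files by hand), assuming the source tree keeps its importable code under a single linchpin/ package; the names and globs are illustrative only:

# Hypothetical setup.py excerpt: everything installs under one top-level
# "linchpin" package instead of 14+ directories in site-packages.
from setuptools import setup, find_packages

setup(
    name='linchpin',
    version='0.0.0',  # placeholder version
    packages=find_packages(),
    # Non-Python content (playbooks, schemas, layouts) rides along as package
    # data inside linchpin/ rather than as separate top-level directories.
    package_data={'linchpin': ['provision/*', 'ex_schemas/*',
                               'inventory_layouts/*']},
    include_package_data=True,
)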

(8) Linchpin cli list , get to fetch from remote repositories

Currently, when the linchpin CLI is passed the list argument, it fetches the topologies and layouts from the local packaged folder.
As an enhancement, we would like the linchpin CLI to fetch from a remote repository. The remote can be an external server or a GitHub repository.

Proposed usage would be as follows:
linchpin list --remote --topos
linchpin list --remote --layouts
linchpin get --remote --topos
linchpin get --remote --layouts

The following would be used to set the remote URL:

linchpin config set --remote

Inconsistent field naming

As mentioned in #147, fields do not have consistent names. For example, topologies use both resource_ and res_ as prefixes. It would be nicer to just use one, preferably resource_. In the case of res_name and res_type, I think name and type should suffice since they reside within the resource definition dict already.

I'm not sure if there are other cases like above. This is the only one I encountered while experimenting with linch-pin.

Translation of host-specific data based upon group and position

Use Case:

Create a variable that all hosts can read based upon a specific host from a specific group. So for example, it might look like so:

---
inventory_layout:
  vars:
    xyz_some_data: __groupA[0].IP__
  host-groups:
    groupA:
      count: 3
      vars:
         debug: yes

This specific use case was provided by @mprahl and @Dlane. They will comment with a more detailed example.

Rework LinchPin CLI to create API class

Break the functionality out into an API that the CLI can use as a library. This will also allow us to build a server-based REST API and other tools on top of the same API. Eventually, we may split this API into client and server components as well.
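
A rough sketch of the split: a small API class that owns the provisioning calls, with the CLI reduced to argument parsing on top of it. The class and method names are placeholders, not the eventual design.

# Hypothetical shape of the CLI/API split.
class LinchpinAPI(object):
    """Library entry point usable by the CLI, a REST server, or other tools."""

    def __init__(self, config):
        self.config = config

    def rise(self, pinfile, targets=None):
        # Provision the requested targets (all targets when None).
        return self._run_playbooks(pinfile, targets, state='present')

    def drop(self, pinfile, targets=None):
        # Tear the same targets back down.
        return self._run_playbooks(pinfile, targets, state='absent')

    def _run_playbooks(self, pinfile, targets, state):
        # The real implementation would call the Ansible playbook executor here.
        return {'pinfile': pinfile, 'targets': targets, 'state': state}


if __name__ == '__main__':
    api = LinchpinAPI(config={'workspace': '.'})
    print(api.rise('PinFile', targets=['e2e']))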

Allow for different libvirt_node types

Some capabilities differ. Set the default to domain type='kvm' unless a different type is specified.

Could be something like this:

- res_name: "centos72"
  res_type: "libvirt_node"
  uri: 'qemu:///system'
  hypervisor: kvm
  emulator: /usr/bin/qemu-system-x86_64
  count: 2
  memory: 2048
  vcpus: 2
  networks:
    - name: linchpin-centos72

Inventory generation should not happen unless inventory layout is supplied

When running linch-pin and NOT requesting an inventory (no inventory_layout_file was provided, as below), inventory generation still runs and fails:

ansible-playbook -vvv ~/Projects/centos/git/linch-pin/provision/site.yml -e "state=present topology='~/Projects/centos/git/paas-ci/config/topologies/openstack-3node-cluster.yml'"

..snip...

TASK [inventory_gen : Generate inventories] ************************************
task path: /home/herlo/Projects/centos/git/linch-pin/provision/roles/inventory_gen/tasks/main.yml:47
failed: [localhost] (item=[u'aws', u'/home/herlo/Projects/centos/git/linch-pin/inventory_outputs/openstack_topology.inventory']) => {"failed": true, "invocation": {"module_args": {"dest": "/home/herlo/Projects/centos/git/linch-pin/inventory_outputs/openstack_topology.inventory.aws", "src": "templates/aws_inventory_formatter.j2"}, "module_name": "template"}, "item": ["aws", "/home/herlo/Projects/centos/git/linch-pin/inventory_outputs/openstack_topology.inventory"], "msg": "IndexError: pop from empty list"}
failed: [localhost] (item=[u'gcloud', u'/home/herlo/Projects/centos/git/linch-pin/inventory_outputs/openstack_topology.inventory']) => {"failed": true, "invocation": {"module_args": {"dest": "/home/herlo/Projects/centos/git/linch-pin/inventory_outputs/openstack_topology.inventory.gcloud", "src": "templates/gcloud_inventory_formatter.j2"}, "module_name": "template"}, "item": ["gcloud", "/home/herlo/Projects/centos/git/linch-pin/inventory_outputs/openstack_topology.inventory"], "msg": "IndexError: pop from empty list"}
failed: [localhost] (item=[u'openstack', u'/home/herlo/Projects/centos/git/linch-pin/inventory_outputs/openstack_topology.inventory']) => {"failed": true, "invocation": {"module_args": {"dest": "/home/herlo/Projects/centos/git/linch-pin/inventory_outputs/openstack_topology.inventory.openstack", "src": "templates/openstack_inventory_formatter.j2"}, "module_name": "template"}, "item": ["openstack", "/home/herlo/Projects/centos/git/linch-pin/inventory_outputs/openstack_topology.inventory"], "msg": "IndexError: pop from empty list"}
failed: [localhost] (item=[u'generic', u'/home/herlo/Projects/centos/git/linch-pin/inventory_outputs/openstack_topology.inventory']) => {"failed": true, "invocation": {"module_args": {"dest": "/home/herlo/Projects/centos/git/linch-pin/inventory_outputs/openstack_topology.inventory.generic", "src": "templates/generic_inventory_formatter.j2"}, "module_name": "template"}, "item": ["generic", "/home/herlo/Projects/centos/git/linch-pin/inventory_outputs/openstack_topology.inventory"], "msg": "IndexError: pop from empty list"}

NO MORE HOSTS LEFT *************************************************************
    to retry, use: --limit @/home/herlo/Projects/centos/git/linch-pin/provision/site.retry

PLAY RECAP *********************************************************************
localhost                  : ok=48   changed=3    unreachable=0    failed=1   

'shade is required for this module' error when provisioning openstack from venv

Created a virtualenv, installed shade with pip.

(venv)[herlo@x99 venv]$ pip show shade
You are using pip version 6.0.8, however version 8.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.

---
Name: shade
Version: 1.12.1
Location: /home/herlo/Projects/venv/lib/python2.7/site-packages
 Requires: python-keystoneclient, python-cinderclient, python-magnumclient, requestsexceptions, munch, jsonpatch, python-ironicclient, python-novaclient, six, netifaces, python-swiftclient, python-troveclient, python-glanceclient, python-neutronclient, pbr, ipaddress, os-client-config, keystoneauth1, dogpile.cache, decorator, python-designateclient, python-heatclient

When I run the ansible-playbook for linch-pin, I get the following output:

(venv)[herlo@x99 venv]$ ansible-playbook -vvv ~/Projects/centos/git/linch-pin/provision/site.yml -e "state=present topology='~/Projects/centos/git/paas-ci/config/topologies/openstack-3node-cluster.yml'"
Using /etc/ansible/ansible.cfg as config file
 [WARNING]: provided hosts list is empty, only localhost is available

..snip..

TASK [openstack : provision/deprovision os_server resources by looping on count] ***
task path: /home/herlo/Projects/centos/git/linch-pin/provision/roles/openstack/tasks/provision_os_server.yml:5
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: herlo
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1475686934.36-264020423014197 `" && echo ansible-tmp-1475686934.36-264020423014197="` echo $HOME/.ansible/tmp/ansible-tmp-1475686934.36-264020423014197 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpz1rYPT TO /home/herlo/.ansible/tmp/ansible-tmp-1475686934.36-264020423014197/os_server
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.utf8 LC_ALL=en_US.utf8 LC_MESSAGES=en_US.utf8 /usr/bin/python2 /home/herlo/.ansible/tmp/ansible-tmp-1475686934.36-264020423014197/os_server; rm -rf "/home/herlo/.ansible/tmp/ansible-tmp-1475686934.36-264020423014197/" > /dev/null 2>&1 && sleep 0'
failed: [localhost] (item=None) => {"failed": true, "instance": ["", "", "", "", "present", "rhel-6.5_jeos", "ci-factory", "m1.small", "e2e-openstack", "os", "ha_inst", 0], "invocation": {"module_args": {"api_timeout": 99999, "auth": {"auth_url": "", "password": "", "project_name": "", "username": ""}, "auth_type": null, "auto_ip": true, "availability_zone": null, "boot_from_volume": false, "boot_volume": null, "cacert": null, "cert": null, "cloud": null, "config_drive": false, "endpoint_type": "public", "flavor": "m1.small", "flavor_include": null, "flavor_ram": null, "floating_ip_pools": null, "floating_ips": null, "image": "rhel-6.5_jeos", "image_exclude": "(deprecated)", "key": null, "key_name": "ci-factory", "meta": null, "name": "os_ha_inst_0", "network": "e2e-openstack", "nics": [], "region_name": null, "scheduler_hints": null, "security_groups": ["default"], "state": "present", "terminate_volume": false, "timeout": 180, "userdata": null, "verify": true, "volume_size": false, "volumes": [], "wait": true}, "module_name": "os_server"}, "msg": "shade is required for this module"}

NO MORE HOSTS LEFT *************************************************************
    to retry, use: --limit @/home/herlo/Projects/centos/git/linch-pin/provision/site.retry

PLAY RECAP *********************************************************************
localhost                  : ok=20   changed=1    unreachable=0    failed=1   

schema_check cannot find default schema outside linch-pin

When using a virtualenv (created due to the lack of shade) and a git checkout of linch-pin in a separate directory, invoking linch-pin does not find the default schema.

INVOCATION

(venv)[herlo@x99 venv]$ ansible-playbook -vvv ~/Projects/centos/git/linch-pin/provision/site.yml -e "state=present topology='~/Projects/centos/git/paas-ci/config/topologies/openstack-3node-cluster.yml'"
Using /etc/ansible/ansible.cfg as config file
 [WARNING]: provided hosts list is empty, only localhost is available

[DEPRECATION WARNING]: Specifying include variables at the top-level of the task is deprecated. Please 
see:
http://docs.ansible.com/ansible/playbooks_roles.html#task-include-files-and-encouraging-reuse

 for 
currently supported syntax regarding included files and variables.
This feature will be removed in a 
future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in 
ansible.cfg.

PLAYBOOK: site.yml *************************************************************
4 plays in /home/herlo/Projects/centos/git/linch-pin/provision/site.yml

PLAY [schema check and Pre Provisioning Activities  on topology_file] **********

TASK [setup] *******************************************************************
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: herlo
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1475612682.44-166143896648227 `" && echo ansible-tmp-1475612682.44-166143896648227="` echo $HOME/.ansible/tmp/ansible-tmp-1475612682.44-166143896648227 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmp8vDuY5 TO /home/herlo/.ansible/tmp/ansible-tmp-1475612682.44-166143896648227/setup
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.utf8 LC_ALL=en_US.utf8 LC_MESSAGES=en_US.utf8 /usr/bin/python2 /home/herlo/.ansible/tmp/ansible-tmp-1475612682.44-166143896648227/setup; rm -rf "/home/herlo/.ansible/tmp/ansible-tmp-1475612682.44-166143896648227/" > /dev/null 2>&1 && sleep 0'
ok: [localhost]

TASK [common : Including Linchpin config vars] *********************************
task path: /home/herlo/Projects/centos/git/linch-pin/provision/roles/common/tasks/main.yml:3
ok: [localhost] => {"ansible_facts": {"async": false, "async_timeout": 1000, "inventory_layouts_path": "inventory_layouts", "inventory_outputs_path": "inventory_outputs", "inventory_playbooks": ["duffy_inventory.yml"], "inventory_types": ["aws", "gcloud", "openstack", "generic"], "inventoryfolder_path": "inventory", "keystore_path": "keystore", "no_output": false, "outputfolder_path": "outputs", "schema": "ex_schemas/schema_v2.json"}, "changed": false, "invocation": {"module_args": {"_raw_params": "../linchpin_config.yml"}, "module_name": "include_vars"}}

TASK [common : schema check for given topology file] ***************************
task path: /home/herlo/Projects/centos/git/linch-pin/provision/roles/common/tasks/main.yml:6
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: herlo
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1475612683.93-94860539667658 `" && echo ansible-tmp-1475612683.93-94860539667658="` echo $HOME/.ansible/tmp/ansible-tmp-1475612683.93-94860539667658 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmp6g9GwS TO /home/herlo/.ansible/tmp/ansible-tmp-1475612683.93-94860539667658/schema_check
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.utf8 LC_ALL=en_US.utf8 LC_MESSAGES=en_US.utf8 /usr/bin/python2 /home/herlo/.ansible/tmp/ansible-tmp-1475612683.93-94860539667658/schema_check; rm -rf "/home/herlo/.ansible/tmp/ansible-tmp-1475612683.93-94860539667658/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"data": "~/Projects/centos/git/paas-ci/config/topologies/openstack-3node-cluster.yml", "data_format": null, "schema": "ex_schemas/schema_v2.json"}, "module_name": "schema_check"}, "msg": "File not found ex_schemas/schema_v2.json not found"}

NO MORE HOSTS LEFT *************************************************************
    to retry, use: --limit @/home/herlo/Projects/centos/git/linch-pin/provision/site.retry

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=1   

Defaults from "linchpin init" kinda don't make sense

I did a "linchpin init" in a new directory and got the following values in linchpin_config.yml:

---
keystore_path: "/home/ghelling/venv/linchpin/lib/python2.7/site-packages/keystore"
outputfolder_path: "/home/ghelling/venv/linchpin/lib/python2.7/site-packages/outputs "
inventoryfolder_path: "/home/ghelling/venv/linchpin/lib/python2.7/site-packages/inventory"
async: False
async_timeout : 1000
no_output : False
schema: "/home/ghelling/venv/linchpin/lib/python2.7/site-packages/ex_schemas/schema_v2.json"
inventory_layouts_path: "/home/ghelling/venv/linchpin/lib/python2.7/site-packages/inventory_layouts"
inventory_outputs_path: "/home/ghelling/venv/linchpin/lib/python2.7/site-packages/inventories"

As you can see, it wants me to put my outputs and inputs inside my virtualenv's site-packages directory instead of locally. It appears the default paths are derived from __file__ rather than from $PWD.
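
A sketch of the suggested fix: build the init defaults from the current working directory rather than from the package's __file__ location. The keys mirror the config shown above; the folder names chosen as defaults are assumptions.

# Hypothetical default generation for `linchpin init`, anchored on $PWD.
import os


def default_config(workdir=None):
    workdir = workdir or os.getcwd()
    return {
        'keystore_path': os.path.join(workdir, 'keystore'),
        'outputfolder_path': os.path.join(workdir, 'outputs'),
        'inventoryfolder_path': os.path.join(workdir, 'inventory'),
        'inventory_layouts_path': os.path.join(workdir, 'layouts'),
        'inventory_outputs_path': os.path.join(workdir, 'inventories'),
    }


if __name__ == '__main__':
    for key, value in sorted(default_config().items()):
        print('{}: {}'.format(key, value))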

Allow for a customizable "workspace" for the linchpin CLI

As of right now my $PWD before running the linchpin CLI must contain the linchpin topologies/layouts/etc:

├── inventory
│   └── cinch_test_topology.inventory
├── keystore
├── layouts
│   ├── my_layout.yml
│   └── openshift-3node-cluster.yml
├── linchpin_config.yml
├── myargs.yml
├── outputs
│   └── cinch_test_topology.output.yaml
├── PinFile
└── topologies
    ├── duffy-3node-cluster.yml
    └── mytopology.yml

This "workspace" created by linchpin init should be configurable so that the linchpin CLI can be called without needing to be concerned about the $PWD.

(2) linchpin CLI command does not exit non-zero when an Ansible play fails

See test example using linchpin 0.8.6 installed from pypi:

(venv) [vagrant@localhost vars]$ pushd ~/devel/cinch-topologies/openstack/ && linchpin rise
~/devel/cinch-topologies/openstack ~/venv/lib/python2.7/site-packages/provision/roles/openstack/vars
debug:: searching linchpin_config file from path ::
/home/vagrant/devel/cinch-topologies/openstack/linchpin_config.yml
debug:: File found returning ::
/home/vagrant/devel/cinch-topologies/openstack/linchpin_config.yml
debug:: searching linchpin_config file from path ::
/home/vagrant/devel/cinch-topologies/openstack/linchpin_config.yml
debug:: File found returning ::
/home/vagrant/devel/cinch-topologies/openstack/linchpin_config.yml
debug::print file path
/home/vagrant/devel/cinch-topologies/openstack/linchpin_config.yml
Finding topology in local folder ./topology
debug:: searching linchpin_config file from path ::
/home/vagrant/devel/cinch-topologies/openstack/linchpin_config.yml
debug:: File found returning ::
/home/vagrant/devel/cinch-topologies/openstack/linchpin_config.yml
debug::print file path
/home/vagrant/devel/cinch-topologies/openstack/linchpin_config.yml
Finding layout in local folder ./layouts
{'linchpin_config': '/home/vagrant/devel/cinch-topologies/openstack/linchpin_config.yml', 'inventory_layout_file': '/home/vagrant/devel/cinch-topologies/openstack/layouts/cinch.yml', 'state': 'present', 'outputfolder_path': '/home/vagrant/devel/cinch-topologies/openstack/outputs', 'inventory_outputs_path': '/home/vagrant/devel/cinch-topologies/openstack/inventories', 'topology': '/home/vagrant/devel/cinch-topologies/openstack/topologies/cinch.yml'}
debug:: module path ::/home/vagrant/venv/lib/python2.7/site-packages/library

PLAY [including the configuration files] ***************************************

TASK [Include linchpin_config] *************************************************
ok: [localhost] => (item=/home/vagrant/devel/cinch-topologies/openstack/linchpin_config.yml)

PLAY [schema check and Pre Provisioning Activities  on topology_file] **********

TASK [common : DEBUG: topology and schema] *************************************
ok: [localhost] => {
    "msg": "topology: /home/vagrant/devel/cinch-topologies/openstack/topologies/cinch.yml schema: /home/vagrant/venv/lib/python2.7/site-packages/ex_schemas/schema_v2.json"
}

TASK [common : schema check for given topology file] ***************************
ok: [localhost]

TASK [common : set fact for saving topology_job_ids] ***************************
ok: [localhost]

TASK [common : filtering the output for openstack resource groups] *************
ok: [localhost]

TASK [common : filtering the output for aws resource groups] *******************
ok: [localhost]

TASK [common : filtering the output for google cloud resource groups] **********
ok: [localhost]

TASK [common : filtering the output for duffy resource groups] *****************
ok: [localhost]

TASK [common : filtering the output for libvirt resource groups] ***************
ok: [localhost]

TASK [common : filtering the output for rackspace resource groups] *************
ok: [localhost]

TASK [common : filtering the output for beaker] ********************************
ok: [localhost]

TASK [common : Registering resource group vars from topology] ******************
skipping: [localhost]

TASK [common : output vars] ****************************************************
ok: [localhost]

PLAY [Provisioning resources based on resource group type] *********************

TASK [openstack : DEBUG:: Openstack resource group list] ***********************
ok: [localhost] => {
    "msg": "Currently Provisioning/Deprovisioning the resources under list os_res_grps [{u'resource_group_name': u'-testgroup', u'res_group_type': u'openstack', u'assoc_creds': u'os_creds', u'res_defs': [{u'count': 1, u'res_type': u'os_server', u'image': u'rhel-7.2-server-x86_64-released', u'res_name': u'resource', u'keypair': u'ci-ops-central', u'flavor': u'm1.small', u'networks': [u'QE']}]}]"
}

TASK [openstack : declaring output vars] ***************************************
ok: [localhost]

TASK [openstack : Initiating  Provisioning/Deprovioning of resources Openstack resource group] ***
included: /home/vagrant/venv/lib/python2.7/site-packages/provision/roles/openstack/tasks/provision_resource_group.yml for localhost

TASK [openstack : DEBUG:: provisioning resource group {u'resource_group_name': u'-testgroup', u'res_group_type': u'openstack', u'assoc_creds': u'os_creds', u'res_defs': [{u'count': 1, u'res_type': u'os_server', u'image': u'rhel-7.2-server-x86_64-released', u'res_name': u'resource', u'keypair': u'ci-ops-central', u'flavor': u'm1.small', u'networks': [u'QE']}]}] ***
ok: [localhost] => {
    "msg": "The current server obj is {u'resource_group_name': u'-testgroup', u'res_group_type': u'openstack', u'assoc_creds': u'os_creds', u'res_defs': [{u'count': 1, u'res_type': u'os_server', u'image': u'rhel-7.2-server-x86_64-released', u'res_name': u'resource', u'keypair': u'ci-ops-central', u'flavor': u'm1.small', u'networks': [u'QE']}]} "
}

TASK [openstack : Including credentials of current resource -testgroup] **
fatal: [localhost]: FAILED! => {"ansible_facts": {}, "changed": false, "failed": true, "message": "Unable to find 'roles/openstack/vars/os_creds.yml' in expected paths."}
        to retry, use: --limit @/home/vagrant/venv/lib/python2.7/site-packages/provision/site.retry

PLAY RECAP *********************************************************************
localhost                  : ok=16   changed=0    unreachable=0    failed=1

(venv) [vagrant@localhost openstack]$ echo $?
0
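
A sketch of the fix: propagate the playbook run's return code to the process exit status. The runner below is a stand-in for linchpin's own invoke_playbooks call, not its real signature.

# Hypothetical wrapper: exit non-zero when the underlying playbook run fails.
import sys


def run_rise(invoke_playbooks):
    # Ansible's PlaybookExecutor.run() returns 0 on success and non-zero on
    # failure; whatever the real runner returns should reach the shell.
    rc = invoke_playbooks()
    if rc != 0:
        sys.exit(rc)
    return rc


if __name__ == '__main__':
    failing_run = lambda: 2   # stand-in for a failed ansible-playbook run
    run_rise(failing_run)     # process exits with status 2, so `echo $?` prints 2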

Feature Request: CLI honors standard path env variable for configuration files, etc.

Make sure the command line tool can find the configurations in a standard path.

The idea is that the linchpin_config.yml might be found in some standard locations. For example:

/etc/linchpin/linchpin_config.yml
~/.linchpin_config.yml
./linchpin_config.yml
etc.

Thus, make a standard env variable or somesuch that gets set either by installation or configuration to ensure the configuration files can be located easily and in predictable locations.
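
A sketch of such a lookup, assuming the locations listed above plus an environment-variable override; the variable name and the precedence order are assumptions.

# Hypothetical lookup for linchpin_config.yml.
import os


def find_linchpin_config(env_var='LINCHPIN_CONFIG'):
    candidates = [
        os.environ.get(env_var),                         # explicit override
        os.path.join(os.getcwd(), 'linchpin_config.yml'),
        os.path.expanduser('~/.linchpin_config.yml'),
        '/etc/linchpin/linchpin_config.yml',
    ]
    for path in candidates:
        if path and os.path.isfile(path):
            return path
    return None


if __name__ == '__main__':
    print(find_linchpin_config())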

(POC) (8) Create inception-based testing for libvirt functionality

Because linch-pin now has a plugin for libvirt provisioning, additional libraries must be installed. It seems it may be unreasonable to expect a jenkins (or other CI) to be willing to install libvirt on a node which they control. To that end, testing the libvirt plugin in linch-pin may require a bit of inception-based testing.

The concept is as follows:

  1. Provision a single node from a CI slave using another plugin (duffy, ec2, openstack). Alternatively, a container could be provisioned, instead?
  2. Use an ansible playbook to connect to the single node to provision via libvirt and perform tests
  3. ????
  4. Profit

(POC) (8) Manage credentials according to the cloud provider

Currently, credentials are assigned in the vars of the role. This can obviously be overwritten by global vars or possibly in the linchpin_config.yml. However, many of the cloud providers (openstack, rackspace, aws/ec2, gce) have alternative ways of authenticating with configuration files they already provide.

The rationale here is that users will want to utilize things they already know. It isn't advantageous to add complexity by providing another way to configure credentials. We should provide a way to access the configs that already exist or are already documented.

As an example, check out http://docs.openstack.org/developer/os-client-config/.
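
As a sketch of the idea, reading OpenStack's own clouds.yaml directly rather than a linchpin-specific creds file; in practice the os-client-config library already implements this lookup, and the cloud name below is illustrative.

# Hypothetical reader for credentials OpenStack users already maintain.
import os
import yaml


def load_openstack_cloud(cloud_name):
    path = os.path.expanduser('~/.config/openstack/clouds.yaml')
    with open(path) as handle:
        clouds = yaml.safe_load(handle) or {}
    # clouds.yaml keeps auth details under clouds.<name>
    return clouds.get('clouds', {}).get(cloud_name)


if __name__ == '__main__':
    print(load_openstack_cloud('e2e-openstack'))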

topology_output_file requires .yaml due to update to jinja2 2.8

Because we updated to jinja2 2.8, ansible expects .yaml, .yml, or .json extensions for files that are being processed internally. Thus, calling linch-pin without topology_output_file=somevalue.yaml will result in a file that ends in .output.

Here's an example of the error from jenkins: https://ci.centos.org/job/paas-bfs-origin-0-test-matrix/PYTHON=system-CPython-2.7,TOPOLOGY=duffy_3node_cluster,nodes=paas-sig-ci-slave01/1/console

16:01:44 fatal: [localhost]: FAILED! => {"ansible_facts": {}, "changed": false, "failed": true, "message": "/home/sig-paas/workspace/paas-bfs-origin-0-test-matrix/PYTHON/system-CPython-2.7/TOPOLOGY/duffy_3node_cluster/nodes/paas-sig-ci-slave01/linch-pin/outputs/duffy_3node_cluster.output does not have a valid extension: yaml, yml, json"}
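
A sketch of a simple guard on the output path, assuming a default of .yaml when no valid extension is present:

# Hypothetical guard: ensure generated output paths carry an extension that
# the jinja2/Ansible file handling will accept.
VALID_EXTENSIONS = ('.yaml', '.yml', '.json')


def ensure_output_extension(path, default='.yaml'):
    if not path.endswith(VALID_EXTENSIONS):
        return path + default
    return path


if __name__ == '__main__':
    print(ensure_output_extension('outputs/duffy_3node_cluster.output'))
    # -> outputs/duffy_3node_cluster.output.yaml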

library path is missing from linch-pin/provision

This issue causes the schema check to fail when provisioning. Steps to reproduce:

  1. Install linch-pin:
(linchpin-test) [vagrant@localhost workspace]$ pip install -e ~/linch-pin
Obtaining file:///home/vagrant/linch-pin
Requirement already satisfied: Click in /home/vagrant/linchpin-test/lib/python2.7/site-packages (from linchpin==0.8.1)
Requirement already satisfied: ansible in /home/vagrant/linchpin-test/lib/python2.7/site-packages (from linchpin==0.8.1)
Requirement already satisfied: jinja2 in /home/vagrant/linchpin-test/lib/python2.7/site-packages (from linchpin==0.8.1)
Requirement already satisfied: tabulate in /home/vagrant/linchpin-test/lib/python2.7/site-packages (from linchpin==0.8.1)
Requirement already satisfied: jsonschema in /home/vagrant/linchpin-test/lib/python2.7/site-packages (from linchpin==0.8.1)
Requirement already satisfied: PyYAML in /home/vagrant/linchpin-test/lib/python2.7/site-packages (from ansible->linchpin==0.8.1)
Requirement already satisfied: paramiko in /home/vagrant/linchpin-test/lib/python2.7/site-packages (from ansible->linchpin==0.8.1)
Requirement already satisfied: pycrypto>=2.6 in /home/vagrant/linchpin-test/lib/python2.7/site-packages (from ansible->linchpin==0.8.1)
Requirement already satisfied: setuptools in /home/vagrant/linchpin-test/lib/python2.7/site-packages (from ansible->linchpin==0.8.1)
Requirement already satisfied: MarkupSafe in /home/vagrant/linchpin-test/lib/python2.7/site-packages (from jinja2->linchpin==0.8.1)
Requirement already satisfied: functools32; python_version == "2.7" in /home/vagrant/linchpin-test/lib/python2.7/site-packages (from jsonschema->linchpin==0.8.1)
Requirement already satisfied: pyasn1>=0.1.7 in /home/vagrant/linchpin-test/lib/python2.7/site-packages (from paramiko->ansible->linchpin==0.8.1)
Requirement already satisfied: cryptography>=1.1 in /home/vagrant/linchpin-test/lib/python2.7/site-packages (from paramiko->ansible->linchpin==0.8.1)
Requirement already satisfied: idna>=2.0 in /home/vagrant/linchpin-test/lib/python2.7/site-packages (from cryptography>=1.1->paramiko->ansible->linchpin==0.8.1)
Requirement already satisfied: cffi>=1.4.1 in /home/vagrant/linchpin-test/lib/python2.7/site-packages (from cryptography>=1.1->paramiko->ansible->linchpin==0.8.1)
Requirement already satisfied: enum34 in /home/vagrant/linchpin-test/lib/python2.7/site-packages (from cryptography>=1.1->paramiko->ansible->linchpin==0.8.1)
Requirement already satisfied: ipaddress in /home/vagrant/linchpin-test/lib/python2.7/site-packages (from cryptography>=1.1->paramiko->ansible->linchpin==0.8.1)
Requirement already satisfied: six>=1.4.1 in /home/vagrant/linchpin-test/lib/python2.7/site-packages (from cryptography>=1.1->paramiko->ansible->linchpin==0.8.1)
Requirement already satisfied: pycparser in /home/vagrant/linchpin-test/lib/python2.7/site-packages (from cffi>=1.4.1->cryptography>=1.1->paramiko->ansible->linchpin==0.8.1)
Installing collected packages: linchpin
  Running setup.py develop for linchpin
Successfully installed linchpin
  2. Attempt to provision:
(linchpin-test) [vagrant@localhost workspace]$ linchpin rise
Finding topology in local folder ./topology
Finding layout in local folder ./layouts
{'linchpin_config': '/home/vagrant/devel/workspace/linchpin_config.yml', 'inventory_layout_file': '/home/vagrant/devel/workspace/layouts/my_layout.yml', 'state': 'present', 'outputfolder_path': '/home/vagrant/devel/workspace/outputs', 'inventory_outputs_path': '/home/vagrant/devel/workspace/inventory', 'topology': '/home/vagrant/devel/workspace/topologies/mytopology.yml'}
Traceback (most recent call last):
  File "/home/vagrant/linchpin-test/bin/linchpin", line 11, in <module>
    load_entry_point('linchpin', 'console_scripts', 'linchpin')()
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/click/core.py", line 716, in __call__
    return self.main(*args, **kwargs)
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/click/core.py", line 696, in main
    rv = self.invoke(ctx)
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/click/core.py", line 1060, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/click/core.py", line 889, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/click/core.py", line 534, in invoke
    return callback(*args, **kwargs)
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/click/decorators.py", line 64, in new_func
    return ctx.invoke(f, obj, *args[1:], **kwargs)
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/click/core.py", line 534, in invoke
    return callback(*args, **kwargs)
  File "/home/vagrant/linch-pin/linchpin.py", line 201, in rise
    output = lpcli.lp_rise(lpf, target)
  File "/home/vagrant/linch-pin/cli/cli.py", line 99, in lp_rise
    console=True)
  File "/home/vagrant/linch-pin/linchpin_api/v1/invoke_playbooks.py", line 110, in invoke_linchpin
    results = pbex.run()
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/ansible/executor/playbook_executor.py", line 81, in run
    pb = Playbook.load(playbook_path, variable_manager=self._variable_manager, loader=self._loader)
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/ansible/playbook/__init__.py", line 53, in load
    pb._load_playbook_data(file_name=file_name, variable_manager=variable_manager)
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/ansible/playbook/__init__.py", line 98, in _load_playbook_data
    entry_obj = Play.load(entry, variable_manager=variable_manager, loader=self._loader)
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/ansible/playbook/play.py", line 116, in load
    return p.load_data(data, variable_manager=variable_manager, loader=loader)
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/ansible/playbook/base.py", line 249, in load_data
    self._attributes[name] = method(name, ds[name])
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/ansible/playbook/play.py", line 196, in _load_roles
    roles.append(Role.load(ri, play=self))
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/ansible/playbook/role/__init__.py", line 127, in load
    r._load_role_data(role_include, parent_role=parent_role)
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/ansible/playbook/role/__init__.py", line 180, in _load_role_data
    self._task_blocks = load_list_of_blocks(task_data, play=self._play, role=self, loader=self._loader, variable_manager=self._variable_manager)
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/ansible/playbook/helpers.py", line 57, in load_list_of_blocks
    loader=loader
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/ansible/playbook/block.py", line 76, in load
    return b.load_data(data, variable_manager=variable_manager, loader=loader)
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/ansible/playbook/base.py", line 249, in load_data
    self._attributes[name] = method(name, ds[name])
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/ansible/playbook/block.py", line 112, in _load_block
    use_handlers=self._use_handlers,
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/ansible/playbook/helpers.py", line 302, in load_list_of_tasks
    t = Task.load(task_ds, block=block, role=role, task_include=task_include, variable_manager=variable_manager, loader=loader)
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/ansible/playbook/task.py", line 139, in load
    return t.load_data(data, variable_manager=variable_manager, loader=loader)
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/ansible/playbook/base.py", line 239, in load_data
    ds = self.preprocess_data(ds)
  File "/home/vagrant/linchpin-test/lib/python2.7/site-packages/ansible/playbook/task.py", line 181, in preprocess_data
    raise AnsibleParserError(to_native(e), obj=ds)
ansible.errors.AnsibleParserError: no action detected in task. This often indicates a misspelled module name, or incorrect module path.

The error appears to have been in '/home/vagrant/linch-pin/provision/roles/common/tasks/main.yml': line 10, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:


- name: "schema check for given topology file"
  ^ here


The error appears to have been in '/home/vagrant/linch-pin/provision/roles/common/tasks/main.yml': line 10, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:


- name: "schema check for given topology file"
  ^ here

Workaround: symlink the library path for the schema check into linch-pin/provision:
(linchpin-test) [vagrant@localhost workspace]$ ln -s ~/linch-pin/library/ ~/linch-pin/provision/library

Running the 'rise' command again will be successful at this point:

(linchpin-test) [vagrant@localhost workspace]$ linchpin rise
Finding topology in local folder ./topology
Finding layout in local folder ./layouts
{'linchpin_config': '/home/vagrant/devel/workspace/linchpin_config.yml', 'inventory_layout_file': '/home/vagrant/devel/workspace/layouts/my_layout.yml', 'state': 'present', 'outputfolder_path': '/home/vagrant/devel/workspace/outputs', 'inventory_outputs_path': '/home/vagrant/devel/workspace/inventory', 'topology': '/home/vagrant/devel/workspace/topologies/mytopology.yml'}

PLAY [including the configuration files] ***************************************

TASK [Include linchpin_config] *************************************************
ok: [localhost] => (item=/home/vagrant/linch-pin/provision/../linchpin_config.yml)

PLAY [schema check and Pre Provisioning Activities  on topology_file] **********

TASK [common : Including Linchpin config vars] *********************************
ok: [localhost]

Password not redacted in ansible-playbook output.

"instance": ["http://10.8.188.11:5000/v2.0", "atomic-e2e-jenkins", "(redacted but actually displayed)", "atomic-e2e-jenkins-test", "present", "rhel-6.5_jeos", "ci-factory", "m1.small", "atomic-e2e-jenkins-test", "os", "ha_inst", 0]

Linch-pin feedback

Hi all,

Thought I'd give some feedback from an exercise in trying out linch-pin for the first time. I understand a lot of the below is simply due to the project still being in its infancy, though I hope you still appreciate the feedback. :) Sorry in advance for the massive dump of issues in one shot. Some of those may already exist as independent issues.

Major issues:

  1. Documentation is incomplete. There are no mentions of essential things like basic CLI usage (e.g. init/rise/drop), the configuration file, credentials, etc... This makes it very hard for beginners to get started.

  2. Python packages are not namespaced, so e.g. library/provision/outputs etc... are all dumped directly in site-packages rather than e.g. a linchpin/ dir. Esp. important since most of those are not even actual python modules.

  3. Credentials files have to be stored in the vars dir, which is in the bowels of the python site-packages dir.

Minor issues:

  1. Inconsistent field naming. E.g. using res_ in some places and resource_ in others. (I'd prefer the latter everywhere). Also, it's generally nicer in YAML to use - instead of _, but that's subjective of course :).

  2. Unconditional debug print statements used rather than e.g. a logging module. It makes the output a bit harder to understand.

(1) Ability to override the res_name on the command line

Currently, whatever res_name is in the topology file is always the name you get when you deploy. This can create collisions if you are doing multiple deployments in the same OpenStack tenant, for instance.

We either need a way to override this or to deploy with a unique identifier to avoid it.
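
As a sketch of the second option, appending a short unique suffix at provision time (the helper name is hypothetical):

# Hypothetical helper: make each deployment's resource names unique.
import uuid


def unique_res_name(res_name):
    return '{0}-{1}'.format(res_name, uuid.uuid4().hex[:8])


if __name__ == '__main__':
    print(unique_res_name('duffy_nodes'))  # e.g. duffy_nodes-3f9a1c2b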

(POC) (8) Linchpin Hooks: Pinfile should provide a 'post_actions' yaml list

Currently, the Pinfile provides the ability to define a particular target with both a topology and a layout option. Additionally, it would be useful to pull down and run Ansible playbooks against the inventory returned from provisioning.

Simply put, this could make CI and developer tooling very useful.

Consider the following Pinfile:

duffy3node:
  topology_name: duffy_3node_cluster.yml
  inventory_layout_file: duffy_3node_inventory_layout.yml
  post_actions:
    - type: ansible
      base_dir: ./ansible
      targets:
        - setup.yml
        - origin_from_source.yml
        - openshift_ansible_from_source.yml
        - deploy_aosi.yml
        - run_e2e_tests.yml
    - type: shell
      base_dir: ./scripts
      upstream: git://github.com/herlo/test_scripts_to_run.git
      targets:
        - gather_stats.sh

Essentially, this option would provision resources from the named topology/inventory combination. Proceeding from the top down, the post_actions would first run the Ansible playbooks that already exist locally under the relative ./ansible path, starting with setup.yml and continuing down the list. If any playbook failed, execution would stop.

If the Ansible actions succeed, the shell post_actions would run. The scripts, if not already present locally (or if a cache timeout has expired), would be cloned from the named repository into the relative ./scripts path. Following the download/checkout, gather_stats.sh would be executed.
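
A rough sketch of the execution flow described above; the helper, its arguments, and the stop-on-first-failure behaviour are part of the proposal, not existing linchpin code:

import subprocess

def run_post_actions(post_actions, inventory_path):
    for action in post_actions:
        base_dir = action.get("base_dir", ".")
        for target in action.get("targets", []):
            if action["type"] == "ansible":
                cmd = ["ansible-playbook", "-i", inventory_path,
                       "{0}/{1}".format(base_dir, target)]
            elif action["type"] == "shell":
                # (cloning from 'upstream' when the scripts are missing is omitted here)
                cmd = ["/bin/sh", "{0}/{1}".format(base_dir, target)]
            else:
                raise ValueError("unknown post_action type: {0}".format(action["type"]))
            if subprocess.call(cmd) != 0:
                return False  # stop at the first failing target
    return True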

When providing an inventory layout without host_groups, linch-pin fails

The inventory file looks as such:

inventory_layout:
  hosts:
    openshift-ansible:
      host_groups:
        - all

But technically, this could be less verbose:


---
inventory_layout:
  hosts:
    - openshift-ansible

The topology output appears as such:

aws_cfn_res: []
aws_ec2_key_res: []
aws_ec2_res: []
aws_s3_res: []
duffy_res:
-   changed: true
    hosts:
    - n7.gusty.ci.centos.org
    ssid: c1644078
gcloud_gce_res: []
os_heat_res: []
os_keypair_res: []
os_obj_res: []
os_server_res: []
os_volume_res: []
rax_server_res: []

What happens

$ ansible-playbook -vvv ~/Projects/linch-pin/provision/site.yml -e "state=present schema='~/Projects/linch-pin/ex_schemas/schema_v2.json' topology='~/Projects/paas-ci/config/topologies/duffy-single.yml'" -e inventory_layout_file='~/Projects/paas-ci/config/inv_layouts/openshift-ansible-single.yml' -e topology_output_file='~/sandbox/linch-pin-testing/outputs/duffy-single.output'

..snip..

TASK [inventory_gen : Generate inventories] ************************************
task path: /home/sig-paas/Projects/linch-pin/provision/roles/inventory_gen/tasks/main.yml:47
failed: [localhost] (item=[u'aws', u'/home/sig-paas/Projects/linch-pin/inventory_outputs/duffy_single.inventory']) => {"failed": true, "invocation": {"module_args": {"dest": "/home/sig-paas/Projects/linch-pin/inventory_outputs/duffy_single.inventory.aws", "src": "templates/aws_inventory_formatter.j2"}, "module_name": "template"}, "item": ["aws", "/home/sig-paas/Projects/linch-pin/inventory_outputs/duffy_single.inventory"], "msg": "KeyError: 'host_groups'"}
failed: [localhost] (item=[u'gcloud', u'/home/sig-paas/Projects/linch-pin/inventory_outputs/duffy_single.inventory']) => {"failed": true, "invocation": {"module_args": {"dest": "/home/sig-paas/Projects/linch-pin/inventory_outputs/duffy_single.inventory.gcloud", "src": "templates/gcloud_inventory_formatter.j2"}, "module_name": "template"}, "item": ["gcloud", "/home/sig-paas/Projects/linch-pin/inventory_outputs/duffy_single.inventory"], "msg": "KeyError: 'host_groups'"}
failed: [localhost] (item=[u'openstack', u'/home/sig-paas/Projects/linch-pin/inventory_outputs/duffy_single.inventory']) => {"failed": true, "invocation": {"module_args": {"dest": "/home/sig-paas/Projects/linch-pin/inventory_outputs/duffy_single.inventory.openstack", "src": "templates/openstack_inventory_formatter.j2"}, "module_name": "template"}, "item": ["openstack", "/home/sig-paas/Projects/linch-pin/inventory_outputs/duffy_single.inventory"], "msg": "KeyError: 'host_groups'"}

What should happen

Linch-pin should not expect host_groups to exist; when it is missing, the group-specific output should simply be skipped rather than failing. If no host_groups are listed under any of the hosts, the 'all' group should be assumed (see the sketch after the example below).

An inventory like so should be generated:

[all]
n7.gusty.ci.centos.org
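
A sketch of that defaulting behaviour, assuming the verbose layout shape shown earlier; this is not the Jinja2 template linchpin ships, just an illustration of the logic it could apply:

def render_inventory(layout, provisioned_hosts):
    groups = {}
    for host, spec in layout["inventory_layout"]["hosts"].items():
        # Fall back to the 'all' group when host_groups is absent or empty.
        host_groups = (spec or {}).get("host_groups") or ["all"]
        for group in host_groups:
            groups.setdefault(group, []).extend(provisioned_hosts)
    lines = []
    for group, members in groups.items():
        lines.append("[{0}]".format(group))
        lines.extend(members)
        lines.append("")
    return "\n".join(lines)

# With the duffy output above, a layout of {'openshift-ansible': {}} and
# hosts ['n7.gusty.ci.centos.org'] would render the [all] inventory shown.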

Performance fixes for Openstack/GCE

Revisit the OpenStack module for multi-instance provisioning, since looping in Ansible takes a toll on provisioning time. Switching to the OpenStack client or python-novaclient with count-based provisioning would markedly improve performance.

Revisit the gcloud GCE module for multi-instance provisioning for the same reason: looping in Ansible takes a toll on provisioning time.

https://trello.com/c/MX1CaaZB/103-8-ci-provisioning-linch-pin-performance-fixes-openstack-gce-instances
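
For reference, count-based provisioning with python-novaclient makes one API call for N instances instead of one Ansible loop iteration per instance. The wrapper below is a sketch only; the exact auth kwargs depend on the novaclient version in use, and the surrounding linchpin plumbing is not shown:

from novaclient import client as nova_client

def provision_batch(auth_kwargs, name, image, flavor, key_name, count):
    # One create() call with min_count/max_count replaces N looped tasks.
    nova = nova_client.Client("2", **auth_kwargs)
    return nova.servers.create(name=name,
                               image=image,
                               flavor=flavor,
                               key_name=key_name,
                               min_count=count,
                               max_count=count)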

linchpin rise seems to be unable to find the linchpin_config and fails with Unicode parse error

$ linchpin rise
debug:: current working directory
/home/herlo/Projects/lpdemo
debug:: c_file
/home/herlo/Projects/lpdemo/linchpin_config.yml
debug:: c_file
/home/herlo/Projects/lpdemo/linch-pin/linchpin_config.yml
debug:: c_file
~/.linchpin_config.yml
debug:: c_file
/home/herlo/.virtualenvs/venv3/lib/python2.7/site-packages/linchpin_config.yml
debug:: c_file
/etc/linchpin_config.yml
debug:: current working directory
/home/herlo/Projects/lpdemo
debug:: c_file
/home/herlo/Projects/lpdemo/linchpin_config.yml
debug:: c_file
/home/herlo/Projects/lpdemo/linch-pin/linchpin_config.yml
debug:: c_file
~/.linchpin_config.yml
debug:: c_file
/home/herlo/.virtualenvs/venv3/lib/python2.7/site-packages/linchpin_config.yml
debug:: c_file
/etc/linchpin_config.yml
debug:: config_path
None
debug::print file path
None
Traceback (most recent call last):
  File "/home/herlo/.virtualenvs/venv3/bin/linchpin", line 11, in <module>
    sys.exit(cli())
  File "/usr/lib/python2.7/site-packages/click/core.py", line 716, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/click/core.py", line 696, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python2.7/site-packages/click/core.py", line 1060, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python2.7/site-packages/click/core.py", line 889, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python2.7/site-packages/click/core.py", line 534, in invoke
    return callback(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/click/decorators.py", line 64, in new_func
    return ctx.invoke(f, obj, *args[1:], **kwargs)
  File "/usr/lib/python2.7/site-packages/click/core.py", line 534, in invoke
    return callback(*args, **kwargs)
  File "/home/herlo/.virtualenvs/venv3/lib/python2.7/site-packages/linchpin.py", line 191, in rise
    output = lpcli.lp_rise(pf, target)
  File "/home/herlo/.virtualenvs/venv3/lib/python2.7/site-packages/cli/cli.py", line 86, in lp_rise
    pf)
  File "/home/herlo/.virtualenvs/venv3/lib/python2.7/site-packages/cli/cli.py", line 22, in find_topology
    config = self.get_config()
  File "/home/herlo/.virtualenvs/venv3/lib/python2.7/site-packages/linchpin_api/v1/api.py", line 58, in get_config
    config = parse_yaml(config_path)
  File "/home/herlo/.virtualenvs/venv3/lib/python2.7/site-packages/linchpin_api/v1/utils.py", line 31, in parse_yaml
    with open(pf, 'r') as stream:
TypeError: coercing to Unicode: need string or buffer, NoneType found
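
The traceback boils down to get_config() returning None and parse_yaml() then choking on it. A small sketch of a friendlier lookup, with the helper name chosen for illustration only:

import os

def find_config(candidates):
    for path in candidates:
        full = os.path.expanduser(path)  # also handles the '~/.linchpin_config.yml' entry
        if os.path.isfile(full):
            return full
    raise RuntimeError("no linchpin_config.yml found; looked in: "
                       + ", ".join(candidates))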

Include license information

Please include a LICENSE or COPYING file (or the like) to indicate the license that the code is released under. This will be necessary for many, if not all, downstream packaging efforts.
