
kitchen-lxd_cli's People

Contributors

bradenwright, inokappa, parak, sc0ttruss


kitchen-lxd_cli's Issues

any way to pass raw.lxc arguments?

I have run into a common issue: a container with httpd installed on CentOS requires some capabilities that LXC drops by default.

A workaround is already documented:
https://lists.linuxcontainers.org/pipermail/lxc-users/2014-June/007085.html

Look for /usr/share/lxc/config/fedora.common.conf (or whatever it is on
fedora, try "rpm -ql lxc"), then comment out this line

lxc.cap.drop = setfcap

It doesn't seem possible to set this via an official config key, so raw.lxc is needed:
https://github.com/lxc/lxd/issues/1982

lxc config set container-name raw.lxc=lxc.cap.drop=some-cap

Is there any way to pass that inside kitchen, ideally per platform? Only CentOS needs it here, but other cases could apply to each platform.
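Until the driver exposes this, a sketch of the manual workaround as a dry run (the container name and the empty `lxc.cap.drop` value are illustrative assumptions — an empty value drops nothing, so setfcap is kept):

```shell
# Build the lxc commands that would apply raw.lxc to the container kitchen
# created, then restart it so the config takes effect. Printed, not executed.
CONTAINER="default-centos-7"
SET_CMD="lxc config set $CONTAINER raw.lxc lxc.cap.drop="
echo "$SET_CMD"
echo "lxc restart $CONTAINER"
```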

Thanks

scp async upload failing?

Hello,

I have a simple ansible role that I'm testing with
https://github.com/juju4/ansible-adduser/blob/master/.travis.yml

from a Jenkins instance on DigitalOcean (Ubuntu 14.04):

14:37:55        Setting up chef (12.12.15-1) ...
14:37:55        Thank you for installing Chef!
14:37:55 D      sudo -E rm -rf /tmp/kitchen/modules /tmp/kitchen/roles /tmp/kitchen/group_vars /tmp/kitchen/host_vars; mkdir -p /tmp/kitchen
14:37:55 D      [SSH] [email protected]<{:user_known_hosts_file=>"/dev/null", :paranoid=>false, :port=>22, :compression=>false, :compression_level=>0, :keepalive=>true, :keepalive_interval=>60, :timeout=>15, :user=>"root"}> (sudo -E rm -rf /tmp/kitchen/modules /tmp/kitchen/roles /tmp/kitchen/group_vars /tmp/kitchen/host_vars; mkdir -p /tmp/kitchen)
14:37:55        Transferring files to <default-ubuntu-1604>
14:37:55 D      TIMING: scp async upload (Kitchen::Transport::Ssh)
14:37:55 D      Cleaning up local sandbox in /tmp/default-ubuntu-1604-sandbox-20160730-1929-9ns4pd
14:37:55 -----> Cleaning up any prior instances of <default-ubuntu-1404>
14:37:55 -----> Destroying <default-ubuntu-1404>...

It seems to fail on the scp async upload, as there is no subsequent "finished" message, but I don't know why.

Same here:
https://travis-ci.org/juju4/ansible-adduser/jobs/148556247

Here it transfers, but with nothing in the role:
https://travis-ci.org/juju4/ansible-adduser/jobs/148556246

At this point, I'm unsure whether the problem is in test-kitchen or in one of the plugins. I believe I have hit it with other drivers (vagrant/virtualbox, docker, ...), but a lot less often.

Any ideas how to debug?

thanks

common container name = multiple builds on the same container?

Hello

As mentioned, I'm working on Jenkins with ansible and lxd. While two jobs were running, checking `lxc list` showed only one container even though both jobs run on the same system.
I also had a job interrupted by a halt message:

06:25:53        TASK [initcfg : Debian | Extra packages install] *******************************
06:26:02        �
06:26:02        Broadcast message from root@default-ubuntu-1204
06:26:02            (unknown) at 10:26 ...
06:26:02        
06:26:02        
The system is going down for halt NOW!
06:26:02        SIGPWR received 
06:26:02 D      Cleaning up local sandbox in /tmp/default-ubuntu-1204-sandbox-20160801-26173-wllkhn
06:26:02 >>>>>> ------Exception-------
06:26:02 >>>>>> Class: Kitchen::ActionFailed
06:26:02 >>>>>> Message: 3 actions failed.
06:26:02 >>>>>>     Converge failed on instance <default-ubuntu-1604>.  Please see .kitchen/logs/default-ubuntu-1604.log for more details
06:26:02 >>>>>>     Converge failed on instance <default-ubuntu-1404>.  Please see .kitchen/logs/default-ubuntu-1404.log for more details
06:26:02 >>>>>>     Failed to complete #converge action: [closed stream] on default-ubuntu-1204

It would be nice for kitchen-lxd_cli to generate a unique container name per test & platform to avoid conflicts. The test name can still be default-distribution-version, but behind the scenes it should differ per job.
Or maybe are you nesting containers? I see the containers' processes on the host, but files linked to a job seem to live only temporarily in the active containers. `lxc list` always shows the same output:

# lxc list
+---------------------+---------+--------------------------------+------+------------+-----------+
|        NAME         |  STATE  |              IPV4              | IPV6 |    TYPE    | SNAPSHOTS |
+---------------------+---------+--------------------------------+------+------------+-----------+
| default-ubuntu-1404 | RUNNING | 10.252.116.82 (eth0)           |      | PERSISTENT | 0         |
+---------------------+---------+--------------------------------+------+------------+-----------+
| default-ubuntu-1604 | RUNNING | 10.252.116.1 (lxdbr0)          |      | PERSISTENT | 0         |
|                     |         | 10.252.116.28 (eth0)           |      |            |           |
+---------------------+---------+--------------------------------+------+------------+-----------+

Also, an option to delete or freeze containers at the end of the test, even after a failure, would be nice to avoid resource exhaustion.
The current default is good, as it allows a quick manual check after test execution.
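For reference, a minimal sketch of how unique-per-job names could be derived (the suffix scheme is an assumption; the driver would still need to map the unique name back to the kitchen instance name):

```shell
# Append epoch seconds plus the shell PID so two jobs on the same host
# cannot collide, while the kitchen instance name stays readable.
INSTANCE="default-ubuntu-1404"
UNIQUE_NAME="${INSTANCE}-$(date +%s)-$$"
echo "$UNIQUE_NAME"
```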

Comments?

Thanks

container not stopping: force?

Hello Braden,

A small issue: most of the time my containers are not easily stopped, and kitchen stalls there.
I just edited lxd_cli.rb to add --force.

Either that or a timeout is needed, because otherwise the stop seems to last indefinitely.
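A dry-run sketch of the stop-with-fallback idea (the 30-second timeout is an arbitrary assumption):

```shell
# Try a clean stop first; fall back to --force if it doesn't finish in time.
# Printed rather than executed, since this needs a live LXD host.
CONTAINER="default-ubuntu-1604"
STOP_CMD="timeout 30 lxc stop $CONTAINER || lxc stop $CONTAINER --force"
echo "$STOP_CMD"
```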

Thanks

concurrency issue?

I tried to execute tests in parallel, and under some conditions it seems to generate containers with identical ids:

$ gem list |grep kitchen
kitchen-ansible (0.45.4)
kitchen-digitalocean (0.9.5)
kitchen-docker (2.6.0)
kitchen-lxd_cli (2.0.1)
kitchen-sync (2.1.1)
kitchen-vagrant (0.20.0)
kitchen-verifier-serverspec (0.5.2)
test-kitchen (1.13.2)
$ kitchen list
Instance             Driver  Provisioner      Verifier  Transport  Last Action
default-ubuntu-1604  LxdCli  AnsiblePlaybook  Busser    Sftp       <Not Created>
default-ubuntu-1404  LxdCli  AnsiblePlaybook  Busser    Sftp       <Not Created>
default-ubuntu-1204  LxdCli  AnsiblePlaybook  Busser    Sftp       <Not Created>
default-centos-7     LxdCli  AnsiblePlaybook  Busser    Sftp       <Not Created>
$ time kitchen test --concurrency 3
-----> Starting Kitchen (v1.13.2)
-----> Cleaning up any prior instances of <default-ubuntu-1604>
-----> Cleaning up any prior instances of <default-ubuntu-1404>
-----> Destroying <default-ubuntu-1404>...
-----> Destroying <default-ubuntu-1604>...
-----> Cleaning up any prior instances of <default-ubuntu-1204>
-----> Destroying <default-ubuntu-1204>...
       Finished destroying <default-ubuntu-1204> (0m0.02s).
-----> Testing <default-ubuntu-1204>
-----> Creating <default-ubuntu-1204>...
       Finished destroying <default-ubuntu-1404> (0m0.02s).
-----> Testing <default-ubuntu-1404>
-----> Creating <default-ubuntu-1404>...
error: not found
-----> Cleaning up any prior instances of <default-centos-7>
-----> Destroying <default-centos-7>...
       Initializing container default-centos-7-1476365945
       Initializing container default-centos-7-1476365945
error: The container already exists
error: The container already exists
       Stopping container default-centos-7-1476365945
error: Container is not running.
       Deleting container default-centos-7-1476365945
       Finished destroying <default-centos-7> (0m1.51s).
-----> Testing <default-centos-7>
-----> Creating <default-centos-7>...
       Initializing container default-centos-7-1476365945
^C

The host is Ubuntu Xenial.
Running without concurrency is fine. --parallel gave the same result as --concurrency.

Thanks

Incorrect hostname added to /etc/hosts

Hello,

A minor issue I identified in one of my roles:

[Mon Sep 12 18:09:03.527851 2016] [unique_id:alert] [pid 17346] (EAI 2)Name or service not known: AH01564: unable to find IPv4 address of "default-ubuntu-1604-1472173265"

$ cat /etc/hosts
[...]
#***** Setup by Kitchen-LxdCli driver *****#
127.0.1.1       default-ubuntu-1604

I suppose it's just a spot that was forgotten when the container naming changed :)

Thanks

misleading message - "#config[:public_key_path] cannot be blank"

When using an account with no existing ssh key, kitchen-lxd_cli returns:

Kitchen::Driver::LxdCli<default-ubuntu-1604>#config[:public_key_path] cannot be blank

The message misleads you into reviewing .kitchen.yml, while you only need to run ssh-keygen for the account calling kitchen.
Also, like vagrant, I think it would be better if the plugin used its own temporary ssh key (one per container) instead of the user's, to keep things separated and avoid mixing testing and production keys.
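The account-side fix is a one-liner; a sketch that writes to a temporary directory here (in practice you would let ssh-keygen use the default ~/.ssh/id_rsa path, which is what the driver looks for):

```shell
# Generate a passphrase-less RSA key pair. A temp dir is used so this
# example never touches a real ~/.ssh.
keydir="$(mktemp -d)"
ssh-keygen -q -t rsa -b 2048 -N "" -f "$keydir/id_rsa"
ls "$keydir"   # should list id_rsa and id_rsa.pub
```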

Centos support? -- Waiting for /root/.ssh to become available...

Hello,

I recently discovered your lxd driver, and it's fantastic for lightweight testing with kitchen and ansible.
I have started to leverage Travis (see the other issue), but CentOS is not starting correctly:

D      Found Ip Address 10.252.116.34
       Setting up public key /home/travis/.ssh/id_rsa.pub on default-centos-7
       Check /root/.ssh on default-centos-7
D      Waiting for /root/.ssh to become available...
D      run_local_command ran: lxc exec default-centos-7 -- ls /root/.ssh > /dev/null 2>&1
D      Command finished: pid 5753 exit 2
D      Waiting for /root/.ssh to become available...
D      run_local_command ran: lxc exec default-centos-7 -- ls /root/.ssh > /dev/null 2>&1
D      Command finished: pid 5766 exit 2

See details at:
https://travis-ci.org/juju4/ansible-adduser/jobs/148556427 (centos-7)
https://travis-ci.org/juju4/ansible-adduser/jobs/148556426 (centos-6)

Any hints to make it work?
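One workaround sketch, assuming the CentOS image simply ships without the directory the driver polls for (dry run; the container name is taken from the log above):

```shell
# Pre-create /root/.ssh in the container so the driver's wait loop
# can succeed. Printed, not executed, since lxc needs a live host.
CONTAINER="default-centos-7"
MKDIR_CMD="lxc exec $CONTAINER -- mkdir -p -m 700 /root/.ssh"
echo "$MKDIR_CMD"
```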

Thanks

Note: this is normally running master, so it's different from #2.

Could not load the 'lxd_cli' driver from the load path

For the last ~10 days, all my ansible role Travis testing with kitchen+lxd_cli has failed with:

-----> Starting Kitchen (v2.2.5)
>>>>>> ------Exception-------
>>>>>> Class: Kitchen::ClientError
>>>>>> Message: Could not load the 'lxd_cli' driver from the load path. Did you mean: dummy, exec, proxy ? Please ensure that your driver is installed as a gem or included in your Gemfile if using Bundler.
>>>>>> ----------------------
>>>>>> Please see .kitchen/logs/kitchen.log for more details
>>>>>> Also try running `kitchen diagnose --all` for configuration
D      ------Exception-------
D      Class: Kitchen::ClientError
D      Message: Could not load the 'lxd_cli' driver from the load path. Did you mean: dummy, exec, proxy ? Please ensure that your driver is installed as a gem or included in your Gemfile if using Bundler.
D      ----------------------
D      ------Backtrace-------

https://travis-ci.org/juju4/ansible-caldera/jobs/535235128#L1966
https://travis-ci.org/juju4/ansible-kolide/jobs/535655194#L1600
https://travis-ci.org/juju4/ansible-osquery/jobs/536087623#L1688

In most cases there was no change from the previously working state.
Travis includes kitchen diagnose, which does not give more information to guess the root cause.

gem install is fine:

$ gem install kitchen-lxd_cli
Fetching kitchen-lxd_cli-2.0.2.gem
Fetching net-ssh-gateway-1.3.0.gem
Fetching net-scp-1.2.1.gem
Fetching test-kitchen-1.24.0.gem
Fetching net-ssh-4.2.0.gem
Successfully installed net-ssh-4.2.0
Successfully installed net-ssh-gateway-1.3.0
Successfully installed net-scp-1.2.1
Successfully installed test-kitchen-1.24.0
Successfully installed kitchen-lxd_cli-2.0.2
5 gems installed

I'm not aware of any change on the Travis side either.
Working:
https://travis-ci.org/juju4/ansible-dnscrypt-proxy/jobs/529879631
Failing:
https://travis-ci.org/juju4/ansible-dnscrypt-proxy/jobs/535878926
Both use:
Description: Ubuntu 16.04.6 LTS
and ruby 2.6.
Nothing relevant in https://changelog.travis-ci.com/ or https://blog.travis-ci.com/ either.

Any ideas why?

Thanks

Leftover containers?

I've got kitchen-lxd_cli running on my Jenkins workers to run tests using kitchen test. The problem is that containers are randomly left sitting around, taking up disk space, crashing, driving apport crazy, etc.

example:
+--------------------------------+---------+-------------------+------+------------+-----------+
|              NAME              |  STATE  |       IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+--------------------------------+---------+-------------------+------+------------+-----------+
| default-ubuntu-1404-1487243184 | RUNNING | 10.0.3.144 (eth0) |      | PERSISTENT | 0         |
+--------------------------------+---------+-------------------+------+------------+-----------+
| default-ubuntu-1404-1487260892 | RUNNING | 10.0.3.89 (eth0)  |      | PERSISTENT | 0         |
+--------------------------------+---------+-------------------+------+------------+-----------+
| default-ubuntu-1404-1487261192 | RUNNING | 10.0.3.243 (eth0) |      | PERSISTENT | 0         |
+--------------------------------+---------+-------------------+------+------------+-----------+
| default-ubuntu-1404-1487265388 | RUNNING | 10.0.3.162 (eth0) |      | PERSISTENT | 0         |
+--------------------------------+---------+-------------------+------+------------+-----------+
| default-ubuntu-1404-1487270791 | RUNNING | 10.0.3.34 (eth0)  |      | PERSISTENT | 0         |
+--------------------------------+---------+-------------------+------+------------+-----------+
| default-ubuntu-1404-1487351543 | RUNNING | 10.0.3.196 (eth0) |      | PERSISTENT | 0         |
+--------------------------------+---------+-------------------+------+------------+-----------+
| default-ubuntu-1404-1487351849 | RUNNING | 10.0.3.160 (eth0) |      | PERSISTENT | 0         |
+--------------------------------+---------+-------------------+------+------------+-----------+

This is 1 day after going in and cleaning up old instances.

Is this a bug in the teardown step at the end of kitchen test? Does anyone know a fix?
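Until the teardown bug is found, a cron-style cleanup sketch (dry run; the names are from the listing above, and both the timestamp-suffix parsing and the 1-day threshold are assumptions about the naming scheme):

```shell
# Print delete commands for kitchen containers whose epoch-seconds suffix
# is more than a day old. Prints commands instead of running lxc.
now="$(date +%s)"
stale=""
for name in default-ubuntu-1404-1487243184 default-ubuntu-1404-1487351849; do
  created="${name##*-}"                    # trailing epoch suffix
  if [ $(( now - created )) -gt 86400 ]; then
    stale="$stale $name"
    echo "lxc delete --force $name"
  fi
done
```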

Thanks!

Concurrency possibly broken

Hi again,

I recently started testing the latest master of this combined with the latest master of test-kitchen (with some minor unrelated tweaks to both), and it seems concurrency has been broken by something. I'm trying to dig into where the problem originates, but if someone can confirm seeing the same issue, that'd be great. Basically, it appears that concurrency doesn't actually start all kitchen containers, but starts a single one multiple times:

$ kitchen destroy all
-----> Starting Kitchen (v1.13.2)
-----> Destroying <ldap-ubuntu-1404>...
       Finished destroying <ldap-ubuntu-1404> (0m0.01s).
-----> Destroying <ldap-centos-68>...
       Deleting container ldap-centos-68
       Finished destroying <ldap-centos-68> (0m1.23s).
-----> Kitchen is finished. (0m1.37s)
$ kitchen converge all --concurrency=2 -l debug
-----> Starting Kitchen (v1.13.2)
-----> Creating <ldap-ubuntu-1404>...
-----> Creating <ldap-centos-68>...
D      Container ldap-centos-68 doesn't exist
D      Publish Image Name is kitchen-ldap-ubuntu-1404
D      Image Name is lxc-images:ubuntu1404-schrodinger
D      Container ldap-centos-68 doesn't exist
D      Publish Image Name is kitchen-ldap-centos-68
D      Image Name is lxc-images:centos68-schrodinger
D      Image lxc-images:centos68-schrodinger exists
D      security.privileged=true is added to Config Args, b/c its needed for mount binding
D      Config Args:  -c security.privileged=true
       Initializing container ldap-centos-68
D      run_local_command ran: lxc init lxc-images:centos68-schrodinger ldap-centos-68   -c security.privileged=true
D      Image lxc-images:ubuntu1404-schrodinger exists
D      security.privileged=true is added to Config Args, b/c its needed for mount binding
D      Config Args:  -c security.privileged=true
       Initializing container ldap-centos-68
D      run_local_command ran: lxc init lxc-images:ubuntu1404-schrodinger ldap-centos-68   -c security.privileged=true
error: The container already exists
D      Command finished: pid 13355 exit 1
D      Container ldap-centos-68 isn't running
       Starting container ldap-centos-68
D      run_local_command ran: lxc start ldap-centos-68 
error: Error calling 'lxd forkstart ldap-centos-68 /var/lib/lxd/containers /var/log/lxd/ldap-centos-68/lxc.conf': err='exit status 1'
  lxc 20161021122900.297 ERROR lxc_start - start.c:start:1439 - No such file or directory - failed to exec /sbin/init
  lxc 20161021162900.297 ERROR lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
  lxc 20161021162900.297 ERROR lxc_start - start.c:__lxc_start:1354 - failed to spawn 'ldap-centos-68'

Try `lxc info --show-log ldap-centos-68` for more info
D      Command finished: pid 13384 exit 1
D      Source path for the logs doesn't exist, creating /tmp/kitchen-logs/bluebird-chef/0/ldap-ubuntu-1404
D      run_local_command ran: lxc config device add ldap-centos-68 logs disk source=/tmp/kitchen-logs/bluebird-chef/0/ldap-ubuntu-1404 path=/var/log/chef-reports
D      Command finished: pid 13465 exit 0
error: Container is not running.
                                error: Container is not running.

The one created container does get provisioned, but the overall kitchen run results in a failure. A concurrency of 1 (or not specifying it) works just fine, and both instances are provisioned with no errors if run sequentially. This also worked fine with test-kitchen 1.4.2, but I've had to upgrade my test environment for gem compatibility reasons, so that's likely where things broke.

kitchen speed

Just a remark about speed when running inside Travis: any idea what can be tuned?

As a side note, lxd on Travis is heavily network-bound, as we download images...

Network issue running create

Hi,

I'm running test-kitchen 1.4.2 with kitchen-lxd_cli 0.1.6 on Ubuntu 16.04 with LXC 2.0.1 and LXD 2.0.1. I also ran lxd init and set up bridged networking for both IPv4 and IPv6. On a create for an instance, I get the following:

$ kitchen create test-ubuntu-1404 -l debug
-----> Starting Kitchen (v1.4.2)
-----> Creating <test-ubuntu-1404>...
D      Container test-ubuntu-1404 doesn't exist
D      Publish Image Name is kitchen-test-ubuntu-1404
D      Image Name is ubuntu1404
D      Image ubuntu1404 exists
       Initializing container test-ubuntu-1404
D      run_local_command ran: lxc init ubuntu1404 test-ubuntu-1404  
D      Command finished: pid 16118 exit 0
D      Container test-ubuntu-1404 isn't running
       Starting container test-ubuntu-1404
D      run_local_command ran: lxc start test-ubuntu-1404 
D      Command finished: pid 16132 exit 0
D      Waiting for /etc/resolvconf/resolv.conf.d/base to become available...
D      run_local_command ran: lxc exec test-ubuntu-1404 -- ls /etc/resolvconf/resolv.conf.d/base > /dev/null 2>&1
D      Command finished: pid 16226 exit 0
D      Found /etc/resolvconf/resolv.conf.d/base
D      Setting up the following dns servers via /etc/resolvconf/resolv.conf.d/base:
D      nameserver 8.8.8.8 nameserver 8.8.4.4 
D      Waiting for /run/resolvconf/interface to become available...
D      run_local_command ran: lxc exec test-ubuntu-1404 -- ls /run/resolvconf/interface > /dev/null 2>&1
D      Command finished: pid 16400 exit 0
D      Found /run/resolvconf/interface
D      Setting up /etc/hosts
D      Waiting for /etc/hosts to become available...
D      run_local_command ran: lxc exec test-ubuntu-1404 -- ls /etc/hosts > /dev/null 2>&1
D      Command finished: pid 16670 exit 0
D      Found /etc/hosts
       Waiting for network to become ready
D      Still waiting for IP Address...
D      Still waiting for IP Address...
D      Still waiting for IP Address...
D      Still waiting for IP Address...
D      Still waiting for IP Address...
D      Still waiting for IP Address...
D      Still waiting for IP Address...
D      Still waiting for IP Address...
D      Still waiting for IP Address...
D      Still waiting for IP Address...

The image in question was created from the public ubuntu:14.04 image, with just my public key added to root's authorized_keys. The container itself is up and running at this point, and I can both ping it and ssh into it using my key just fine, so I'm confused about what kitchen is still expecting here.

Thanks!
