
karmab / kcli

482 stars, 16 watchers, 134 forks, 15.62 MB

Management tool for libvirt/aws/gcp/kubevirt/openstack/ovirt/vsphere/packet

Home Page: https://kcli.readthedocs.io/en/latest/

License: Apache License 2.0

Python 71.49% CSS 0.88% JavaScript 22.13% HTML 2.09% Shell 2.75% Makefile 0.03% Jinja 0.61% Dockerfile 0.01% templ 0.01%
libvirt gcp kubevirt ovirt aws openstack openshift kubernetes vsphere

kcli's People

Contributors

cherusk, e-minguez, efenex, hj-johannes-lee, iconeb, iranzo, javilinux, jmolmo, jtudelag, karmab, kshtsk, larsbingbong, larssb, leo8a, lhp-nemlig, likid0, liudalibj, manuvaldi, mikelolasagasti, mvazquezc, pablomh, pacevedom, palonsoro, rsevilla87, soukron, sshnaidm, stevenhorsman, toozej, vashirov, yuvalk


kcli's Issues

Disk size is not respected

Using a profile like this (with an LVM backend):

x:
  template: y
  disks:
    - size: 20

gives me disks like:

lvs
LV                                          VG   Attr       LSize  Pool Origin                                      Data%  Meta%  Move Log Cpy%Sync Convert
x.iso                                       hdd  -wi-ao----  4.00m                                                                                         
x_1.img                                     hdd  swi-aos---  2.01g      y.qcow2 2.57                                   
y.qcow2                                     hdd  owi-a-s---  2.00g                                    

Updating the profile to:

x:
  template: y
  disks:
    - size: 200

Results in the same size for the disk. Not sure if it's based on the template size (if so, why the extra 0.01g?) or if the issue is elsewhere.
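
A quick way to tell whether the disk was really created at the requested size or is simply thin-provisioned is to compare the volume's capacity and allocation through libvirt. A minimal sketch with libvirt-python, assuming a pool named hdd and a volume named x_1.img (adjust to your setup):

    import libvirt

    conn = libvirt.open('qemu:///system')
    pool = conn.storagePoolLookupByName('hdd')     # assumed pool name
    vol = pool.storageVolLookupByName('x_1.img')   # assumed volume name

    # info() returns [type, capacity, allocation] in bytes.
    _, capacity, allocation = vol.info()
    gib = float(1024 ** 3)
    print("capacity: %.2f GiB, allocation: %.2f GiB" % (capacity / gib, allocation / gib))
    conn.close()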

Create CHANGELOG.md

It's hard to follow the evolution of new features in kcli; creating a CHANGELOG.md file could help to improve this :)

kcli switch messes config

$ kcli bootstrap
Bootstrapping env
Environment bootstrapped!
$ cat config.yml 
default:
  client: local
  cloudinit: true
  enableroot: true
  insecure: true
  nested: true
  reservedns: false
  reservehost: false
  reserveip: false
  start: true
  tunnel: false
local:
  nets:
  - default
  pool: default
$ # Add a new host
$ vi config.yml 
$ cat config.yml 
default:
  client: local
  cloudinit: true
  enableroot: true
  insecure: true
  nested: true
  reservedns: false
  reservehost: false
  reserveip: false
  start: true
  tunnel: false
local:
  nets:
  - default
  pool: default
mad:
  nets:
  - host-bridge
  pool: default
$ kcli switch mad
Switching to client mad...
$ cat config.yml 
default:
 client: mad
  cloudinit: true
  enableroot: true
  insecure: true
  nested: true
  reservedns: false
  reservehost: false
  reserveip: false
  start: true
  tunnel: false
local:
  nets:
  - default
  pool: default
mad:
  nets:
  - host-bridge
  pool: default
$ kcli list
Couldn't parse yaml in .kcli/config.yml. Leaving...
mapping values are not allowed here
  in "/home/edu/.kcli/config.yml", line 3, column 12

The indentation is messed up.
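
A possible way to avoid corrupting the file (a sketch, not the actual kcli code) is to load the whole config with PyYAML, change the default client in the parsed dict, and dump it back, rather than editing lines in place:

    import os
    import yaml

    path = os.path.expanduser('~/.kcli/config.yml')
    with open(path) as f:
        config = yaml.safe_load(f)

    # Switch the default client, then rewrite the whole file with consistent indentation.
    config['default']['client'] = 'mad'
    with open(path, 'w') as f:
        yaml.safe_dump(config, f, default_flow_style=False)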

[ISSUE] Volumes option in containers is not working

When the container has no volumes:

kcli plan -f test2.yml
Using local hypervisor as no kcli.yml was found...
Deploying Containers...
Container centos deployed!
Traceback (most recent call last):
File "/usr/bin/kcli", line 9, in
load_entry_point('kcli==4.2', 'console_scripts', 'kcli')()
File "/usr/lib64/python2.7/site-packages/click/core.py", line 722, in call
return self.main(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/usr/lib64/python2.7/site-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/lib64/python2.7/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/lib64/python2.7/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/click/decorators.py", line 64, in new_func
return ctx.invoke(f, obj, *args[1:], **kwargs)
File "/usr/lib64/python2.7/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/kvirt/cli.py", line 838, in plan
k.create_container(name=container, image=image, nets=nets, cmd=cmd, ports=ports, volumes=volumes, label=label)
File "/usr/lib/python2.7/site-packages/kvirt/init.py", line 1546, in create_container
for i, volume in enumerate(volumes):
TypeError: 'NoneType' object is not iterable

jtudelag @ jtudelag-t460s in ~/RedHat/POCs/fluentd [9:09:10] C:1
$ cat test2.yml
centos:
  type: container
  image: centos
  cmd: /bin/bash
  ports:
    - 5500

jtudelag @ jtudelag-t460s in ~/RedHat/POCs/fluentd [9:12:49]
$ docker version
Client:
Version: 1.12.6
API version: 1.24
Package version: docker-common-1.12.6-4.gitf499e8b.fc25.x86_64
Go version: go1.7.4
Git commit: f499e8b/1.12.6
Built: Fri Jan 13 11:03:22 2017
OS/Arch: linux/amd64

Server:
Version: 1.12.6
API version: 1.24
Package version: docker-common-1.12.6-4.gitf499e8b.fc25.x86_64
Go version: go1.7.4
Git commit: f499e8b/1.12.6
Built: Fri Jan 13 11:03:22 2017
OS/Arch: linux/amd64

jtudelag @ jtudelag-t460s in ~/RedHat/POCs/fluentd [9:13:02] C:2
$ kcli --version
kcli, version 4.2

When the container has volumes:

sudo kcli plan -f test2.yml
[sudo] password for jtudelag:
Using local hypervisor as no kcli.yml was found...
Deploying Containers...
Container centos deployed!
Traceback (most recent call last):
File "/bin/kcli", line 9, in
load_entry_point('kcli==4.2', 'console_scripts', 'kcli')()
File "/usr/lib64/python2.7/site-packages/click/core.py", line 722, in call
return self.main(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/usr/lib64/python2.7/site-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/lib64/python2.7/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/lib64/python2.7/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/click/decorators.py", line 64, in new_func
return ctx.invoke(f, obj, *args[1:], **kwargs)
File "/usr/lib64/python2.7/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/kvirt/cli.py", line 838, in plan
k.create_container(name=container, image=image, nets=nets, cmd=cmd, ports=ports, volumes=volumes, label=label)
File "/usr/lib/python2.7/site-packages/kvirt/init.py", line 1575, in create_container
d.containers.run(image, name=name, command=cmd, detach=True, ports=ports, volumes=volumes, stdin_open=True, tty=True, labels=labels)
File "/usr/lib/python2.7/site-packages/docker/models/containers.py", line 653, in run
detach=detach, **kwargs)
File "/usr/lib/python2.7/site-packages/docker/models/containers.py", line 696, in create
create_kwargs = _create_container_args(kwargs)
File "/usr/lib/python2.7/site-packages/docker/models/containers.py", line 882, in _create_container_args
create_kwargs['volumes'] = [v.split(':')[0] for v in binds]
AttributeError: 'dict' object has no attribute 'split'

jtudelag @ jtudelag-t460s in ~/RedHat/POCs/fluentd [9:18:01] C:1
$ cat test2.yml
centos:
  type: container
  image: centos
  cmd: /bin/bash
  ports:
    - 5500
  volumes:
    - /tmp/test
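
A sketch of a possible fix (not the actual kcli patch): normalize the volumes parameter in create_container before it reaches the docker SDK, treating None as an empty list and turning plain paths into the "source:destination" bind strings docker-py expects. The handling of dict entries below is an assumption about kcli's volume schema:

    def normalize_volumes(volumes):
        """Return a list of docker-py bind strings from kcli's volumes parameter."""
        if volumes is None:
            return []
        binds = []
        for volume in volumes:
            if isinstance(volume, str):
                # A bare path is mounted at the same location inside the container.
                binds.append("%s:%s" % (volume, volume))
            elif isinstance(volume, dict):
                # Hypothetical keys; adjust to whatever schema kcli's plans actually use.
                origin = volume.get('origin')
                destination = volume.get('destination', origin)
                binds.append("%s:%s" % (origin, destination))
        return binds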

Create a folder for templates

Create a folder for templates, e.g. ~/.kcli/templates.d/, to keep a set of templates there.
kcli could also download default templates there ...

add parameters logic in kweb

  • add a link to show the available parameters of a plan
  • when ordering a plan, display a page (or a pop-up window?) with the current parameters and an option to override them
  • make sure cancelling a product order is properly handled

Cannot delete plans with snapshots

kcli fails when trying to delete a plan with a snapshot

$ kcli delete rhel73
Using local hypervisor as no kcli.yml was found...
Do you want to continue? [y/N]: y
Deleted vm rhel73...
Traceback (most recent call last):
File "/usr/bin/kcli", line 9, in
load_entry_point('kcli==5.14', 'console_scripts', 'kcli')()
File "/usr/lib64/python2.7/site-packages/click/core.py", line 722, in call
return self.main(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/usr/lib64/python2.7/site-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/lib64/python2.7/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/lib64/python2.7/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/click/decorators.py", line 64, in new_func
return ctx.invoke(f, obj, *args[1😏, **kwargs)
File "/usr/lib64/python2.7/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/kvirt/cli.py", line 188, in delete
k.delete(name)
File "/usr/lib/python2.7/site-packages/kvirt/kvm/init.py", line 710, in delete
vm.undefine()
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2831, in undefine
if ret == -1: raise libvirtError ('virDomainUndefine() failed', dom=self)
libvirt.libvirtError: Requested operation is not valid: cannot delete inactive domain with 1 snapshots
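
A hedged sketch of how the delete path could handle this with libvirt-python: remove the domain's snapshots first, or discard only their metadata through undefineFlags, before undefining the domain. This is not the actual kcli code, just an illustration:

    import libvirt

    conn = libvirt.open('qemu:///system')
    vm = conn.lookupByName('rhel73')

    # Drop all snapshots first so undefine doesn't refuse to remove the domain.
    for snapshot in vm.listAllSnapshots():
        snapshot.delete()

    # Alternatively, keep the snapshot disk data and only discard the metadata:
    # vm.undefineFlags(libvirt.VIR_DOMAIN_UNDEFINE_SNAPSHOTS_METADATA)
    vm.undefine()
    conn.close()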

Total control of cloudinit user-data and meta-data

I'd like to be able to have complete control over the user-data and meta-data - is there a way in kcli to pass the entire contents of each of these files which then gets stored in the cidata iso?

Incidentally, if you are interested, there is pycdlib, a pure-Python library for building the cidata ISO. The advantage is that you don't need to write the files to disk before building the image, which is what genisoimage requires.

You can see the Python source for doing so here clalancette/pycdlib#7
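
For reference, a rough sketch of what building the cidata ISO in memory with pycdlib could look like (not kcli's current code; file names and layout are assumptions based on the cloud-init NoCloud convention):

    import io
    import pycdlib

    user_data = b"#cloud-config\nhostname: example\n"
    meta_data = b"instance-id: example\n"

    iso = pycdlib.PyCdlib()
    # NoCloud expects a volume label of "cidata" with user-data and meta-data at the root.
    iso.new(interchange_level=3, joliet=3, rock_ridge='1.09', vol_ident='cidata')
    iso.add_fp(io.BytesIO(user_data), len(user_data), '/USERDATA.;1',
               rr_name='user-data', joliet_path='/user-data')
    iso.add_fp(io.BytesIO(meta_data), len(meta_data), '/METADATA.;1',
               rr_name='meta-data', joliet_path='/meta-data')
    iso.write('cloudinit.iso')
    iso.close()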

kcli switch shouldn't try opening a connection

You can't switch to another host when the currently enabled one isn't available, since the constructor of the Kconfig class sets up the connection at that moment (though some methods don't need it).
A possible fix would be to move the corresponding methods to a base class that the config class inherits from.

Virtualbox provider broken

Using the latest VirtualBox SDK segfaults.
This provider should be rewritten using web services (so it can also run within docker).

RFE: allocate static IPs via DHCP

It would be nice if we could hand out IPs to VMs through the dnsmasq configuration that libvirt generates. This should be possible by using the "host" element under the <dhcp> block, i.e.:

  <ip address='10.0.1.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='10.0.1.2' end='10.0.1.254'/>
      <host mac='AA:BB:CC:DD:EE:FF' name='hostname' ip='10.0.1.2'/>
    </dhcp>
  </ip>

I tried doing this without specifying the MAC address (which should be possible, cf. the IPv6 capabilities), but that doesn't actually hand out the IPs.

The benefit of this approach is that hosts get the correct IP from the moment they boot (they all have DHCP enabled) and don't rely on cloud-init mechanisms to configure/bring up the IP correctly (which doesn't always work).
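
For what it's worth, libvirt can also accept such host entries at runtime through virNetworkUpdate, so kcli would not necessarily have to regenerate the whole network XML. A sketch with libvirt-python (MAC, name and IP are placeholders):

    import libvirt

    conn = libvirt.open('qemu:///system')
    net = conn.networkLookupByName('default')

    host_xml = "<host mac='AA:BB:CC:DD:EE:FF' name='hostname' ip='10.0.1.2'/>"
    net.update(libvirt.VIR_NETWORK_UPDATE_COMMAND_ADD_LAST,
               libvirt.VIR_NETWORK_SECTION_IP_DHCP_HOST,
               -1,  # parent index: -1 means "don't care"
               host_xml,
               libvirt.VIR_NETWORK_UPDATE_AFFECT_LIVE | libvirt.VIR_NETWORK_UPDATE_AFFECT_CONFIG)
    conn.close()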

diskthin set to false doesn't work

diskthin: false breaks because the generated XML is incorrect.
We should instead copy from the backing disk and then create the proper XML section in the VM definition.

kcli plan fails to create plan: No such file or directory

kcli plan -f ocp-lab-36.yml
Traceback (most recent call last):
File "/usr/bin/kcli", line 11, in
load_entry_point('kcli==9.3', 'console_scripts', 'kcli')()
File "/usr/lib/python2.7/site-packages/kvirt/cli.py", line 1145, in cli
args.func(args)
File "/usr/lib/python2.7/site-packages/kvirt/cli.py", line 628, in plan
config.plan(plan, ansible=ansible, get=get, path=path, autostart=autostart, container=container, noautostart=noautostart, inputfile=inputfile, start=start, stop=stop, delete=delete, delay=delay, topologyfile=topologyfile, scale=scale)
File "/usr/lib/python2.7/site-packages/kvirt/config.py", line 861, in plan
common.lastvm(name)
File "/usr/lib/python2.7/site-packages/kvirt/common/init.py", line 260, in lastvm
for line in fileinput.input("%s/vm" % configdir, inplace=True):
File "/usr/lib64/python2.7/fileinput.py", line 237, in next
line = self._readline()
File "/usr/lib64/python2.7/fileinput.py", line 316, in _readline
os.rename(self._filename, self._backupfilename)
OSError: [Errno 2] No such file or directory

kcli looks for the file ~/.kcli/vm, but it doesn't exist.

Meanwhile, the workaround is "touch ~/.kcli/vm".
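
A sketch of a possible guard in common.lastvm (paths follow the traceback; the passthrough loop just stands in for whatever the real function does with the file): make sure ~/.kcli/vm exists before handing it to fileinput:

    import fileinput
    import os

    configdir = os.path.expanduser('~/.kcli')
    vmfile = "%s/vm" % configdir

    # fileinput with inplace=True renames the file first, so it must exist.
    if not os.path.exists(configdir):
        os.makedirs(configdir)
    if not os.path.exists(vmfile):
        open(vmfile, 'a').close()

    for line in fileinput.input(vmfile, inplace=True):
        print(line, end='')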

Remark

I noticed that your code doesn't work very well. You're a bit rubbish.

Use compressed templates

CentOS provides compressed qcow2 images that are about a third of the size of the uncompressed ones:

CentOS-7-x86_64-GenericCloud.qcow2 1.3G
CentOS-7-x86_64-GenericCloud.qcow2.xz 426M
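
Handling the .xz variant wouldn't require shelling out to unxz; a sketch using Python's lzma module to decompress while downloading (the URL and target path are just examples):

    import lzma
    import shutil
    from urllib.request import urlopen

    url = 'https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2.xz'
    target = '/var/lib/libvirt/images/CentOS-7-x86_64-GenericCloud.qcow2'

    # Stream the download through the xz decompressor straight into the pool path.
    with urlopen(url) as response, lzma.open(response) as compressed, open(target, 'wb') as out:
        shutil.copyfileobj(compressed, out)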

Nested flag not properly working on AMD processors.

kcli tries setting the "vmx" flag on guest processors when the nested flag is set (which, by default, it is), but "vmx" is an Intel extension. The equivalent AMD flag is "svm".

I was trying to fix this and send a pull request, but I encountered a slight problem: we would need to check the current CPU features. I examined two options:

  1. Using /proc/cpuinfo. This is obviously Linux-specific. I wasn't quite sure if kcli is supposed to be cross-platform or not. Is it?

  2. Using the py-cpuinfo package which is cross-platform, but it is slow. It takes about a second to get current CPU features.

There might be other options obviously.
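
A sketch of option 1, reading /proc/cpuinfo to decide between vmx and svm (Linux-only, as noted above; this is just an illustration, not kcli's code):

    def nested_flag():
        """Return the CPU flag to expose for nested virtualization, or None (Linux only)."""
        with open('/proc/cpuinfo') as f:
            for line in f:
                if line.startswith('flags'):
                    flags = line.split(':', 1)[1].split()
                    if 'vmx' in flags:
                        return 'vmx'  # Intel
                    if 'svm' in flags:
                        return 'svm'  # AMD
        return None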

How to use bridged network?

Hello,

My virtual machines can access the network, but they cannot be accessed from other machines. My understanding is that it is necessary to create the virtual machines as part of a bridged network. I can't see anything in the documentation or examples of how to do this.

When I create a vm it creates it on the default network which is 192.168.122.0/24 nat routed.

What do I need to do to create vms on the bridged network?

thanks!

(venv2.7) ubuntu@kvmhost:/opt/bootrinoserver$ kcli list -n
Using local hypervisor as no client was specified...
Listing Networks...
+---------+---------+------------------+------+---------+------+
| Network |   Type  |       Cidr       | Dhcp |  Domain | Mode |
+---------+---------+------------------+------+---------+------+
| default |  routed | 192.168.122.0/24 | True | default | nat  |
| enp3s0  | bridged |  192.168.1.0/24  | N/A  |   N/A   | N/A  |
+---------+---------+------------------+------+---------+------+
(venv2.7) ubuntu@kvmhost:/opt/bootrinoserver$

my network config looks like this:

(venv2.7) ubuntu@kvmhost:/opt/foo$ ifconfig -a
br0       Link encap:Ethernet  HWaddr d2:d6:16:58:19:e0
          inet addr:192.168.1.139  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::d0d6:16ff:fe58:19e0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1793715 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1069639 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1045650082 (1.0 GB)  TX bytes:160601041 (160.6 MB)

enp3s0    Link encap:Ethernet  HWaddr fc:aa:14:a9:95:bf
          inet addr:192.168.1.140  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1893677 errors:0 dropped:698 overruns:0 frame:0
          TX packets:1096978 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1126484261 (1.1 GB)  TX bytes:162790016 (162.7 MB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:7284 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7284 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:1619559 (1.6 MB)  TX bytes:1619559 (1.6 MB)

virbr0    Link encap:Ethernet  HWaddr 52:54:00:e3:e6:93
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:187 errors:0 dropped:0 overruns:0 frame:0
          TX packets:182 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:12891 (12.8 KB)  TX bytes:18134 (18.1 KB)

virbr0-nic Link encap:Ethernet  HWaddr 52:54:00:e3:e6:93
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vnet0     Link encap:Ethernet  HWaddr fe:54:00:06:ec:c1
          inet6 addr: fe80::fc54:ff:fe06:ecc1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:69 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1564 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5761 (5.7 KB)  TX bytes:84936 (84.9 KB)

wlp2s0    Link encap:Ethernet  HWaddr d0:7e:35:69:cf:b2
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

(venv2.7) ubuntu@kvmhost:/opt/foo$

Ansible dynamic inventory klist.py is broken

The message "Using local hypervisor as no kcli.yml was found..." should not be displayed, only the JSON.
Otherwise, it does not work, as Ansible doesn't know how to parse it.

./klist.py --list
Using local hypervisor as no kcli.yml was found...
{"kvirt": {"hosts": ["ansible-graph", "minishift"], "vars": {"profile": "", "plan": "kvirt"}}, "_meta": {"hostvars": {"ocp-infra": {"status": "up", "ansible_host": "192.168.122.15", "ansible_user": "cloud-user"}, "ocp-node": {"status": "up", "ansible_host": "192.168.122.224", "ansible_user": "cloud-user"}, "minishift": {"status": "down"}, "ansible-graph": {"status": "up", "ansible_host": "192.168.122.43"}, "ocp-master": {"status": "up", "ansible_host": "192.168.122.40", "ansible_user": "cloud-user"}}}, "ocp-lab": {"hosts": ["ocp-master", "ocp-infra", "ocp-node"], "vars": {"profile": "rhel73_medium", "plan": "ocp-lab"}}}

session or system

investigate whether qemu:///session is a better default than qemu:///system (which requires root)

Networking limitations

I was changing my networking a bit to have a pfSense VM handle the LAN networks and noticed that this breaks cloud-init deployments. It seems that even if I configure a static IP and gateway as well as the interface name, the VM never comes online, but if I use the default network and configure it manually, it works fine. Additionally, how can I manage networks with kcli that aren't NAT networks but rather macvtap or isolated ones?

Include support for google-rebobinator

When working with slides or documents, by default the URI points to the last page visited.

google-rebobinator allows a document to start back at the first page.

Speaking with the developer over Telegram, he suggested opening an issue to track this request so kcli can integrate with google-rebobinator.

bootstrap crashes

$ kcli bootstrap
Using local hypervisor as no kcli.yml was found...
Bootstrapping env
We will configure kcli together !
Enter your default client name[local]: ex1
Enter your default pool[default]:
Enter your client first disk size[10]:
Enter your client first network[default]:
Use cloudinit[True]:
Use thin disks[True]:
Enter your client hostname/ip[localhost]:
Enter your client url:
Enter your client port:
Enter your client user[root]:
Enter your client pool[default]:
Create pool if not there[Y]:
Enter yourpool path[/var/lib/libvirt/images]:
Enter your client first disk size[10]:
Enter your client first network[default]:
Create net if not there[Y]:
Enter cidr [192.168.122.0/24]:
Create cinet network for uci demos if not there[N]
Use cloudinit for this client[True]:
Use thin disks for this client[True]:
Traceback (most recent call last):
File "/home/duck/bin/kcli", line 11, in
sys.exit(cli())
File "/home/duck/.local/lib/python2.7/site-packages/click/core.py", line 716, in call
return self.main(_args, *_kwargs)
File "/home/duck/.local/lib/python2.7/site-packages/click/core.py", line 696, in main
rv = self.invoke(ctx)
File "/home/duck/.local/lib/python2.7/site-packages/click/core.py", line 1060, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/duck/.local/lib/python2.7/site-packages/click/core.py", line 889, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/duck/.local/lib/python2.7/site-packages/click/core.py", line 534, in invoke
return callback(*args, **kwargs)
File "/home/duck/.local/lib/python2.7/site-packages/kvirt/cli.py", line 714, in bootstrap
k.bootstrap(pool=pool, poolpath=poolpath, pooltype=pooltype, nets=nets)
UnboundLocalError: local variable 'pooltype' referenced before assignment
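
The traceback suggests pooltype is only assigned on some branches of the interactive prompts. A sketch of the general fix, with hypothetical prompt logic standing in for the real bootstrap code: bind pooltype to a default before any branching.

    def ask(question, default):
        """Tiny stand-in for the bootstrap prompts."""
        answer = input("%s[%s]: " % (question, default)).strip()
        return answer or default

    # Give pooltype a default up front so it is always bound,
    # whichever prompt branch ends up running.
    pooltype = 'dir'
    poolpath = ask('Enter your pool path', '/var/lib/libvirt/images')
    if poolpath.startswith('/dev/'):
        # Treat a device path as an LVM volume group (assumption for this sketch).
        pooltype = 'logical'
    print("bootstrapping with pooltype=%s and poolpath=%s" % (pooltype, poolpath))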

fix too long lines

Try to adhere to PEP 8 E501, or at least enforce a maximum number of characters per line (say 120).

Kickstart

Is there a way to point a guest to a kickstart file using kcli? There is currently a bug in RHEL 7.4 with setting the network gateway.

[RFE] Plan resources should be treated as a whole, and not individually

As a plan is a set of resources, all the operations regarding plans (kcli plan ...) should be applied to all the resources of the plan. For individual actions, you can use kcli normally.

So for example, "kcli plan --list" should list vms and containers, and the same goes for the rest of the operations.

On the other hand, I would deprecate the use of the container flag (-c) for plans, as it would not make any sense if this RFE is implemented.

Backingvolume is incorrectly determined when template is available in more than 1 pool

When a template is available in more than 1 pool, there is no guarantee the correct one is selected.

Example:

LVM pool "hdd", LVM pool "ssd"
Template "debian" is uploaded to both pools (in order to be able to spawn instances from either)

When creating a new VM, the following line:

backing = backingvolume.path()

Always returns the "last" pool where the template is found. I believe the issue originates here:

    for p in conn.listStoragePools():
        poo = conn.storagePoolLookupByName(p)
        for vol in poo.listAllVolumes():
            print("looking up volume %s in pool %s: %s" % (vol.name(), p, vol.path()))
            volumes[vol.name()] = {'pool': poo, 'object': vol}

Where volumes are distinguished by name only, and not pool (in the volumes dict). This means that a volume that exists in two pools with the same name is only available in this dict once, with whichever pool happens to have been processed last.

Workaround is to use either different names for templates in different pools (i.e. to add the pool-name into the template-name), or just make sure that a template is only available in 1 pool.

Ideally, kcli would use the pool path in the volumes dict as part of the key, i.e.:

            volumes["%s/%s" % (pool.name(),vol.name())] = {'pool': poo, 'object': vol}

This is not a minor change however, as this logic is used in several locations (and should be changed in all of them). Additionally, other code relies on the key-value to access the template information, and needs to be updated accordingly. It may be advisable to use a UUID rather than a string (i.e. the ones in /dev/disk/by-uuid/ ?) for the key value.

kcli -C all list outputs traceback

Using kcli version 7.20, kcli list output is displayed ok, but it outputs a traceback at the end of the command:

~> kcli -C all list
+-----------------------+---------------+--------+-----------------+----------------------------------------------+------------------+---------------+--------+
| Name | Host | Status | Ips | Source | Description/Plan | Profile | Report |
+-----------------------+---------------+--------+-----------------+----------------------------------------------+-----------
...............
...................
| liberty | openstackbox2 | up | | rhel-guest-image-7.2-20160302.0.x86_64.qcow2 | liberty | kvirt | |
+-----------------------+---------------+--------+-----------------+----------------------------------------------+------------------+---------------+--------+

Traceback (most recent call last):
File "/usr/bin/kcli", line 9, in
load_entry_point('kcli==7.20', 'console_scripts', 'kcli')()
File "/usr/lib/python2.7/site-packages/kvirt/cli.py", line 890, in cli
config.k.close()
AttributeError: Kconfig instance has no attribute 'k'
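
A sketch of a defensive fix at the line the traceback points to: only call close() when the Kconfig instance actually has a connection attached (with -C all there isn't a single one). The stripped-down class below is just a stand-in for illustration:

    class Kconfig(object):
        """Stripped-down stand-in: with -C all no single connection is attached."""
        pass

    config = Kconfig()

    # Guard the cleanup so the command doesn't traceback when no client was opened.
    k = getattr(config, 'k', None)
    if k is not None:
        k.close()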

RFE: be able to redeploy plans when a single host is changed

Currently, redeploying an existing plan causes exceptions on already existing volumes/machines. As I'm currently redeploying frequently, it's faster to do it partially rather than redeploy the full stack.

I patched this by catching some libvirtErrors in the create() function for storagepool.createXML() and conn.defineXML(), and returning failure when they occur (which still lets the plan go through); if you want I can provide the diff/PR, but I imagine there is a cleaner solution.
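
For reference, the kind of patch described could look roughly like this sketch (not the submitted diff): catch libvirtError around the creation calls and report a non-fatal failure so the rest of the plan keeps going.

    import libvirt

    def create_volume_if_missing(pool, volxml):
        """Create a storage volume, tolerating the case where it already exists."""
        try:
            return pool.createXML(volxml, 0)
        except libvirt.libvirtError as e:
            # An already-existing volume shouldn't abort the whole plan on redeploy;
            # report it and keep going.
            print("skipping existing volume: %s" % e)
            return None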

Server is created even if creation fails (not enough available memory, for example)

Creating a server from a plan with more memory than available:

$ cat rhel73.yml
rhel73:
  template: rhel-guest-image-7.3-35.x86_64.qcow2
  memory: 10241024
  numcpus: 2
  disks:
    - size: 15
  nets:
    - default
  pool: default

$ kcli plan -f rhel73.yml rhel73
Deploying Vms...
Traceback (most recent call last):
File "/usr/bin/kcli", line 11, in
load_entry_point('kcli==10.4', 'console_scripts', 'kcli')()
File "/usr/lib/python2.7/site-packages/kvirt/cli.py", line 1205, in cli
args.func(args)
File "/usr/lib/python2.7/site-packages/kvirt/cli.py", line 645, in plan
config.plan(plan, ansible=ansible, get=get, path=path, autostart=autostart, container=container, noautostart=noautostart, inputfile=inputfile, start=start, stop=stop, delete=delete, delay=delay, topologyfile=topologyfile, scale=scale, overrides=overrides)
File "/usr/lib/python2.7/site-packages/kvirt/config.py", line 1183, in plan
result = z.create(name=name, plan=plan, profile=profilename, cpumodel=cpumodel, cpuflags=cpuflags, numcpus=int(numcpus), memory=int(memory), guestid=guestid, pool=pool, template=template, disks=disks, disksize=disksize, diskthin=diskthin, diskinterface=diskinterface, nets=nets, iso=iso, vnc=bool(vnc), cloudinit=bool(cloudinit), reserveip=bool(reserveip), reservedns=bool(reservedns), reservehost=bool(reservehost), start=bool(start), keys=keys, cmds=cmds, ips=ips, netmasks=netmasks, gateway=gateway, dns=dns, domain=domain, nested=nested, tunnel=tunnel, files=files, enableroot=enableroot, overrides=overrides)
File "/usr/lib/python2.7/site-packages/kvirt/kvm/init.py", line 414, in create
vm.create()
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1062, in create
if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirt.libvirtError: internal error: process exited while connecting to monitor: 2018-01-05T10:05:29.192112Z qemu-system-x86_64: -chardev pty,id=charserial0: char device redirected to /dev/pts/2 (label charserial0)
2018-01-05T10:05:29.213137Z qemu-system-x86_64: cannot set up guest memory 'pc.ram': Cannot allocate memory

$ kcli list
+------------+--------+-----+--------------------------------------+--------+---------+--------+
| Name | Status | Ips | Source | Plan | Profile | Report |
+------------+--------+-----+--------------------------------------+--------+---------+--------+
| rhel73 | down | | rhel-guest-image-7.3-35.x86_64.qcow2 | rhel73 | kvirt | |
+------------+--------+-----+--------------------------------------+--------+---------+--------+
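
A sketch of how the create path could clean up after itself when vm.create() fails (names follow the traceback; this is not the actual kcli code): if starting the domain raises, undefine the freshly defined domain instead of leaving it behind in the down state.

    import libvirt

    def start_or_cleanup(conn, name, vmxml):
        """Define and start a domain, undefining it again if startup fails."""
        vm = conn.defineXML(vmxml)
        try:
            vm.create()
        except libvirt.libvirtError as e:
            # Don't leave a half-created, stopped domain behind (e.g. when memory
            # allocation fails); remove the definition and report the error.
            vm.undefine()
            raise RuntimeError("creation of %s failed: %s" % (name, e))
        return vm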

[RFE] Allow deleting plans with snapshots

If trying to delete a plan with a snapshot, kcli fails.

Add something like --force to the delete command to allow deletion of plans with snapshots:

kcli delete rhel72 --force

[BUG] kcli fails to install on Fedora 25; needs the redhat-rpm-config dependency.

sudo pip install kcli
Collecting kcli
Using cached kcli-4.2.tar.gz
Collecting libvirt-python>=2.2.0 (from kcli)
Using cached libvirt-python-2.5.0.tar.gz
Collecting docker (from kcli)
Using cached docker-2.0.2-py2.py3-none-any.whl
Collecting Click (from kcli)
Using cached click-6.7-py2.py3-none-any.whl
Collecting iptools (from kcli)
Using cached iptools-0.6.1-py2.py3-none-any.whl
Collecting netaddr (from kcli)
Using cached netaddr-0.7.19-py2.py3-none-any.whl
Requirement already satisfied: PyYAML in /usr/lib64/python2.7/site-packages (from kcli)
Collecting prettytable (from kcli)
Using cached prettytable-0.7.2.zip
Collecting websocket-client>=0.32.0 (from docker->kcli)
Using cached websocket_client-0.40.0.tar.gz
Requirement already satisfied: requests!=2.11.0,!=2.12.2,>=2.5.2 in /usr/lib/python2.7/site-packages (from docker->kcli)
Collecting backports.ssl-match-hostname>=3.5; python_version < "3.5" (from docker->kcli)
Using cached backports.ssl_match_hostname-3.5.0.1.tar.gz
Requirement already satisfied: ipaddress>=1.0.16; python_version < "3.3" in /usr/lib/python2.7/site-packages (from docker->kcli)
Requirement already satisfied: six>=1.4.0 in /usr/lib/python2.7/site-packages (from docker->kcli)
Collecting docker-pycreds>=0.2.1 (from docker->kcli)
Using cached docker_pycreds-0.2.1-py2.py3-none-any.whl
Requirement already satisfied: urllib3==1.15.1 in /usr/lib/python2.7/site-packages (from requests!=2.11.0,!=2.12.2,>=2.5.2->docker->kcli)
Installing collected packages: libvirt-python, backports.ssl-match-hostname, websocket-client, docker-pycreds, docker, Click, iptools, netaddr, prettytable, kcli
Running setup.py install for libvirt-python ... error
Complete output from command /usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-II5Pqu/libvirt-python/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-C6qnTb-record/install-record.txt --single-version-externally-managed --compile:
running install
running build
/bin/pkg-config --print-errors --atleast-version=0.9.11 libvirt
/usr/bin/python generator.py libvirt /usr/share/libvirt/api/libvirt-api.xml
Found 415 functions in /usr/share/libvirt/api/libvirt-api.xml
Found 0 functions in libvirt-override-api.xml
Generated 343 wrapper functions
/usr/bin/python generator.py libvirt-qemu /usr/share/libvirt/api/libvirt-qemu-api.xml
Found 5 functions in /usr/share/libvirt/api/libvirt-qemu-api.xml
Found 0 functions in libvirt-qemu-override-api.xml
Generated 3 wrapper functions
/bin/pkg-config --atleast-version=1.0.2 libvirt
/usr/bin/python generator.py libvirt-lxc /usr/share/libvirt/api/libvirt-lxc-api.xml
Found 4 functions in /usr/share/libvirt/api/libvirt-lxc-api.xml
Found 0 functions in libvirt-lxc-override-api.xml
Generated 2 wrapper functions
running build_py
creating build/lib.linux-x86_64-2.7
copying build/libvirt.py -> build/lib.linux-x86_64-2.7
copying build/libvirt_qemu.py -> build/lib.linux-x86_64-2.7
copying build/libvirt_lxc.py -> build/lib.linux-x86_64-2.7
running build_ext
building 'libvirtmod' extension
creating build/temp.linux-x86_64-2.7
creating build/temp.linux-x86_64-2.7/build
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I. -I/usr/include/python2.7 -c libvirt-override.c -o build/temp.linux-x86_64-2.7/libvirt-override.o
gcc: error: /usr/lib/rpm/redhat/redhat-hardened-cc1: No such file or directory
error: command 'gcc' failed with exit status 1

----------------------------------------

Command "/usr/bin/python -u -c "import setuptools, tokenize;file='/tmp/pip-build-II5Pqu/libvirt-python/setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record /tmp/pip-C6qnTb-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-II5Pqu/libvirt-python/

$ dnf provides /usr/lib/rpm/redhat/redhat-hardened-cc1
Last metadata expiration check: 1:20:16 ago on Sun Jan 22 20:34:20 2017.
redhat-rpm-config-45-1.fc25.noarch : Red Hat specific rpm configuration files
Repo : updates

redhat-rpm-config-44-1.fc25.noarch : Red Hat specific rpm configuration files
Repo : fedora
