
Virtlet

Virtlet is a Kubernetes runtime server that lets you run VM workloads based on QCOW2 images.

You can run Virtlet by following the instructions in either the Setting up the environment or Deploying Virtlet as a DaemonSet on kubeadm-dind-cluster documents. There is also a separate document describing how to install Virtlet on real clusters.

See here for a description of the Virtlet architecture.

Description & Documentation

See here for user-facing Virtlet description and documentation.

Community

You can join the #virtlet channel on Kubernetes Slack (register at slack.k8s.io if you're not in the k8s group already). Both users and developers are welcome!

Getting started with Virtlet

To try out Virtlet, follow the instructions in the Setting up the environment and Try out examples documents.

Virtlet introduction video

You can watch and listen to the Virtlet demo video that was recorded at a Kubernetes Community Meeting here.

Command line interface

Virtlet comes with a helper tool, virtletctl, which helps manage VM pods. Binaries are available for Linux and macOS in the Releases section. You can also install virtletctl as a kubectl plugin:

virtletctl install

After that you can use kubectl plugin virt instead of virtletctl (the plugin subcommand will no longer be necessary once kubectl plugins become stable):

kubectl plugin virt ssh cirros@cirros-vm -- -i examples/vmkey

Virtlet usage demo

You can watch a sample usage session here.

You can also give Virtlet a quick try using our demo script (requires Docker 1.12+):

wget https://raw.githubusercontent.com/Mirantis/virtlet/master/deploy/demo.sh
chmod +x demo.sh
# './demo.sh --help' displays the description
./demo.sh

The demo will start a test cluster, deploy Virtlet on it, and then boot a CirrOS VM there. You can access a sample nginx server via curl http://nginx.default.svc.cluster.local from inside the VM. To disconnect from the VM, press Ctrl-D. After the VM has booted, you can also use the virtletctl tool to connect to its SSH server:

virtletctl ssh cirros@cirros-vm -- -i examples/vmkey [command...]

By default, the CNI bridge plugin is used for cluster networking. It's also possible to override this with the calico, flannel or weave plugin, e.g.:

CNI_PLUGIN=flannel ./demo.sh

There's also an option to deploy Virtlet on the master node of the DIND cluster, which can be handy, e.g. if you don't want to use worker nodes (i.e. you start the cluster with NUM_NODES=0):

VIRTLET_ON_MASTER=1 ./demo.sh

The demo script checks for KVM support on the host and makes Virtlet use KVM if it's available on the Docker host. If KVM is not available, plain QEMU is used.

The demo is based on the kubeadm-dind-cluster project. The Docker btrfs storage driver is currently unsupported. Please refer to the kubeadm-dind-cluster documentation for more info.

You can remove the test cluster with ./dind-cluster-v1.14.sh clean when you no longer need it.

External projects using Virtlet

There are already some external projects using Virtlet. One interesting use case is the MIKELANGELO project, which runs OSv unikernels on Kubernetes using Virtlet. Unikernels are a special case of VMs that are extremely small in size (around 20 MB) and can each run only a single process. Nevertheless, Virtlet has no problems handling them on Kubernetes, as demonstrated in this video. A microservice demo is available here.

Need any help with Virtlet?

If you encounter any issues when using Virtlet, please look at our issue tracker on GitHub. If your case is not covered there, please file a new issue for it. If you have any questions, you may also use the #virtlet channel on Kubernetes Slack.

Contributing

Virtlet is an open source project and contributions are welcome. See the Contributing guidelines document for our guidelines and further instructions on how to set up a Virtlet development environment.

Licensing

Unless specifically noted, all parts of this project are licensed under the Apache 2.0 license.

Contributors

d-kononov, dandrushko, gitter-badger, hemanthnakkina, hwchiu, istalker2, isuzdal, ivan4th, jellonek, keyingliu, lukaszo, miha-plesko, nhlfr, ologvinova, pigmej, skolekonov, sofat1989, vefimova, warmchang, yanxuean


Issues

Pod stuck in ContainerCreating state while VM is actually running

Trying to run an example after following the instructions from README.md (k8s v1.5.0-alpha.1):

vagrant@devbox:~/work/kubernetes/src/github.com/Mirantis/virtlet/examples (master *) $ kubectl create -f virt-cirros.yaml
pod "virtlet-example-cirros" created
vagrant@devbox:~/work/kubernetes/src/github.com/Mirantis/virtlet/examples (master *) $ kubectl get pods
NAME                     READY     STATUS              RESTARTS   AGE
virtlet-example-cirros   0/1       ContainerCreating   0          31s
vagrant@devbox:~/work/kubernetes/src/github.com/Mirantis/virtlet/examples (master *) $ kubectl get pods
NAME                     READY     STATUS              RESTARTS   AGE
virtlet-example-cirros   0/1       ContainerCreating   0          33s
vagrant@devbox:~/work/kubernetes/src/github.com/Mirantis/virtlet/examples (master *) $ kubectl get pods
NAME                     READY     STATUS              RESTARTS   AGE
virtlet-example-cirros   0/1       ContainerCreating   0          38s

Pod events:

Events:
  FirstSeen     LastSeen        Count   From                    SubObjectPath           Type            Reason          Message
  ---------     --------        -----   ----                    -------------           --------        ------          -------
  28s           28s             1       {default-scheduler }                            Normal          Scheduled       Successfully assigned virtlet-example-cirros to 127.0.0.1
  27s           27s             1       {kubelet 127.0.0.1}                             Normal          SandboxReceived Pod sandbox received, it will be created.
  27s           26s             2       {kubelet 127.0.0.1}     spec.containers{cirros} Normal          Pulling         pulling image "172.18.0.1/cirros"
  27s           26s             2       {kubelet 127.0.0.1}     spec.containers{cirros} Normal          Pulled          Successfully pulled image "172.18.0.1/cirros"
  27s           26s             2       {kubelet 127.0.0.1}     spec.containers{cirros} Normal          Created         Created container with id 6f112ee2-373b-4191-6f8e-ccd14d4ff21d
  27s           26s             2       {kubelet 127.0.0.1}     spec.containers{cirros} Normal          Started         Started container with id 6f112ee2-373b-4191-6f8e-ccd14d4ff21d

It's possible to log into the VM, though:

vagrant@devbox:~/work/kubernetes/src/github.com/Mirantis/virtlet/examples (master *) $ docker exec -it dockercompose_libvirt_1 /bin/bash
root@7e9b36d62f93:/# kubectl get pods
bash: kubectl: command not found
root@7e9b36d62f93:/# virsh list
 Id    Name                           State
----------------------------------------------------
 1     cirros                         running

root@7e9b36d62f93:/# virsh console cirros
Connected to domain cirros
Escape character is ^]

login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
cirros login: cirros
Password:
$

Virtlet keeps downloading the image. It looks like it doesn't see that the domain was successfully created. Excerpt from the virtlet log:

virtlet_1  | I1108 16:36:36.041646       1 manager.go:112] Sandbox config labels: map[string]string{"io.kubernetes.pod.uid":"83ac6852-a5d1-11e6-8ab5-525400bfe758", "io.kubernetes.managed":"true", "io.kubernetes.pod.name":"virtlet-example-cirros", "io.kubernetes.pod.namespace":"default"}
virtlet_1  | I1108 16:36:36.041709       1 manager.go:113] Sandbox config annotations: map[string]string{"kubernetes.io/config.seen":"2016-11-08T16:36:35.73630963Z", "kubernetes.io/config.source":"api"}
virtlet_1  | I1108 16:36:36.061909       1 manager.go:292] PullImage called for: 172.18.0.1/cirros:latest
virtlet_1  | I1108 16:36:36.062347       1 download.go:56] Start downloading http://172.18.0.1/cirros
virtlet_1  | I1108 16:36:36.131768       1 manager.go:125] StopPodSandbox called for pod e1f4f67f-a51b-11e6-8bcf-525400bfe758
virtlet_1  | I1108 16:36:36.135899       1 download.go:68] Data from url http://172.18.0.1/cirros saved in /tmp/cirros
virtlet_1  | libvirt: Storage Driver error : Storage volume not found: no storage vol with matching name 'cirros'
virtlet_1  | I1108 16:36:36.256991       1 manager.go:169] CreateContainer called for name: cirros
virtlet_1  | I1108 16:36:36.258660       1 manager.go:190] StartContainer called for containerID: 6f112ee2-373b-4191-6f8e-ccd14d4ff21d
virtlet_1  | I1108 16:36:37.093211       1 manager.go:292] PullImage called for: 172.18.0.1/cirros:latest
virtlet_1  | I1108 16:36:37.095627       1 download.go:56] Start downloading http://172.18.0.1/cirros
virtlet_1  | I1108 16:36:37.137495       1 download.go:68] Data from url http://172.18.0.1/cirros saved in /tmp/cirros
virtlet_1  | I1108 16:36:37.183267       1 manager.go:169] CreateContainer called for name: cirros
virtlet_1  | I1108 16:36:37.187440       1 manager.go:190] StartContainer called for containerID: 6f112ee2-373b-4191-6f8e-ccd14d4ff21d
virtlet_1  | libvirt: QEMU Driver error : Requested operation is not valid: domain is already running
virtlet_1  | I1108 16:36:38.128058       1 manager.go:125] StopPodSandbox called for pod e1f4f67f-a51b-11e6-8bcf-525400bfe758
virtlet_1  | I1108 16:36:40.130252       1 manager.go:125] StopPodSandbox called for pod e1f4f67f-a51b-11e6-8bcf-525400bfe758
virtlet_1  | I1108 16:36:42.132732       1 manager.go:125] StopPodSandbox called for pod e1f4f67f-a51b-11e6-8bcf-525400bfe758
virtlet_1  | I1108 16:36:44.128433       1 manager.go:125] StopPodSandbox called for pod e1f4f67f-a51b-11e6-8bcf-525400bfe758

Integration tests

We need an integration test "framework" in virtlet, plus tests for the features we already have.

Items:

  • "framework"
  • image service - by using fake gRPC requests
  • virtualization service - by using fake gRPC requests
  • k8s integration - running a local k8s cluster with virtlet

etcd container remains spawned after migration to bolt

docker ps
[sudo] password for eugen:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
39e094d89372 dockercompose_virtlet "/usr/local/bin/virtl" 9 minutes ago Up 9 minutes dockercompose_virtlet_1
52a22d89ffc8 dockercompose_libvirt "/start.sh" 15 minutes ago Up 9 minutes dockercompose_libvirt_1
b922672c18cf quay.io/coreos/etcd "/usr/local/bin/etcd " 18 hours ago Up 9 minutes 2379-2380/tcp dockercompose_etcd_1

libvirt container fails to start if no default pool is created

libvirt_1 | 2016-09-30 13:07:55.480+0000: 1676: info : hostname: 52a22d89ffc8
libvirt_1 | 2016-09-30 13:07:55.480+0000: 1676: warning : virLogParseOutputs:1206 : Ignoring invalid log output setting.
libvirt_1 | libvirt: Storage Driver error : Storage pool not found: no storage pool with matching name 'default'
libvirt_1 | Cleaning up VMs
libvirt_1 | All VMs cleaned
libvirt_1 | Traceback (most recent call last):
libvirt_1 | File "/cleanup.py", line 77, in
libvirt_1 | main()
libvirt_1 | File "/cleanup.py", line 60, in main
libvirt_1 | pool = conn.storagePoolLookupByName("default")
libvirt_1 | File "/usr/lib/python2.7/dist-packages/libvirt.py", line 4575, in storagePoolLookupByName
libvirt_1 | if ret is None:raise libvirtError('virStoragePoolLookupByName() failed', conn=self)
libvirt_1 | libvirt.libvirtError: Storage pool not found: no storage pool with matching name 'default'

ListContainers freaks out after RemoveContainer

It looks like we do not clean up correctly, or we should answer kubelet's ListContainers call differently after container removal.

virtlet_1  | I1027 19:13:50.060782       1 manager.go:254] RemoveContainer called for containerID: d4161e4d-d170-464c-7357-86c49ccfad57
virtlet_1  | libvirt: QEMU Driver error : Requested operation is not valid: domain is not running
virtlet_1  | libvirt: QEMU Driver error : Requested operation is not valid: domain is not running
virtlet_1  | E1027 19:13:50.076271       1 manager.go:273] Error when listing containers with filter &runtime.ContainerFilter{Id:(*string)(nil), State:(*runtime.ContainerState)(nil), PodSandboxId:(*string)(nil), LabelSelector:map[string]string{"io.kubernetes.managed":"true"}, XXX_unrecognized:[]uint8(nil)}: &errors.errorString{s:"Bucket 'd4161e4d-d170-464c-7357-86c49ccfad57' doesn't exist"}


storage: Ephemeral volumes are not removed after domain is destroyed.

We create new volumes during CreateContainer/processVolumes using volumeStorage.CreateVol (which, by the way, should arguably be renamed to CreateVolume), but we never remove them at any later point.

Because this removal is missing, right now we get situations like the one described by @nhlfr:

virtlet_1  | libvirt: Storage Driver error : storage volume 'cirros__var_run_secrets_kubernetes.io_serviceaccount' exists already
virtlet_1  | E1010 15:07:49.279520       1 manager.go:180] Error when creating container cirros: &errors.errorString{s:"storage volume 'cirros__var_run_secrets_kubernetes.io_serviceaccount' exists already"}

A good place for this seems to be RemoveContainer, just after the call to C.destroyAndUndefineDomain.

XML handling

We should stop relying on plain string templates for domain XML generation. Instead, we should fill initial values into structs, operate on those, and generate the XML from them only once at the end, removing the need to parse it first.

So this is a placeholder issue for a future refactor.

docker-compose up failed

Version pulled from master a moment ago; after running docker-compose up I get:

Step 15 : CMD /usr/local/bin/virtlet -logtostderr=true -libvirt-uri=qemu+tcp://libvirt/system -etcd-endpoint=http://etcd:2379
 ---> Running in fcb03fad9d38
 ---> 5494dabd61d6
Removing intermediate container fcb03fad9d38
Successfully built 5494dabd61d6
WARNING: Image for service virtlet was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating dockercompose_etcd_1
Creating dockercompose_libvirt_1
Creating dockercompose_virtlet_1
Attaching to dockercompose_etcd_1, dockercompose_libvirt_1, dockercompose_virtlet_1
etcd_1     | 2016-09-29 13:20:37.632488 I | etcdmain: etcd Version: 3.0.10
libvirt_1  | /usr/sbin/libvirtd: error while loading shared libraries: libvirt-admin.so.0: cannot open shared object file: Permission denied
etcd_1     | 2016-09-29 13:20:37.632587 I | etcdmain: Git SHA: 546c0f7
etcd_1     | 2016-09-29 13:20:37.632593 I | etcdmain: Go Version: go1.6.3
etcd_1     | 2016-09-29 13:20:37.632597 I | etcdmain: Go OS/Arch: linux/amd64
etcd_1     | 2016-09-29 13:20:37.632602 I | etcdmain: setting maximum number of CPUs to 48, total number of available CPUs is 48
etcd_1     | 2016-09-29 13:20:37.632608 W | etcdmain: no data-dir provided, using default data-dir ./etcd0.etcd
etcd_1     | 2016-09-29 13:20:37.632723 I | etcdmain: listening for peers on http://0.0.0.0:2380
etcd_1     | 2016-09-29 13:20:37.632761 I | etcdmain: listening for client requests on 0.0.0.0:2379
etcd_1     | 2016-09-29 13:20:37.632784 I | etcdmain: listening for client requests on 0.0.0.0:4001
etcd_1     | 2016-09-29 13:20:37.634361 I | etcdserver: name = etcd0
etcd_1     | 2016-09-29 13:20:37.634375 I | etcdserver: data dir = etcd0.etcd
etcd_1     | 2016-09-29 13:20:37.634381 I | etcdserver: member dir = etcd0.etcd/member
etcd_1     | 2016-09-29 13:20:37.634394 I | etcdserver: heartbeat = 100ms
etcd_1     | 2016-09-29 13:20:37.634399 I | etcdserver: election = 1000ms
etcd_1     | 2016-09-29 13:20:37.634403 I | etcdserver: snapshot count = 10000
etcd_1     | 2016-09-29 13:20:37.634411 I | etcdserver: advertise client URLs = http://0.0.0.0:2379,http://0.0.0.0:4001
etcd_1     | 2016-09-29 13:20:37.634417 I | etcdserver: initial advertise peer URLs = http://0.0.0.0:2380
etcd_1     | 2016-09-29 13:20:37.634428 I | etcdserver: initial cluster = etcd0=http://0.0.0.0:2380
etcd_1     | 2016-09-29 13:20:37.635889 I | etcdserver: starting member 2006f2ad47deea09 in cluster 77c597fe24ed2ce3
etcd_1     | 2016-09-29 13:20:37.635956 I | raft: 2006f2ad47deea09 became follower at term 0
etcd_1     | 2016-09-29 13:20:37.635973 I | raft: newRaft 2006f2ad47deea09 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
etcd_1     | 2016-09-29 13:20:37.635979 I | raft: 2006f2ad47deea09 became follower at term 1
etcd_1     | 2016-09-29 13:20:37.638713 I | etcdserver: starting server... [version: 3.0.10, cluster version: to_be_decided]
etcd_1     | 2016-09-29 13:20:37.639533 I | membership: added member 2006f2ad47deea09 [http://0.0.0.0:2380] to cluster 77c597fe24ed2ce3
virtlet_1  | libvirt: XML-RPC error : unable to connect to server at 'libvirt:16509': Connection refused
virtlet_1  | E0929 13:20:38.220116       1 virtlet.go:45] Initializing server failed: &errors.errorString{s:"unable to connect to server at 'libvirt:16509': Connection refused"}
dockercompose_libvirt_1 exited with code 127
etcd_1     | 2016-09-29 13:20:38.636339 I | raft: 2006f2ad47deea09 is starting a new election at term 1
etcd_1     | 2016-09-29 13:20:38.636391 I | raft: 2006f2ad47deea09 became candidate at term 2
etcd_1     | 2016-09-29 13:20:38.636398 I | raft: 2006f2ad47deea09 received vote from 2006f2ad47deea09 at term 2
etcd_1     | 2016-09-29 13:20:38.636421 I | raft: 2006f2ad47deea09 became leader at term 2
etcd_1     | 2016-09-29 13:20:38.636438 I | raft: raft.node: 2006f2ad47deea09 elected leader 2006f2ad47deea09 at term 2
etcd_1     | 2016-09-29 13:20:38.636921 I | etcdserver: published {Name:etcd0 ClientURLs:[http://0.0.0.0:2379 http://0.0.0.0:4001]} to cluster 77c597fe24ed2ce3
etcd_1     | 2016-09-29 13:20:38.636935 I | etcdmain: ready to serve client requests
etcd_1     | 2016-09-29 13:20:38.638565 I | etcdmain: ready to serve client requests
etcd_1     | 2016-09-29 13:20:38.639599 I | etcdserver: setting up the initial cluster version to 3.0
etcd_1     | 2016-09-29 13:20:38.639675 N | etcdmain: serving insecure client requests on 0.0.0.0:4001, this is strongly discouraged!
etcd_1     | 2016-09-29 13:20:38.639694 N | etcdmain: serving insecure client requests on 0.0.0.0:2379, this is strongly discouraged!
etcd_1     | 2016-09-29 13:20:38.639991 N | membership: set the initial cluster version to 3.0
etcd_1     | 2016-09-29 13:20:38.641058 I | api: enabled capabilities for version 3.0
libvirt_1  | /usr/sbin/libvirtd: error while loading shared libraries: libvirt-admin.so.0: cannot open shared object file: Permission denied
libvirt_1  | /usr/sbin/libvirtd: error while loading shared libraries: libvirt-admin.so.0: cannot open shared object file: Permission denied
libvirt_1  | /usr/sbin/libvirtd: error while loading shared libraries: libvirt-admin.so.0: cannot open shared object file: Permission denied
dockercompose_libvirt_1 exited with code 127
dockercompose_virtlet_1 exited with code 1
virtlet_1  | libvirt: XML-RPC error : unable to connect to server at 'libvirt:16509': Connection refused
virtlet_1  | E0929 13:20:39.803341       1 virtlet.go:45] Initializing server failed: &errors.errorString{s:"unable to connect to server at 'libvirt:16509': Connection refused"}
virtlet_1  | libvirt: XML-RPC error : unable to connect to server at 'libvirt:16509': Connection refused
virtlet_1  | E0929 13:20:40.385590       1 virtlet.go:45] Initializing server failed: &errors.errorString{s:"unable to connect to server at 'libvirt:16509': Connection refused"}
virtlet_1  | libvirt: XML-RPC error : unable to connect to server at 'libvirt:16509': Connection refused
virtlet_1  | E0929 13:20:41.124494       1 virtlet.go:45] Initializing server failed: &errors.errorString{s:"unable to connect to server at 'libvirt:16509': Connection refused"}
virtlet_1  | libvirt: XML-RPC error : unable to connect to server at 'libvirt:16509': Connection refused
virtlet_1  | E0929 13:20:42.326060       1 virtlet.go:45] Initializing server failed: &errors.errorString{s:"unable to connect to server at 'libvirt:16509': Connection refused"}
virtlet_1  | libvirt: XML-RPC error : unable to connect to server at 'libvirt:16509': Connection refused
virtlet_1  | E0929 13:20:44.332849       1 virtlet.go:45] Initializing server failed: &errors.errorString{s:"unable to connect to server at 'libvirt:16509': Connection refused"}
virtlet_1  | libvirt: XML-RPC error : unable to connect to server at 'libvirt:16509': Connection refused
virtlet_1  | E0929 13:20:47.890940       1 virtlet.go:45] Initializing server failed: &errors.errorString{s:"unable to connect to server at 'libvirt:16509': Connection refused"}
virtlet_1  | libvirt: XML-RPC error : unable to connect to server at 'libvirt:16509': Connection refused
virtlet_1  | E0929 13:20:54.663555       1 virtlet.go:45] Initializing server failed: &errors.errorString{s:"unable to connect to server at 'libvirt:16509': Connection refused"}
virtlet_1  | libvirt: XML-RPC error : unable to connect to server at 'libvirt:16509': Connection refused
virtlet_1  | E0929 13:21:07.862834       1 virtlet.go:45] Initializing server failed: &errors.errorString{s:"unable to connect to server at 'libvirt:16509': Connection refused"}
libvirt_1  | /usr/sbin/libvirtd: error while loading shared libraries: libvirt-admin.so.0: cannot open shared object file: Permission denied
dockercompose_libvirt_1 exited with code 127
dockercompose_virtlet_1 exited with code 1

Metadata for images shredded on image redownload request

When we start a VM from an image that is not in the store, we download it, store its metadata in bolt, and then start the VM. If for some reason the VM does not start, kubelet resends the request to start the container, and we can then see a redownload request for the image (maybe something is broken with ImagesList), which ends up shredding the information about the image file location in bolt. This results in a broken domain XML for the VM, with an empty value for the image file path.

"User-friendly" Makefile

Add a Makefile that can be used to build Virtlet using the virtlet_build container, run tests (unit/integration/e2e), and set up the local environment.

Move Makefile.am and other autotools files to build/ to avoid confusion.

cleanup.py is failing on domains which are not running

docker exec -ti dockercompose_libvirt_1 /cleanup.py                                                              
Cleaning up VMs
Destroying VM cirros
libvirt: QEMU Driver error : Requested operation is not valid: domain is not running
Traceback (most recent call last):
  File "/cleanup.py", line 61, in <module>
    main()
  File "/cleanup.py", line 33, in main
    if domain.destroy() < 0:
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1103, in destroy
    if ret == -1: raise libvirtError ('virDomainDestroy() failed', dom=self)
libvirt.libvirtError: Requested operation is not valid: domain is not running
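The guard cleanup.py needs is small: libvirt-python's destroy() raises libvirtError for a domain that isn't running, so check isActive() first (undefine() works regardless of run state). A sketch against a fake domain object (isActive/destroy/undefine mirror the libvirt-python method names, but the fake is not the real API):

```python
# Sketch of the fix for cleanup.py: only destroy() domains that are
# actually running, then undefine() unconditionally.
class FakeDomain:
    def __init__(self, name, active):
        self._name, self._active = name, active
        self.destroyed = self.undefined = False

    def isActive(self):  # libvirt: 1 if the domain is running, 0 otherwise
        return 1 if self._active else 0

    def destroy(self):
        if not self._active:
            # Mirrors libvirtError("domain is not running")
            raise RuntimeError("Requested operation is not valid: "
                               "domain is not running")
        self._active = False
        self.destroyed = True

    def undefine(self):
        self.undefined = True

def cleanup_domain(domain):
    if domain.isActive():
        domain.destroy()
    domain.undefine()

running, stopped = FakeDomain("a", True), FakeDomain("b", False)
cleanup_domain(running)
cleanup_domain(stopped)  # no longer raises on a shut-off domain
```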

Virtlet: fix volume creation issues

Virtlet crashes with the following backtrace because the volume name is nil:

virtlet_1  | panic: runtime error: invalid memory address or nil pointer dereference
virtlet_1  | [signal 0xb code=0x1 addr=0x0 pc=0x555201]
virtlet_1  |
virtlet_1  | goroutine 76 [running]:
virtlet_1  | panic(0xa3e720, 0xc820010080)
virtlet_1  |    /usr/lib/go-1.6/src/runtime/panic.go:481 +0x3e6
virtlet_1  | github.com/Mirantis/virtlet/pkg/libvirttools.(*VirtualizationTool).processVolumes(0xc8201ee450, 0xc82005a420, 0x1, 0x1, 0xc82005dc00, 0x3fc, 0x0, 0x0, 0x0, 0x0)
virtlet_1  |    /go/src/github.com/Mirantis/virtlet/pkg/libvirttools/virtualization.go:145 +0x481
virtlet_1  | github.com/Mirantis/virtlet/pkg/libvirttools.(*VirtualizationTool).CreateContainer(0xc8201ee450, 0xc82040a0c0, 0xc8203ccc40, 0x1e, 0x0, 0x0, 0x0, 0x0)
virtlet_1  |    /go/src/github.com/Mirantis/virtlet/pkg/libvirttools/virtualization.go:267 +0x289
virtlet_1  | github.com/Mirantis/virtlet/pkg/manager.(*VirtletManager).CreateContainer(0xc8201ee600, 0x7f01eb9a5758, 0xc82040a060, 0xc82040a0c0, 0x0, 0x0, 0x0)
virtlet_1  |    /go/src/github.com/Mirantis/virtlet/pkg/manager/manager.go:174 +0x151
virtlet_1  | github.com/Mirantis/virtlet/vendor/k8s.io/kubernetes/pkg/kubelet/api/v1alpha1/runtime._RuntimeService_CreateContainer_Handler(0xae4b40, 0xc8201ee600, 0x7f01eb9a5758, 0xc82040a060, 0xc8203ce0a0, 0x0, 0x0, 0x0, 0x0, 0x0)

In the latest master, volume names were removed completely:
kubernetes/kubernetes#33970

virtlet should create /var/lib/virtlet if it doesn't exist

./virtlet
libvirt: Storage Driver error : Storage pool not found: no storage pool with matching name 'default'
libvirt: Storage Driver error : Storage pool not found: no storage pool with matching name 'volumes'
libvirt: Storage Driver error : cannot open path '/var/lib/virtlet': No such file or directory
E0926 18:33:22.935635   14979 virtlet.go:45] Initializing server failed: &errors.errorString{s:"cannot open path '/var/lib/virtlet': No such file or directory"}
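The fix is likely a one-liner run before connecting to libvirt: create the directory if it's missing. A sketch using a temp directory as a stand-in for /var/lib/virtlet:

```python
# Sketch: ensure the storage pool path exists before libvirt tries to
# open it, so "cannot open path ... No such file or directory" goes away.
import os
import tempfile

def ensure_pool_dir(path):
    os.makedirs(path, mode=0o755, exist_ok=True)  # no-op if it exists
    return path

root = tempfile.mkdtemp()
pool = ensure_pool_dir(os.path.join(root, "var", "lib", "virtlet"))
ensure_pool_dir(pool)  # idempotent: a second call must not fail
```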

Problem with pod termination

After a pod is deleted, we call stop/remove container and then stop pod sandbox, and kubelet keeps looping on the last call, probably because of status handling (get pods reports a "Terminating" state or something similar).
After ~5 minutes (AFAIR) the final pod sandbox remove request arrives and cleans up the remains properly, but it is probably triggered by a timeout, even though the pod is still in the "Terminating" state.

This should be covered by tests also.

Small http server acting as docker registry like for VMs

We should have a small server which would:

  • decode image names (dots, extensions, and other characters encoded to be valid for k8s) back to their original form
  • do something with :tags (for a start, possibly strip them)
  • redirect (301/302) the downloader to the decoded original URL after handling the tags
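The decode/strip steps above could look roughly like this (a sketch only; the %xx escaping convention and the function name are assumptions, not a settled design):

```python
# Sketch of the name handling such a registry shim could apply before
# issuing the redirect: undo k8s-safe encoding, then strip a trailing :tag.
from urllib.parse import unquote

def decode_image_name(k8s_name):
    # 1. Undo the encoding of dots/extensions/odd characters
    #    (assumed here to be %xx escapes).
    name = unquote(k8s_name)
    # 2. Strip a trailing :tag, if any, while leaving host:port prefixes
    #    (which contain a '/' after the colon) alone.
    head, sep, tail = name.rpartition(":")
    if sep and "/" not in tail:
        name = head
    return name

print(decode_image_name("172.17.0.1/cirros%2Eimg:latest"))
```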

PodSync Issue

Stop and remove the container if it already exists:

virtlet_1  | libvirt: Domain Config error : operation failed: domain 'test-vm-fedora' already exists with uuid 45e9dddc-6229-4d23-7df4-890a4238f930
virtlet_1  | E1011 10:50:35.156804       1 manager.go:180] Error when creating container test-vm-fedora: &errors.errorString{s:"operation failed: domain 'test-vm-fedora' already exists with uuid 45e9dddc-6229-4d23-7df4-890a4238f930"}
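A hedged sketch of the "stop and remove if exists" guard (fake objects standing in for the libvirt connection API; lookupByName/defineXML are real libvirt calls, but the real defineXML takes a full XML document, and this is not Virtlet's code):

```python
# Sketch: before defining a domain, remove any leftover domain with the
# same name so a retried CreateContainer doesn't hit "already exists".
class DomainExists(Exception):
    pass

class FakeConn:
    def __init__(self):
        self.domains = set()

    def lookupByName(self, name):
        if name not in self.domains:
            raise KeyError(name)  # real libvirt raises libvirtError here

    def undefine(self, name):
        self.domains.discard(name)

    def defineXML(self, name):  # simplified: takes a name, not XML
        if name in self.domains:
            raise DomainExists("domain '%s' already exists" % name)
        self.domains.add(name)

def define_domain(conn, name):
    try:
        conn.lookupByName(name)
    except KeyError:
        pass  # no leftover domain
    else:
        conn.undefine(name)  # stop/remove the stale domain first
    conn.defineXML(name)

conn = FakeConn()
define_domain(conn, "test-vm-fedora")
define_domain(conn, "test-vm-fedora")  # retried create no longer fails
```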

Incorrect name of image in bolt

./cluster/kubectl.sh get pods --watch --show-all
NAME                     READY     STATUS    RESTARTS   AGE
virtlet-example-cirros   0/1       rpc error: code = 2 desc = Key 'b808d11e23ff391ba1511f64fabdae355201d273fc022bad90ba92e3d670ed7c' doesn't exist in the bucket: &bolt.Bucket{bucket:(*bolt.bucket)(0x7f92f1f4a036), tx:(*bolt.Tx)(0xc8204449a0), buckets:map[string]*bolt.Bucket(nil), page:(*bolt.page)(0x7f92f1f4a046), rootNode:(*bolt.node)(nil), nodes:map[bolt.pgid]*bolt.node(nil), FillPercent:0.5}   0          4s

Provide possibility to pass loglevel to binaries in containers

If we want to use different log levels (and we should start doing that ASAP), we also need a way to set the log level per container (separately for virtlet and separately for libvirt), in a manner similar to how the cleanup request is passed.

This requires passing variables through docker-compose, which we have probably done badly in the cleanup case.
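A hedged sketch of what the docker-compose side could look like (compose v2 syntax; the VIRTLET_LOGLEVEL/LIBVIRT_LOGLEVEL variable names are hypothetical, and the entrypoints would still need to consume them, e.g. as a glog -v value for virtlet):

```yaml
# docker-compose.yml fragment (sketch; variable names are assumptions).
# Host-set values are forwarded into the containers, with defaults.
services:
  virtlet:
    environment:
      - VIRTLET_LOGLEVEL=${VIRTLET_LOGLEVEL:-2}
  libvirt:
    environment:
      - LIBVIRT_LOGLEVEL=${LIBVIRT_LOGLEVEL:-3}
```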

README file issues

  • wrong command for creating the virtual environment: "mkvirtualenv docker-compose" should probably be "virtualenv docker-compose"
  • the virtualenv activate command is omitted
  • adding the current user to the docker group is not mentioned (usermod -aG docker)
  • notes on disabling apparmor/selinux are missing
  • notes on disabling the host libvirt process are missing
  • the recommended Kubernetes version for kubelet is not mentioned

Support calico networking

  • add calico daemons to the package
  • tap device creation (save info about it somewhere in the sandbox)
  • inform calico about the new workload/endpoint
  • pass the tap device to libvirt as a network device
  • pass addressing data from calico in PodSandboxStatus

Snapshot removal issue

virtlet_1       | libvirt: Storage Driver error : Storage volume not found: no storage vol with matching name '/var/lib/libvirt/images/snapshot_702d1086-12bf-4398-791c-843158379044'
virtlet_1       | E1209 15:10:55.016604       1 manager.go:329] Error when removing image snapshot with path '/var/lib/libvirt/images/snapshot_702d1086-12bf-4398-791c-843158379044': Storage volume not found: no storage vol with matching name '/var/lib/libvirt/images/snapshot_702d1086-12bf-4398-791c-843158379044'

while the domain was created with:

<source file='/var/lib/virtlet/snapshot_702d1086-12bf-4398-791c-843158379044'></source>

Fix filtering for get requests from kubelet

Container filtering was fixed a bit here to get rid of the pod sync problem:
#116
22d4da7
But that's insufficient; see for example
https://github.com/kubernetes-incubator/cri-o/blob/8d275cebb9988df2c834978c1c334682913cace1/server/container.go#L452
Also make sure that filtering of other objects (pod sandboxes, images) works correctly, too.
(E.g. there's a possibility that we don't handle a nil state in the sandbox filter properly.)
This part needs to get test coverage too, of course.
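A sketch of what nil-state-safe matching should look like (dict-based illustration; the field names are modeled on the CRI ContainerFilter, not taken from Virtlet's actual code):

```python
# Sketch: a container filter where every unset field matches everything.
# The bug class described above is treating a nil/unset state as if it
# were a concrete state value.
def match_container(container, flt):
    if flt is None:
        return True  # no filter: match all
    if flt.get("id") and container["id"] != flt["id"]:
        return False
    if flt.get("pod_sandbox_id") and \
            container["sandbox"] != flt["pod_sandbox_id"]:
        return False
    # An unset/None state must match every container, not just one state.
    if flt.get("state") is not None and container["state"] != flt["state"]:
        return False
    return True

c = {"id": "c1", "sandbox": "p1", "state": "RUNNING"}
print(match_container(c, {"state": None}))   # unset state matches
print(match_container(c, {"state": "EXITED"}))
```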

Abstraction levels separation

We still have spaghetti code in which different responsibilities are mixed in different places; e.g. the bolt utils, the libvirt utils, and the manager all assemble CRI responses for k8s. Instead, only the manager should know anything about the CRI API, while the libvirt/bolt utils should hand the manager data in internal structures known only to them.

The second thing discussed that should be refactored is the libvirt domain XML handling. It is currently based on marshaling/unmarshaling, but it could be replaced with simple string formatting (the whole approach would be simpler and also easier to maintain; PTAL at adding new parts of the XML such as networking devices). This should also be done in a single place, outside the current sources; @ivan4th has an idea how to do this properly.
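As an illustration of the string-formatting approach (a minimal sketch, not the design @ivan4th has in mind; template fields and values are made up for the example):

```python
# Sketch: build domain XML via a single template instead of
# marshaling/unmarshaling structs. Adding a device becomes appending
# a string to extra_devices rather than extending type definitions.
DOMAIN_XML = """<domain type='qemu'>
  <name>{name}</name>
  <memory unit='MiB'>{memory_mb}</memory>
  <devices>
    <disk type='file' device='disk'>
      <source file='{image_path}'/>
      <target dev='vda' bus='virtio'/>
    </disk>{extra_devices}
  </devices>
</domain>"""

def domain_xml(name, memory_mb, image_path, extra_devices=""):
    return DOMAIN_XML.format(name=name, memory_mb=memory_mb,
                             image_path=image_path,
                             extra_devices=extra_devices)

xml = domain_xml("test-vm-cirros", 512,
                 "/var/lib/virtlet/cirros-0.3.4-x86_64-disk.img")
print(xml)
```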

When kubelet is unable to read the config file, virtlet doesn't undefine the VM from libvirt

commit f4c985b
Merge: 9c1d32a ab3f861
Author: Piotr Skamruk [email protected]
Date: Tue Nov 15 09:03:30 2016 +0100

Merge pull request #114 from Mirantis/ivan4th/fix-k8s-version-pinning

Fix k8s version pinning

If the config file for a pod is lost, the following scenario needs to be executed:

1) Shut down the domain
2) Undefine the domain
3) Remove the created boot image/snapshot
4) Remove the created volumes
5) Clean Virtlet's internal pod info

but only step 1 is actually executed:
kubelet logs
http://paste.openstack.org/show/589562/

docker exec -it dockercompose_libvirt_1 virsh list --all
 Id    Name             State
 ----------------------------------
 -     test-vm-cirros   shut off

docker exec -it dockercompose_libvirt_1 virsh vol-list default
 Name                           Path
 ------------------------------------------------------------
 cirros-0.3.4-x86_64-disk.img   /var/lib/libvirt/images/cirros-0.3.4-x86_64-disk.img

Also, I don't understand how the config file is tracked.

Example:
sudo kubernetes/server/kubernetes/server/bin/kubelet --container-runtime-endpoint=/run/virtlet.sock --image-service-endpoint=/run/virtlet.sock --config=/home/vagrant/test.yaml > /tmp/kubelog 2>&1

I tried moving test.yaml to /tmp/test.yaml; nothing changed, everything kept working.
Then I moved /tmp/test.yaml to /tmp/test2.yaml.
Only when test2.yaml was removed did I receive "unable to read config path "/home/vagrant/test.yaml"" in the kubelet log.
And the path in the log message was incorrect.

Invalid start date

We have now:

Containers:
  cirros:
    Container ID:       virtlet://820cbbe2-1736-4396-7e47-37d46c78ed67
    Image:              172.17.0.1/cirros
    Image ID:
    Port:
    State:              Running
      Started:          Thu, 01 Jan 1970 01:00:00 +0100

Should be easy to fix.
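A minimal sketch of the likely fix: treat a zero StartedAt as unset and fill it from the actual domain start time (nanosecond timestamps as in the CRI ContainerStatus; the helper name is hypothetical):

```python
# Sketch: a zero timestamp renders as the Unix epoch
# ("Thu, 01 Jan 1970 ..." above), so substitute the real start time.
import time

def container_started_at(status_ns, fallback_ns):
    # 0 means "never set"; any real start time is far from the epoch.
    return status_ns if status_ns else fallback_ns

now_ns = int(time.time() * 1e9)
fixed = container_started_at(0, now_ns)
```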
