
smartxworks / virtink


Lightweight Virtualization Add-on for Kubernetes

License: Apache License 2.0

Makefile 3.05% Dockerfile 1.82% Shell 0.90% Mustache 0.89% Go 93.34%
kubernetes cloud-hypervisor kvm rust-vmm virtualization

virtink's People

Contributors

carezkh, dependabot[bot], fengye87, makoto126, scuzhanglei


virtink's Issues

VM could not start on CentOS 7 with kernel 3.10.0-1160.45.1.el7.x86_64 but works with 5.4.15-1.el7.elrepo.x86_64

We have a k8s cluster built on two nodes:

(screenshot)

Both run CentOS 7 but with different kernel versions: 3.10.0-1160.45.1.el7.x86_64 and 5.4.15-1.el7.elrepo.x86_64. I found that the VM can only start on the 5.4.15-1.el7.elrepo.x86_64 node; on the other node the pod fails with the error below:

[root@test-0011 virtink]# k get po vm-test-vm-tlt7v -owide
NAME               READY   STATUS   RESTARTS   AGE   IP            NODE                             NOMINATED NODE   READINESS GATES
vm-test-vm-tlt7v   0/1     Error    0          14m   172.16.1.32   test-0003.host   <none>           <none>

[root@test-0011 virtink]# k logs -f vm-test-vm-tlt7v
thread 'vmm' panicked at 'called `Result::unwrap()` on an `Err` value: CheckExtensions(Capability missing: ImmediateExit)', vmm/src/vm.rs:731:48
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Error booting VM: ResponseRecv(RecvError)

So does this mean cloud-hypervisor does not support the 3.10.0-1160.45.1.el7.x86_64 host kernel?
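For anyone else landing here, a small sketch to flag hosts whose kernel is likely too old for cloud-hypervisor. The ">= 4" threshold is an assumption drawn from this report (3.10 fails, 5.4 works), not a documented requirement:

```shell
# Hedged sketch: flag a host kernel that is likely too old for cloud-hypervisor.
# The ">= 4" cutoff is inferred from this issue, not an official requirement.
kver=$(uname -r)
major=${kver%%.*}
if [ "$major" -lt 4 ]; then
  echo "kernel $kver is likely too old for cloud-hypervisor"
else
  echo "kernel $kver is probably recent enough"
fi
```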

Bridge interface down

Hey there, I can't seem to get my networking to function.

I'm trying to set up a network configuration that is migratable. We are using Multus with Cilium, and I'm trying to use the bridge plugin as a secondary interface.

But whatever I try, I can't get the bridge plugin to work with Virtink, even though it does work with non-Virtink pods.

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: test-bridge
  namespace: virtink-system
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "mybr0",
      "ipam": {
          "type": "host-local",
          "subnet": "192.168.12.0/24",
          "rangeStart": "192.168.12.10",
          "rangeEnd": "192.168.12.200",
          "gateway": "192.168.12.1"
       }
    }
---
apiVersion: virt.virtink.smartx.com/v1alpha1
kind: VirtualMachine
metadata:
  name: ubuntu-container-rootfs
spec:
  instance:
    memory:
      size: 1Gi
    kernel:
      image: smartxworks/virtink-kernel-5.15.12
      cmdline: "console=ttyS0 root=/dev/vda rw"
    disks:
      - name: ubuntu
      - name: cloud-init
    interfaces:
      #- name: pod
      - name: migration
        bridge: {}
  volumes:
    - name: ubuntu
      containerRootfs:
        image: smartxworks/virtink-container-rootfs-ubuntu
        size: 4Gi
    - name: cloud-init
      cloudInit:
        userData: |-
          #cloud-config
          password: password
          chpasswd: { expire: False }
          ssh_pwauth: True
  networks:
    #- name: pod
    #  pod: {}
    - name: migration
      multus:
        networkName: test-bridge

The net0 interface is always DOWN:

/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: net0-nic@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-net0 state UP group default
    link/ether 52:54:00:af:e1:7a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::5054:ff:feaf:e17a/64 scope link
       valid_lft forever preferred_lft forever
3: br-net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 52:54:00:af:e1:7a brd ff:ff:ff:ff:ff:ff
    inet 169.254.200.1/30 brd 169.254.200.3 scope global br-net0
       valid_lft forever preferred_lft forever
    inet6 fe80::90ab:8bff:fe1d:a85/64 scope link
       valid_lft forever preferred_lft forever
4: net0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether 1a:cc:07:82:31:d8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.12.18/24 brd 192.168.12.255 scope global net0
       valid_lft forever preferred_lft forever
5: tap-net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br-net0 state UP group default qlen 1000
    link/ether e6:7e:f9:99:fa:8c brd ff:ff:ff:ff:ff:ff
    inet6 fe80::e47e:f9ff:fe99:fa8c/64 scope link
       valid_lft forever preferred_lft forever
44: eth0@if45: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:29:fc:06:24:57 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 240.1.77.208/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::3029:fcff:fe06:2457/64 scope link
       valid_lft forever preferred_lft forever

I can reach the pod network/internet, but not the pods on the bridge plugin network.

➜ k exec -it vm-ubuntu-container-rootfs-62mz7 sh

/ # ip route
default via 240.1.77.88 dev eth0 mtu 1450
169.254.200.0/30 dev br-net0 proto kernel scope link src 169.254.200.1
240.1.77.88 dev eth0 scope link

/ # cat /var/run/virtink/dnsmasq/br-net0.conf
port=0
interface=br-net0
bind-interfaces
dhcp-range=192.168.12.24,static,255.255.255.0
dhcp-host=52:54:00:6f:7a:51,192.168.12.24,infinite
dhcp-option=option:classless-static-route,192.168.12.0/24,0.0.0.0
dhcp-option=option:dns-server,172.16.13.10
dhcp-option=option:domain-search,virtink-system.svc.cluster.local,svc.cluster.local,cluster.local,maas
dhcp-authoritative
shared-network=br-net0,192.168.12.24

/ # ping google.com
PING google.com (142.250.9.139): 56 data bytes
64 bytes from 142.250.9.139: seq=0 ttl=107 time=1.230 ms
^C
--- google.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 1.230/1.230/1.230 ms

/ # ping 192.168.12.17
PING 192.168.12.17 (192.168.12.17): 56 data bytes
^C
--- 192.168.12.17 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
/ #

Will VM Pod support hostNetwork?

Such a NICE project.

Will the VM pod support hostNetwork in the future? With it, VMs could use an OVS bridge and support FC SAN and RBD cloud disks.

How to manage the VM on older Kubernetes versions

Hi @fengye87. I am using virtink v0.13.0 and Kubernetes v1.20.9, whose kubectl has no --subresource option.
So how can I manage the VM, given that I can't use kubectl patch vm $VM_NAME --subresource=status --type=merge -p "{\"status\":{\"powerAction\":\"$POWER_ACTION\"}}"?
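A possible workaround (a sketch, untested here): older kubectl can still reach the status subresource through the raw Kubernetes API, e.g. via kubectl proxy plus curl. The block below only builds and prints the request; the VM name, namespace and power action are placeholders.

```shell
# Build a merge-patch request against the VM's status subresource.
# VM_NAME, NAMESPACE and POWER_ACTION are placeholders for your values.
VM_NAME=my-vm
NAMESPACE=default
POWER_ACTION=PowerOff
URL="/apis/virt.virtink.smartx.com/v1alpha1/namespaces/${NAMESPACE}/virtualmachines/${VM_NAME}/status"
BODY="{\"status\":{\"powerAction\":\"${POWER_ACTION}\"}}"
# With `kubectl proxy --port=8001` running, the patch would be sent as:
echo "curl -X PATCH -H 'Content-Type: application/merge-patch+json' http://127.0.0.1:8001${URL} -d '${BODY}'"
```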

Are there any examples of using a block PVC as the root disk?

I tried to write a vm.yaml as below:

apiVersion: virt.virtink.smartx.com/v1alpha1
kind: VirtualMachine
metadata:
  name: ubuntu-container-rootfs
spec:
  instance:
    memory:
      size: 4Gi
    kernel:
      image: smartxworks/virtink-kernel-5.15.12
      cmdline: "console=ttyS0 root=/dev/vda rw"
    disks:
      - name: ubuntu
      - name: cloud-init
    interfaces:
      - name: pod
  volumes:
    - name: ubuntu
      persistentVolumeClaim:
        claimName: test-vm
    - name: cloud-init
      cloudInit:
        userData: |-
          #cloud-config
          password: password
          chpasswd: { expire: False }
          ssh_pwauth: True
  networks:
    - name: pod
      pod: {}

and I have a PVC test-vm like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-vm
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      pv: test-vm
  storageClassName: csi-rbd-storageclass
  volumeMode: Block
  volumeName: test-vm
status:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 50Gi
  phase: Bound

I want to use the PVC test-vm as the root disk for the VM. Are there any examples of how to import a rootfs into this block PVC device?

Add Kubernetes Health Checks CRD

I want to specify health checks whenever a VM starts, to ensure that SSH is available before attempting any connection. It would be great if I could specify health checks through the VirtualMachine CRD.
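Until something like that exists, one stopgap (a sketch; nc availability and the address are assumptions) is to poll the VM's SSH port from a script or sidecar before connecting:

```shell
# Poll the VM's SSH port a few times before attempting a connection.
# VM_IP is a placeholder; in practice you would use the VM pod IP
# and a longer retry budget.
VM_IP=127.0.0.1
for i in 1 2 3; do
  if nc -z -w 1 "$VM_IP" 22 2>/dev/null; then
    echo "ssh port open"
    break
  fi
  sleep 1
done
echo "poll finished"
```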

ch.sock: connect: no such file or directory

I used CDI to import the root disk, but there are some problems.

  1. The Dockerfile is:
FROM ubuntu:jammy AS rootfs
RUN apt-get update -y && \
    apt-get install -y --no-install-recommends systemd-sysv udev lsb-release cloud-init sudo openssh-server && \
    rm -rf /var/lib/apt/lists/*
  2. The vm.yaml is:
apiVersion: virt.virtink.smartx.com/v1alpha1
kind: VirtualMachine
metadata:
  name: test-vm
spec:
  instance:
    memory:
      size: 4Gi
    kernel:
      image: smartxworks/virtink-kernel-5.15.12
      cmdline: "console=ttyS0 root=/dev/vda rw"
    disks:
      - name: ubuntu
      - name: cloud-init
    interfaces:
      - name: pod
  volumes:
    - name: ubuntu
      persistentVolumeClaim:
        claimName: rootfs-dv
      #dataVolume:
      # volumeName: ubuntu
    - name: cloud-init
      cloudInit:
        userData: |-
          #cloud-config
          password: password
          chpasswd: { expire: False }
          ssh_pwauth: True
  networks:
    - name: pod
      pod: {}

When I apply the vm.yaml, the pod is running at first, but it quits or blocks later.
I tried to debug the problem:

  1. Using kubectl logs -f vm-test-vm-xxx: it looks successful, but the pod quits later.

(screenshots)

  2. Using kubectl describe vm test-vm:

(screenshot)

Cannot access the internet: DNS queries to the Kubernetes DNS time out

This is my virtual machine config:

apiVersion: virt.virtink.smartx.com/v1alpha1
kind: VirtualMachine
metadata:
  labels:
    virtlink.io/os: linux
    virtlink.io/vm: "{{ .Values.virtualMachine.name }}"
  name: "{{ .Values.virtualMachine.name }}"
spec:
  instance:
    # TODO: Set up cpu
    # https://github.com/smartxworks/virtink/blob/main/docs/dedicated_cpu_placement.md
    memory:
      size: "{{ .Values.memory }}"
    interfaces:
      - name: pod
    disks:
      - name: image
      - name: cloud-init
  volumes:
    - name: image
      containerRootfs:
        # CDI image for ubuntu jammy
        image: "{{ .Values.image }}"
        size: 10Gi
    - name: cloud-init
      cloudInit:
        userData: |-
          #cloud-config
          hostname: {{ .Values.virtualMachine.name }}
          password: ubuntu
          chpasswd: { expire: False }
          package_update: true
          package_upgrade: true
          ssh_authorized_keys:
            - {{ .Values.virtualMachine.sshKey }}
  networks:
    - name: pod
      pod: {}

Network Debugging Logs

ubuntu@node-01:~$ dig +short google.com
;; communications error to 127.0.0.53#53: timed out
;; communications error to 127.0.0.53#53: timed out
;; communications error to 127.0.0.53#53: timed out
;; no servers could be reached

ubuntu@node-01:~$ cat /etc/resolv.conf 
# This is /run/systemd/resolve/stub-resolv.conf managed by man:systemd-resolved(8).
# Do not edit.
#
# This file might be symlinked as /etc/resolv.conf. If you're looking at
# /etc/resolv.conf and seeing this text, you have followed the symlink.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 127.0.0.53
options edns0 trust-ad
search lab-c7a83719-938d-4836-9071-7efa3a790d23--978000.svc.cluster.local svc.cluster.local cluster.local

# Checking DNS IP configured via DHCP
ubuntu@node-01:~$ cat /run/systemd/resolve/resolv.conf
# This is /run/systemd/resolve/resolv.conf managed by man:systemd-resolved(8).
# Do not edit.
#
# This file might be symlinked as /etc/resolv.conf. If you're looking at
# /etc/resolv.conf and seeing this text, you have followed the symlink.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known uplink DNS servers. This file lists all configured search domains.
#
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 10.43.0.10
search lab-c7a83719-938d-4836-9071-7efa3a790d23--978000.svc.cluster.local svc.cluster.local cluster.local

Other pods can access the internet with the same core DNS configuration.

kubectl run netshoot --image=nicolaka/netshoot --rm -it --restart=Never --command -- dig +short google.com
216.58.211.238
pod "netshoot" deleted

With this configuration the VM is created, but it has no internet access because DNS resolution is not working. The DNS IP retrieved via DHCP is 10.43.0.10, the kube-dns service IP.

I am getting timeout errors whenever I try to resolve a domain name from the VM.
Note: I am able to SSH into the VM (it is connected to my pod network).

Note: if I add 8.8.8.8 to /etc/resolv.conf, I am able to access the internet.
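That workaround can be scripted. The block below writes to a local sample file purely for illustration; inside the VM the target would be /etc/resolv.conf, which systemd-resolved manages and may overwrite, so a resolved drop-in is the more durable place for it:

```shell
# Append a public resolver as a fallback after the cluster resolver.
# Editing a local copy here for illustration only; 10.43.0.10 is the
# kube-dns service IP from this report.
conf=./resolv.conf.sample
printf 'nameserver 10.43.0.10\n' > "$conf"
printf 'nameserver 8.8.8.8\n' >> "$conf"
cat "$conf"
```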

Connect VM to kube-ovn logical_switch and fix ip address

Hi, I use the kube-ovn CNI.

I create a pod connected to a custom logical_switch with the fixed IP 10.10.10.11 using the following metadata:

  annotations:
    ovn.kubernetes.io/logical_switch: subnet-jx00000003
    ovn.kubernetes.io/ip_address: 10.10.10.11

The full YAML is:

apiVersion: v1
kind: Pod
metadata:
  namespace: ns-jx00000003
  name: jx00000003-nginx-11
  annotations:
    ovn.kubernetes.io/logical_switch: subnet-jx00000003
    ovn.kubernetes.io/ip_address: 10.10.10.11
spec:
  containers:
    - name: jx00000003-nginx-11
      image: registry.jxit.net.cn:5000/qdcloud/jxcentos:7

But when I write the VM pod YAML with the following nodeSelector, it cannot be scheduled; the error is kubelet Predicate NodeAffinity failed:

spec:
  nodeSelector:
    kubernetes.io/hostname: k8s-node-03

The node labels are as follows:

root@k8s-master-01:~# kubectl get node --show-labels | grep hostname
k8s-node-02     Ready    <none>                 43h   v1.29.3+k3s1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=k3s,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-02,kubernetes.io/os=linux,node.kubernetes.io/instance-type=k3s
k8s-node-03     Ready    <none>                 43h   v1.29.3+k3s1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=k3s,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-03,kubernetes.io/os=linux,node.kubernetes.io/instance-type=k3s
k8s-master-01   Ready    control-plane,master   43h   v1.28.8+k3s1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=k3s,beta.kubernetes.io/os=linux,kube-ovn/role=master,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=true,node-role.kubernetes.io/master=true,node.kubernetes.io/instance-type=k3s

The full pod YAML is:

apiVersion: v1
kind: Pod
metadata:
  namespace: ns-jx00000003
  name: jx00000003-nginx-11
  annotations:
    ovn.kubernetes.io/logical_switch: subnet-jx00000003
    ovn.kubernetes.io/ip_address: 10.10.10.11
  labels:
    virtink.io/vm.name: jx00000003-nginx-11
spec:
  nodeSelector:
    kubernetes.io/hostname: k8s-node-03
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 10.16.255.254
      - 114.114.114.114
  initContainers:
  - args:
    - /mnt/virtink-kernel/vmlinux
    image: smartxworks/virtink-kernel-5.15.12
    imagePullPolicy: Always
    name: init-kernel
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /mnt/virtink-kernel
      name: virtink-kernel
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-hx29d
      readOnly: true
  - args:
    - /mnt/ubuntu/rootfs.raw
    - "4294967296"
    image: smartxworks/virtink-container-rootfs-ubuntu
    imagePullPolicy: Always
    name: init-volume-ubuntu
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /mnt/ubuntu
      name: ubuntu
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-hx29d
      readOnly: true
  - args:
    - cloud-init
    - aW5zdGFuY2UtaWQ6IDVkZTlkNTNlLTAyMTYtNGI1MC04NmVhLWY2M2QzZmFiNzMyYwpsb2NhbC1ob3N0bmFtZTogdWJ1bnR1LWNvbnRhaW5lci1yb290ZnM=
    - I2Nsb3VkLWNvbmZpZwpwYXNzd29yZDogcGFzc3dvcmQKY2hwYXNzd2Q6IHsgZXhwaXJlOiBGYWxzZSB9CnNzaF9wd2F1dGg6IFRydWU=
    - ""
    - /mnt/cloud-init/cloud-init.iso
    command:
    - virt-init-volume
    image: smartxworks/virt-prerunner:v0.13.0@sha256:44311e42fb3fb4823a755d487c728535ba928efa8e449a3b3b5b8617360bacf6
    imagePullPolicy: IfNotPresent
    name: init-volume-cloud-init
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /mnt/cloud-init
      name: cloud-init
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-hx29d
      readOnly: true
  containers:
  - args:
    - --vm-data
    - eyJraW5kIjoiVmlydHVhbE1hY2hpbmUiLCJhcGlWZXJzaW9uIjoidmlydC52aXJ0aW5rLnNtYXJ0eC5jb20vdjFhbHBoYTEiLCJtZXRhZGF0YSI6eyJuYW1lIjoidWJ1bnR1LWNvbnRhaW5lci1yb290ZnMiLCJuYW1lc3BhY2UiOiJkZWZhdWx0IiwidWlkIjoiNWRlOWQ1M2UtMDIxNi00YjUwLTg2ZWEtZjYzZDNmYWI3MzJjIiwicmVzb3VyY2VWZXJzaW9uIjoiMjU2MTEiLCJnZW5lcmF0aW9uIjoxLCJjcmVhdGlvblRpbWVzdGFtcCI6IjIwMjQtMDQtMjBUMDY6MDQ6NDFaIiwiYW5ub3RhdGlvbnMiOnsia3ViZWN0bC5rdWJlcm5ldGVzLmlvL2xhc3QtYXBwbGllZC1jb25maWd1cmF0aW9uIjoie1wiYXBpVmVyc2lvblwiOlwidmlydC52aXJ0aW5rLnNtYXJ0eC5jb20vdjFhbHBoYTFcIixcImtpbmRcIjpcIlZpcnR1YWxNYWNoaW5lXCIsXCJtZXRhZGF0YVwiOntcImFubm90YXRpb25zXCI6e30sXCJuYW1lXCI6XCJ1YnVudHUtY29udGFpbmVyLXJvb3Rmc1wiLFwibmFtZXNwYWNlXCI6XCJkZWZhdWx0XCJ9LFwic3BlY1wiOntcImluc3RhbmNlXCI6e1wiZGlza3NcIjpbe1wibmFtZVwiOlwidWJ1bnR1XCJ9LHtcIm5hbWVcIjpcImNsb3VkLWluaXRcIn1dLFwiaW50ZXJmYWNlc1wiOlt7XCJuYW1lXCI6XCJwb2RcIn1dLFwia2VybmVsXCI6e1wiY21kbGluZVwiOlwiY29uc29sZT10dHlTMCByb290PS9kZXYvdmRhIHJ3XCIsXCJpbWFnZVwiOlwic21hcnR4d29ya3MvdmlydGluay1rZXJuZWwtNS4xNS4xMlwifSxcIm1lbW9yeVwiOntcInNpemVcIjpcIjFHaVwifX0sXCJuZXR3b3Jrc1wiOlt7XCJuYW1lXCI6XCJwb2RcIixcInBvZFwiOnt9fV0sXCJ2b2x1bWVzXCI6W3tcImNvbnRhaW5lclJvb3Rmc1wiOntcImltYWdlXCI6XCJzbWFydHh3b3Jrcy92aXJ0aW5rLWNvbnRhaW5lci1yb290ZnMtdWJ1bnR1XCIsXCJzaXplXCI6XCI0R2lcIn0sXCJuYW1lXCI6XCJ1YnVudHVcIn0se1wiY2xvdWRJbml0XCI6e1widXNlckRhdGFcIjpcIiNjbG91ZC1jb25maWdcXG5wYXNzd29yZDogcGFzc3dvcmRcXG5jaHBhc3N3ZDogeyBleHBpcmU6IEZhbHNlIH1cXG5zc2hfcHdhdXRoOiBUcnVlXCJ9LFwibmFtZVwiOlwiY2xvdWQtaW5pdFwifV19fVxuIn0sIm1hbmFnZWRGaWVsZHMiOlt7Im1hbmFnZXIiOiJrdWJlY3RsLWNsaWVudC1zaWRlLWFwcGx5Iiwib3BlcmF0aW9uIjoiVXBkYXRlIiwiYXBpVmVyc2lvbiI6InZpcnQudmlydGluay5zbWFydHguY29tL3YxYWxwaGExIiwidGltZSI6IjIwMjQtMDQtMjBUMDY6MDQ6NDFaIiwiZmllbGRzVHlwZSI6IkZpZWxkc1YxIiwiZmllbGRzVjEiOnsiZjptZXRhZGF0YSI6eyJmOmFubm90YXRpb25zIjp7Ii4iOnt9LCJmOmt1YmVjdGwua3ViZXJuZXRlcy5pby9sYXN0LWFwcGxpZWQtY29uZmlndXJhdGlvbiI6e319fSwiZjpzcGVjIjp7Ii4iOnt9LCJmOmluc3RhbmNlIjp7Ii4iOnt9LCJmOmRpc2tzIjp7fSwiZjppbnRlcmZhY2VzIjp7fSwiZjprZXJuZWwiOnsiLiI6e30sImY6Y21kbGluZSI6e30sImY6aW
1hZ2UiOnt9fSwiZjptZW1vcnkiOnsiLiI6e30sImY6c2l6ZSI6e319fSwiZjpuZXR3b3JrcyI6e30sImY6dm9sdW1lcyI6e319fX0seyJtYW5hZ2VyIjoidmlydC1jb250cm9sbGVyIiwib3BlcmF0aW9uIjoiVXBkYXRlIiwiYXBpVmVyc2lvbiI6InZpcnQudmlydGluay5zbWFydHguY29tL3YxYWxwaGExIiwidGltZSI6IjIwMjQtMDQtMjBUMDY6MDQ6NDFaIiwiZmllbGRzVHlwZSI6IkZpZWxkc1YxIiwiZmllbGRzVjEiOnsiZjpzdGF0dXMiOnsiZjpwaGFzZSI6e30sImY6dm1Qb2ROYW1lIjp7fX19LCJzdWJyZXNvdXJjZSI6InN0YXR1cyJ9LHsibWFuYWdlciI6InZpcnQtZGFlbW9uIiwib3BlcmF0aW9uIjoiVXBkYXRlIiwiYXBpVmVyc2lvbiI6InZpcnQudmlydGluay5zbWFydHguY29tL3YxYWxwaGExIiwidGltZSI6IjIwMjQtMDQtMjBUMDY6MDQ6NDFaIiwiZmllbGRzVHlwZSI6IkZpZWxkc1YxIiwiZmllbGRzVjEiOnsiZjpzdGF0dXMiOnt9fSwic3VicmVzb3VyY2UiOiJzdGF0dXMifV19LCJzcGVjIjp7InJlc291cmNlcyI6e30sInJ1blBvbGljeSI6Ik9uY2UiLCJpbnN0YW5jZSI6eyJjcHUiOnsic29ja2V0cyI6MSwiY29yZXNQZXJTb2NrZXQiOjF9LCJtZW1vcnkiOnsic2l6ZSI6IjFHaSJ9LCJrZXJuZWwiOnsiaW1hZ2UiOiJzbWFydHh3b3Jrcy92aXJ0aW5rLWtlcm5lbC01LjE1LjEyIiwiY21kbGluZSI6ImNvbnNvbGU9dHR5UzAgcm9vdD0vZGV2L3ZkYSBydyJ9LCJkaXNrcyI6W3sibmFtZSI6InVidW50dSJ9LHsibmFtZSI6ImNsb3VkLWluaXQifV0sImludGVyZmFjZXMiOlt7Im5hbWUiOiJwb2QiLCJtYWMiOiI1Mjo1NDowMDplMjoxNjplYSIsImJyaWRnZSI6e319XX0sInZvbHVtZXMiOlt7Im5hbWUiOiJ1YnVudHUiLCJjb250YWluZXJSb290ZnMiOnsiaW1hZ2UiOiJzbWFydHh3b3Jrcy92aXJ0aW5rLWNvbnRhaW5lci1yb290ZnMtdWJ1bnR1Iiwic2l6ZSI6IjRHaSJ9fSx7Im5hbWUiOiJjbG91ZC1pbml0IiwiY2xvdWRJbml0Ijp7InVzZXJEYXRhIjoiI2Nsb3VkLWNvbmZpZ1xucGFzc3dvcmQ6IHBhc3N3b3JkXG5jaHBhc3N3ZDogeyBleHBpcmU6IEZhbHNlIH1cbnNzaF9wd2F1dGg6IFRydWUifX1dLCJuZXR3b3JrcyI6W3sibmFtZSI6InBvZCIsInBvZCI6e319XX0sInN0YXR1cyI6eyJwaGFzZSI6IlNjaGVkdWxpbmciLCJ2bVBvZE5hbWUiOiJ2bS11YnVudHUtY29udGFpbmVyLXJvb3Rmcy1xZ3JkeCJ9fQ==
    image: smartxworks/virt-prerunner:v0.13.0@sha256:44311e42fb3fb4823a755d487c728535ba928efa8e449a3b3b5b8617360bacf6
    imagePullPolicy: IfNotPresent
    name: cloud-hypervisor
    resources:
      limits:
        devices.virtink.io/kvm: "1"
        devices.virtink.io/tun: "1"
      requests:
        devices.virtink.io/kvm: "1"
        devices.virtink.io/tun: "1"
    securityContext:
      capabilities:
        add:
        - SYS_ADMIN
        - NET_ADMIN
        - SYS_RESOURCE
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/virtink
      name: virtink
    - mountPath: /mnt/virtink-kernel
      name: virtink-kernel
    - mountPath: /mnt/ubuntu
      name: ubuntu
    - mountPath: /mnt/cloud-init
      name: cloud-init
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-hx29d
      readOnly: true
  enableServiceLinks: true
  nodeName: k8s-master-01
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: virtink
  - emptyDir: {}
    name: virtink-kernel
  - emptyDir: {}
    name: ubuntu
  - emptyDir: {}
    name: cloud-init
  - name: kube-api-access-hx29d
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
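As an aside, the base64 arguments passed to the init-volume-cloud-init container above are the cloud-init documents the VM receives; decoding the user-data argument shows the cloud-config from the VirtualMachine spec:

```shell
# Decode the base64 user-data argument from the init-volume-cloud-init
# container to inspect the cloud-config the VM will receive.
echo 'I2Nsb3VkLWNvbmZpZwpwYXNzd29yZDogcGFzc3dvcmQKY2hwYXNzd2Q6IHsgZXhwaXJlOiBGYWxzZSB9CnNzaF9wd2F1dGg6IFRydWU=' | base64 -d
```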

Does virtink support mounting a PVC into some path in a VM?

Sometimes we want to mount a PVC into the VM to persist some data. For example, we have a PVC called my-data and want to mount it at the /data path inside the VM. Does virtink support this?
I can think of two approaches:

  1. mount a volume from the host machine into the VM (hostPath or local-path-provisioner)
  2. mount a volume from the network (directly mount a Ceph RBD volume into the VM)

Does virtink support either of these?

Does not work in a KinD cluster

Even though the VM on which I installed the KinD cluster satisfies the conditions, it does not work. When I run kubectl get vm I can see the VM running, and the relevant pod is running too. However, I cannot SSH into the Virtink VM. For the record, I successfully created a VM inside a KinD cluster via KubeVirt, so I wonder what the difference is and what actually causes this.

DataVolume GC cause VM deployment failure

Recent versions of CDI garbage-collect completed DataVolumes. For a VM that uses a DataVolume, virtink waits for the DataVolume import to complete before starting the VM. But with the default CDI deployment, the DataVolume is GCed as soon as it completes, so virtink fails to deploy the VM with the error build VM Pod: DataVolume.cdi.kubevirt.io "cdi-volume" not found.

Perhaps it's worth mentioning this in the CDI VM example. Tuning .spec.config.dataVolumeTTLSeconds of cdi.kubevirt.io/v1beta1 works (or perhaps virtink could adapt to CDI's GC behavior?).
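For reference, a sketch of that TTL tuning. The block only prints the command rather than running it; using -1 to disable GC and `cdi` as the CDI custom resource name are my assumptions about a typical CDI deployment, so check your CDI version's docs:

```shell
# Disable (or lengthen) CDI's DataVolume garbage collection so virtink can
# still find the DataVolume when building the VM pod. Printed, not executed;
# `cdi` is assumed to be the name of the CDI custom resource.
PATCH='{"spec":{"config":{"dataVolumeTTLSeconds":-1}}}'
echo "kubectl patch cdi cdi --type=merge -p '${PATCH}'"
```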

Failed to start Apply the …ngs specified in cloud-config. #28

I'm getting some errors during the "Waiting for control plane to be initialized..." phase:

kubectl logs vm-quickstart-cp-dc74x-qrvvf

[ 12.822875] cloud-init[1039]: Cloud-init v. 22.3.4-0ubuntu1~22.04.1 running 'modules:config' at Thu, 01 Jun 2023 02:40:30 +0000. Up 12.67 seconds.
[ 12.900775] cloud-init[1039]: 2023-06-01 02:40:30,914 - util.py[WARNING]: Running module locale (<module 'cloudinit.config.cc_locale' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_locale.py'>) failed
[ 12.939189] cloud-init[1039]: 2023-06-01 02:40:30,952 - cc_set_passwords.py[WARNING]: Ignoring config 'ssh_pwauth: None'. SSH service 'ssh' is not installed.
[FAILED] Failed to start Apply the …ngs specified in cloud-config.
See 'systemctl status cloud-config.service' for details.
Starting Execute cloud user/final scripts...

[ 57.034344] cloud-init[1055]: CGROUPS_BLKIO: missing
[ 57.035390] cloud-init[1055]: [WARNING SystemVerification]: missing optional cgroups: blkio
[ 57.037697] cloud-init[1055]: [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "", err: exec: "modprobe": executable file not found in $PATH

[ 101.474076] cloud-init[1055]: [kubelet-check] Initial timeout of 40s passed.
[ 322.483336] cloud-init[1055]: Unfortunately, an error has occurred:
[ 322.497123] cloud-init[1055]: timed out waiting for the condition
[ 322.501165] cloud-init[1055]: This error is likely caused by:
[ 322.501887] cloud-init[1055]: - The kubelet is not running
[ 322.509514] cloud-init[1055]: - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

Please give me some advice on how to solve this problem, thanks.

VM network isn't reachable with flannel CNI

I've tried to get Virtink running on stock k3s and can't connect to the VM:

curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.2/cert-manager.yaml
kubectl apply -f https://github.com/smartxworks/virtink/releases/download/v0.10.0/virtink.yaml
cat <<EOF | kubectl apply -f -
apiVersion: virt.virtink.smartx.com/v1alpha1
kind: VirtualMachine
metadata:
  name: ubuntu-container-rootfs
spec:
  instance:
    memory:
      size: 1Gi
    kernel:
      image: smartxworks/virtink-kernel-5.15.12
      cmdline: "console=ttyS0 root=/dev/vda rw"
    disks:
      - name: ubuntu
      - name: cloud-init
    interfaces:
      - name: pod
  volumes:
    - name: ubuntu
      containerRootfs:
        image: smartxworks/virtink-container-rootfs-ubuntu
        size: 4Gi
    - name: cloud-init
      cloudInit:
        userData: |-
          #cloud-config
          password: password
          chpasswd: { expire: False }
          ssh_pwauth: True
  networks:
    - name: pod
      pod: {}
EOF
export VM_NAME=ubuntu-container-rootfs
export VM_POD_NAME=$(kubectl get vm $VM_NAME -o jsonpath='{.status.vmPodName}')
export VM_IP=$(kubectl get pod $VM_POD_NAME -o jsonpath='{.status.podIP}')
kubectl run ssh-$VM_NAME --rm --image=alpine --restart=Never -it -- /bin/sh -c "apk add openssh-client && ssh ubuntu@$VM_IP"
If you don't see a command prompt, try pressing enter.
ssh: connect to host 10.42.0.14 port 22: Host is unreachable

Switching to Calico helps (I saw that in the e2e tests).

multus network with bridge cni plugins

Hello

I'm trying to use Multus with the bridge CNI plugin:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: overlay
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "overlay",
    "type": "bridge",
    "bridge": "my-bridge",
    "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16"
    }
  }'

---
apiVersion: virt.virtink.smartx.com/v1alpha1
kind: VirtualMachine
metadata:
  name: ubuntu-rootfs
spec:
  instance:
    cpu:
      sockets: 4
      coresPerSocket: 1
    memory:
      size: 4Gi
    kernel:
      image: smartxworks/virtink-kernel-5.15.12
      imagePullPolicy: IfNotPresent
      cmdline: "console=ttyS0 root=/dev/vda rw"
    disks:
      - name: ubuntu
      - name: cloud-init
    interfaces:
      - name: pod
        bridge: {}
      - name: overlay
        bridge: {}
  networks:
    - name: pod
      pod: {}
    - name: overlay
      multus:
        networkName: overlay
  volumes:
    - name: ubuntu
      containerRootfs:
        image: smartxworks/virtink-container-rootfs-ubuntu
        imagePullPolicy: IfNotPresent
        size: 16Gi
    - name: cloud-init
      cloudInit:
        userData: |-
          #cloud-config
          password: password
          chpasswd: { expire: False }
          ssh_pwauth: True

and the VM failed:

% k get vm
NAME            STATUS   NODE
ubuntu-rootfs   Failed   

There are no logs (the pod with the VM is also destroyed), but I caught these related logs:

2022/09/15 08:54:05 Failed to build VM config: setup bridge network: start DHCP server: start dnsmasq: "/usr/sbin/dnsmasq --conf-file=/var/run/virtink/dnsmasq/br-net1.conf --pid-file=/var/run/virtink/dnsmasq/br-net1.pid": exit status 1: 
dnsmasq: bad IP address at line 6 of /var/run/virtink/dnsmasq/br-net1.conf

As I understand it, line 6 is usually where the router option goes (dhcp-option=option:router).

The VM starts only when the bridge has isDefaultGateway: true set:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: overlay
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "overlay",
    "type": "bridge",
    "bridge": "my-bridge",
    "isDefaultGateway": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16"
    }
  }'

but then the networking inside the VM pod looks weird and the pod is not reachable over the network.

create bridge: file exists

[root@master jx24000041]# kubectl -n ns-jx24000041 logs pod/jx24000041-ops-81 cloud-hypervisor 
2024/05/01 13:51:04 Failed to build VM config: setup bridge network: create bridge: file exists

The pod YAML is:

apiVersion: v1
kind: Pod
metadata:
  namespace: ns-jx24000041
  name: jx24000041-ops-81
  annotations:
    ovn.kubernetes.io/logical_switch: subnet-jx24000041
    ovn.kubernetes.io/ip_address: 10.10.10.81
spec:
  nodeSelector:
    kubernetes.io/hostname: ubuntu-22-04
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 10.16.255.254
      - 114.114.114.114
  initContainers:
  - args:
    - /mnt/virtink-kernel/vmlinux
    image: smartxworks/virtink-kernel-5.15.12
    imagePullPolicy: Always
    name: init-kernel
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /mnt/virtink-kernel
      name: virtink-kernel
  - args:
    - /mnt/ubuntu/rootfs.raw
    - "42949672960"
    image: registry.jxit.net.cn:5000/qdcloud/init-rootfs-ubuntu:no-over
    imagePullPolicy: Always
    name: init-volume-ubuntu
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /mnt/ubuntu
      name: ubuntu
  - args:
    - cloud-init
    - aW5zdGFuY2UtaWQ6IDVkZTlkNTNlLTAyMTYtNGI1MC04NmVhLWY2M2QzZmFiNzMyYwpsb2NhbC1ob3N0bmFtZTogdWJ1bnR1LWNvbnRhaW5lci1yb290ZnM=
    - I2Nsb3VkLWNvbmZpZwpzc2hfcHdhdXRoOiBUcnVlCmNocGFzc3dkOgogIGxpc3Q6IHwKICAgICByb290OjEyMwogICAgIHVidW50dToxMjMKICBleHBpcmU6IEZhbHNlCmJvb3RjbWQ6CiAgLSBlY2hvIFBlcm1pdFJvb3RMb2dpbiB5ZXMgPj4gL2V0Yy9zc2gvc3NoZF9jb25maWc=
    - dmVyc2lvbjogMQpjb25maWc6CiAgLSB0eXBlOiBwaHlzaWNhbAogICAgbmFtZTogZW5zNAogICAgc3VibmV0czoKICAgICAgLSB0eXBlOiBzdGF0aWMKICAgICAgICBpcHY0OiB0cnVlCiAgICAgICAgYWRkcmVzczogMTAuMTAuMTAuODEKICAgICAgICBuZXRtYXNrOiAyNTUuMjU1LjI1NS4wCiAgICAgICAgZ2F0ZXdheTogMTAuMTAuMTAuMQogICAgICAgIGNvbnRyb2w6IGF1dG8KICAtIHR5cGU6IG5hbWVzZXJ2ZXIKICAgIGFkZHJlc3M6IDEwLjE2LjI1NS4yNTQ=
    - /mnt/cloud-init/cloud-init.iso
    command:
    - virt-init-volume
    image: smartxworks/virt-prerunner:v0.13.0@sha256:44311e42fb3fb4823a755d487c728535ba928efa8e449a3b3b5b8617360bacf6
    imagePullPolicy: IfNotPresent
    name: init-volume-cloud-init
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /mnt/cloud-init
      name: cloud-init
  containers:
  - args:
    - --vm-data
    - eyJraW5kIjoiVmlydHVhbE1hY2hpbmUiLCJhcGlWZXJzaW9uIjoidmlydC52aXJ0aW5rLnNtYXJ0eC5jb20vdjFhbHBoYTEiLCJtZXRhZGF0YSI6eyJuYW1lIjoidWJ1bnR1LWNvbnRhaW5lci1yb290ZnMiLCJuYW1lc3BhY2UiOiJkZWZhdWx0IiwidWlkIjoiNWRlOWQ1M2UtMDIxNi00YjUwLTg2ZWEtZjYzZDNmYWI3MzJjIiwicmVzb3VyY2VWZXJzaW9uIjoiMjU2MTEiLCJnZW5lcmF0aW9uIjoxLCJjcmVhdGlvblRpbWVzdGFtcCI6IjIwMjQtMDQtMjBUMDY6MDQ6NDFaIiwiYW5ub3RhdGlvbnMiOnsia3ViZWN0bC5rdWJlcm5ldGVzLmlvL2xhc3QtYXBwbGllZC1jb25maWd1cmF0aW9uIjoie1wiYXBpVmVyc2lvblwiOlwidmlydC52aXJ0aW5rLnNtYXJ0eC5jb20vdjFhbHBoYTFcIixcImtpbmRcIjpcIlZpcnR1YWxNYWNoaW5lXCIsXCJtZXRhZGF0YVwiOntcImFubm90YXRpb25zXCI6e30sXCJuYW1lXCI6XCJ1YnVudHUtY29udGFpbmVyLXJvb3Rmc1wiLFwibmFtZXNwYWNlXCI6XCJkZWZhdWx0XCJ9LFwic3BlY1wiOntcImluc3RhbmNlXCI6e1wiZGlza3NcIjpbe1wibmFtZVwiOlwidWJ1bnR1XCJ9LHtcIm5hbWVcIjpcImNsb3VkLWluaXRcIn1dLFwiaW50ZXJmYWNlc1wiOlt7XCJuYW1lXCI6XCJwb2RcIn1dLFwia2VybmVsXCI6e1wiY21kbGluZVwiOlwiY29uc29sZT10dHlTMCByb290PS9kZXYvdmRhIHJ3XCIsXCJpbWFnZVwiOlwic21hcnR4d29ya3MvdmlydGluay1rZXJuZWwtNS4xNS4xMlwifSxcIm1lbW9yeVwiOntcInNpemVcIjpcIjFHaVwifX0sXCJuZXR3b3Jrc1wiOlt7XCJuYW1lXCI6XCJwb2RcIixcInBvZFwiOnt9fV0sXCJ2b2x1bWVzXCI6W3tcImNvbnRhaW5lclJvb3Rmc1wiOntcImltYWdlXCI6XCJzbWFydHh3b3Jrcy92aXJ0aW5rLWNvbnRhaW5lci1yb290ZnMtdWJ1bnR1XCIsXCJzaXplXCI6XCI0R2lcIn0sXCJuYW1lXCI6XCJ1YnVudHVcIn0se1wiY2xvdWRJbml0XCI6e1widXNlckRhdGFcIjpcIiNjbG91ZC1jb25maWdcXG5wYXNzd29yZDogcGFzc3dvcmRcXG5jaHBhc3N3ZDogeyBleHBpcmU6IEZhbHNlIH1cXG5zc2hfcHdhdXRoOiBUcnVlXCJ9LFwibmFtZVwiOlwiY2xvdWQtaW5pdFwifV19fVxuIn0sIm1hbmFnZWRGaWVsZHMiOlt7Im1hbmFnZXIiOiJrdWJlY3RsLWNsaWVudC1zaWRlLWFwcGx5Iiwib3BlcmF0aW9uIjoiVXBkYXRlIiwiYXBpVmVyc2lvbiI6InZpcnQudmlydGluay5zbWFydHguY29tL3YxYWxwaGExIiwidGltZSI6IjIwMjQtMDQtMjBUMDY6MDQ6NDFaIiwiZmllbGRzVHlwZSI6IkZpZWxkc1YxIiwiZmllbGRzVjEiOnsiZjptZXRhZGF0YSI6eyJmOmFubm90YXRpb25zIjp7Ii4iOnt9LCJmOmt1YmVjdGwua3ViZXJuZXRlcy5pby9sYXN0LWFwcGxpZWQtY29uZmlndXJhdGlvbiI6e319fSwiZjpzcGVjIjp7Ii4iOnt9LCJmOmluc3RhbmNlIjp7Ii4iOnt9LCJmOmRpc2tzIjp7fSwiZjppbnRlcmZhY2VzIjp7fSwiZjprZXJuZWwiOnsiLiI6e30sImY6Y21kbGluZSI6e30sImY6aW
1hZ2UiOnt9fSwiZjptZW1vcnkiOnsiLiI6e30sImY6c2l6ZSI6e319fSwiZjpuZXR3b3JrcyI6e30sImY6dm9sdW1lcyI6e319fX0seyJtYW5hZ2VyIjoidmlydC1jb250cm9sbGVyIiwib3BlcmF0aW9uIjoiVXBkYXRlIiwiYXBpVmVyc2lvbiI6InZpcnQudmlydGluay5zbWFydHguY29tL3YxYWxwaGExIiwidGltZSI6IjIwMjQtMDQtMjBUMDY6MDQ6NDFaIiwiZmllbGRzVHlwZSI6IkZpZWxkc1YxIiwiZmllbGRzVjEiOnsiZjpzdGF0dXMiOnsiZjpwaGFzZSI6e30sImY6dm1Qb2ROYW1lIjp7fX19LCJzdWJyZXNvdXJjZSI6InN0YXR1cyJ9LHsibWFuYWdlciI6InZpcnQtZGFlbW9uIiwib3BlcmF0aW9uIjoiVXBkYXRlIiwiYXBpVmVyc2lvbiI6InZpcnQudmlydGluay5zbWFydHguY29tL3YxYWxwaGExIiwidGltZSI6IjIwMjQtMDQtMjBUMDY6MDQ6NDFaIiwiZmllbGRzVHlwZSI6IkZpZWxkc1YxIiwiZmllbGRzVjEiOnsiZjpzdGF0dXMiOnt9fSwic3VicmVzb3VyY2UiOiJzdGF0dXMifV19LCJzcGVjIjp7InJlc291cmNlcyI6e30sInJ1blBvbGljeSI6Ik9uY2UiLCJpbnN0YW5jZSI6eyJjcHUiOnsic29ja2V0cyI6MiwiY29yZXNQZXJTb2NrZXQiOjJ9LCJtZW1vcnkiOnsic2l6ZSI6IjhHaSJ9LCJrZXJuZWwiOnsiaW1hZ2UiOiJzbWFydHh3b3Jrcy92aXJ0aW5rLWtlcm5lbC01LjE1LjEyIiwiY21kbGluZSI6ImNvbnNvbGU9dHR5UzAgcm9vdD0vZGV2L3ZkYSBydyJ9LCJkaXNrcyI6W3sibmFtZSI6InVidW50dSJ9LHsibmFtZSI6ImNsb3VkLWluaXQifV0sImludGVyZmFjZXMiOlt7Im5hbWUiOiJwb2QiLCJtYWMiOiI1Mjo1NDowMDplMjoxNjplYSIsImJyaWRnZSI6e319XX0sInZvbHVtZXMiOlt7Im5hbWUiOiJ1YnVudHUiLCJjb250YWluZXJSb290ZnMiOnsiaW1hZ2UiOiJzbWFydHh3b3Jrcy92aXJ0aW5rLWNvbnRhaW5lci1yb290ZnMtdWJ1bnR1Iiwic2l6ZSI6IjQwR2kifX0seyJuYW1lIjoiY2xvdWQtaW5pdCIsImNsb3VkSW5pdCI6eyJ1c2VyRGF0YSI6IiNjbG91ZC1jb25maWdcbnBhc3N3b3JkOiBwYXNzd29yZFxuY2hwYXNzd2Q6IHsgZXhwaXJlOiBGYWxzZSB9XG5zc2hfcHdhdXRoOiBUcnVlIn19XSwibmV0d29ya3MiOlt7Im5hbWUiOiJwb2QiLCJwb2QiOnt9fV19LCJzdGF0dXMiOnsicGhhc2UiOiJTY2hlZHVsaW5nIiwidm1Qb2ROYW1lIjoidm0tdWJ1bnR1LWNvbnRhaW5lci1yb290ZnMtcWdyZHgifX0=
    image: smartxworks/virt-prerunner:v0.13.0@sha256:44311e42fb3fb4823a755d487c728535ba928efa8e449a3b3b5b8617360bacf6
    imagePullPolicy: IfNotPresent
    name: cloud-hypervisor
    securityContext:
      capabilities:
        add:
        - SYS_ADMIN
        - NET_ADMIN
        - SYS_RESOURCE
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /dev/kvm
      name: devkvm
    - mountPath: /dev/net/tun
      name: devtun
    - mountPath: /var/run/virtink
      name: virtink
    - mountPath: /mnt/virtink-kernel
      name: virtink-kernel
    - mountPath: /mnt/ubuntu
      name: ubuntu
    - mountPath: /mnt/cloud-init
      name: cloud-init
  volumes:
  - name: devkvm
    hostPath:
      path: /dev/kvm
  - name: devtun
    hostPath:
      path: /dev/net/tun
  - emptyDir: {}
    name: virtink
  - emptyDir: {}
    name: virtink-kernel
  - name: ubuntu
    persistentVolumeClaim:
      claimName: jx24000041-ops-81-pvc
  - emptyDir: {}
    name: cloud-init
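
The three long base64 arguments passed to virt-init-volume above are the cloud-init meta-data, user-data, and network-config documents. A minimal sketch of producing such an argument from plain YAML (the argument format is inferred from this pod spec, not taken from Virtink's source):

```python
import base64

def encode_cloud_init(doc: str) -> str:
    """Base64-encode a cloud-init document for use as an init-container arg."""
    return base64.b64encode(doc.encode()).decode()

# Hypothetical minimal user-data, for illustration only
user_data = """\
#cloud-config
ssh_pwauth: True
"""
arg = encode_cloud_init(user_data)
assert base64.b64decode(arg).decode() == user_data  # round-trips cleanly
print(arg[:16])  # → I2Nsb3VkLWNvbmZp
```

Decoding the pod's actual arguments with base64.b64decode the same way recovers the original cloud-init YAML, which is a quick way to sanity-check the injected network-config when debugging issues like the ones above.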
