
kubedee's Introduction

End of life: So long, kubedee

kubedee logo

builds.sr.ht status

Fast multi-node Kubernetes (>= 1.18) development and test clusters on LXD.

Under the hood, CRI-O is used as the container runtime and Flannel for networking.

For questions or feedback, please open an issue.

Requirements

  • LXD
    • Make sure your user is a member of the lxd group (see lxd --group ...)
    • btrfs is currently used as the storage driver and is required
  • cfssl with cfssljson
  • jq
  • kubectl

Installation

kubedee is meant to be easily installed and run straight from Git. Clone the repository and link kubedee from a directory in your $PATH. Example:

cd ~/code
git clone https://github.com/schu/kubedee
cd ~/bin
ln -s ~/code/kubedee/kubedee

That's it!

kubedee stores all data in ~/.local/share/kubedee/.... kubedee LXD resources have a kubedee- prefix.

KUBEDEE_DEBUG=1 enables verbose debugging output (set -x).
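
For example, to get full debug output while creating a cluster:

KUBEDEE_DEBUG=1 kubedee up test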

Usage

Getting started

kubedee can install clusters based on an upstream version of Kubernetes or your own build.

To install an upstream version, use --kubernetes-version to specify the release (Git tag) that you want to install. For example:

kubedee up test --kubernetes-version v1.21.1

To install a local build, specify the location of the binaries (kube-apiserver etc.) with --bin-dir. For example:

kubedee up test --bin-dir /path/to/my/kubernetes/binaries

The default for --bin-dir is ./_output/bin/ and thus matches the default location after running make in the Kubernetes repository. So in a typical development workflow --bin-dir doesn't need to be specified.
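
A typical local-build workflow might therefore look like this (the repository path and the cluster name dev are illustrative):

cd ~/code/kubernetes
make                 # builds the binaries into ./_output/bin/
kubedee up dev       # run from the repository root so the default --bin-dir matches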

Note: after installing or upgrading kubedee, the first run needs some extra time to download and update cached packages and images.

With an SSD and up-to-date caches and images, setting up a four-node cluster (etcd, controller, 2x worker) usually takes less than 60 seconds.

[...]

Switched to context "kubedee-test".

==> Cluster test started
==> kubectl config current-context set to kubedee-test

==> Cluster nodes can be accessed with 'lxc exec <name> bash'
==> Cluster files can be found in '/home/schu/.local/share/kubedee/clusters/test'

==> Current node status is (should be ready soon):
NAME                         STATUS     ROLES    AGE   VERSION
kubedee-test-controller      NotReady   master   16s   v1.21.1
kubedee-test-worker-2ma3em   NotReady   node     9s    v1.21.1
kubedee-test-worker-zm8ikt   NotReady   node     2s    v1.21.1

kubectl's current-context has been changed to the new cluster automatically.

Cheatsheet

List the available clusters:

kubedee [list]

Start a cluster with fewer or more worker nodes than the default of 2:

kubedee up --num-workers 4 <cluster-name>

Start a new worker node in an existing cluster:

kubedee start-worker <cluster-name>

Delete a cluster:

kubedee delete <cluster-name>

Configure the kubectl env:

eval $(kubedee kubectl-env <cluster-name>)
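
kubectl then talks to the selected cluster without extra flags, for example:

eval $(kubedee kubectl-env test)
kubectl get nodes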

Configure the etcdctl env:

eval $(kubedee etcd-env <cluster-name>)
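
This assumes a local etcdctl installation; with the environment set, it can be used directly, for example:

eval $(kubedee etcd-env test)
etcdctl member list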

See all available commands and options:

kubedee help

Smoke test

kubedee has a smoke-test subcommand:

kubedee smoke-test <cluster-name>

kubedee's People

Contributors

blinkeye, brokenpip3, schu, teur


kubedee's Issues

Does `up` work without --kubernetes-version?

Up with a new cluster name gave me an error. I'd successfully launched a cluster before with --kubernetes-version v1.21.1

1 % kubedee up greymatter
==> Creating network for greymatter ...
Network kubedee-1acypo created
==> Pruning old kubedee caches ...
==> Pruning old kubedee container images ...
==> Creating new cluster greymatter ...
cp: cannot stat './_output/bin/kube-apiserver': No such file or directory
==> Failed to copy './_output/bin/kube-apiserver' to '/home/coleman/.local/share/kubedee/clusters/greymatter/rootfs/usr/local/bin/'

This immediately worked (changed the cluster name):

 kubedee up gm --kubernetes-version v1.21.1

open /run/flannel/subnet.env: no such file or directory

Tried with 1.14.2, 1.14.5, 1.15.2, same error.
Distro: ubuntu 18.04, btrfs as storage

NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
kubectl get nodes
NAME                          STATUS   ROLES    AGE     VERSION
kubedee-myk8s-controller      Ready    master   6m1s    v1.14.5
kubedee-myk8s-worker-ntugu4   Ready    node     5m37s   v1.14.5
kubedee-myk8s-worker-vk618x   Ready    node     5m56s   v1.14.5
kubedee smoke-test myk8s
==> Running smoke test for cluster myk8s ...
deployment.apps/kubedee-smoke-test-myk8s-lymh6t created
deployment.extensions/kubedee-smoke-test-myk8s-lymh6t scaled
service/kubedee-smoke-test-myk8s-lymh6t exposed
==> kubedee-smoke-test-myk8s-lymh6t not ready yet
==> kubedee-smoke-test-myk8s-lymh6t not ready yet
[... same message repeated until the timeout ...]
service "kubedee-smoke-test-myk8s-lymh6t" deleted
deployment.extensions "kubedee-smoke-test-myk8s-lymh6t" deleted
==> Failed to connect to kubedee-smoke-test-myk8s-lymh6t within 240 seconds
Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kubedee-smoke-test-myk8s-lymh6t-6ff89bc964-r6wjj_default_207ebf8d-bbb9-11e9-ab0b-00163e6dadd3_0(3dd2163623a0c98717bc12a7657cdb49c85ea5031442e5ccd39b29350a708641): open /run/flannel/subnet.env: no such file or directory

kubelet fails to start on k8s v1.10.0-beta.1 / "failed to get rootfs info"

Mar 03 11:03:27 kubedee-v1-10-0-beta-1-worker-46igat kubelet[850]: W0303 11:03:27.114392     850 fs.go:539] stat failed on /dev/loop1 with error: no such file or directory
Mar 03 11:03:27 kubedee-v1-10-0-beta-1-worker-46igat kubelet[850]: F0303 11:03:27.114402     850 kubelet.go:1356] Failed to start ContainerManager failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 110 in cached partitions map
git log v1.9.3..v1.10.0-beta.1 -S RootFsInfo --oneline

shows two commits:

b259543985 collect ephemeral storage capacity on initialization
68dadcfd15 Make eviction manager work with CRI container runtime.

With b259543985, the kubelet fails to start when RootFsInfo returns an error, where before it only logged a message. For example, from a v1.9.3 kubedee cluster:

Mar 03 11:43:37 kubedee-v1-9-3-worker-ja6ihf kubelet[763]: E0303 11:43:37.233591     763 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 142 in cached partitions map

Two old issues discussing the problem:

..and the corresponding patches:

In kubedee RootFsInfo fails because /dev/loopX (the mount source of /var/lib/kubelet) was not mounted into the container up until now.
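
A possible workaround (an assumption, not a verified fix) would be to pass the backing loop device through to the container explicitly, e.g.:

# hypothetical: expose the loop device backing /var/lib/kubelet inside the worker container
# (the loop index depends on the host; <worker-container> is the LXD container name)
lxc config device add <worker-container> loop1 unix-block path=/dev/loop1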

cgroup v2 support

On systems with cgroup v2, kubedee (or rather LXD) prints the following warning on every invocation, i.e. quite often:

WARNING: cgroup v2 is not fully supported yet, proceeding with partial confinement

Also, mounting devices might fail due to the devices cgroup not being mounted. Mounting the cgroup manually and restarting LXD afterwards seems to work as a workaround:

mkdir /sys/fs/cgroup/devices
mount -t cgroup devices -o devices /sys/fs/cgroup/devices
snap restart lxd

Installation fails because no node is available

Following the README and executing the commands on an Ubuntu 18.04.2 LTS system with snap LXD 3.13:

$ ./kubedee up test --kubernetes-version v1.14.2

fails with:

...
==> Deploying core-dns ...
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
==> Applying labels and taints to kubedee-test-controller ...
Error from server (NotFound): nodes "kubedee-test-controller" not found

See kubedee.log for the complete output.

Inspecting the current state of affairs reveals:

$ lxc list
+----------------------------+---------+-----------------------+-----------------------------------------------+------------+-----------+
|            NAME            |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |
+----------------------------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| kubedee-test-controller    | RUNNING | 10.216.230.167 (eth0) | fd42:ecad:d329:99e5:216:3eff:fe9a:b8de (eth1) | PERSISTENT |           |
+----------------------------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| kubedee-test-etcd          | RUNNING | 10.216.230.27 (eth0)  | fd42:ecad:d329:99e5:216:3eff:fe65:da2d (eth1) | PERSISTENT |           |
+----------------------------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| kubedee-test-worker-2ajv8y | RUNNING | 10.216.230.67 (eth0)  | fd42:ecad:d329:99e5:216:3eff:fe7d:6437 (eth1) | PERSISTENT |           |
+----------------------------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| kubedee-test-worker-68gljm | RUNNING | 10.216.230.56 (eth0)  | fd42:ecad:d329:99e5:216:3eff:fe82:6b79 (eth1) | PERSISTENT |           |
+----------------------------+---------+-----------------------+-----------------------------------------------+------------+-----------+

The controller was created successfully. The problem is that the worker node is not ready:

$ kubectl --kubeconfig ~/.local/share/kubedee/clusters/test/kubeconfig/admin.kubeconfig get nodes
NAME                         STATUS     ROLES    AGE     VERSION
kubedee-test-worker-2ajv8y   NotReady   <none>   6m39s   v1.14.2

which explains the pending pods:

$ kubectl --kubeconfig ~/.local/share/kubedee/clusters/test/kubeconfig/admin.kubeconfig get pods --all-namespaces
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-695ddf5859-7skng   0/1     Pending   0          6m31s
kube-system   coredns-695ddf5859-95z55   0/1     Pending   0          6m31s
kube-system   kube-flannel-ds-6z2tw      0/1     Pending   0          5m5s

Logging into the worker node I see a mount error:

$ lxc exec kubedee-test-worker-2ajv8y /bin/bash
root@kubedee-test-worker-2ajv8y:~# dmesg |  tail -n1
[ 4234.802298] audit: type=1400 audit(1560776365.531:507): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-kubedee-test-etcd_</var/snap/lxd/common/lxd>" name="/dev/" pid=26581 comm="(ostnamed)" flags="ro, nosuid, noexec, remount, strictatime"

There's not much running on the worker:

root@kubedee-test-worker-2ajv8y:~# ps axuf
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root      5476  0.0  0.0  21900  3964 ?        Ss   13:10   0:00 /bin/bash
root      8765  0.0  0.0  37792  3244 ?        R+   13:11   0:00  \_ ps axuf
root         1  0.0  0.0 225612  9464 ?        Ss   12:59   0:02 /sbin/init
root        58  0.0  0.1 160680 71808 ?        S<s  12:59   0:02 /lib/systemd/systemd-journald
root        69  0.0  0.0  43328  4748 ?        Ss   12:59   0:19 /lib/systemd/systemd-udevd
systemd+   335  0.0  0.0  80040  5316 ?        Ss   12:59   0:00 /lib/systemd/systemd-networkd
systemd+   355  0.0  0.0  70624  5464 ?        Ss   12:59   0:00 /lib/systemd/systemd-resolved
root       636  0.0  0.0  31748  3184 ?        Ss   12:59   0:00 /usr/sbin/cron -f
daemon     637  0.0  0.0  28332  2480 ?        Ss   12:59   0:00 /usr/sbin/atd -f
syslog     639  0.0  0.0 267268  4864 ?        Ssl  12:59   0:01 /usr/sbin/rsyslogd -n
root       640  0.0  0.0 287992  7092 ?        Ssl  12:59   0:00 /usr/lib/accountsservice/accounts-daemon
message+   641  0.0  0.0  50044  4316 ?        Ss   12:59   0:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
root       645  0.0  0.0 170916 17304 ?        Ssl  12:59   0:00 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
root       647  0.0  0.0  62120  5668 ?        Ss   12:59   0:00 /lib/systemd/systemd-logind
root       654  0.0  0.0 288876  6532 ?        Ssl  12:59   0:00 /usr/lib/policykit-1/polkitd --no-debug
root       662  0.0  0.0  72296  5808 ?        Ss   12:59   0:00 /usr/sbin/sshd -D
root       663  0.0  0.0  16412  2396 console  Ss+  12:59   0:00 /sbin/agetty -o -p -- \u --noclear --keep-baud console 115200,38400,9600 linux
root       676  0.0  0.0 187712 20468 ?        Ssl  12:59   0:00 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal
root      1698  0.0  0.0 2403300 46692 ?       Ssl  12:59   0:00 /usr/local/bin/crio --runtime /usr/bin/runc --registry docker.io

so I assumed there would be a few pods running (to connect to the master node).
The crio service seems to be in order:

root@kubedee-test-worker-2ajv8y:~# systemctl status -l crio
● crio.service - CRI-O daemon
   Loaded: loaded (/etc/systemd/system/crio.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2019-06-17 12:59:48 UTC; 13min ago
 Main PID: 1698 (crio)
    Tasks: 31 (limit: 4915)
   CGroup: /system.slice/crio.service
           └─1698 /usr/local/bin/crio --runtime /usr/bin/runc --registry docker.io

Jun 17 12:59:48 kubedee-test-worker-2ajv8y systemd[1]: Started CRI-O daemon.

but since there's no crictl binary available, I installed it manually:

$ VERSION="v1.14.0"
$ wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
$ tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
$ rm -f crictl-$VERSION-linux-amd64.tar.gz

and created the corresponding configuration file:

cat <<EOF >/etc/crictl.yaml 
runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
timeout: 10
debug: true
EOF

But there's no container running:

root@kubedee-test-worker-2ajv8y:~# /usr/local/bin/crictl ps
DEBU[0000] ListContainerRequest: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},} 
DEBU[0000] ListContainerResponse: &ListContainersResponse{Containers:[],} 
CONTAINER ID        IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID

Any ideas?

Enable TokenRequest API?

Some software packages have started leveraging service account token volume projection, most notably Istio. Would it be much of a bother to enable this feature as part of kubedee's cluster deployment?

The addition of the below snippet to kube-apiserver.service would hopefully achieve the result, though YMMV:

  --service-account-api-audiences=api,vault,factors \\
  --service-account-issuer=api \\
  --service-account-signing-key-file=/etc/kubernetes/ca-key.pem \\

Helmfile for installing Istio (to reproduce the issue via a quick helmfile sync):

{{- $helmTimeout := default 600 (env "HELM_TIMEOUT") -}}
{{- $networkNamespace := default "network" (env "NETWORK_NAMESPACE") -}}
{{- $istioInitReleaseName := default "istio-init" (env "ISTIO_INIT_HELM_RELEASE_NAME") -}}
{{- $istioReleaseName := default "istio" (env "ISTIO_HELM_RELEASE_NAME") -}}
{{- $istioVersion := default "1.5.1" (env "ISTIO_VERSION") -}}
{{- $istioMajor := slice (splitList "." $istioVersion) 0 2 | join "." -}} 

repositories:
- name: istio{{ $istioMajor }}
  url: https://storage.googleapis.com/istio-release/releases/{{ $istioVersion }}/charts

helmDefaults:
  wait: true
  timeout: {{ $helmTimeout }}
  tillerless: false

releases:
- name: {{ $istioInitReleaseName }}
  namespace: {{ $networkNamespace }}
  chart: istio{{ $istioMajor }}/istio-init
  values:
  - certmanager:
      enabled: true
  hooks:
  - events: ["postsync"]
    command: "/bin/sh"
    args: ["-xec", "kubectl -n {{ $networkNamespace }} wait --for condition=complete --timeout {{ $helmTimeout }}s job --all && sleep 5"]
- name: {{ $istioReleaseName }}
  namespace: {{ $networkNamespace }}
  chart: istio{{ $istioMajor }}/istio
  needs:
  - {{ $networkNamespace }}/{{ $istioInitReleaseName }}
  values:
  - global:
      tag: {{ $istioVersion }}
      sds:
        enabled: true
    gateways:
      istio-ingressgateway:
        type: NodePort
        sds:
          enabled: true

The symptom of it not being there would be some of the Istio pods being stuck in ContainerCreating, because the feature is not available.

As of now, istio-citadel keeps crashing, so I'm probably supplying a bad trust chain in the first snippet.

Internet access from pods

I needed to set privileged: true on the flannel DaemonSet to allow it to run iptables commands like the following:

[kube-flannel-ds-amd64-mhn4w kube-flannel] I1117 00:30:49.672089       1 iptables.go:155] Adding iptables rule: -d 10.244.0.0/16 -j ACCEPT 
[kube-flannel-ds-amd64-btkx4 kube-flannel] I1117 00:32:04.870918       1 iptables.go:155] Adding iptables rule: -d 10.244.0.0/16 -j ACCEPT 
[kube-flannel-ds-amd64-mhn4w kube-flannel] I1117 00:30:49.672391       1 iptables.go:155] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN 
[kube-flannel-ds-amd64-btkx4 kube-flannel] I1117 00:32:04.871904       1 iptables.go:155] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully 
[kube-flannel-ds-amd64-btkx4 kube-flannel] I1117 00:32:04.873028       1 iptables.go:155] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/24 -j RETURN 
[kube-flannel-ds-amd64-mhn4w kube-flannel] I1117 00:30:49.673408       1 iptables.go:155] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully 
[kube-flannel-ds-amd64-btkx4 kube-flannel] I1117 00:32:04.873897       1 iptables.go:155] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE --random-fully 
[kube-flannel-ds-amd64-mhn4w kube-flannel] I1117 00:30:49.674312       1 iptables.go:155] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.2.0/24 -j RETURN 
[kube-flannel-ds-amd64-mhn4w kube-flannel] I1117 00:30:49.770506       1 iptables.go:155] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE --random-fully 

Here is the config I adjusted:
https://github.com/schu/kubedee/blob/master/manifests/kube-flannel.yml#L199-L202

This is the nuclear option, of course. Perhaps there is a more restricted capability to add that lets us avoid privileged: true?
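
One less-privileged variant that might work (untested assumption; the DaemonSet name and container index are taken from the logs above and may differ) is to grant only the capabilities flannel needs for its iptables and VXLAN setup:

# replace privileged: true with NET_ADMIN/NET_RAW only
kubectl -n kube-system patch daemonset kube-flannel-ds-amd64 --type=json \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/securityContext", "value": {"capabilities": {"add": ["NET_ADMIN", "NET_RAW"]}}}]'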

Any reason why using my own storage causes cluster to fail "not ready"

Hi

I gave a whole disk to LXD manually and called the storage pool "kubedee". I then followed the normal procedure to create a cluster and it all looks to work; it goes through the motions without error.

However, all nodes stay stuck in "not ready" and I notice flannel doesn't get deployed properly: the interfaces don't appear, so there is no working CNI.

If I delete the cluster and the custom storage pool, then let kubedee create its own storage, it works fine.

Seems strange, as the pool I create is just bigger and uses an entire drive (source=/dev/sdb), still with btrfs as the filesystem.
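
For reference, the custom pool was created roughly like this (the exact command is my assumption; drive and pool name as described above):

lxc storage create kubedee btrfs source=/dev/sdb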

Any ideas?

Cheers!
Jon :)

So long, kubedee

I started kubedee in 2017 out of the need for a "plumbing tool" for setting up vanilla Kubernetes test clusters. Specifically, I was looking for:

  • Vanilla Kubernetes services managed with systemd (e.g. no special "fat binaries")
  • The ability to run custom Kubernetes binaries right from disk without the need for tedious container rebuilds or a registry
  • A small and simple tool (KISS) that could be easily understood and modified for experimentation and fast test iteration

While I'm still looking for such a tool, I'm not working on Kubernetes these days and don't have the resources to continue work on kubedee.

If you find kubedee useful, feel free to fork the project (Apache 2.0).

So long.

Error: Failed to install due to package configuration of libssl

A simple:

$ kubedee up test --kubernetes-version v1.14.2

shows an installation prompt in kubedee::prepare_container_image() due to the:

cat <<'EOF' | lxc exec "${kubedee_container_image}-setup" bash
export DEBIAN_FRONTEND=noninteractive
apt-get update
apt-get upgrade -y
...
EOF

upgrade snippet. During the configuration of libssl1.1 there's a prompt which cannot be answered (screenshot: Selection_160); you can only Ctrl+C the installation. Early in the update process one also sees a:

dpkg-reconfigure: unable to re-open stdin: No file or directory

message. This is a known issue.
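
A possible workaround (my assumption, not the project's fix) is to tell dpkg to keep existing configuration files so that no prompt is shown during the upgrade:

# inside the setup container, run the upgrade non-interactively and keep current config files
export DEBIAN_FRONTEND=noninteractive
apt-get -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" upgrade -y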

Error: Failed to run: ip6tables -w -t nat -I POSTROUTING...

Here's a simple description of the issue. Let me know if additional error output is needed.

Distribution: Archlinux
Version

commit 2673d34dac73040e54428f37518d3b1293d36df4 (HEAD -> master, origin/master, origin/HEAD)
Author: Michael Schubert <[email protected]>
Date:   Sat Apr 27 14:20:32 2019 +0200

    Update runc to latest release, v1.0.0-rc8
    
    Resolves #5

Error:

$ kubedee up test --kubernetes-version v1.13.0
Creating network for test ...
Error: Failed to run: ip6tables -w -t nat -I POSTROUTING -s fd42:33fa:9ef0:97be::/64 ! -d fd42:33fa:9ef0:97be::/64 -j MASQUERADE -m comment --comment generated for LXD network kubedee-ym1v2y: ip6tables: No chain/target/match by that name.

List of chains present in the nat table with ip6tables:

$ sudo ip6tables -t nat -vnL
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 LIBVIRT_PRT  all      *      *       ::/0                 ::/0                

Chain LIBVIRT_PRT (1 references)
 pkts bytes target     prot opt in     out     source               destination 
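
A guess at a workaround (unverified assumption): the IPv6 NAT/MASQUERADE netfilter modules may simply not be loaded, so loading them before creating the network could help:

# module names vary between kernel versions; ip6t_MASQUERADE may be built into nf_nat on newer kernels
sudo modprobe ip6table_nat
sudo modprobe ip6t_MASQUERADE 2>/dev/null || true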

Failed to start OOM watcher open /dev/kmsg: no such file or directory

With Kubernetes >v1.15.0-alpha.1, the kubelet errors out with the following message:

May 08 09:02:28 kubedee-test1-15-controller kubelet[1072]: F0508 09:02:28.305187    1072 kubelet.go:1394] Failed to start OOM watcher open /dev/kmsg: no such file or directory

I'm guessing this is due to kubelet's OOM watcher being updated to not use cAdvisor any longer: kubernetes/kubernetes@b2ce446 added in v1.15.0-alpha.2: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md/#changelog-since-v1150-alpha1
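
A possible workaround (an assumption, not verified against kubedee) is to expose the host's /dev/kmsg inside the affected container, which recent LXD supports via a unix-char device:

# hypothetical: pass /dev/kmsg through to the container the kubelet runs in
lxc config device add <container-name> kmsg unix-char source=/dev/kmsg path=/dev/kmsg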

no matches for rbac.authorization.k8s.io

Occasionally, kubedee start ... errors out with

error: unable to recognize "STDIN": no matches for rbac.authorization.k8s.io/, Kind=ClusterRole

I guess the apiserver is not actually ready yet.
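
A possible mitigation (a sketch, assuming a simple readiness wait is enough; $admin_kubeconfig is a hypothetical variable pointing at the cluster's admin kubeconfig):

# wait until the apiserver reports healthy before applying RBAC manifests
until kubectl --kubeconfig "$admin_kubeconfig" get --raw /healthz >/dev/null 2>&1; do
  sleep 1
done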
