
Comments (16)

squat commented on May 16, 2024

Hi @fire, option 1 should already be doable without any new code. Kilo accepts a --master flag to set the url of the K8s API, and users can simply add that flag to any Kilo manifest and take out --kubeconfig flag and the volume mount for the in-cluster kubeconfig to only use the service account. Do you think this should be an additional manifest in the manifests directory? If so, I'm very happy to merge that PR. The nice thing about it is that it is not installer-specific, so we only need one rather than one for kubeadm, bootkube etc
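For illustration, the relevant slice of such a manifest might look like the following sketch (the server URL is a placeholder; everything else mirrors the stock Kilo DaemonSet):

```yaml
# Sketch only: run Kilo against the API server directly via --master,
# relying on the pod's service account instead of a mounted kubeconfig.
containers:
- name: kilo
  image: squat/kilo
  args:
  - --master=https://your.kube.api:6443  # placeholder address
  - --hostname=$(NODE_NAME)
  # note: no --kubeconfig flag and no kubeconfig volume mount
```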

from kilo.

baurmatt commented on May 16, 2024

Also interested in seeing this fixed! :)


squat commented on May 16, 2024

Hi Adam, let's continue the conversation from #27 here. Thanks for investing time digging into the code.

in the event that we need to run kilo before we have a network fabric running. If we do have a network fabric running, the user doesn't have to worry about where the api server is

It's a little bit tricky. In order to establish a connection to the Kubernetes API via a service IP, Kubernetes requires four things:

  0. kube-proxy (or an equivalent, e.g. kube-router) is running and can create rules to map service IPs to real IPs;
  1. the IP address backing the service IP is routable;
  2. kube-proxy (or equivalent) has a special kubeconfig with a non-service IP address for the API; and
  3. the non-service IP address is reachable from the given node.

In Kilo's case, because we want to be able to build clusters without a shared private network (e.g. multi-cloud), these four requirements are not always guaranteed.

2 and 3 are generally OK: most Kubernetes installers are smart enough to provision kubeconfigs with a DNS name that resolves to the private IP from inside the cloud's VPC and to a public IP from outside, e.g. from another cloud.

0 and 1, on the other hand, we cannot know for sure in multi-cloud environments. The problem is that most of the time, the IP address backing a service IP is the master node's private IP address, which will not be routable from nodes in other data centers. This means that even if kube-proxy is installed, we cannot guarantee that service IPs will work until Kilo is running and makes the private IPs routable.

So the two ways forward are:

  0. always use a special kubeconfig, which we can expect to have the correct address in order to reach the API from anywhere; or
  1. tell the user to edit the Kilo manifest before deploying it to write the publicly accessible address for the API.

Each has its up- and downsides:

  0. evidently doesn't work out of the box on k3s, since k3s isn't provisioned by a traditional installer and its worker nodes are therefore not populated with a valid kubeconfig; it also means Kilo uses a privileged kubeconfig with more power than it requires.
  1. requires extra user intervention, even in cases where the kubeconfig would have worked.

What do you think is the best way forward?


fire commented on May 16, 2024

Can you provide option 1 as an alternative for extra user intervention?


fire commented on May 16, 2024

I would like that, but this codebase is unfamiliar to me.


eddiewang commented on May 16, 2024

Would love to see a more permanent fix for this. k3s seems to not work great with kilo atm.


jbrinksmeier commented on May 16, 2024

Hi @squat
I would like to continue on the solution you proposed earlier

1. tell the user to edit the Kilo manifest before deploying it to write the publicly accessible address for the API.

I gave this a try but ran into the rather obvious problem that my cluster runs on self-signed certificates, and therefore the kilo pods refuse to communicate with the endpoint given via the --master flag.
I solved this by building my own squat/kilo image with the ca-certificates package added and then mounting the kube-ca.pem into the kilo pods. I also had to edit the entrypoint to rebuild the CA bundle on startup.
Here is the Dockerfile I came up with:

FROM squat/kilo
# Add the CA tooling so a custom certificate can be added to the trust store.
RUN apk update && apk add ca-certificates
# Rebuild the CA bundle on startup, then exec kg; "$@" forwards any container
# args (e.g. --kubeconfig) that would otherwise be swallowed by sh -c.
ENTRYPOINT ["/bin/sh", "-c", "update-ca-certificates && exec /opt/bin/kg \"$@\"", "kg"]
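For reference, the mounts that pair with such an image could look like this (paths are hypothetical; Alpine's update-ca-certificates only picks up certificates placed under /usr/local/share/ca-certificates with a .crt extension):

```yaml
# Sketch: expose the cluster CA where update-ca-certificates will find it.
# /etc/kubernetes/kube-ca.pem is an assumed host location for the CA file.
volumeMounts:
- name: kube-ca
  mountPath: /usr/local/share/ca-certificates/kube-ca.crt
  readOnly: true
volumes:
- name: kube-ca
  hostPath:
    path: /etc/kubernetes/kube-ca.pem
    type: File
```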

As I think that this is a use-case interesting to others, too, would you accept a PR for this?


jbrinksmeier commented on May 16, 2024

I just realized that since applying the changes outlined above, the kilo pods no longer respect the --subnet flag and build a CIDR of 10.4.0.0/24 (which I guess is the default). Any idea how to dig deeper there?


unixfox commented on May 16, 2024

Meanwhile, I developed an init container that inserts a kubeconfig for kilo.

Here is the deployment yaml file:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kilo
  namespace: kube-system
  labels:
    app.kubernetes.io/name: kilo
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kilo
  template:
    metadata:
      labels:
        app.kubernetes.io/name: kilo
    spec:
      serviceAccountName: kilo
      hostNetwork: true
      containers:
      - name: kilo
        image: squat/kilo
        args:
        - --kubeconfig=/etc/kubernetes/kubeconfig
        - --hostname=$(NODE_NAME)
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          privileged: true
        volumeMounts:
        - name: cni-conf-dir
          mountPath: /etc/cni/net.d
        - name: kilo-dir
          mountPath: /var/lib/kilo
        - name: kubeconfig
          mountPath: /etc/kubernetes
          readOnly: true
        - name: lib-modules
          mountPath: /lib/modules
          readOnly: true
        - name: xtables-lock
          mountPath: /run/xtables.lock
          readOnly: false
      initContainers:
      - name: generate-kubeconfig
        image: unixfox/kilo-kubeconfig
        imagePullPolicy: Always
        volumeMounts:
        - name: kubeconfig
          mountPath: /etc/kubernetes
        env:
        - name: MASTER_URL
          value: "your.kube.api:6443"
      - name: install-cni
        image: squat/kilo
        command:
        - /bin/sh
        - -c
        - set -e -x;
          cp /opt/cni/bin/* /host/opt/cni/bin/;
          TMP_CONF="$CNI_CONF_NAME".tmp;
          echo "$CNI_NETWORK_CONFIG" > $TMP_CONF;
          rm -f /host/etc/cni/net.d/*;
          mv $TMP_CONF /host/etc/cni/net.d/$CNI_CONF_NAME
        env:
        - name: CNI_CONF_NAME
          value: 10-kilo.conflist
        - name: CNI_NETWORK_CONFIG
          valueFrom:
            configMapKeyRef:
              name: kilo
              key: cni-conf.json
        volumeMounts:
        - name: cni-bin-dir
          mountPath: /host/opt/cni/bin
        - name: cni-conf-dir
          mountPath: /host/etc/cni/net.d
      tolerations:
      - effect: NoSchedule
        operator: Exists
      - effect: NoExecute
        operator: Exists
      volumes:
      - name: cni-bin-dir
        hostPath:
          path: /opt/cni/bin
      - name: cni-conf-dir
        hostPath:
          path: /etc/cni/net.d
      - name: kilo-dir
        hostPath:
          path: /var/lib/kilo
      - name: kubeconfig
        hostPath:
          path: /etc/kilo-kubeconfig
      - name: lib-modules
        hostPath:
          path: /lib/modules
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

As for the serviceaccount, I don't know what kind of RBAC permissions kilo requires, so the current ones may not be enough. If you have any idea about that, please let me know!

The source code of the project is located here: https://bitbucket.org/unixfox/kilo-kubeconfig/src/master/
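In case it helps, Kilo's own manifests ship a ClusterRole; from memory of the upstream repository it grants at minimum roughly the permissions below, but treat this as an assumption and check Kilo's manifests directory for the authoritative version:

```yaml
# Assumed approximation of the ClusterRole in Kilo's upstream manifests;
# verify against the kilo repository before relying on it.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kilo
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list", "patch", "watch"]
- apiGroups: ["kilo.squat.ai"]
  resources: ["peers"]
  verbs: ["list", "watch"]
```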


jawabuu commented on May 16, 2024

@unixfox This is a great solution to the issue and should probably be merged into kilo.
I have tested it in k3s.
Is it possible to make the namespace (defaulting to kube-system) configurable?


unixfox commented on May 16, 2024

@unixfox This is a great solution to the issue and should probably be merged into kilo.
I have tested it in k3s.
Is it possible to make the namespace (defaulting to kube-system) configurable?

As you can see in the YAML I don't specify any namespace because you can deploy it to whatever namespace you want.


jawabuu commented on May 16, 2024

@unixfox
In your entrypoint script, I would like to understand the significance of the namespace declaration
https://bitbucket.org/unixfox/kilo-kubeconfig/src/d91d836fbf08f07a47f86f9f3458bf6410fdd62a/ENTRYPOINT.sh#lines-20,21,22,23,24,25,26,27

contexts:
- context:
    cluster: kilo
    namespace: kube-system
    user: kilo
  name: kilo


unixfox commented on May 16, 2024

@unixfox
In your entrypoint script, I would like to understand the significance of the namespace declaration
https://bitbucket.org/unixfox/kilo-kubeconfig/src/d91d836fbf08f07a47f86f9f3458bf6410fdd62a/ENTRYPOINT.sh#lines-20,21,22,23,24,25,26,27

contexts:
- context:
    cluster: kilo
    namespace: kube-system
    user: kilo
  name: kilo

It's just the default namespace that will be used when, for example, a kubectl command doesn't specify one. Just deploy my yaml file into the kube-system namespace and you will be fine (I just edited it to make that more obvious).


jawabuu commented on May 16, 2024

@unixfox Thanks for the clarification.
My only concern was whether it would still function if I deployed kilo in another namespace, e.g. kilo.


unixfox commented on May 16, 2024

@unixfox Thanks for the clarification.
My only concern was whether it would still function if I deployed kilo in another namespace, e.g. kilo.

Well, I'm not sure whether that will work; I haven't tested it, though.
If it doesn't work, get back to me and I'll see if I can do something about it.


stv0g commented on May 16, 2024

Here is an iteration of @unixfox's approach which:

  • gets rid of the dedicated Docker image
  • works for any namespace which Kilo might use
  • autodetects the API server URL as more recent K3S versions use a dynamic port number
  • writes the generated kubeconfig to a temporary emptyDir volume

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kilo-scripts
  namespace: kube-system
data:
  init.sh: |
    #!/bin/sh

    cat > /etc/kubernetes/kubeconfig <<EOF
        apiVersion: v1
        kind: Config
        name: kilo
        clusters:
        - cluster:
            server: $(sed -n 's/.*server: \(.*\)/\1/p' /var/lib/rancher/k3s/agent/kubelet.kubeconfig)
            certificate-authority: /var/lib/rancher/k3s/agent/server-ca.crt
        users:
        - name: kilo
          user:
            token: $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
        contexts:
        - name: kilo
          context:
            cluster: kilo
            namespace: ${NAMESPACE}
            user: kilo
        current-context: kilo
    EOF
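
The sed extraction above can be sanity-checked in isolation; this writes a hypothetical kubelet.kubeconfig fragment to /tmp and pulls out the server URL:

```shell
# Fabricated fragment, just to exercise the extraction; on k3s nodes the
# real file lives at /var/lib/rancher/k3s/agent/kubelet.kubeconfig.
cat > /tmp/kubelet.kubeconfig <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6444
EOF

# Print whatever follows "server: " on its line.
sed -n 's/.*server: \(.*\)/\1/p' /tmp/kubelet.kubeconfig  # prints https://127.0.0.1:6444
```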

Add the following initContainer to the Kilo daemonset:

[...]
      initContainers:
      - name: generate-kubeconfig
        image: busybox
        command:
        - /bin/sh
        args:
        - /scripts/init.sh
        imagePullPolicy: Always
        volumeMounts:
        - name: kubeconfig
          mountPath: /etc/kubernetes
        - name: scripts
          mountPath: /scripts/
          readOnly: true
        - name: k3s-agent
          mountPath: /var/lib/rancher/k3s/agent/
          readOnly: true
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace

And add the following volumes as well:

[...]
      volumes:
      - name: scripts
        configMap:
          name: kilo-scripts
      - name: kubeconfig
        emptyDir: {}
      - name: k3s-agent
        hostPath:
          path: /var/lib/rancher/k3s/agent

