
k2d's Introduction

Portainer Community Edition is a lightweight service delivery platform for containerized applications that can be used to manage Docker, Swarm, Kubernetes and ACI environments. It is designed to be as simple to deploy as it is to use. The application allows you to manage all your orchestrator resources (containers, images, volumes, networks and more) through a ‘smart’ GUI and/or an extensive API.

Portainer consists of a single container that can run on any cluster. It can be deployed as a Linux container or a Windows native container.

Portainer Business Edition builds on the open-source base and includes a range of advanced features and functions (like RBAC and Support) that are specific to the needs of business users.

Latest Version

Portainer CE is updated regularly. We aim to do an update release every couple of months.


Getting started

Features & Functions

View this table to see all of the Portainer CE functionality and compare it to Portainer Business.

Getting help

Portainer CE is an open source project and is supported by the community. You can buy a supported version of Portainer at portainer.io

Learn more about Portainer's community support channels here.

You can join the Portainer Community by visiting https://www.portainer.io/join-our-community. This will give you advance notice of events, content and other Portainer-related news.

Reporting bugs and contributing

  • Want to report a bug or request a feature? Please open an issue.
  • Want to help us build Portainer? Follow our contribution guidelines to build it locally and make a pull request.

Security

Work for us

If you are a developer, and our code in this repo makes sense to you, we would love to hear from you. We are always on the hunt for awesome devs, either freelance or employed. Drop us a line to [email protected] with your details and/or visit our careers page.

Privacy

To make sure we focus our development effort in the right places we need to know which features get used most often. To give us this information we use Matomo Analytics, which is hosted in Germany and is fully GDPR compliant.

When Portainer first starts, you are given the option to DISABLE analytics. If you don't choose to disable it, we collect anonymous usage data as per our privacy policy. Please note, there is no personally identifiable information sent or stored at any time and we only use the data to help us improve Portainer.

Limitations

Portainer supports "Current - 2" Docker versions only. Prior versions may operate; however, these are not supported.

Licensing

Portainer is licensed under the zlib license. See LICENSE for reference.

Portainer also contains code from open source projects. See ATTRIBUTIONS.md for a list.

k2d's People

Contributors

deviantony, ncresswell, stevensbkang


k2d's Issues

Support "Load Balancer" service types.

In the Alpha, we did not implement support for LoadBalancer services. As a result, the only way to expose services using "low" ports is via HostPort, which is not common.

If we emulated a load balancer (just one, bound to the Docker host IP), then services could be exposed on the host on common ports (80/443, etc.).
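A minimal Go sketch of how such an emulation could translate the ports of a LoadBalancer Service into Docker host port bindings; the helper name and structure are assumptions, not the actual k2d implementation:

package k2d

import (
	"fmt"
	"strings"

	"github.com/docker/go-connections/nat"
	corev1 "k8s.io/api/core/v1"
)

// loadBalancerBindings is a hypothetical helper: it publishes each port of an
// emulated LoadBalancer Service directly on the Docker host IP, so common
// ports like 80/443 become reachable without HostPort.
func loadBalancerBindings(svc corev1.Service, hostIP string) (nat.PortMap, error) {
	bindings := nat.PortMap{}
	if svc.Spec.Type != corev1.ServiceTypeLoadBalancer {
		return bindings, nil
	}
	for _, p := range svc.Spec.Ports {
		// Container-side port, e.g. "80/tcp".
		containerPort, err := nat.NewPort(strings.ToLower(string(p.Protocol)), p.TargetPort.String())
		if err != nil {
			return nil, fmt.Errorf("invalid port %v: %w", p, err)
		}
		// Publish the Service port on the host IP, e.g. host 443 -> container 8443.
		bindings[containerPort] = append(bindings[containerPort], nat.PortBinding{
			HostIP:   hostIP,
			HostPort: fmt.Sprintf("%d", p.Port),
		})
	}
	return bindings, nil
}

The resulting nat.PortMap could then be passed as the PortBindings field of the backing container's HostConfig at creation time.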

Support Ingress services

In the Alpha, we did not implement support for ingress, as this was deemed unnecessary for IoT use cases. However, it would be possible to add this if we auto-deploy a reverse proxy like Traefik / Inlets and then configure the proxy using the ingress config provided by the manifest being deployed.
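As a rough illustration, assuming Traefik v2's Docker provider were the auto-deployed reverse proxy, an Ingress host rule could be translated into container labels along these lines (the helper is hypothetical):

package k2d

import "fmt"

// traefikLabelsForRule is a hypothetical helper: it derives Traefik v2 Docker
// provider labels from a single Ingress host rule so that the auto-deployed
// proxy routes matching requests to the backend container.
func traefikLabelsForRule(routerName, host, backendPort string) map[string]string {
	return map[string]string{
		"traefik.enable": "true",
		// Route requests whose Host header matches the Ingress rule.
		fmt.Sprintf("traefik.http.routers.%s.rule", routerName): fmt.Sprintf("Host(`%s`)", host),
		// Forward matched requests to the container port referenced by the Ingress backend.
		fmt.Sprintf("traefik.http.services.%s.loadbalancer.server.port", routerName): backendPort,
	}
}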

containerd Support

K2D is based on translating Kubernetes APIs to Docker APIs, which makes Docker a must. It should be possible, in the future, to not require Docker by embedding containerd as part of the k2d image. This would lower the memory footprint and increase security by removing unneeded Docker features.

Known issues for 1.0.0-beta

This is a thread to reference known issues for the 1.0.0-beta release.

  • Unable to list Secret and ConfigMap resources with k9s: #68 (comment)
  • An error is raised when creating a namespace with Lens/OpenLens: #68 (comment)
  • Creating workloads and resources inside a non-existent namespace silently fails: #68 (comment)
  • Applying a manifest that creates a namespace and multiple resources within that namespace will fail with "namespace not found" error: #68 (comment)
  • Creating services when k2d runs under Podman fails: #68 (comment)

Refactor Secrets and ConfigMaps to Docker Volumes

It would be good to migrate Secrets and ConfigMaps stored on the filesystem to Docker volumes. This would have the following advantages:

  • A bind mount for /var/lib/k2d will no longer be required. Instead, a Docker volume can be used to store the TLS and token data for k2d.
  • No need to maintain metadata files for annotations and labels. These can instead be pushed to Docker volume labels, which is cleaner.
  • Can be extended to support PV/PVC via Docker volume support.

Support for emulation of Persistent Volumes via PVCs

In the Alpha, we support container persistence via hostPath mapping (which translates to a Docker bind mount), which is rarely used in Kubernetes deployments. What is more common is PVs and PVCs coming from a storage class.

To provide standardised support for manifests that expect a PVC, we should emulate a default storage class (docker), make it the default, and then translate PVCs to Docker named volumes, akin to "docker volume create".
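A minimal Go sketch of the PVC-to-volume translation, assuming a v24-style Docker Go SDK; the volume naming convention and label keys are assumptions:

package k2d

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types/volume"
	"github.com/docker/docker/client"
	corev1 "k8s.io/api/core/v1"
)

// createVolumeForPVC is a hypothetical helper: it backs a PVC from the emulated
// "docker" storage class with a named Docker volume, the programmatic
// equivalent of `docker volume create`.
func createVolumeForPVC(ctx context.Context, cli *client.Client, pvc corev1.PersistentVolumeClaim) error {
	name := fmt.Sprintf("pvc-%s-%s", pvc.Namespace, pvc.Name) // assumed naming convention
	_, err := cli.VolumeCreate(ctx, volume.CreateOptions{
		Name:   name,
		Driver: "local",
		Labels: map[string]string{
			// Label keys modelled on the k2d.io convention; the exact keys are assumptions.
			"namespace.k2d.io/name": pvc.Namespace,
			"pvc.k2d.io/name":       pvc.Name,
		},
	})
	return err
}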

Support listening on K2D_ADVERTISE_ADDR

According to the segment of code below
https://github.com/portainer/k2d/blob/6b185b025f04d78ca8b874ee95d2cddb10b989d3/cmd/k2d.go#L189C1-L193C13

and this issue on the official Go tracker (golang/go#5197), the k2d container always listens on [::]:6443, so all interfaces are listened on when running in Docker network=host mode.
I would like to use k2d on a LAN for educational purposes, so I need to publish k2d only on the LAN interface when using Docker network=host mode. A workaround is available today: run in network=bridge mode and then publish the container port to the host network.
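A minimal sketch of the requested behaviour, binding the listener to K2D_ADVERTISE_ADDR instead of all interfaces (plain standard library, not the actual k2d code):

package main

import (
	"net"
	"net/http"
	"os"
)

// listenOnAdvertiseAddr binds the API server to the advertise address only,
// instead of the current ":6443" which listens on every interface.
func listenOnAdvertiseAddr(handler http.Handler) error {
	addr := os.Getenv("K2D_ADVERTISE_ADDR")
	if addr == "" {
		addr = "0.0.0.0" // fall back to all interfaces
	}
	// net.JoinHostPort handles both IPv4 and IPv6 literals correctly.
	ln, err := net.Listen("tcp", net.JoinHostPort(addr, "6443"))
	if err != nil {
		return err
	}
	return http.Serve(ln, handler) // TLS setup omitted for brevity
}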

Support for container Command and Args properties

Add support for the Command and Args properties of a container.

This will allow deploying pods that leverage these properties, such as:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: curl-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: curl
  template:
    metadata:
      labels:
        app: curl
    spec:
      containers:
      - name: curl-container
        image: curlimages/curl
        command: ["/bin/sh"]
        args: ["-c", "while true; do sleep 30; done;"]

Support for Namespaces

In the Alpha, we emulate just a single namespace, the "default" namespace, and all resources deployed in this namespace are attached to the Docker network k2d_net.

We could provide emulated namespaces by mapping namespace creation to the creation of a custom Docker bridge network, and then using the namespace name to map to the appropriate Docker network.
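A minimal Go sketch of that mapping, assuming a v24-style Docker Go SDK; the network naming convention and label key mirror the labels shown in the inspect output elsewhere on this page, but the helpers themselves are hypothetical:

package k2d

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// networkNameForNamespace maps a namespace to its dedicated bridge network,
// e.g. "staging" -> "k2d-staging" (assumed naming convention).
func networkNameForNamespace(namespace string) string {
	return fmt.Sprintf("k2d-%s", namespace)
}

// createNamespaceNetwork creates the backing bridge network when a namespace
// is created, so workloads in that namespace land on the right network.
func createNamespaceNetwork(ctx context.Context, cli *client.Client, namespace string) error {
	_, err := cli.NetworkCreate(ctx, networkNameForNamespace(namespace), types.NetworkCreate{
		Driver: "bridge",
		Labels: map[string]string{"namespace.k2d.io/name": namespace},
	})
	return err
}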

Support Private Registries

In the Alpha, images could only come from public/open registries. In reality, this would be uncommon in production environments, so we need to support the ability to do a "docker login" at deployment time based on the image pull secret provided in the application manifest. We should not hold this on disk; it should be used solely at deploy time.
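A minimal Go sketch of a deploy-time-only authenticated pull using credentials taken from the image pull secret, assuming a v24-style Docker Go SDK; the helper is hypothetical and nothing is written to disk:

package k2d

import (
	"context"
	"encoding/base64"
	"encoding/json"
	"io"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/registry"
	"github.com/docker/docker/client"
)

// pullWithCredentials authenticates a single image pull with the credentials
// decoded from the image pull secret, without persisting them anywhere.
func pullWithCredentials(ctx context.Context, cli *client.Client, image, server, user, password string) error {
	// Docker expects the auth config as base64url-encoded JSON (X-Registry-Auth).
	auth, err := json.Marshal(registry.AuthConfig{
		Username:      user,
		Password:      password,
		ServerAddress: server,
	})
	if err != nil {
		return err
	}
	reader, err := cli.ImagePull(ctx, image, types.ImagePullOptions{
		RegistryAuth: base64.URLEncoding.EncodeToString(auth),
	})
	if err != nil {
		return err
	}
	defer reader.Close()
	// Drain the stream so the pull completes; credentials never touch the disk.
	_, err = io.Copy(io.Discard, reader)
	return err
}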

Support for Jobs emulation

Summary

Kubernetes Jobs are vital for handling one-off tasks, periodic jobs, and other similar scenarios where a task needs to run to completion before being considered successful.

Implementing support for Kubernetes Jobs will involve parsing and translating these API calls into Docker instructions, similar to how other Kubernetes resources are handled by k2d.

The base implementation could be spawning containers that will immediately reach Running state, and based on the exit code (EC), can be marked as Completed (EC == 0) or Failed (EC != 0).

If you are open to considering this feature request, I would be happy to provide further input or assist in any way that I can. Please let me know if you have any questions or if there's anything I can do to support the development of this enhancement.

Goals

  1. Implementation of a translation layer from Kubernetes jobs.batch to Docker containers;
  2. Support for Create, Delete, Get, List and Patch operations;

Proposal

Example

The following is an example Job config (source: kubernetes.io: Running an example job). It computes π to 2000 places and prints it out.

# job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
        - name: pi
          image: perl:5.34.0
          command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4

Using kubectl create -f job.yaml, I expect the following actions to occur:

  1. Job Creation: When you execute kubectl create -f job.yaml, Kubernetes will receive this YAML configuration and create a Job resource based on the specifications defined in the YAML file.
  2. Job Scheduling: Kubernetes will attempt to schedule the Job for execution. This involves finding an appropriate node in the cluster where the Job's pod can run. The Job pod will consist of a single container defined in the containers section of the Job configuration.
  3. Pod Creation: Once a suitable node is found, Kubernetes will create a pod based on the template specified in the Job configuration. In this case, it's a single pod with a container named "pi," running the "perl:5.34.0" image.
  4. Container Initialization: The container within the pod is started. It will execute the command specified in the command field of the container definition. In this example, it runs a Perl script that calculates π to 2000 decimal places using the Perl bpi module and prints the result.
  5. Pod Execution: The container will run until it completes the assigned task. In this example, it computes π to 2000 decimal places and prints it. Once the command finishes, the container will terminate.
  6. Job Status Tracking: Kubernetes continuously monitors the status of the pod and the container within the pod. If the container exits with a status code of 0 (indicating success), Kubernetes marks the pod as "Succeeded." If the container exits with a non-zero status code (indicating failure), Kubernetes marks the pod as "Failed."
  7. Job Completion: The Job itself will be marked as "Completed" if all its pods complete successfully (i.e., with an exit code of 0) or "Failed" if any of its pods fail (i.e., exit with a non-zero code). The backoffLimit field in the Job configuration specifies the number of retries allowed for failed pods. In this example, it's set to 4, so Kubernetes will retry running the Job up to four times if it fails.
  8. Job Cleanup: Once the Job is completed (either successfully or after reaching the backoffLimit), Kubernetes cleans up any associated resources, such as pods, unless you specify otherwise.

Solution

From the k2d standpoint, the best way to treat the example above would be to start a new container for each of the containers in the template, similar to deployment.apps (ref. docs.k2d.io).

The inspect object of the Job container would look like this (truncated to the important fields only):

[
  {
    "Id": "DATA+OMITTED",
    "Path": "perl",
    "Args": ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"],
    "State": {
      "Status": "running",
      "Running": true,
      "Paused": false,
      "Restarting": false,
      "OOMKilled": false,
      "Dead": false,
      "Pid": 41222,
      "ExitCode": 0,
      "Error": "",
      "StartedAt": "2023-09-04T15:52:43.601028251Z",
      "FinishedAt": "0001-01-01T00:00:00Z"
    },
    "Config": {
      "Cmd": ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"],
      "Image": "perl:5.34.0",
      "Labels": {
        "namespace.k2d.io/name": "default",
        "networking.k2d.io/network-name": "k2d-default",
        "pod.k2d.io/last-applied-configuration": "DATA+OMITTED",
        "workload.k2d.io/last-applied-configuration": "DATA+OMITTED",
        "workload.k2d.io/name": "pi",
        "workload.k2d.io/type": "job"
      }
    }
  }
]

Mappings

Each pod in a Kubernetes Job can have various statuses that indicate the progress and outcome of the job's execution, and based on them we can mark the Job status. Here is a table of possible mappings:

Job Status | Container State | Container Exit Code | Remarks | Description
---------- | --------------- | ------------------- | ------- | -----------
Active | Running | - | - | Indicates that the Job is currently running or actively being processed. There might be one or more pods associated with the Job in the "Running" state.
Succeeded | Exited | each pod EC == 0 | - | Signifies that all the pods created by the Job have completed successfully, i.e. all pods associated with the Job have exited with a status code of 0. The Job itself is considered successfully completed.
Failed | Exited | at least one pod EC != 0 | - | Indicates that at least one pod created by the Job has failed. A pod is marked as "Failed" when its main container exits with a non-zero status code. The Job itself is considered to have failed if any of its pods fail.
Completed | Exited | - | - | Signifies that the Job has completed its execution, but does not indicate whether the execution was successful or not.
Pending | Created | - | Should this be supported? | Means that the Job has been created, but no pods have been scheduled and started yet. Kubernetes is still in the process of scheduling the Job for execution.
Unknown | ? | - | - | Used when there is an issue retrieving the Job's status. This might occur if there are problems with communication between the Kubernetes control plane and the cluster nodes.
Expired | Any state | - | Execution time exceeds spec.activeDeadlineSeconds | Used when a Job has reached its active deadline or has been terminated for exceeding its specified activeDeadlineSeconds field.
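A minimal Go sketch of the mapping in the table above, driven by the State.Status and ExitCode fields of the container inspect output; the names are hypothetical, and Expired would additionally require tracking spec.activeDeadlineSeconds:

package k2d

// JobStatus is a hypothetical type modelling the Job statuses from the table.
type JobStatus string

const (
	JobActive    JobStatus = "Active"
	JobSucceeded JobStatus = "Succeeded"
	JobFailed    JobStatus = "Failed"
	JobPending   JobStatus = "Pending"
	JobUnknown   JobStatus = "Unknown"
)

// jobStatusFromContainer derives a Job-like status from the container state
// string (State.Status) and exit code reported by `docker inspect`.
func jobStatusFromContainer(state string, exitCode int) JobStatus {
	switch state {
	case "running", "restarting":
		return JobActive
	case "created":
		return JobPending
	case "exited", "dead":
		if exitCode == 0 {
			return JobSucceeded
		}
		return JobFailed
	default:
		return JobUnknown
	}
}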

Use-case

Kubernetes Jobs are primarily designed for running one-off or batch tasks in a Kubernetes cluster. They are a useful resource for managing short-lived and parallelizable workloads. Here are some common use cases for Kubernetes Jobs:

  1. Data Processing and ETL (Extract, Transform, Load):

    • Running data processing jobs, such as log analysis, data transformation, or data migration tasks.
    • Executing ETL pipelines to clean, enrich, and transfer data between different systems.
  2. Cron Jobs:

    • Scheduling periodic tasks or batch jobs at specific intervals using the Kubernetes CronJob resource. CronJobs are built on top of Jobs and are suitable for tasks like periodic backups, data synchronization, or report generation.
  3. Database Migrations:

    • Performing database schema updates or migrations as part of application deployments.
    • Running scripts to apply schema changes, seed data, or perform maintenance tasks on databases.
  4. Periodic Cleanup:

    • Automating the cleanup of temporary files, log rotation, or purging outdated data.
    • Managing resources by periodically deleting or archiving unused objects in a cluster.
  5. Testing and CI/CD Pipelines:

    • Running tests, build jobs, or integration tests as part of a continuous integration and continuous deployment (CI/CD) pipeline.
    • Isolating test environments and executing tests in parallel.
  6. Batch Processing:

    • Handling parallel batch processing tasks where each unit of work is independent and can be processed concurrently.
    • Examples include processing invoices, generating reports, or transcoding media files.
  7. Backup and Restore:

    • Automating backup and restore procedures for applications, databases, or configuration files.
    • Ensuring data resilience and disaster recovery.
  8. Resource Scaling and Parallelization:

    • Scaling applications horizontally by launching multiple parallel instances to handle increased workloads.
    • Parallelizing tasks to complete them faster, such as image or video processing.
  9. Job Queue Workers:

    • Creating workers to process tasks from a job queue or message queue system like RabbitMQ or Apache Kafka.
    • Scaling the number of workers based on queue backlog.
  10. Scientific and Computational Workloads:

    • Running scientific simulations, calculations, or numerical analysis.
    • Utilizing Kubernetes clusters for high-performance computing (HPC) tasks.
  11. Ad-Hoc Administrative Tasks:

    • Executing one-time administrative tasks or system maintenance operations on Kubernetes nodes or applications.

Kubernetes Jobs provide a reliable and declarative way to manage these tasks within a Kubernetes cluster. They ensure that tasks are executed to completion, handle retries, and provide a framework for tracking job status, which is particularly useful for monitoring and managing batch workloads.

About multi-node support implementation options

First of all, very interesting project. I see a lot of potential in this 🚀

I understand that the main use case for k2d is single-node deployments at the edge, but as far as I can see, it would be fairly easy to add at least some level of multi-node support to it.

Possibilities that come to mind are:

Swarm mode support

So basically, instead of converting deployments directly to containers, k2d would convert them to Swarm services and Swarm handles the rest.

Pros:

  • Simple for existing Swarm users.

Cons:

  • Might be tricky to implement because of all corner cases.
  • Introduces Swarm bugs and limitations here too.

Utilizing Swarm overlay networks and DNS only

This actually already works if you deploy a stack like this to Swarm first:

version: "3.8"
services:
  pause:
    image: k8s.gcr.io/pause:3.9
    networks:
    - k2d_net
    deploy:
      mode: global
networks:
  k2d_net:
    name: k2d_net
    driver: overlay
    attachable: true

Then each Swarm node would still run a standalone k2d, but containers deployed with it can find each other by DNS name and communicate inside that overlay network.

Supporting this case would be very simple; basically you just need to add the following logic:

  1. If Swarm mode is enabled and the node is a Swarm manager -> create the k2d_net network with the overlay driver if it does not exist.
  2. If Swarm mode is enabled and node is Swarm worker -> deploy containers without checking if k2d_net network exists.

It would then be very simple to add namespace support too, because instead of k2d_net you would create an overlay network k2d_<namespace name>.

Pros:

  • Very simple to implement.
  • Does not introduce Swarm scheduler bugs.
  • Together with GitOps tools, it makes it possible to implement a low-cost multi-device/multi-cluster highly available solution for applications that need better availability than a single device can provide.

Cons:

  • Users need to use separate solution for multi-cluster configuration management.
  • Docker overlay network and internal DNS bugs/limitations are still there.

Bridge networks and custom service discovery

This is also already possible. Basically, the user needs to create a custom bridge network k2d_net without outgoing NAT and add static routes between the nodes, like this:

# Node1
docker network create \
  --driver bridge \
  --subnet 192.168.101.0/24 \
  --gateway 192.168.101.1 \
  -o com.docker.network.bridge.enable_ip_masquerade=false \
  k2d_net
sudo route add -net 192.168.102.0/24 gw <NODE 2 IP>

# Node2 
docker network create \
  --driver bridge \
  --subnet 192.168.102.0/24 \
  --gateway 192.168.102.1 \
  -o com.docker.network.bridge.enable_ip_masquerade=false \
  k2d_net
sudo route add -net 192.168.101.0/24 gw <NODE 1 IP>

Then deploy k2d normally to each node, and the containers deployed with it can communicate via IP addresses.
For actual service discovery, however, something like https://github.com/kevinjqiu/coredns-dockerdiscovery is needed on top of this.

Pros:

  • Works already with k2d.
  • Very stable networking because no overlay networks are used (very similar to K8s + Calico without overlay).

Cons:

  • Users need to use separate solution for multi-cluster configuration management.
  • Sets a lot of requirements for infrastructure configuration (which might be a good or bad thing depending on the use case).
  • No complete service discovery solution is available yet (or it at least needs more testing to figure out how to do it).

EDIT: It looks like Tailscale, together with its split DNS, can solve both connectivity between nodes (even over the internet) and service discovery between nodes.

Introduce reset mode

Introduce the ability to execute a reset routine when starting k2d that will remove all resources created by k2d as well as all resources created via k2d.

This is a destructive operation that can be used to reset a system and redeploy a fresh k2d environment.
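A minimal Go sketch of one part of such a routine, removing every container that carries a k2d workload label, assuming a v24-style Docker Go SDK; a full reset would also cover networks, volumes and configuration:

package k2d

import (
	"context"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

// resetWorkloads force-removes all containers carrying the k2d workload label.
// The label key is taken from the container labels shown earlier on this page
// (workload.k2d.io/name); filtering on the key alone matches any value.
func resetWorkloads(ctx context.Context, cli *client.Client) error {
	containers, err := cli.ContainerList(ctx, types.ContainerListOptions{
		All:     true,
		Filters: filters.NewArgs(filters.Arg("label", "workload.k2d.io/name")),
	})
	if err != nil {
		return err
	}
	for _, c := range containers {
		if err := cli.ContainerRemove(ctx, c.ID, types.ContainerRemoveOptions{Force: true}); err != nil {
			return err
		}
	}
	return nil
}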

Secure the k2d API endpoint kubernetes.default.svc

In the Alpha, the Kubernetes API endpoint is not secured for communications with the emulated cluster. This means that any pod of any deployment has complete access to the emulated API, akin to "anonymous" access to the Kube API. This is not secure.

We already generate a kube access token, which is used for remote connections. We should also enforce its use when accessing the k2d endpoint from within the Docker environment.
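A minimal sketch of how that enforcement could look as HTTP middleware in front of the emulated API, using the already-generated token (plain standard library, not the actual k2d code):

package k2d

import (
	"crypto/subtle"
	"net/http"
	"strings"
)

// requireToken rejects any request that does not present the k2d access token
// as a bearer token, including requests from inside the Docker environment.
func requireToken(token string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		got := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
		// Constant-time comparison to avoid leaking the token through timing.
		if subtle.ConstantTimeCompare([]byte(got), []byte(token)) != 1 {
			http.Error(w, "Unauthorized", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}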

Add CONTRIBUTING.md file

Right now there are no contribution guidelines. Adding a CONTRIBUTING.md will give new contributors a set of rules to follow.

Support the creation of empty ConfigMaps and Secrets

Allow the creation of empty ConfigMaps and Secrets:

kubectl create secret generic empty-secret
kubectl create configmap empty-cm

This will allow us to use empty system ConfigMaps to store metadata only (for PVCs, for example), which will be a bit more optimized.

Note that this is currently not supported by the disk backend store; the volume backend store already supports it.

Allow bi-directional management of containers

Before k2d version 1.0.0-beta, k2d used to allow bi-directional management of some resources:

As an additional benefit, as the translations are bi-directional, any Docker management commands executed outside of K2D on the docker host directly, are also translated and appear as Kubernetes resources when later inspected via Kubernetes tooling through the translator.

This feature is now partially supported. The user needs to know how to name Docker resources and which labels to associate with them in order to manage these resources using a Kubernetes client.

We want to re-introduce bi-directional management of containers with support for the following operations:

  • List pods - kubectl get pods -A / kubectl get pods
    • Listing pods across all namespaces or the default namespace will include the containers created via docker run... as well as pods created via k2d
  • Inspect a pod - kubectl describe pods/container-name / kubectl get pods/container-name
    • Allows you to inspect a pod associated with a container created via docker run...
  • Get the logs of a pod - kubectl logs pods/container-name
    • Allows you to get the logs of a pod associated with a container created via docker run...
  • Delete a pod - kubectl delete pods/container-name
    • Allows you to delete a pod associated with a container created via docker run...

Bi-directional service management is discussed in #81
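As an illustration of the listing case, a container created with docker run could be surfaced as a minimal Pod roughly like this; the helper is hypothetical and assumes the Docker container summary type (types.Container) and the k8s.io/api types:

package k2d

import (
	"strings"

	"github.com/docker/docker/api/types"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podFromContainer exposes a plain Docker container as a minimal Pod in the
// default namespace so it shows up in `kubectl get pods`.
func podFromContainer(c types.Container) corev1.Pod {
	name := c.ID[:12] // fallback to the short container ID
	if len(c.Names) > 0 {
		name = strings.TrimPrefix(c.Names[0], "/")
	}
	phase := corev1.PodPending
	if c.State == "running" {
		phase = corev1.PodRunning
	}
	return corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: "default"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: name, Image: c.Image}},
		},
		Status: corev1.PodStatus{Phase: phase},
	}
}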

Allow bi-directional management of services

Follow-up on #79

Assuming bi-directional management is interesting for our users, we should potentially add bi-directional service management.

This would allow support for the following operations:

  • List services - kubectl get svc -A / kubectl get svc
    • Listing services across all namespaces or the default namespace will include a service for each container created via docker run... that exposes at least one port. These will default to hostPort type services.
  • Inspect a service - kubectl get svc/container-name / kubectl describe svc/container-name
    • Allows you to inspect a service associated with a container created via docker run...
  • Delete a service - kubectl delete svc/container-name
    • Allows a user to remove any port exposed on a container created via docker run...

Known issues for 1.0.0-beta-RC

This is a thread to reference known issues with the current development build for the 1.0.0-beta release.

IPv6 support

When K2D_ADVERTISE_ADDR is set to an IPv6 address, the server crashes:

unable to get advertise IP address: invalid IP address: 2001:db8::1

When k2d_net is created with IPv6 enabled, the assigned IPv6 address is not shown in the Service.

Also, the Service's ipFamilies field is ignored, leading to a confusing case where IPv6-only is "requested" but an IPv4 address is shown.

$ docker inspect k2d_net
[
    {
        "Name": "k2d_net",
        // ...
        "EnableIPv6": true,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.17.2.0/24",
                    "Gateway": "172.17.2.1"
                },
                {
                    "Subnet": "fd7d:a40b:9c6::/64",
                    "Gateway": "fd7d:a40b:9c6::1/64"
                }
            ]
        },
        // ...
    }
]
$ kubectl get svc my-nginx-svc -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: ...
  creationTimestamp: "2023-08-10T13:12:31Z"
  labels:
    app: nginx
  name: my-nginx-svc
  namespace: default
spec:
  clusterIPs:
  - 172.17.2.2
  ipFamilies:
  - IPv6
  ipFamilyPolicy: PreferDualStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
status:
  loadBalancer: {}

Simply getting the IPAddress and GlobalIPv6Address (when not empty), based on the content of ipFamilies, should do the trick.

service.Spec.ClusterIPs = []string{container.NetworkSettings.Networks[k2dtypes.K2DNetworkName].IPAddress}
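Expanding that line into a hedged sketch that honours ipFamilies; the helper is hypothetical and not the actual fix:

package k2d

import (
	"github.com/docker/docker/api/types/network"
	corev1 "k8s.io/api/core/v1"
)

// clusterIPsFromEndpoint picks the ClusterIPs from the container's endpoint
// settings according to the Service's ipFamilies, falling back to the IPv4
// address if nothing matches.
func clusterIPsFromEndpoint(ep *network.EndpointSettings, families []corev1.IPFamily) []string {
	ips := []string{}
	for _, family := range families {
		switch {
		case family == corev1.IPv4Protocol && ep.IPAddress != "":
			ips = append(ips, ep.IPAddress)
		case family == corev1.IPv6Protocol && ep.GlobalIPv6Address != "":
			ips = append(ips, ep.GlobalIPv6Address)
		}
	}
	if len(ips) == 0 && ep.IPAddress != "" {
		ips = append(ips, ep.IPAddress)
	}
	return ips
}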
