openebs / dynamic-localpv-provisioner

Dynamically deploy Stateful Persistent Node-Local Volumes & Filesystems for Kubernetes that are provisioned from simple Local-Hostpath storage.

Home Page: https://openebs.io

License: Apache License 2.0

Languages: Go 95.21%, Shell 2.04%, Makefile 1.22%, Dockerfile 0.82%, Jinja 0.39%, Mustache 0.32%
Topics: kubernetes, kubernetes-storage, openebs, dynamic-provisioning, distributed-systems, hacktoberfest

dynamic-localpv-provisioner's Introduction

Welcome to OpenEBS

OpenEBS Welcome Banner

OpenEBS is a modern Block-Mode storage platform, a Hyper-Converged software Storage System, and a virtual NVMe-oF SAN (vSAN) Fabric that natively integrates into the core of Kubernetes.

Try our Slack channel
If you have questions about using OpenEBS, please use the CNCF Kubernetes OpenEBS Slack channel; it is open for anyone to ask a question.

Important

OpenEBS provides...

  • Stateful, persistent, dynamically provisioned storage volumes for Kubernetes
  • High Performance NVMe-oF & NVMe/RDMA storage transport optimized for All-Flash Solid State storage media
  • Block devices, LVM, ZFS, ext2/ext3/ext4, XFS, BTRFS...and more
  • 100% Cloud-Native K8s declarative storage platform
  • A cluster-wide vSAN block-mode fabric that provides containers/Pods with HA resilient access to storage across the entire cluster.
  • Node-local K8s PVs and n-way Replicated K8s PVs
  • Deployable On-premise & in-cloud: (AWS EC2/EKS, Google GCP/GKE, Azure VM/AKS, Oracle OCI, IBM/RedHat OpenShift, Civo Cloud, Hetzner Cloud... and more)
  • Enterprise Grade data management capabilities such as snapshots, clones, replicated volumes, DiskGroups, Volume Groups, Aggregates, RAID

OpenEBS has 2 editions:

1. STANDARD ✔️ > Ready Player 1
2. LEGACY ⚠️ Game Over

Within STANDARD, you have a choice of 2 types of K8s storage services: Replicated PV and Local PV.


Type | Storage Engine | Type of data services | Status | In OSS ver
Replicated PV | Mayastor | Replicated data volumes (in a cluster-wide vSAN block-mode fabric); for High Availability deployments, distributing & replicating volumes across the cluster | Stable, deployable in PROD (Releases) | v4.0.1
Local PV Hostpath | Local PV HostPath | Non-replicated node-local data volumes on a local node hostpath (e.g. /mnt/fs1) | Stable, deployable in PROD (Releases) | v4.0.1
Local PV ZFS | Local PV ZFS | Non-replicated node-local data volumes on local ZFS storage deployments | Stable, deployable in PROD (Releases) | v4.0.1
Local PV LVM2 | Local PV LVM | Non-replicated node-local data volumes on local LVM2 storage deployments | Stable, deployable in PROD (Releases) | v4.0.1
Local PV Rawfile | Local PV Rawfile (release: v0.70) | Non-replicated node-local data volumes on a loop-mounted raw device-file filesystem | Stable, deployable in PROD; undergoing evaluation & integration | v4.0.1

STANDARD is optimized for NVMe and SSD Flash storage media, and integrates ultra modern cutting-edge high performance storage technologies at its core...

☑️   It uses the high-performance SPDK storage stack - (SPDK is an open-source NVMe project initiated by Intel)
☑️   The modern io_uring Linux kernel async polling-mode I/O interface - (the fastest kernel I/O mode possible)
☑️   Native abilities for RDMA and Zero-Copy I/O
☑️   NVMe-oF TCP block storage hyper-converged data fabric
☑️   Block-layer volume replication
☑️   Logical volumes and Diskpool-based data management
☑️   A native high-performance Blobstore
☑️   Native block-layer thin provisioning
☑️   Native block-layer snapshots and clones

Get in touch with our team.

Vishnu Attur :octocat: @avishnu Admin, Maintainer
Abhinandan Purkait 😎 @Abhinandan-Purkait Maintainer
Niladri Halder 🚀 @niladrih Maintainer
Ed Robinson 🐶 @edrob999   CNCF Primary Liaison
Special Maintainer
Tiago Castro @tiagolobocastro   Admin, Maintainer
David Brace @orville-wright     Admin, Maintainer

Activity dashboard

(activity dashboard graph)

Current status

Release Support Twitter/X Contrib License status CI Status
Releases Slack channel #openebs Twitter PRs Welcome FOSSA Status CII Best Practices

Read this in 🇩🇪 🇷🇺 🇹🇷 🇺🇦 🇨🇳 🇫🇷 🇧🇷 🇪🇸 🇵🇱 🇰🇷 other languages.

Deployment

  • In-cloud: (AWS EC2/EKS, Google GCP/GKE, Azure VM/AKS, Oracle OCI, IBM/RedHat OpenShift, Civo Cloud, Hetzner Cloud... and more)
  • On-premise: bare metal, or virtualized hypervisor infra using VMware ESXi, KVM/QEMU (K8s KubeVirt), Proxmox
  • Deployed as native K8s elements: Deployments, Containers, Services, StatefulSets, CRDs, Sidecars, Jobs and Binaries, all on K8s worker nodes
  • Runs 100% in K8s userspace, so it's highly portable and runs across many OSes & platforms

Roadmap (as of June 2024)


(roadmap graphic)

QUICKSTART: Installation

NOTE: Depending on which of the 5 storage engines you choose to deploy, there are prerequisites that must be met. See the detailed quickstart docs...


  1. Set up the Helm repository.
# helm repo add openebs https://openebs.github.io/openebs
# helm repo update

2a. Install the Full OpenEBS helm chart with default values.

  • This installs ALL OpenEBS storage engines in the openebs namespace, with the chart name openebs:
    Local PV Hostpath, Local PV LVM, Local PV ZFS, Replicated Mayastor
# helm install openebs --namespace openebs openebs/openebs --create-namespace

2b. To install OpenEBS without the Replicated Mayastor storage engine, use the following command:

# helm install openebs --namespace openebs openebs/openebs --set engines.replicated.mayastor.enabled=false --create-namespace
  3. View the chart:
# helm ls -n openebs

Output:
NAME     NAMESPACE   REVISION  UPDATED                                   STATUS     CHART           APP VERSION
openebs  openebs     1         2024-03-25 09:13:00.903321318 +0000 UTC   deployed   openebs-4.0.1   4.0.1
  4. Verify the installation:
    • List the pods in the namespace
    • Verify the StorageClasses
# kubectl get pods -n openebs

Example Output:
NAME                                              READY   STATUS    RESTARTS   AGE
openebs-agent-core-674f784df5-7szbm               2/2     Running   0          11m
openebs-agent-ha-node-nnkmv                       1/1     Running   0          11m
openebs-agent-ha-node-pvcrr                       1/1     Running   0          11m
openebs-agent-ha-node-rqkkk                       1/1     Running   0          11m
openebs-api-rest-79556897c8-b824j                 1/1     Running   0          11m
openebs-csi-controller-b5c47d49-5t5zd             6/6     Running   0          11m
openebs-csi-node-flq49                            2/2     Running   0          11m
openebs-csi-node-k8d7h                            2/2     Running   0          11m
openebs-csi-node-v7jfh                            2/2     Running   0          11m
openebs-etcd-0                                    1/1     Running   0          11m
openebs-etcd-1                                    1/1     Running   0          11m
openebs-etcd-2                                    1/1     Running   0          11m
openebs-localpv-provisioner-6ddf7c7978-jsstg      1/1     Running   0          11m
openebs-lvm-localpv-controller-7b6d6b4665-wfw64   5/5     Running   0          11m
openebs-lvm-localpv-node-62lnq                    2/2     Running   0          11m
openebs-lvm-localpv-node-lhndx                    2/2     Running   0          11m
openebs-lvm-localpv-node-tlcqv                    2/2     Running   0          11m
openebs-zfs-localpv-controller-f78f7467c-k7ldb    5/5     Running   0          11m
...

# kubectl get sc

Example Output (illustrative; the exact StorageClasses depend on which engines are enabled):
NAME                       PROVISIONER               RECLAIMPOLICY   VOLUMEBINDINGMODE
openebs-hostpath           openebs.io/local          Delete          WaitForFirstConsumer
openebs-single-replica     io.openebs.csi-mayastor   Delete          Immediate
...
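As a quick smoke test, you can create a PVC against the openebs-hostpath StorageClass and consume it from a Pod. A minimal sketch (names and sizes are illustrative; the PVC stays Pending until the Pod is scheduled, since the class uses WaitForFirstConsumer binding):

# kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-hostpath-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: hello-local-hostpath-pod
spec:
  containers:
  - name: hello
    image: busybox
    command: ["sh", "-c", "echo 'hello from openebs local pv' > /mnt/store/greet.txt && sleep 3600"]
    volumeMounts:
    - name: local-storage
      mountPath: /mnt/store
  volumes:
  - name: local-storage
    persistentVolumeClaim:
      claimName: local-hostpath-pvc
EOF
# kubectl get pvc local-hostpath-pvc   # should go Bound once the Pod is scheduled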

For more details, please refer to OpenEBS Documentation.

OpenEBS is a CNCF project and DataCore, Inc is a CNCF Silver member. DataCore supports CNCF extensively and has funded OpenEBS's participation in every KubeCon event since 2020. Our project team is managed under the CNCF Storage Landscape, and we contribute to the CNCF CSI and TAG Storage project initiatives. We proudly support CNCF Cloud Native Community Groups initiatives.

For project updates, subscribe to OpenEBS Announcements.
For interacting with other OpenEBS users, subscribe to OpenEBS Users.


Container Storage Interface group Storage Technical Advisory Group     Cloud Native Community Groups

Commercial Offerings

Commercially supported deployments of OpenEBS are available via key companies. (Some provide services, funding, technology, infra, and resources to the OpenEBS project.)

(OpenEBS OSS is a CNCF project. CNCF does not endorse any specific company.)

dynamic-localpv-provisioner's People

Contributors

abhilashshetty04, abhinandan-purkait, agarwalrounak, akhilerm, allenhaozi, almas33, asquare14, avishnu, cmontemuino, csschwe, dbackeus, fossabot, hickersonj, kmova, liuminjian, mingzhang-ybps, moteesh, niladrih, nsathyaseelan, pensu, prateekpandey14, rahulchheda, rolandma1986, shovanmaity, shubham14bajpai, vaniisgh, vishnuitta, w3aman, wangzihao3


dynamic-localpv-provisioner's Issues

openebs-device deleted contents of directory

Describe the bug:
I configured openebs-device to use /dev/sda. This worked. But the second tenant (minio) was not created on /dev/sda, but on /data.
When deleting the tenant, the complete /data directory was deleted and not just the tenant.
Is this expected?
(I switched to openebs-lvmpv)

Expected behaviour:

Steps to reproduce the bug:

The output of the following commands will help us better understand what's going on:

  • kubectl get pods -n <openebs_namespace> --show-labels
  • kubectl logs <upgrade_job_pod> -n <openebs_namespace>

Anything else we need to know?:

Environment details:

  • OpenEBS version (use kubectl get po -n openebs --show-labels):
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • OS (e.g: cat /etc/os-release):
  • kernel (e.g: uname -a):
  • others:

Update compatibility matrix

Describe the problem/challenge you have
We're upgrading to a Kubernetes version past the ones listed in the compatibility matrix in the README.

Describe the solution you'd like
An update to the compatibility matrix.

Environment:

  • OpenEBS version (use kubectl get po -n openebs --show-labels): 3.5
  • Kubernetes version (use kubectl version): 1.28
  • Cloud provider or hardware configuration: -
  • OS (e.g: cat /etc/os-release): -
  • kernel (e.g: uname -a): -
  • others: -

Hostpath volume with Block volumeMode does not return desired error

This line in the cmd/provisioner-localpv/app/provisioner.go file will never be triggered:

if *opts.PVC.Spec.VolumeMode == v1.PersistentVolumeBlock && stgType != "device" {

Confirmed this by testing. A PVC describe for a hostpath volume with a Block volume request records this event message:

Warning  VolumeMismatch         12s (x2 over 26s)  persistentvolume-controller                                                                                 Cannot bind PersistentVolume "pvc-0014ca86-f7f2-4abe-8d1c-b09728c6383a" to requested PersistentVolumeClaim due to incompatible volumeMode.

No errors or warnings appear in the localpv-provisioner container logs.
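A minimal PVC sketch that reproduces this (name and size are illustrative):

# kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-hostpath-pvc
spec:
  storageClassName: openebs-hostpath
  volumeMode: Block
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
# kubectl describe pvc block-hostpath-pvc   # shows the VolumeMismatch event instead of a provisioner error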

Endpoints object `openebs.io-local` is constantly updated for no apparent reason

Describe the bug:

After installation of the provisioner, a Kubernetes Endpoints object called openebs.io-local is created.
Then, that object is constantly updated (every second or so) for no apparent reason, since its content remains the same.

The updates stop if the openebs-localpv-provisioner deployment is scaled to 0 instances.

This seems totally unnecessary and may trigger controllers/operators that monitor Endpoints objects for nothing.

Expected behaviour:

The Endpoints object openebs.io-local should only be updated if its content must be changed.

Steps to reproduce the bug:

Install the provisioner and look at the openebs.io-local Endpoints object resourceVersion.
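The logs quoted later on this page suggest openebs.io-local is used as the provisioner's leader-election lease ("successfully acquired lease openebs/openebs.io-local"), so the periodic updates are plausibly lease renewals touching the object's annotations. One way to observe the churn (a sketch):

# kubectl get endpoints openebs.io-local -n openebs --watch -o custom-columns=NAME:.metadata.name,RESOURCE_VERSION:.metadata.resourceVersion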

Environment details:

  • OpenEBS version: 3.1.0 (also tested with 3.4.0)
  • Kubernetes version: 1.21.10
  • Cloud provider or hardware configuration: baremetal
  • OS: Ubuntu 20.04.3
  • kernel: 5.4.0-131-generic

[bug] failed to delete large pv, thus making node unschedulable.

Describe the bug:
You purposefully create a large PVC (80% of the node's ephemeral storage) to test the behavior of the localpv-provisioner. The provisioner successfully creates it, but fails to delete the PV after the PVC is removed, leaving the node with diskPressure=true and preventing further scheduling of pods on the node. Manually deleting the PV in Kubernetes leaves the data on disk, and the issue persists.
On Talos 1.7.0 using Openebs-localpv-provisioner (Helm) and the default Talos deployment instructions in the docs.

Expected behaviour:
The provisioner successfully deletes the PV after the PVC is deleted (and/or successfully deletes the data after the PV is manually deleted); diskPressure=true is removed and the node resumes operations.

Steps to reproduce the bug:

  • Create Talos cluster
  • Apply patch to mount the /var/openebs/local in the kubelet as per the docs
  • create a workload with a PVC that will bring your ephemeral storage to 85+% (see the sketch after this list)
  • delete the workload/pvc
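A sketch of such a workload (names and sizes are illustrative; pick a request close to your node's ephemeral capacity, e.g. ~18Gi on the 22 GB disk shown below):

# kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: big-local-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 18Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: big-local-writer
spec:
  containers:
  - name: writer
    image: busybox
    # Fill most of the volume so the node approaches diskPressure.
    command: ["sh", "-c", "dd if=/dev/zero of=/data/fill bs=1M count=17000 && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: big-local-pvc
EOF
# kubectl delete pod big-local-writer && kubectl delete pvc big-local-pvc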

The output of the following commands will help us better understand what's going on:
These are the logs of the localpv-provisioner container after the deletion. They loop over the following:

...
I0429 21:12:46.177550       1 controller.go:1509] delete "pvc-600bcff4-c26f-43c4-bebb-6b989110c715": started
2024-04-29T21:12:46.177Z	INFO	app/provisioner_hostpath.go:270	Get the Node Object with label {map[kubernetes.io/hostname:v2]}
I0429 21:12:46.181181       1 provisioner_hostpath.go:282] Deleting volume pvc-600bcff4-c26f-43c4-bebb-6b989110c715 at v2:/var/openebs/local/pvc-600bcff4-c26f-43c4-bebb-6b989110c715
2024-04-29T21:14:46.664Z	ERROR	app/provisioner.go:188		{"eventcode": "local.pv.delete.failure", "msg": "Failed to delete Local PV", "rname": "pvc-600bcff4-c26f-43c4-bebb-6b989110c715", "reason": "failed to delete host path", "storagetype": "local-hostpath"}
github.com/openebs/dynamic-localpv-provisioner/cmd/provisioner-localpv/app.(*Provisioner).Delete
	/go/src/github.com/openebs/dynamic-localpv-provisioner/cmd/provisioner-localpv/app/provisioner.go:188
sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller.(*ProvisionController).deleteVolumeOperation
	/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:1511
sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller.(*ProvisionController).syncVolume
	/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:1115
sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller.(*ProvisionController).syncVolumeHandler
	/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:1045
sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller.(*ProvisionController).processNextVolumeWorkItem.func1
	/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:987
sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller.(*ProvisionController).processNextVolumeWorkItem
	/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:1004
sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller.(*ProvisionController).runVolumeWorker
	/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:905
sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller.(*ProvisionController).Run.func1.3
	/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:857
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:157
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:158
k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:135
k8s.io/apimachinery/pkg/util/wait.Until
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:92
E0429 21:14:46.664896       1 controller.go:1519] delete "pvc-600bcff4-c26f-43c4-bebb-6b989110c715": volume deletion failed: failed to delete volume pvc-600bcff4-c26f-43c4-bebb-6b989110c715: failed to delete volume pvc-600bcff4-c26f-43c4-bebb-6b989110c715: clean up volume pvc-600bcff4-c26f-43c4-bebb-6b989110c715 failed: create process timeout after 120 seconds
E0429 21:14:46.664948       1 controller.go:995] Giving up syncing volume "pvc-600bcff4-c26f-43c4-bebb-6b989110c715" because failures 15 >= threshold 15
E0429 21:14:46.664972       1 controller.go:1007] error syncing volume "pvc-600bcff4-c26f-43c4-bebb-6b989110c715": failed to delete volume pvc-600bcff4-c26f-43c4-bebb-6b989110c715: failed to delete volume pvc-600bcff4-c26f-43c4-bebb-6b989110c715: clean up volume pvc-600bcff4-c26f-43c4-bebb-6b989110c715 failed: create process timeout after 120 seconds
I0429 21:14:46.665321       1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolume", Namespace:"", Name:"pvc-600bcff4-c26f-43c4-bebb-6b989110c715", UID:"4f6f7084-bdb3-4ea5-89cc-ed217aa78da1", APIVersion:"v1", ResourceVersion:"931637", FieldPath:""}): type: 'Warning' reason: 'VolumeFailedDelete' failed to delete volume pvc-600bcff4-c26f-43c4-bebb-6b989110c715: failed to delete volume pvc-600bcff4-c26f-43c4-bebb-6b989110c715: clean up volume pvc-600bcff4-c26f-43c4-bebb-6b989110c715 failed: create process timeout after 120 seconds
I0429 21:27:46.178478       1 controller.go:1509] delete "pvc-600bcff4-c26f-43c4-bebb-6b989110c715": started
...
  • kubectl get pods -n <openebs_namespace> --show-labels
NAME                                           READY   STATUS    RESTARTS   AGE   LABELS
openebs-localpv-provisioner-6b8bff68bd-p9d9f   1/1     Running   0          17h   app=localpv-provisioner,chart=localpv-provisioner-4.0.0,component=localpv-provisioner,heritage=Helm,name=openebs-localpv-provisioner,openebs.io/component-name=openebs-localpv-provisioner,openebs.io/version=4.0.0,pod-template-hash=6b8bff68bd,release=openebs
  • kubectl logs <upgrade_job_pod> -n <openebs_namespace>
k get job -n openebs                                                                         
No resources found in openebs namespace.

Anything else we need to know?:
NA

Environment details:

  • OpenEBS version (use kubectl get po -n openebs --show-labels): see above
  • Kubernetes version (use kubectl version): Server Version: v1.29.3
  • Cloud provider or hardware configuration: Talos 1.7.0. on Proxmox nodes with "ssd emulation":
talosctl -n v1 disks                                                                    
NODE       DEV        MODEL           SERIAL   TYPE   UUID   WWID   MODALIAS      NAME   SIZE     BUS_PATH                                                                   SUBSYSTEM          READ_ONLY   SYSTEM_DISK
10.2.0.8   /dev/sda   QEMU HARDDISK   -        SSD    -      -      scsi:t-0x00   -      22 GB    /pci0000:00/0000:00:05.0/0000:01:01.0/virtio1/host2/target2:0:0/2:0:0:0/   /sys/class/block               *
  • OS (e.g: cat /etc/os-release): Talos 1.7.0
  • kernel (e.g: uname -a): Linux v1 6.6.28-talos #1 SMP Thu Apr 18 16:21:02 UTC 2024 x86_64 Linux
  • others:

Add ability to check whether PVC storage requirements are fulfilled by the node

Describe the problem/challenge you have

It seems that a PVC can claim and be provisioned more storage than the disk size of a particular node.
For example, the following PVC requesting 5G should be denied if the available capacity of the nodes in the cluster is 1G:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-hostpath-pvc
  namespace: openebs
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5G

As it currently stands, the above PVC will be provisioned and bound to a PV.

Describe the solution you'd like

We would like the storage policy to be limited by the free storage available on the node.
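A sketch of the kind of node-side check this implies (the basePath below is the default hostpath directory; the provisioner itself would need an equivalent check before provisioning):

# REQUESTED_KB=$(( 5 * 1000 * 1000 ))   # the PVC's 5G request, in KB
# AVAIL_KB=$(df --output=avail -k /var/openebs/local | tail -1)
# [ "$AVAIL_KB" -ge "$REQUESTED_KB" ] || echo "deny: node has only ${AVAIL_KB} KB free"

(df --output=avail requires GNU coreutils.)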

Environment:

  • OpenEBS version:
$ kubectl get po -n openebs --show-labels
NAME                                           READY   STATUS    RESTARTS   AGE    LABELS
openebs-localpv-provisioner-b559bc6c4-k87hl    1/1     Running   0          4h4m   name=openebs-localpv-provisioner,openebs.io/component-name=openebs-localpv-provisioner,openebs.io/version=3.0.0,pod-template-hash=b559bc6c4
openebs-ndm-cluster-exporter-b48f4c59d-f24d9   1/1     Running   0          4h4m   name=openebs-ndm-cluster-exporter,openebs.io/component-name=ndm-cluster-exporter,openebs.io/version=3.0.0,pod-template-hash=b48f4c59d
openebs-ndm-node-exporter-22mv4                1/1     Running   1          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-2g78z                1/1     Running   1          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-4h526                1/1     Running   1          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-9p2xv                1/1     Running   3          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-c999w                1/1     Running   1          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-d5xsq                1/1     Running   3          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-fnl8m                1/1     Running   1          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-h2q88                1/1     Running   4          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-kcrs7                1/1     Running   1          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-vclcz                1/1     Running   2          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-x6ww5                1/1     Running   1          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-xczq2                1/1     Running   1          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-operator-77d54c5c69-pt85n          1/1     Running   0          89m    name=openebs-ndm-operator,openebs.io/component-name=ndm-operator,openebs.io/version=3.0.0,pod-template-hash=77d54c5c69
  • Kubernetes version :
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.6-4+2076e5554ee7d2", GitCommit:"2076e5554ee7d2e0ac57857d161db7fd6fad915a", GitTreeState:"clean", BuildDate:"2021-01-05T22:46:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
  • OS (e.g: cat /etc/os-release):
~# cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.5 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.5 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
  • kernel:
    linux

openebs-localpv-provisioner pull linux-utils no basic auth credentials

The init pod does not support Docker registry secrets when pulling the OPENEBS_IO_HELPER_IMAGE image:

func (p *Provisioner) launchPod(config podConfig) (*corev1.Pod, error) {
	// The helper pod needs to be launched in privileged mode. This is because on CoreOS
	// nodes, pods without privileged access cannot write to the host directory.
	// Helper pods need to create and delete directories on the host.
	privileged := true

	// Note: the builder chain below never sets ImagePullSecrets on the helper pod,
	// which is why pulling OPENEBS_IO_HELPER_IMAGE from a private registry fails
	// with "no basic auth credentials".
	helperPod, err := pod.NewBuilder().
		WithName(config.podName+"-"+config.pOpts.name).
		WithRestartPolicy(corev1.RestartPolicyNever).
		//WithNodeSelectorHostnameNew(config.pOpts.nodeHostname).
		WithNodeAffinityNew(config.pOpts.nodeAffinityLabelKey, config.pOpts.nodeAffinityLabelValue).
		WithServiceAccountName(config.pOpts.serviceAccountName).
		WithTolerationsForTaints(config.taints...).
		WithContainerBuilder(
			container.NewBuilder().
				WithName("local-path-" + config.podName).
				WithImage(p.helperImage).
				WithCommandNew(append(config.pOpts.cmdsForPath, filepath.Join("/data/", config.volumeDir))).
				WithVolumeMountsNew([]corev1.VolumeMount{
					{
						Name:      "data",
						ReadOnly:  false,
						MountPath: "/data/",
					},
				}).
				WithPrivilegedSecurityContext(&privileged),
		).
		WithVolumeBuilder(
			volume.NewBuilder().
				WithName("data").
				WithHostDirectory(config.parentDir),
		).
		Build()
	if err != nil {
		// The original code ignored this error and passed a possibly-nil pod to Create.
		return nil, err
	}

	// Launch the helper pod.
	hPod, err := p.kubeClient.CoreV1().Pods(p.namespace).Create(helperPod)
	return hPod, err
}

Allocation will fail if the volume is deleted too quickly

Describe the bug:
I can't allocate a volume if one volume was just allocated and then deleted too quickly.

Expected behaviour:
Allocation succeeds without error.

Steps to reproduce the bug:

  1. Deploy a Kubernetes cluster with the openebs-hostpath and openebs-kernel-nfs StorageClasses installed
  2. Create a volume using the openebs-kernel-nfs StorageClass and delete it quickly (a sketch follows below)
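A minimal sketch of the race (StorageClass name and size are illustrative):

# kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: quick-delete-pvc
spec:
  storageClassName: openebs-kernel-nfs
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
# kubectl delete pvc quick-delete-pvc   # delete immediately, before the init-pvc helper pod finishes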

The output of the following commands will help us better understand what's going on:

  • kubectl get pods -n <openebs_namespace> --show-labels
NAME                                                            READY   STATUS                   RESTARTS   AGE   LABELS
init-pvc-5914ccac-74a6-4888-98d0-247b45aac34f                   0/1     Evicted                  0          59m   <none>
init-pvc-77c3bbbe-5ea3-4843-b908-885a3f7d91b2                   0/1     Evicted                  0          42m   <none>
init-pvc-784eb57b-0454-4d32-8a58-901782c6bb6b                   0/1     Evicted                  0          42m   <none>
init-pvc-7d8aa337-58e4-4be3-9549-9e4b5c89e8a3                   0/1     Evicted                  0          46s   <none>
init-pvc-89094101-24e1-4060-abb0-561895da218d                   0/1     Evicted                  0          40s   <none>
init-pvc-8cc3d84d-24a6-4a2c-a923-2307cbec397d                   0/1     Evicted                  0          44s   <none>
init-pvc-ad743c6a-122e-476b-a264-358dfbe38993                   0/1     Evicted                  0          42m   <none>
init-pvc-b5b15f67-0754-4a80-b778-ac953b3aa3d3                   0/1     Evicted                  0          42s   <none>
init-pvc-fd915cd8-b2a8-4a5f-bce0-3209f39f2f63                   0/1     Completed                0          43m   <none>
nfs-pvc-0b73f33b-b748-4e69-b203-5a52cab70c9b-76ddb8c974-tft7z   0/1     Pending                  0          63m   openebs.io/nfs-server=nfs-pvc-0b73f33b-b748-4e69-b203-5a52cab70c9b,pod-template-hash=76ddb8c974
nfs-pvc-589e8f61-c416-4af2-82ae-d3dfbafd1b71-57486b6b6-8m97c    0/1     Pending                  0          63m   openebs.io/nfs-server=nfs-pvc-589e8f61-c416-4af2-82ae-d3dfbafd1b71,pod-template-hash=57486b6b6
nfs-pvc-706b6eb5-e3ec-4f3c-ad6d-919a7aabe800-544885dbb-z6x65    0/1     Pending                  0          64m   openebs.io/nfs-server=nfs-pvc-706b6eb5-e3ec-4f3c-ad6d-919a7aabe800,pod-template-hash=544885dbb
nfs-pvc-70cab665-7ec2-422d-a09c-ceaaf995ba46-576485f46b-r6rpp   0/1     Pending                  0          63m   openebs.io/nfs-server=nfs-pvc-70cab665-7ec2-422d-a09c-ceaaf995ba46,pod-template-hash=576485f46b
nfs-pvc-76e4536f-72ec-4c7b-a28e-f70d518a12fd-5c48f54bfd-k42zl   0/1     Pending                  0          62m   openebs.io/nfs-server=nfs-pvc-76e4536f-72ec-4c7b-a28e-f70d518a12fd,pod-template-hash=5c48f54bfd
nfs-pvc-9ccdaa59-ea9a-4549-b0d0-7979b471093f-6d7c88cf98-4thsd   0/1     Pending                  0          64m   openebs.io/nfs-server=nfs-pvc-9ccdaa59-ea9a-4549-b0d0-7979b471093f,pod-template-hash=6d7c88cf98
nfs-pvc-b836e7d6-117f-4678-b6be-c2018812c2b1-f7dc9546-249dw     0/1     Pending                  0          64m   openebs.io/nfs-server=nfs-pvc-b836e7d6-117f-4678-b6be-c2018812c2b1,pod-template-hash=f7dc9546
nfs-pvc-d15f4a8c-e7e6-4501-86d8-4e311f03caf4-86cb695c66-2s7xx   0/1     Pending                  0          64m   openebs.io/nfs-server=nfs-pvc-d15f4a8c-e7e6-4501-86d8-4e311f03caf4,pod-template-hash=86cb695c66
nfs-pvc-e413d0c1-c01d-42eb-a775-3c32299cd11a-7d4fd4f6fd-kr2f7   0/1     Pending                  0          63m   openebs.io/nfs-server=nfs-pvc-e413d0c1-c01d-42eb-a775-3c32299cd11a,pod-template-hash=7d4fd4f6fd
openebs-localpv-provisioner-b5467787f-dxt24                     0/1     ContainerStatusUnknown   3          66m   app=localpv-provisioner,chart=localpv-provisioner-3.2.0,component=localpv-provisioner,heritage=Helm,name=openebs-localpv-provisioner,openebs.io/component-name=openebs-localpv-provisioner,openebs.io/version=3.2.0,pod-template-hash=b5467787f,release=openebs
openebs-localpv-provisioner-b5467787f-gbk8d                     1/1     Running                  0          11m   app=localpv-provisioner,chart=localpv-provisioner-3.2.0,component=localpv-provisioner,heritage=Helm,name=openebs-localpv-provisioner,openebs.io/component-name=openebs-localpv-provisioner,openebs.io/version=3.2.0,pod-template-hash=b5467787f,release=openebs
openebs-localpv-provisioner-b5467787f-gzb7l                     0/1     ContainerStatusUnknown   1          27m   app=localpv-provisioner,chart=localpv-provisioner-3.2.0,component=localpv-provisioner,heritage=Helm,name=openebs-localpv-provisioner,openebs.io/component-name=openebs-localpv-provisioner,openebs.io/version=3.2.0,pod-template-hash=b5467787f,release=openebs
openebs-ndm-nm6gp                                               1/1     Running                  2          66m   app=openebs,component=ndm,controller-revision-hash=6895466595,name=openebs-ndm,openebs.io/component-name=ndm,openebs.io/version=3.2.0,pod-template-generation=1,release=openebs
openebs-ndm-operator-69b5fcf66b-w4rdx                           1/1     Running                  0          66m   app=openebs,component=ndm-operator,name=ndm-operator,openebs.io/component-name=ndm-operator,openebs.io/version=3.2.0,pod-template-hash=69b5fcf66b,release=openebs
openebs-nfs-provisioner-67559f5889-2gvld                        1/1     Running                  0          11m   app=nfs-provisioner,chart=nfs-provisioner-0.9.0,component=nfs-provisioner,heritage=Helm,name=openebs-nfs-provisioner,openebs.io/component-name=openebs-nfs-provisioner,openebs.io/version=0.9.0,pod-template-hash=67559f5889,release=openebs
openebs-nfs-provisioner-67559f5889-j26s7                        0/1     ContainerStatusUnknown   1          26m   app=nfs-provisioner,chart=nfs-provisioner-0.9.0,component=nfs-provisioner,heritage=Helm,name=openebs-nfs-provisioner,openebs.io/component-name=openebs-nfs-provisioner,openebs.io/version=0.9.0,pod-template-hash=67559f5889,release=openebs
openebs-nfs-provisioner-67559f5889-pddrf                        0/1     ContainerStatusUnknown   3          66m   app=nfs-provisioner,chart=nfs-provisioner-0.9.0,component=nfs-provisioner,heritage=Helm,name=openebs-nfs-provisioner,openebs.io/component-name=openebs-nfs-provisioner,openebs.io/version=0.9.0,pod-template-hash=67559f5889,release=openebs
  • kubectl logs <upgrade_job_pod> -n <openebs_namespace>

I0423 09:06:02.160041       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"openebs-system", Name:"nfs-pvc-e413d0c1-c01d-42eb-a775-3c32299cd11a", UID:"ad743c6a-122e-476b-a264-358dfbe38993", APIVersion:"v1", ResourceVersion:"3071", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "openebs-system/nfs-pvc-e413d0c1-c01d-42eb-a775-3c32299cd11a"
I0423 09:06:02.166302       1 provisioner_hostpath.go:75] Creating volume pvc-ad743c6a-122e-476b-a264-358dfbe38993 at node with labels {map[kubernetes.io/hostname:master]}, path:/var/openebs/local/pvc-ad743c6a-122e-476b-a264-358dfbe38993,ImagePullSecrets:[]
I0423 09:06:02.990031       1 provisioner_hostpath.go:90] Initialize volume pvc-ad743c6a-122e-476b-a264-358dfbe38993 failed: object is being deleted: pods "init-pvc-ad743c6a-122e-476b-a264-358dfbe38993" already exists
2022-04-23T09:06:02.990Z	ERROR	app/provisioner_hostpath.go:91		{"eventcode": "local.pv.provision.failure", "msg": "Failed to provision Local PV", "rname": "pvc-ad743c6a-122e-476b-a264-358dfbe38993", "reason": "Volume initialization failed", "storagetype": "hostpath"}
github.com/openebs/dynamic-localpv-provisioner/cmd/provisioner-localpv/app.(*Provisioner).ProvisionHostPath
	/go/src/github.com/openebs/dynamic-localpv-provisioner/cmd/provisioner-localpv/app/provisioner_hostpath.go:91
github.com/openebs/dynamic-localpv-provisioner/cmd/provisioner-localpv/app.(*Provisioner).Provision
	/go/src/github.com/openebs/dynamic-localpv-provisioner/cmd/provisioner-localpv/app/provisioner.go:140
sigs.k8s.io/sig-storage-lib-external-provisioner/v7/controller.(*ProvisionController).provisionClaimOperation
	/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:1346
sigs.k8s.io/sig-storage-lib-external-provisioner/v7/controller.(*ProvisionController).syncClaim
	/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:1062
sigs.k8s.io/sig-storage-lib-external-provisioner/v7/controller.(*ProvisionController).syncClaimHandler
	/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:1030
sigs.k8s.io/sig-storage-lib-external-provisioner/v7/controller.(*ProvisionController).processNextClaimWorkItem.func1
	/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:931
sigs.k8s.io/sig-storage-lib-external-provisioner/v7/controller.(*ProvisionController).processNextClaimWorkItem
	/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:953
sigs.k8s.io/sig-storage-lib-external-provisioner/v7/controller.(*ProvisionController).runClaimWorker
	/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:899
sigs.k8s.io/sig-storage-lib-external-provisioner/v7/controller.(*ProvisionController).Run.func1.2
	/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:855
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156
k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.Until
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
W0423 09:06:02.990447       1 controller.go:936] Retrying syncing claim "ad743c6a-122e-476b-a264-358dfbe38993" because failures 0 < threshold 15
E0423 09:06:02.990494       1 controller.go:956] error syncing claim "ad743c6a-122e-476b-a264-358dfbe38993": failed to provision volume with StorageClass "openebs-hostpath": object is being deleted: pods "init-pvc-ad743c6a-122e-476b-a264-358dfbe38993" already exists

Anything else we need to know?:
I'm trying to install Sentry (via Helm)

Environment details:

  • OpenEBS version (use kubectl get po -n openebs --show-labels): 3.2.0
  • Kubernetes version (use kubectl version): v1.23.6
  • Cloud provider or hardware configuration: PC
  • OS (e.g: cat /etc/os-release): Ubuntu 20.04.4 LTS (Focal Fossa)
  • kernel (e.g: uname -a): Linux master 5.4.0-109-generic #123-Ubuntu SMP Fri Apr 8 09:10:54 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
  • others:

Vulnerabilities scanned by trivy

Describe the bug:
trivy detects some vulnerabilities in the localpv image.

root@stonetest:~# trivy image openebs/provisioner-localpv:latest
2022-07-06T13:35:36.317+0800	INFO	Vulnerability scanning is enabled
2022-07-06T13:35:36.317+0800	INFO	Secret scanning is enabled
2022-07-06T13:35:36.317+0800	INFO	If your scanning is slow, please try '--security-checks vuln' to disable secret scanning
2022-07-06T13:35:36.317+0800	INFO	Please see also https://aquasecurity.github.io/trivy/v0.29.2/docs/secret/scanning/#recommendation for faster secret detection
2022-07-06T13:35:39.027+0800	INFO	Detected OS: alpine
2022-07-06T13:35:39.027+0800	INFO	Detecting Alpine vulnerabilities...
2022-07-06T13:35:39.029+0800	INFO	Number of language-specific files: 1
2022-07-06T13:35:39.029+0800	INFO	Detecting gobinary vulnerabilities...
2022-07-06T13:35:39.030+0800	WARN	This OS version is no longer supported by the distribution: alpine 3.12.12
2022-07-06T13:35:39.030+0800	WARN	The vulnerability detection may be insufficient because security updates are not provided

openebs/provisioner-localpv:latest (alpine 3.12.12)

Total: 8 (UNKNOWN: 0, LOW: 0, MEDIUM: 4, HIGH: 4, CRITICAL: 0)

┌─────────┬────────────────┬──────────┬───────────────────┬───────────────┬─────────────────────────────────────────────────┐
│ Library │ Vulnerability  │ Severity │ Installed Version │ Fixed Version │                      Title                      │
├─────────┼────────────────┼──────────┼───────────────────┼───────────────┼─────────────────────────────────────────────────┤
│ curl    │ CVE-2022-22576 │ HIGH     │ 7.79.1-r0         │ 7.79.1-r1     │ curl: OAUTH2 bearer bypass in connection re-use │
│         │                │          │                   │               │ https://avd.aquasec.com/nvd/cve-2022-22576      │
│         ├────────────────┤          │                   │               ├─────────────────────────────────────────────────┤
│         │ CVE-2022-27775 │          │                   │               │ curl: bad local IPv6 connection reuse           │
│         │                │          │                   │               │ https://avd.aquasec.com/nvd/cve-2022-27775      │
│         ├────────────────┼──────────┤                   │               ├─────────────────────────────────────────────────┤
│         │ CVE-2022-27774 │ MEDIUM   │                   │               │ curl: credential leak on redirect               │
│         │                │          │                   │               │ https://avd.aquasec.com/nvd/cve-2022-27774      │
│         ├────────────────┤          │                   │               ├─────────────────────────────────────────────────┤
│         │ CVE-2022-27776 │          │                   │               │ curl: auth/cookie leak on redirect              │
│         │                │          │                   │               │ https://avd.aquasec.com/nvd/cve-2022-27776      │
├─────────┼────────────────┼──────────┤                   │               ├─────────────────────────────────────────────────┤
│ libcurl │ CVE-2022-22576 │ HIGH     │                   │               │ curl: OAUTH2 bearer bypass in connection re-use │
│         │                │          │                   │               │ https://avd.aquasec.com/nvd/cve-2022-22576      │
│         ├────────────────┤          │                   │               ├─────────────────────────────────────────────────┤
│         │ CVE-2022-27775 │          │                   │               │ curl: bad local IPv6 connection reuse           │
│         │                │          │                   │               │ https://avd.aquasec.com/nvd/cve-2022-27775      │
│         ├────────────────┼──────────┤                   │               ├─────────────────────────────────────────────────┤
│         │ CVE-2022-27774 │ MEDIUM   │                   │               │ curl: credential leak on redirect               │
│         │                │          │                   │               │ https://avd.aquasec.com/nvd/cve-2022-27774      │
│         ├────────────────┤          │                   │               ├─────────────────────────────────────────────────┤
│         │ CVE-2022-27776 │          │                   │               │ curl: auth/cookie leak on redirect              │
│         │                │          │                   │               │ https://avd.aquasec.com/nvd/cve-2022-27776      │
└─────────┴────────────────┴──────────┴───────────────────┴───────────────┴─────────────────────────────────────────────────┘

usr/local/bin/provisioner-localpv (gobinary)

Total: 1 (UNKNOWN: 0, LOW: 0, MEDIUM: 0, HIGH: 1, CRITICAL: 0)

┌───────────────────┬────────────────┬──────────┬───────────────────┬───────────────┬──────────────────────────────────────────────────────────┐
│      Library      │ Vulnerability  │ Severity │ Installed Version │ Fixed Version │                          Title                           │
├───────────────────┼────────────────┼──────────┼───────────────────┼───────────────┼──────────────────────────────────────────────────────────┤
│ golang.org/x/text │ CVE-2021-38561 │ HIGH     │ v0.3.6            │ 0.3.7         │ golang: out-of-bounds read in golang.org/x/text/language │
│                   │                │          │                   │               │ leads to DoS                                             │
│                   │                │          │                   │               │ https://avd.aquasec.com/nvd/cve-2021-38561               │
└───────────────────┴────────────────┴──────────┴───────────────────┴───────────────┴──────────────────────────────────────────────────────────┘
root@stonetest:~# trivy version
Version: 0.29.2
Vulnerability DB:
  Version: 2
  UpdatedAt: 2022-07-06 00:12:25.854188929 +0000 UTC
  NextUpdate: 2022-07-06 06:12:25.854188429 +0000 UTC
  DownloadedAt: 2022-07-06 05:34:58.185443 +0000 UTC

Expected behaviour:
No vulnerabilities; at least no HIGH or MEDIUM vulnerabilities.

Steps to reproduce the bug:

Download trivy (https://github.com/aquasecurity/trivy), then run it against the image.

The output of the following commands will help us better understand what's going on:

  • kubectl get pods -n <openebs_namespace> --show-labels
  • kubectl logs <upgrade_job_pod> -n <openebs_namespace>

Anything else we need to know?:

Environment details:

  • OpenEBS version (use kubectl get po -n openebs --show-labels):
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • OS (e.g: cat /etc/os-release):
  • kernel (e.g: uname -a):
  • others:

xfs quota: wrong soft/hard limits number was set by openebs provisioner

Describe the bug:
I installed openebs localpv provisioner via helm chart with xfs quota enabled:

    xfsQuota:
      # If true, enables XFS project quota
      enabled: true
      # Detailed configuration options for XFS project quota.
      # If XFS Quota is enabled with the default values, the usage limit
      # is set at the storage capacity specified in the PVC.
      softLimitGrace: "60%"
      hardLimitGrace: "90%"

Then I created a PVC with 10Gi and ran it with a busybox container. However, I found OpenEBS set wrong (larger) soft & hard limits for this PVC.

root@stonetest:~# kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
busybox-test   Bound    pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2   10Gi       RWO            openebs-hostpath   4h51m

root@stonetest:~# kubectl get pv pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2 -o yaml | grep path
    openebs.io/cas-type: local-hostpath
    path: /openebs/local/pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2
  storageClassName: openebs-hostpath

root@stonetest:~# mount | grep 45a4
/dev/vdc on /var/lib/kubelet/pods/6f140003-770a-408c-a281-3b1e1faecf0c/volumes/kubernetes.io~local-volume/pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2 type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota)

root@stonetest:~# lsblk -f
NAME FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
vdc  xfs                f993bbb1-d875-4436-ab4d-d7275b2c719c   30.6G    39% /var/lib/kubelet/pods/93ae9893-0374-4197-9084-02f64d8aaba6/volumes/kubernetes.io~local-volume/pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c
                                                                            /var/lib/kubelet/pods/6f140003-770a-408c-a281-3b1e1faecf0c/volumes/kubernetes.io~local-volume/pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2
                                                                            /openebs

root@stonetest:~# xfs_quota -x
xfs_quota> print
Filesystem          Pathname
/openebs            /dev/vdc (pquota)
/var/lib/kubelet/pods/6f140003-770a-408c-a281-3b1e1faecf0c/volumes/kubernetes.io~local-volume/pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2 /dev/vdc (pquota)
/var/lib/kubelet/pods/93ae9893-0374-4197-9084-02f64d8aaba6/volumes/kubernetes.io~local-volume/pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c /dev/vdc (pquota)
xfs_quota> report
Project quota on /openebs (/dev/vdc)
                               Blocks
Project ID       Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
#0                  0          0          0     00  [0 days]
#1                  0   17179872   20401096     00 [--------]
#2                  0   17179872   20401096     00 [--------]

The openebs provisioner set the soft limit to 17179872 KB and the hard limit to 20401096 KB, which exceeds the PVC's capacity (10Gi); I think this is wrong.

The soft limit should be 10Gi * 0.6 and the hard limit should be 10Gi * 0.9 respectively.
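For reference, the chart comments quoted above ("If XFS Quota is enabled with the default values, the usage limit is set at the storage capacity specified in the PVC") suggest the grace is added on top of the capacity rather than replacing it. A quick arithmetic sketch of both readings for a 10Gi PVC (results in KiB):

# Reading 1 (the expectation above): limit = capacity * grace
# echo $(( 10 * 1024 * 1024 * 1024 * 60 / 100 / 1024 ))    # soft: 6291456 KiB
# echo $(( 10 * 1024 * 1024 * 1024 * 90 / 100 / 1024 ))    # hard: 9437184 KiB
# Reading 2 (grace added on top): limit = capacity * (1 + grace)
# echo $(( 10 * 1024 * 1024 * 1024 * 160 / 100 / 1024 ))   # soft: 16777216 KiB
# echo $(( 10 * 1024 * 1024 * 1024 * 190 / 100 / 1024 ))   # hard: 19922944 KiB

The reported 17179872 and 20401096 are within a fraction of a percent of reading 2's byte values divided by 1000 (17179869184 and 20401094656), which hints at a grace-on-top computation plus a KB-vs-KiB conversion; this is an observation from the numbers, not confirmed from the code.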

Expected behaviour: A concise description of what you expected to happen
The openebs provisioner set the correct soft&hard limits for the pvc

Steps to reproduce the bug:

  • attach a disk to k8s node, format it with xfs and mount it to /openebs with pquota options.
  • install openebs localpv provisioner with helm chart v3.3.1.
root@stonetest:~# helm install openebs openebs/openebs --namespace openebs -f openebs-values.yaml
root@stonetest:~# cat openebs-values.yaml
apiserver:
  enabled: false

varDirectoryPath:
  baseDir: "/openebs"

provisioner:
  enabled: false

localprovisioner:
  enabled: true
  basePath: "/openebs/local"
  deviceClass:
    enabled: false
  hostpathClass:
    # Name of the default hostpath StorageClass
    name: openebs-hostpath
    # If true, enables creation of the openebs-hostpath StorageClass
    enabled: true
    # Available reclaim policies: Delete/Retain, defaults: Delete.
    reclaimPolicy: Delete
    # If true, sets the openebs-hostpath StorageClass as the default StorageClass
    isDefaultClass: false
    # Path on the host where local volumes of this storage class are mounted under.
    # NOTE: If not specified, this defaults to the value of localprovisioner.basePath.
    basePath: "/openebs/local"
    # Custom node affinity label(s) for example "openebs.io/node-affinity-value"
    # that will be used instead of hostnames
    # This helps in cases where the hostname changes when the node is removed and
    # added back with the disks still intact.
    # Example:
    #          nodeAffinityLabels:
    #            - "openebs.io/node-affinity-key-1"
    #            - "openebs.io/node-affinity-key-2"
    nodeAffinityLabels: []
    # Prerequisite: XFS Quota requires an XFS filesystem mounted with
    # the 'pquota' or 'prjquota' mount option.
    xfsQuota:
      # If true, enables XFS project quota
      enabled: true
      # Detailed configuration options for XFS project quota.
      # If XFS Quota is enabled with the default values, the usage limit
      # is set at the storage capacity specified in the PVC.
      softLimitGrace: "60%"
      hardLimitGrace: "90%"
    # Prerequisite: EXT4 Quota requires an EXT4 filesystem mounted with
    # the 'prjquota' mount option.
    ext4Quota:
      # If true, enables XFS project quota
      enabled: false
      # Detailed configuration options for EXT4 project quota.
      # If EXT4 Quota is enabled with the default values, the usage limit
      # is set at the storage capacity specified in the PVC.
      softLimitGrace: "0%"
      hardLimitGrace: "0%"

snapshotOperator:
  enabled: false

ndm:
  enabled: false

ndmOperator:
  enabled: false

ndmExporter:
  enabled: false

webhook:
  enabled: false

crd:
  enableInstall: false

policies:
  monitoring:
    enabled: false

analytics:
  enabled: false

jiva:
  enabled: false
  openebsLocalpv:
    enabled: false
  localpv-provisioner:
    openebsNDM:
      enabled: false

cstor:
  enabled: false
  openebsNDM:
    enabled: false

openebs-ndm:
  enabled: false

localpv-provisioner:
  enabled: false
  openebsNDM:
    enabled: false

zfs-localpv:
  enabled: false

lvm-localpv:
  enabled: false

nfs-provisioner:
  enabled: false
  • create pvc and running a busybox with it
  • xfs_quota -x then check the soft/hard limits for the pvc

The output of the following commands will help us better understand what's going on:

  • kubectl get pods -n <openebs_namespace> --show-labels
root@stonetest:~# kubectl get pods -n openebs --show-labels
NAME                                           READY   STATUS    RESTARTS   AGE     LABELS
openebs-localpv-provisioner-5757b495fc-4zflv   1/1     Running   0          5h16m   app=openebs,component=localpv-provisioner,name=openebs-localpv-provisioner,openebs.io/component-name=openebs-localpv-provisioner,openebs.io/version=3.3.0,pod-template-hash=5757b495fc,release=openebs
  • kubectl logs <upgrade_job_pod> -n <openebs_namespace>
root@stonetest:~# kubectl -n openebs logs openebs-localpv-provisioner-5757b495fc-4zflv
I1207 03:10:43.992737       1 start.go:66] Starting Provisioner...
I1207 03:10:44.018165       1 start.go:130] Leader election enabled for localpv-provisioner via leaderElectionKey
I1207 03:10:44.018641       1 leaderelection.go:248] attempting to acquire leader lease openebs/openebs.io-local...
I1207 03:10:44.027045       1 leaderelection.go:258] successfully acquired lease openebs/openebs.io-local
I1207 03:10:44.027209       1 controller.go:810] Starting provisioner controller openebs.io/local_openebs-localpv-provisioner-5757b495fc-4zflv_9670bed1-7b43-4c70-b187-7f4e514cd24e!
I1207 03:10:44.027181       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"openebs", Name:"openebs.io-local", UID:"4838eadf-1cf6-4cec-af4e-8edafba21e87", APIVersion:"v1", ResourceVersion:"2999260", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openebs-localpv-provisioner-5757b495fc-4zflv_9670bed1-7b43-4c70-b187-7f4e514cd24e became leader
I1207 03:10:44.128323       1 controller.go:859] Started provisioner controller openebs.io/local_openebs-localpv-provisioner-5757b495fc-4zflv_9670bed1-7b43-4c70-b187-7f4e514cd24e!
I1207 03:13:30.749165       1 controller.go:1279] provision "default/busybox-test" class "openebs-hostpath": started
I1207 03:13:30.755533       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"busybox-test", UID:"45a4e4f7-8117-4725-a17b-e3446da4b7a2", APIVersion:"v1", ResourceVersion:"2999559", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/busybox-test"
I1207 03:13:30.757597       1 provisioner_hostpath.go:76] Creating volume pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2 at node with labels {map[kubernetes.io/hostname:stonetest]}, path:/openebs/local/pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2,ImagePullSecrets:[]
2022-12-07T03:13:45.869Z	INFO	app/provisioner_hostpath.go:130		{"eventcode": "local.pv.quota.success", "msg": "Successfully applied quota", "rname": "pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2", "storagetype": "hostpath"}
2022-12-07T03:13:45.869Z	INFO	app/provisioner_hostpath.go:214		{"eventcode": "local.pv.provision.success", "msg": "Successfully provisioned Local PV", "rname": "pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2", "storagetype": "hostpath"}
I1207 03:13:45.869571       1 controller.go:1384] provision "default/busybox-test" class "openebs-hostpath": volume "pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2" provisioned
I1207 03:13:45.869586       1 controller.go:1397] provision "default/busybox-test" class "openebs-hostpath": succeeded
I1207 03:13:45.869594       1 volume_store.go:212] Trying to save persistentvolume "pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2"
I1207 03:13:45.872951       1 volume_store.go:219] persistentvolume "pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2" saved
I1207 03:13:45.873060       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"busybox-test", UID:"45a4e4f7-8117-4725-a17b-e3446da4b7a2", APIVersion:"v1", ResourceVersion:"2999559", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2
I1207 03:20:00.738275       1 controller.go:1279] provision "xfs/busybox-test" class "openebs-hostpath": started
I1207 03:20:00.746069       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"xfs", Name:"busybox-test", UID:"e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c", APIVersion:"v1", ResourceVersion:"3000306", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "xfs/busybox-test"
I1207 03:20:00.748003       1 provisioner_hostpath.go:76] Creating volume pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c at node with labels {map[kubernetes.io/hostname:stonetest]}, path:/openebs/local/pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c,ImagePullSecrets:[]
2022-12-07T03:20:08.822Z	INFO	app/provisioner_hostpath.go:130		{"eventcode": "local.pv.quota.success", "msg": "Successfully applied quota", "rname": "pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c", "storagetype": "hostpath"}
2022-12-07T03:20:08.822Z	INFO	app/provisioner_hostpath.go:214		{"eventcode": "local.pv.provision.success", "msg": "Successfully provisioned Local PV", "rname": "pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c", "storagetype": "hostpath"}
I1207 03:20:08.822262       1 controller.go:1384] provision "xfs/busybox-test" class "openebs-hostpath": volume "pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c" provisioned
I1207 03:20:08.822275       1 controller.go:1397] provision "xfs/busybox-test" class "openebs-hostpath": succeeded
I1207 03:20:08.822282       1 volume_store.go:212] Trying to save persistentvolume "pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c"
I1207 03:20:08.826234       1 volume_store.go:219] persistentvolume "pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c" saved
I1207 03:20:08.826654       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"xfs", Name:"busybox-test", UID:"e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c", APIVersion:"v1", ResourceVersion:"3000306", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c
root@stonetest:~#

Anything else we need to know?:

Environment details:

  • OpenEBS version (use kubectl get po -n openebs --show-labels): openebs helm chart v3.3.1
  • Kubernetes version (use kubectl version): v1.23.10
  • Cloud provider or hardware configuration:
  • OS (e.g: cat /etc/os-release): Ubuntu 22.04 LTS
  • kernel (e.g: uname -a): Linux stonetest 5.15.0-53-generic #59-Ubuntu SMP Mon Oct 17 18:53:30 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
  • others:

XFS Quota dynamic adjustment

Describe the problem/challenge you have
XFS quotas are set only once, when the PV is created. If I want to change the quota for a PV, I can't just change a value in a resource; I have to log in to the node and use xfs_quota to change the quota, and I bet that change doesn't bubble back up to OpenEBS/K8s.
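
For reference, today's manual workaround looks roughly like this (a sketch; the mount point, project ID, and size are illustrative):

# run on the node that hosts the volume
xfs_quota -x -c 'report -h' /var/openebs/local
xfs_quota -x -c 'limit -p bhard=20g 1' /var/openebs/local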

Describe the solution you'd like
I would like to be able to change the size of the storage request in the parent Pod (or above; I'm not sure of the exact flow in CSI), and when it changes in the PV/PVC, have the operator adjust the XFS quota via the same mechanism it used to configure the quota initially. If the request cannot be fulfilled, because the quota is being reduced below current usage or the quota is larger than the filesystem (or the total quota on the filesystem exceeds some overcommit threshold), this should be reported in an Event and the Pod/PV should be left in their pending, inconsistent state. (Uh, or whatever the best practice is in K8s ☺)

Environment:

  • OpenEBS version: 3.0.0
  • Kubernetes version: v1.19.2 and I am embarrassed about it
  • Cloud provider or hardware configuration: Proxmox VMs on amd64
  • OS: Debian 10 or 11
  • kernel ... 4.19 / 5.10 because Debian versions

googleanalytics.go:49] Post "https://www.google-analytics.com/collect"

Describe the bug:
The localpv-provisioner pod generates this error: googleanalytics.go:49] Post "https://www.google-analytics.com/collect": dial tcp 203.208.40.33:443: i/o timeout
Expected behaviour:
We don't want this to happen; please help me avoid it.
Steps to reproduce the bug:
Just install it and do nothing.
The output of the following commands will help us better understand what's going on:

  • kubectl logs -f --tail 200 -n openebs localpv-localpv-provisioner-9bf5b5874-vblxw

Environment details:

  • OpenEBS version (use kubectl get po -n openebs --show-labels): 3.4.0
  • Kubernetes version (use kubectl version): 1.21
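
A likely mitigation in a network-restricted cluster (a sketch, assuming the localpv-provisioner chart exposes the analytics toggle shown in its Helm values):

helm upgrade --install openebs openebs/openebs -n openebs \
  --set localpv-provisioner.analytics.enabled=false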

Delay in pod deletion when having localpv provisioned

Describe the bug: When a pod is created with a localpv-provisioned volume, deleting the pod along with the PV using kubectl takes around 40+ seconds, whereas pod creation returns almost instantly.

Expected behaviour: Deleting a pod should involve only a very small blocking wait, comparable to pod creation.

Steps to reproduce the bug:

  1. Set up the hostpath with XFS quota enabled. Install the OpenEBS localpv provisioner.
  2. Create a pod along with the pv with the following configuration test.yaml
apiVersion: apps/v1
kind: List
items:
- kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: local-hostpath-pvc
    namespace: infra-offline-dev
  spec:
    storageClassName: openebs-hostpath
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5G
- apiVersion: v1
  kind: Pod
  metadata:
    name: hello-local-device-pod
    namespace: infra-offline-dev
  spec:
    volumes:
      - name: local-storage
        persistentVolumeClaim:
          claimName: local-hostpath-pvc
    containers:
      - name: hello-container
        image: busybox
        command:
          - sh
          - -c
          - 'while true; do echo "`date` [`hostname`] Hello from OpenEBS Local PV." >> /mnt/store/greet.txt; sleep $(($RANDOM % 5 + 300)); done'
        volumeMounts:
          - mountPath: /mnt/store
            name: local-storage

cmd: kubectl apply -f test.yaml
3. After the pod has been created, delete the pod as well as the PVC together using kubectl:
cmd: kubectl delete -f test.yaml

My test output when performing the delete:

Linux$ time kubectl delete -f test.yaml
persistentvolumeclaim "local-hostpath-pvc" deleted
pod "hello-local-device-pod" deleted

real	0m43.641s
user	0m0.098s
sys	0m0.034s

Anything else we need to know?:
N/A

Environment details:

  • OpenEBS version (use kubectl get po -n openebs --show-labels):
NAME                                            READY   STATUS    RESTARTS   AGE   LABELS
openebs-localpv-provisioner-5d88cb474b-j59qk    1/1     Running   16         32d   name=openebs-localpv-provisioner,openebs.io/component-name=openebs-localpv-provisioner,openebs.io/version=3.3.0,pod-template-hash=5d88cb474b
openebs-ndm-cluster-exporter-84bb5fc764-jxtkh   1/1     Running   0          32d   name=openebs-ndm-cluster-exporter,openebs.io/component-name=ndm-cluster-exporter,openebs.io/version=3.3.0,pod-template-hash=84bb5fc764
openebs-ndm-g2rsl                               1/1     Running   0          31d   controller-revision-hash=56bdc87c48,name=openebs-ndm,openebs.io/component-name=ndm,openebs.io/version=3.3.0,pod-template-generation=3
openebs-ndm-node-exporter-6wt6d                 1/1     Running   0          31d   controller-revision-hash=5fc9bcf946,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.3.0,pod-template-generation=3
openebs-ndm-operator-7657446466-cz4w9           1/1     Running   0          32d   name=openebs-ndm-operator,openebs.io/component-name=ndm-operator,openebs.io/version=3.3.0,pod-template-hash=7657446466
  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.12", GitCommit:"696a9fdd2a58340e61e0d815c5769d266fca0802", GitTreeState:"clean", BuildDate:"2022-04-13T19:07:00Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.12", GitCommit:"696a9fdd2a58340e61e0d815c5769d266fca0802", GitTreeState:"clean", BuildDate:"2022-04-13T19:01:10Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:
    Bare-metal. XFS config is the following:
UUID="1e99f408-4103-4432-b5c0-f943400c8e6a" /data         xfs     defaults,uquota,pquota 0   0
/data/k8s/pv /var/kubernetes/pv auto bind 0 0
  • OS (e.g: cat /etc/os-release):
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
  • kernel (e.g: uname -a):
Linux test-host 5.15.0-46-generic #49~20.04.1-Ubuntu SMP Thu Aug 4 19:15:44 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
  • others:
    N/A

Populate data into Local PV from another Local PV or from an outside source

A few use cases around Local PV are:

  • Cluster node recycle scenarios: A Kubernetes node will be reclaimed by the provider during an upgrade. The data saved in the local storage of the node (about to be recycled) should be migrated to another node in the cluster, and the application (pod) using that storage should be re-scheduled to the new node.
  • Load seed data into a Local PV: either from an external source or as a clone of an existing Local PV volume. This helps with scaling applications that serve static content (without using read-write-many), or with ML jobs that require data to be downloaded from S3 or other sources.

The idea is to make use of the "Data Populator" and PVC Data Source constructs being developed in K8s to solve these use cases. Could data populators be built using tools like rsync, rclone, or restic?
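
For illustration, a PVC using the PVC Data Source construct might look like this (a sketch; the populator apiGroup, kind, and name are hypothetical):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: seeded-local-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  dataSourceRef:
    apiGroup: populator.example.io   # hypothetical populator API group
    kind: RsyncPopulator             # hypothetical populator CR
    name: seed-from-s3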

The solution should be easy to integrate into Stateful or other K8s operators. For example, it should be possible to write a generic operator that can be instructed to drain a node by:

  • initiating the migration of the Local PVs to an upgraded new node
  • moving the application
  • allowing the older node to be reclaimed

Alternate ways or workarounds available today are:

  • Take a backup of the application to an S3 location using Velero, and restore the data onto new nodes. A few challenges or drawbacks of this approach are:
    • Setting up backup outside the cluster or creating a new dependency.
    • The backup needs to be performed by scaling down the application.
    • And if the application belongs to an STS, all pods need to coordinate in the process.

Add support to specify file permissions for pvc hostpaths

Describe the problem/challenge you have
Depending on the type of pods using the Local volumes, the StorageClass should provide options to customize the permissions on the hostpath subdirectories. By default, the permissions are set to 0777.

Describe the solution you'd like
The customizations can be passed down via the StorageClass CAS config options.

Anything else you would like to add:
As part of this issue, we should explore options around:

  • default permissions that work with most applications. For example, a mysql/percona pod needs to have full permissions to the directory. Ref: openebs-archive/maya#1314 (comment)
  • fsGroup / security-context-related changes that have come in through recent versions of K8s
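
One possible shape for such a CAS-config option (the FilePermissions name and data keys here are hypothetical, shown only to illustrate the request):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath-custom-perms
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: FilePermissions   # hypothetical option
        data:
          UID: "1000"
          GID: "1000"
          mode: "0750"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer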

chore: add the missing openebs-lite-sc.yaml to this repo

Describe the problem/challenge you have

Match the provisioner YAMLs provided in https://github.com/openebs/charts/tree/gh-pages with those hosted in this repository (https://github.com/openebs/dynamic-localpv-provisioner/tree/develop/deploy/kubectl) for development.

Describe the solution you'd like
Add the following, which includes both the hostpath and device storage classes:

While we are at it, it would be nice to add a make manifest target that can pull in the latest NDM operator directly from the NDM repo and, using the YAMLs in this repo, generate the required artifacts, like:

  • openebs-operator-lite.yaml
  • openebs-operator-lite-sc.yaml
  • hostpath-operator.yaml
  • hostpath-operator-sc.yaml

Ref: openebs-archive/jiva-operator#157

Enforce ext4 quotas

Describe the problem/challenge you have
The setup I'm using has an EXT4 filesystem and switching to XFS is not a viable option. Adding quotas for ext4 filesystems would solve that problem.

Describe the solution you'd like
In a similar fashion to #78, this update will enable EXT4 project quotas to be enforced.

Anything else you would like to add:
This expands ext4 support for issue #13.
I've updated the code locally with tests and will open a pull request for review.
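
For context, ext4 project quotas rest on the following plumbing, which is what the provisioner would drive (a sketch; device, project ID, and limits are illustrative):

# enable the project and quota features (on an unmounted filesystem)
tune2fs -O project -Q prjquota /dev/sdb1
# mount with project quotas enabled
mount -o prjquota /dev/sdb1 /var/openebs/local
# tag a volume directory with a project ID and set a 5GiB hard block limit
chattr -p 1001 +P /var/openebs/local/my-pvc-dir
setquota -P 1001 0 5242880 0 0 /var/openebs/local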

Environment:

  • OpenEBS version (use kubectl get po -n openebs --show-labels): 3.2.0
  • Kubernetes version (use kubectl version): v1.24.2
  • Cloud provider or hardware configuration:
  • OS (e.g: cat /etc/os-release): Fedora 33
  • kernel (e.g: uname -a): Linux endor 5.10.99 #23 SMP Tue Jun 28 12:29:23 PDT 2022 x86_64 x86_64 x86_64 GNU/Linux
  • others:

not working correctly in 1.19.10

Describe the bug:

It can create PVC/PV correctly and create the pvc-xxx folder properly. However, it can't mount it correctly.

Steps to reproduce the bug:

I use Rancher 2.5.8 for this (with local-pv 2.9.0 and ndm 1.4.0).
It works fine in 1.19.9 but not in 1.19.10.
Both clusters start from scratch with exactly the same steps.

Anything else we need to know?:

  • OS: Ubuntu 20.04

Race condition: two PVCs get the same project quota

Describe the bug: We use localpv with ext4 hard quotas. They work quite well, but from time to time we hit the problem that the quota is exceeded even though the folder contains less than the defined quota (10GiB). Today I could track the problem down to 2 PVCs that obviously had the same project quota ID set:

/nvme/disk# ls
lost+found  pvc-2fabebc9-8143-4b60-beef-563180845e64  pvc-6d3a015a-c547-4292-9ed6-95b35a7aea41

/nvme/disk/pvc-6d3a015a-c547-4292-9ed6-95b35a7aea41# du -h --max-depth=1
4.2G	./workspace
33M	./remoting
8.0K	./caches
4.3G	.

/nvme/disk# du -h --max-depth=1
6.1G	./pvc-2fabebc9-8143-4b60-beef-563180845e64
16K	./lost+found
4.3G	./pvc-6d3a015a-c547-4292-9ed6-95b35a7aea41
11G	.

/nvme/disk# repquota -avugP
*** Report for project quotas on device /dev/md0
Block grace time: 7days; Inode grace time: 7days
                        Block limits                File limits
Project         used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
#0        --      20       0       0              2     0     0       
#1        --       0 10737419 10737419              0     0     0       
#2        --       0 10737419 10737419              0     0     0       
#3        --       0 10737419 10737419              0     0     0       
#4        -- 10737416 10737419 10737419           6122     0     0       
#5        --       0 10737419 10737419              0     0     0       
#6        --       0 10737419 10737419              0     0     0       

I think the problem occurs because of a race condition when determining the project id:
https://github.com/openebs/dynamic-localpv-provisioner/blob/e797585cb1e2c3578b914102bfe0e8768b04d950/cmd/provisioner-localpv/app/helper_hostpath.go#L294+L295

I see two possible workarounds: either make sure that only one create-quota pod can run at a time on a single node, or apply a random project number instead of trying to increment them.
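
This is not the provisioner's actual code, but the race can be sketched in shell terms (illustrative only):

# Two create-quota helper pods run concurrently on the same node.
# Both derive the "next" project ID from the current maximum:
LAST=$(lsattr -p -d /nvme/disk/pvc-* | awk '{print $1}' | sort -n | tail -1)
NEXT=$((LAST + 1))
# Both pods compute the same NEXT and apply it to different PVC directories,
# so two volumes end up sharing one quota project, as in the repquota output above.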

Expected behaviour: Each PVC has the quota it is configured with.

Steps to reproduce the bug:
Unfortunately, it is really hard to reproduce the bug, as it only happens now and then. During tests I scaled a deployment with a PVC up and down very fast to check the create and cleanup paths and had no problem. Maybe you can reproduce it with more than one deployment scaled up in parallel.

The output of the following commands will help us better understand what's going on:

  • kubectl get pods -n <openebs_namespace> --show-labels
    nvme-provisioner-localpv-provisioner-68f8494cf7-84hdv 1/1 Running 80 (12h ago) 32d app=localpv-provisioner,chart=localpv-provisioner-3.3.0,component=localpv-provisioner,heritage=Helm,name=openebs-localpv-provisioner,openebs.io/component-name=openebs-localpv-provisioner,openebs.io/version=3.3.0,pod-template-hash=68f8494cf7,release=nvme-provisioner

Anything else we need to know?:
The provisioner pod has lots of restarts; we don't know why. There is no error in the pod log, but it seems unrelated.

Environment details:

  • OpenEBS version (use kubectl get po -n openebs --show-labels): 3.3.0
  • Kubernetes version (use kubectl version): 1.23.15
  • Cloud provider or hardware configuration: AWS
  • OS (e.g: cat /etc/os-release): Amazon Linux 2
  • kernel (e.g: uname -a): 5.4.228-131.415.amzn2.x86_64

volume is in pending state in case of waitforfirstconsumer

Using LocalPV Device mode, I hit the same problem as this lvm-localpv issue, openebs/lvm-localpv#31. How can it be solved in the current project? Do I have to use nodeAffinity to pin my StatefulSet?

according to the description of @zwForrest:

In the case of waitforfirstconsumer, the pod will be scheduled by its parameters and the node will be selected by the scheduler, but LVM does not participate in Kubernetes scheduling. This means that the node selected by LVM and the one selected by the Kubernetes scheduler may not be the same.

Can k8s not sense the local device state of the node when scheduling?

Can't get a node affinity label value in a multi-node microk8s cluster

From time to time this line doesn't work:

nodeAffinityValue := GetNodeLabelValue(opts.SelectedNode, nodeAffinityKey)

If it fails, the openebs-localpv-provisioner can't schedule new pods, including the init-helper pod.

When I replaced the line with nodeAffinityValue := "someValue" and rebuilt its Docker image, it worked as usual.

Enhance BDD tests to include more volume managers

Describe the problem/challenge you have
The integration tests for the dynamic-localpv-provisioner should run on storage provisioned using device-mapper, BTRFS, AUFS, etc.

Describe the solution you'd like
The integration test run should include running the existing tests on LVM, etc.

Anything else you would like to add:
It might be a good idea to be able to enable/disable these additional tests as needed.

openebs-device deleted contents of directory

Describe the bug:
I configured openebs-device to use /dev/sda. This worked. But the second tenant (minio) was not created on /dev/sda, but on /data.
When deleting the tenant, the complete /data directory was deleted and not just the tenant.
Is this expected?
(I switched to openebs-lvmpv)

Expected behaviour:

Steps to reproduce the bug:

The output of the following commands will help us better understand what's going on:

  • kubectl get pods -n <openebs_namespace> --show-labels
  • kubectl logs <upgrade_job_pod> -n <openebs_namespace>

Anything else we need to know?:

Environment details:

  • OpenEBS version (use kubectl get po -n openebs --show-labels):
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • OS (e.g: cat /etc/os-release):
  • kernel (e.g: uname -a):
  • others:

openebs-localpv-provisioner 2.11.0 crashing, 2.10.1 does not

Describe the bug:

The openebs-localpv-provisioner pod in my OpenEBS deployment goes into a crash loop as soon as a PVC is created. Changing to the 2.10.1 release instantly resolves the issue.

Expected behaviour:

No crashing, PVC allocated.

Steps to reproduce the bug:
Install the latest version via Helm:

helm upgrade --install openebs openebs/openebs --namespace openebs --create-namespace --set localprovisioner.enabled=false --set ndm.enabled=false --set ndmOperator.enabled=false --set openebs-ndm.enabled=true --set localpv-provisioner.enabled=true

Set the openebs-device (or hostpath; both exhibit the issue) StorageClass as the default.

Deploy my app, which creates a PVC using default StorageClass.

Observe that the PVC is not provisioned and openebs-localpv-provisioner goes into a crash loop.

The output of the following commands will help us better understand what's going on:

$ kubectl get pods -n openebs --show-labels
NAME                                           READY   STATUS    RESTARTS   AGE   LABELS
openebs-admission-server-84bd769954-9lxj4      1/1     Running   0          96m   app=admission-webhook,name=admission-webhook,openebs.io/component-name=admission-webhook,openebs.io/version=2.11.0,pod-template-hash=84bd769954,release=openebs
openebs-apiserver-6c5dbd554f-2tsqh             1/1     Running   0          96m   app=openebs,component=apiserver,name=maya-apiserver,openebs.io/component-name=maya-apiserver,openebs.io/version=2.11.0,pod-template-hash=6c5dbd554f,release=openebs
openebs-localpv-provisioner-694b9f587d-8rb6d   1/1     Running   0          19s   app=localpv-provisioner,chart=localpv-provisioner-2.11.0,component=localpv-provisioner,heritage=Helm,name=openebs-localpv-provisioner,openebs.io/component-name=openebs-localpv-provisioner,openebs.io/version=2.11.0,pod-template-hash=694b9f587d,release=openebs
openebs-ndm-jmthc                              1/1     Running   0          72m   app=openebs-ndm,chart=openebs-ndm-1.6.0,component=ndm,controller-revision-hash=db55c95cd,heritage=Helm,name=openebs-ndm,openebs.io/component-name=ndm,openebs.io/version=1.6.0,pod-template-generation=1,release=openebs
openebs-ndm-m7zt2                              1/1     Running   0          72m   app=openebs-ndm,chart=openebs-ndm-1.6.0,component=ndm,controller-revision-hash=db55c95cd,heritage=Helm,name=openebs-ndm,openebs.io/component-name=ndm,openebs.io/version=1.6.0,pod-template-generation=1,release=openebs
openebs-ndm-operator-c5874fbc5-dn4hr           1/1     Running   0          57m   app=openebs-ndm-operator,chart=openebs-ndm-1.6.0,component=openebs-ndm-operator,heritage=Helm,name=openebs-ndm-operator,openebs.io/component-name=openebs-ndm-operator,openebs.io/version=1.6.0,pod-template-hash=c5874fbc5,release=openebs
openebs-ndm-r2kvt                              1/1     Running   0          72m   app=openebs-ndm,chart=openebs-ndm-1.6.0,component=ndm,controller-revision-hash=db55c95cd,heritage=Helm,name=openebs-ndm,openebs.io/component-name=ndm,openebs.io/version=1.6.0,pod-template-generation=1,release=openebs
openebs-provisioner-7cb669f466-fd8ds           1/1     Running   0          57m   app=openebs,component=provisioner,name=openebs-provisioner,openebs.io/component-name=openebs-provisioner,openebs.io/version=2.11.0,pod-template-hash=7cb669f466,release=openebs
openebs-snapshot-operator-669696cd5d-4qwz2     2/2     Running   0          96m   app=openebs,component=snapshot-operator,name=openebs-snapshot-operator,openebs.io/component-name=openebs-snapshot-operator,openebs.io/version=2.11.0,pod-template-hash=669696cd5d,release=openebs

Logs from crashing localpv-provisioner:

I0716 21:17:02.178445       1 start.go:69] Starting Provisioner... 
I0716 21:17:02.249390       1 start.go:134] Leader election enabled for localpv-provisioner via leaderElectionKey 
I0716 21:17:02.250156       1 leaderelection.go:242] attempting to acquire leader lease  openebs/openebs.io-local... 
I0716 21:17:19.720306       1 leaderelection.go:252] successfully acquired lease openebs/openebs.io-local 
I0716 21:17:19.721003       1 controller.go:780] Starting provisioner controller openebs.io/local_openebs-localpv-provisioner-694b9f587d-8rb6d_199a945d-0218-47cd-9c06-346aadd4948f! 
I0716 21:17:19.722001       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"openebs", Name:"openebs.io-local", UID:"1d6d6fcc-d321-4cf4-ae18-a441b79c3aee", APIVersion:"v1", ResourceVersion:"43777276", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openebs-localpv-provisioner-694b9f587d-8rb6d_199a945d-0218-47cd-9c06-346aadd4948f became leader 
I0716 21:17:19.821714       1 controller.go:1323] delete "pvc-14ec2240-727d-47a2-a398-7623f1506153": started 
I0716 21:17:19.821793       1 controller.go:1323] delete "pvc-b1acec8b-efd4-482f-8628-867573838711": started 
I0716 21:17:19.821714       1 controller.go:1323] delete "pvc-4620bdd6-21f0-48a1-911c-33f46bad8dd9": started 
I0716 21:17:19.822120       1 controller.go:829] Started provisioner controller openebs.io/local_openebs-localpv-provisioner-694b9f587d-8rb6d_199a945d-0218-47cd-9c06-346aadd4948f! 
I0716 21:17:19.822208       1 controller.go:1323] delete "pvc-1273f3ef-b069-476c-89a3-73e2d8ce7524": started 
E0716 21:17:19.883750       1 runtime.go:78] Observed a panic: runtime.boundsError{x:7, y:0, signed:true, code:0x1} (runtime error: slice bounds out of range [:7] with length 0) 
goroutine 139 [running]: 
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1667180, 0xc00025f100) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3 
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82 
panic(0x1667180, 0xc00025f100) 
	/usr/local/go/src/runtime/panic.go:969 +0x166 
github.com/openebs/maya/pkg/version.GetVersionDetails(0x17522d8, 0x14) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/github.com/openebs/maya/pkg/version/version.go:166 +0xe0 
github.com/openebs/maya/pkg/usage.(*versionSet).fetchAndSetVersion(0xc0002eb6a0, 0x16, 0x0) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/github.com/openebs/maya/pkg/usage/versionset.go:75 +0x1c1 
github.com/openebs/maya/pkg/usage.(*versionSet).getVersion(0xc0002eb6a0, 0xba4b00, 0xc00004401c, 0xc000066700) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/github.com/openebs/maya/pkg/usage/versionset.go:85 +0x5b 
github.com/openebs/maya/pkg/usage.(*Usage).Build(0xc0000a01c0, 0xc0000a01c0) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/github.com/openebs/maya/pkg/usage/usage.go:197 +0x7d 
github.com/openebs/dynamic-localpv-provisioner/cmd/provisioner-localpv/app.sendEventOrIgnore(0xc000400c70, 0x10, 0xc000042ed0, 0x28, 0xc000400c00, 0x4, 0xc000400930, 0xe, 0x1750789, 0x12) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/cmd/provisioner-localpv/app/provisioner.go:222 +0xc4 
github.com/openebs/dynamic-localpv-provisioner/cmd/provisioner-localpv/app.(*Provisioner).Delete(0xc000194550, 0xc0002384d0, 0x0, 0x0) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/cmd/provisioner-localpv/app/provisioner.go:180 +0x1dd 
sigs.k8s.io/sig-storage-lib-external-provisioner/controller.(*ProvisionController).deleteVolumeOperation(0xc00003d440, 0xc0002384d0, 0x252ee00, 0x1) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:1338 +0x25d 
sigs.k8s.io/sig-storage-lib-external-provisioner/controller.(*ProvisionController).syncVolume(0xc00003d440, 0x17251c0, 0xc0002384d0, 0x17251c0, 0xc0002384d0) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:1055 +0xaa 
sigs.k8s.io/sig-storage-lib-external-provisioner/controller.(*ProvisionController).syncVolumeHandler(0xc00003d440, 0xc000042ed0, 0x28, 0x4142e7, 0xc0002fe148) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:1002 +0x96 
sigs.k8s.io/sig-storage-lib-external-provisioner/controller.(*ProvisionController).processNextVolumeWorkItem.func1(0xc00003d440, 0x14eafc0, 0xc0004d22e0, 0x0, 0x0) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:944 +0xe0 
sigs.k8s.io/sig-storage-lib-external-provisioner/controller.(*ProvisionController).processNextVolumeWorkItem(0xc00003d440, 0x0) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:961 +0x53 
sigs.k8s.io/sig-storage-lib-external-provisioner/controller.(*ProvisionController).runVolumeWorker(0xc00003d440) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:874 +0x2b 
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc0000e8ea0) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5f 
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0000e8ea0, 0x3b9aca00, 0x0, 0x1, 0x0) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8 
k8s.io/apimachinery/pkg/util/wait.Until(0xc0000e8ea0, 0x3b9aca00, 0x0) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d 
created by sigs.k8s.io/sig-storage-lib-external-provisioner/controller.(*ProvisionController).Run.func1 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:826 +0x37d 
panic: runtime error: slice bounds out of range [:7] with length 0 [recovered] 
	panic: runtime error: slice bounds out of range [:7] with length 0 
 
goroutine 139 [running]: 
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x105 
panic(0x1667180, 0xc00025f100) 
	/usr/local/go/src/runtime/panic.go:969 +0x166 
github.com/openebs/maya/pkg/version.GetVersionDetails(0x17522d8, 0x14) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/github.com/openebs/maya/pkg/version/version.go:166 +0xe0 
github.com/openebs/maya/pkg/usage.(*versionSet).fetchAndSetVersion(0xc0002eb6a0, 0x16, 0x0) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/github.com/openebs/maya/pkg/usage/versionset.go:75 +0x1c1 
github.com/openebs/maya/pkg/usage.(*versionSet).getVersion(0xc0002eb6a0, 0xba4b00, 0xc00004401c, 0xc000066700) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/github.com/openebs/maya/pkg/usage/versionset.go:85 +0x5b 
github.com/openebs/maya/pkg/usage.(*Usage).Build(0xc0000a01c0, 0xc0000a01c0) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/github.com/openebs/maya/pkg/usage/usage.go:197 +0x7d 
github.com/openebs/dynamic-localpv-provisioner/cmd/provisioner-localpv/app.sendEventOrIgnore(0xc000400c70, 0x10, 0xc000042ed0, 0x28, 0xc000400c00, 0x4, 0xc000400930, 0xe, 0x1750789, 0x12) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/cmd/provisioner-localpv/app/provisioner.go:222 +0xc4 
github.com/openebs/dynamic-localpv-provisioner/cmd/provisioner-localpv/app.(*Provisioner).Delete(0xc000194550, 0xc0002384d0, 0x0, 0x0) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/cmd/provisioner-localpv/app/provisioner.go:180 +0x1dd 
sigs.k8s.io/sig-storage-lib-external-provisioner/controller.(*ProvisionController).deleteVolumeOperation(0xc00003d440, 0xc0002384d0, 0x252ee00, 0x1) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:1338 +0x25d 
sigs.k8s.io/sig-storage-lib-external-provisioner/controller.(*ProvisionController).syncVolume(0xc00003d440, 0x17251c0, 0xc0002384d0, 0x17251c0, 0xc0002384d0) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:1055 +0xaa 
sigs.k8s.io/sig-storage-lib-external-provisioner/controller.(*ProvisionController).syncVolumeHandler(0xc00003d440, 0xc000042ed0, 0x28, 0x4142e7, 0xc0002fe148) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:1002 +0x96 
sigs.k8s.io/sig-storage-lib-external-provisioner/controller.(*ProvisionController).processNextVolumeWorkItem.func1(0xc00003d440, 0x14eafc0, 0xc0004d22e0, 0x0, 0x0) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:944 +0xe0 
sigs.k8s.io/sig-storage-lib-external-provisioner/controller.(*ProvisionController).processNextVolumeWorkItem(0xc00003d440, 0x0) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:961 +0x53 
sigs.k8s.io/sig-storage-lib-external-provisioner/controller.(*ProvisionController).runVolumeWorker(0xc00003d440) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:874 +0x2b 
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc0000e8ea0) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5f 
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0000e8ea0, 0x3b9aca00, 0x0, 0x1, 0x0) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8 
k8s.io/apimachinery/pkg/util/wait.Until(0xc0000e8ea0, 0x3b9aca00, 0x0) 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d 
created by sigs.k8s.io/sig-storage-lib-external-provisioner/controller.(*ProvisionController).Run.func1 
	/go/src/github.com/openebs/dynamic-localpv-provisioner/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:826 +0x37d 
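
Both stack traces pass through the usage/analytics path (sendEventOrIgnore → usage.Build → version.GetVersionDetails), so disabling analytics may sidestep the panic until a fix lands (a workaround sketch, assuming the deployment honors the OPENEBS_IO_ENABLE_ANALYTICS environment variable):

kubectl set env deployment/openebs-localpv-provisioner -n openebs \
  OPENEBS_IO_ENABLE_ANALYTICS=false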

Environment details:

  • OpenEBS version: 2.11
  • Kubernetes version: Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2+rke2r1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-06-24T00:52:42Z", GoVersion:"go1.16.4b7", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: VMs on VMWare 6.7
  • OS (e.g: cat /etc/os-release): Ubuntu 20.04.2
  • kernel (e.g: uname -a): Linux oly-k3node-51d 5.4.0-77-generic #86-Ubuntu SMP Thu Jun 17 02:35:03 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

HostPath does not enforce quotas

OpenEBS LocalPV HostPath does not enforce quotas.
This is a known limitation of OpenEBS LocalPV HostPath, but it's one that seriously limits its usefulness in production.

I'd like OpenEBS LocalPV HostPath to collect disk usage stats and render pods unschedulable when they exceed quotas.
I know this is not as straightforward as it sounds, but I'd like to make OpenEBS HostPath render pods unschedulable when quotas are exceeded. For existing pods, I'd like to kill them if they are exceeding their quotas. For pods yet to be scheduled, I'd like this quota to be enforced in advance.
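
For XFS-backed hostpaths, quota settings can be expressed through the StorageClass (a sketch, assuming the XFSQuota CAS-config option and its grace-limit keys):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath-xfs
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: XFSQuota          # assumed option name
        enabled: "true"
        data:
          softLimitGrace: "0%"
          hardLimitGrace: "0%"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer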

Anything else you would like to add:
I know this is a roadmap feature.

Dynamic Local PV (hostpath and device) - GitHub Updates

  • README Updates
    • Badges
    • Project Status - Beta
    • k8s version compatibility
    • Quickstart guide
    • Contributor Docs
    • Adopters.md with links to openebs/openebs adopters
    • Roadmap link to openebs project
    • Community Links
  • Helm Charts
  • GitHub Builds
  • Multiarch builds
  • Disable Travis
  • Downstream tagging
  • e2e tests
  • Upgrades
  • Monitoring
  • Troubleshooting guide

Version 3.4.0 constantly crashes with "exec format error"

Describe the bug:
I installed the latest OpenEBS Helm charts in my single-node Kubernetes cluster running on Ubuntu 22.04 (64-bit).

Expected behaviour:
All pods should be running in the openebs namespace.

Steps to reproduce the bug:

$ helm repo add openebs https://openebs.github.io/charts
$ helm repo update
$ helm install openebs --namespace=openebs --create-namespace openebs/openebs

and after installation

$ helm list -n openebs
NAME    NAMESPACE       REVISION        UPDATED                                         STATUS          CHART           APP VERSION
openebs openebs         1               2023-06-24 13:09:14.851162273 +0200 CEST        deployed        openebs-3.7.0   3.7.0
$ kubectl get pods -n openebs
NAME                                           READY   STATUS             RESTARTS      AGE
openebs-localpv-provisioner-8488c699fb-stqjt   0/1     CrashLoopBackOff   3 (23s ago)   66s
openebs-ndm-operator-5f6bf6fb48-pmzmh          1/1     Running            0             66s
openebs-ndm-tq2p6                              1/1     Running            0             66s

The output of the following commands will help us better understand what's going on:

$ kubectl get pods -n openebs --show-labels
NAME                                           READY   STATUS             RESTARTS       AGE     LABELS
openebs-localpv-provisioner-8488c699fb-stqjt   0/1     CrashLoopBackOff   5 (2m3s ago)   4m58s   app=openebs,component=localpv-provisioner,name=openebs-localpv-provisioner,openebs.io/component-name=openebs-localpv-provisioner,openebs.io/version=3.6.0,pod-template-hash=8488c699fb,release=openebs
openebs-ndm-operator-5f6bf6fb48-pmzmh          1/1     Running            0              4m58s   app=openebs,component=ndm-operator,name=ndm-operator,openebs.io/component-name=ndm-operator,openebs.io/version=3.6.0,pod-template-hash=5f6bf6fb48,release=openebs
openebs-ndm-tq2p6                              1/1     Running            0              4m58s   app=openebs,component=ndm,controller-revision-hash=5ddf54dd65,name=openebs-ndm,openebs.io/component-name=ndm,openebs.io/version=3.6.0,pod-template-generation=1,release=openebs
$ kubectl logs -n openebs openebs-localpv-provisioner-8488c699fb-stqjt
exec /usr/local/bin/provisioner-localpv: exec format error

Anything else we need to know?:

If I install version 3.3.0 there are no problems. Versions 3.4.0, 3.5.0 and 3.6.0 all exhibit the same error.

$ helm install openebs --namespace=openebs --create-namespace openebs/openebs --version=3.3.0
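
An exec format error usually means the container binary does not match the node's CPU architecture, so comparing the node architecture against the image's published platforms is a reasonable first check (a sketch):

kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture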

Environment details:

  • OpenEBS version (use kubectl get po -n openebs --show-labels):

3.7.0 (see above)

  • Kubernetes version (use kubectl version):
$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.11", GitCommit:"8cfcba0b15c343a8dc48567a74c29ec4844e0b9e", GitTreeState:"clean", BuildDate:"2023-06-14T09:57:26Z", GoVersion:"go1.19.10", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.11", GitCommit:"8cfcba0b15c343a8dc48567a74c29ec4844e0b9e", GitTreeState:"clean", BuildDate:"2023-06-14T09:49:38Z", GoVersion:"go1.19.10", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:

Contabo Virtual Private Server M 16GB / 6vCPU / 400 GB SSD

  • OS (e.g: cat /etc/os-release):
$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.2 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.2 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
  • kernel (e.g: uname -a):
$ uname -a
Linux vmd38168.contaboserver.net 5.15.0-25-generic #25-Ubuntu SMP Wed Mar 30 15:54:22 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
  • others:

failed to provision volume, no node was specified

Describe the bug: failed to provision volume, no node was specified

Expected behaviour: provisioning works

Steps to reproduce the bug:

Using https://openebs.github.io/openebs (openebs)

Following values:

          engines:
            replicated:
              mayastor:
                enabled: false
            local:
              lvm:
                enabled: false
              zfs:
                enabled: false

          localpv-provisioner:
            enabled: true
            analytics:
              enabled: false

          lvm-localpv:
            enabled: false

          mayastor:
            enabled: false

          zfs-localpv:
            enabled: false

          openebs-crds:
            csi:
              volumeSnapshots:
                enabled: false
                keep: false

Storage class

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-vm
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/var/openebs/local"
provisioner: openebs.io/local
volumeBindingMode: Immediate
reclaimPolicy: Delete

pvc

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    cdi.kubevirt.io/createdForDataVolume: d5113330-83f5-491c-acad-feb543242fdd
    cdi.kubevirt.io/storage.contentType: kubevirt
    cdi.kubevirt.io/storage.import.endpoint: https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
    cdi.kubevirt.io/storage.import.source: http
    cdi.kubevirt.io/storage.pod.restarts: "0"
    cdi.kubevirt.io/storage.preallocation.requested: "false"
    cdi.kubevirt.io/storage.usePopulator: "false"
    volume.beta.kubernetes.io/storage-provisioner: openebs.io/local
    volume.kubernetes.io/storage-provisioner: openebs.io/local
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    alerts.k8s.io/KubePersistentVolumeFillingUp: disabled
    app: containerized-data-importer
    app.kubernetes.io/component: storage
    app.kubernetes.io/managed-by: cdi-controller
  name: vm-ubuntu-datavolume
  namespace: vm
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: hostpath-vm
  volumeMode: Filesystem
status:
  phase: Pending

The output of the following commands will help us better understand what's going on:
I get the following error/event:

Type: Warning
Reason: ProvisioningFailed
From: openebs.io/local_openebs-localpv-provisioner-6947fc6f65-gct92_6b42f8f6-4f8b-4270-9a8f-afcab507adbc
Message: failed to provision volume with StorageClass "hostpath-vm": configuration error, no node was specified

provisioner logs:

I0523 00:20:41.062099       1 controller.go:1366] provision "vm/vm-ubuntu-datavolume" class "hostpath-vm": started
W0523 00:20:41.062319       1 controller.go:937] Retrying syncing claim "6accb10c-577c-4c05-9cb0-b9399215ff77" because failures 2 < threshold 15
E0523 00:20:41.062399       1 controller.go:957] error syncing claim "6accb10c-577c-4c05-9cb0-b9399215ff77": failed to provision volume with StorageClass "hostpath-vm": configuration error, no node was specified
  • kubectl get pods -n <openebs_namespace> --show-labels
NAME                                           READY   STATUS    RESTARTS        AGE   LABELS
openebs-localpv-provisioner-6947fc6f65-gct92   1/1     Running   1 (2m53s ago)   28m   app=localpv-provisioner,chart=localpv-provisioner-4.0.0,component=localpv-provisioner,heritage=Helm,name=openebs-localpv-provisioner,openebs.io/component-name=openebs-localpv-provisioner,openebs.io/version=4.0.0,pod-template-hash=6947fc6f65,release=openebs

Anything else we need to know?:
Using KubeVirt CDI, and Virtink.
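
The hostpath provisioner picks the target node from the scheduler's selected-node annotation, which is only set when the StorageClass uses WaitForFirstConsumer binding; with volumeBindingMode: Immediate (as above), no node is ever selected, which matches the "no node was specified" error. A likely fix (a sketch):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-vm
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/var/openebs/local"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete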

Environment details:

  • OpenEBS version (use kubectl get po -n openebs --show-labels): 4.0.0
  • Kubernetes version (use kubectl version): v1.30.1 / v1.29.4+k0s
  • Cloud provider or hardware configuration: Intel Nuc
  • OS (e.g: cat /etc/os-release): Alpine Linux v3.19
  • kernel (e.g: uname -a): Linux nuc-k8s.local 6.6.30-0-lts #1-Alpine SMP PREEMPT_DYNAMIC Mon, 06 May 2024 07:55:42 +0000 x86_64 Linux

When docker overlay2.size is enabled, the actual size of the pvc request is always the overlay2.size setting size

Describe the bug: The effective volume size is always the Docker overlay2.size setting, not the PVC request size.

Expected behaviour: The actual size should be the PVC request size.

Steps to reproduce the bug:

  1. OpenEBS localpv and Docker on the same XFS filesystem with prjquota enabled

  2. enable overlay2.size in the Docker daemon config

cat /etc/docker/daemon.json
{
 "storage-driver": "overlay2",
    "storage-opts": [
        "overlay2.size=10GB"
    ]
}
  3. deploy a pod and a PVC with a size of 5 GB

The output of the following commands will help us better understand what's going on:

  • kubectl get pods -n <openebs_namespace> --show-labels

    NAME                                          READY   STATUS    RESTARTS   AGE   LABELS
    localpv-localpv-provisioner-7d59d8f6c-zdt5c   1/1     Running   0          22h   app=localpv-provisioner,chart=localpv-provisioner-4.0.0,component=localpv-provisioner,heritage=Helm,name=openebs-localpv-provisioner,openebs.io/component-name=openebs-localpv-provisioner,openebs.io/version=4.0.0,pod-template-hash=7d59d8f6c,release=localpv
  • kubectl logs <upgrade_job_pod> -n <openebs_namespace>

    I0330 07:08:37.197263       1 controller.go:1366] provision "default/localpv-vol" class "local-ssd": started
    I0330 07:08:37.201411       1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"localpv-vol", UID:"115f2690-dc12-49a5-bd12-adb1b5824f52", APIVersion:"v1", ResourceVersion:"11761250", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/localpv-vol"
    I0330 07:08:37.203873       1 provisioner_hostpath.go:77] Creating volume pvc-115f2690-dc12-49a5-bd12-adb1b5824f52 at node with labels {map[kubernetes.io/hostname:cn002.zw1.local]}, path:/var/openebs/local/pvc-115f2690-dc12-49a5-bd12-adb1b5824f52,ImagePullSecrets:[]
    2024-03-30T07:08:45.254Z	INFO	app/provisioner_hostpath.go:131		{"eventcode": "local.pv.quota.success", "msg": "Successfully applied quota", "rname": "pvc-115f2690-dc12-49a5-bd12-adb1b5824f52", "storagetype": "hostpath"}
    2024-03-30T07:08:45.254Z	INFO	app/provisioner_hostpath.go:215		{"eventcode": "local.pv.provision.success", "msg": "Successfully provisioned Local PV", "rname": "pvc-115f2690-dc12-49a5-bd12-adb1b5824f52", "storagetype": "hostpath"}
    I0330 07:08:45.254731       1 controller.go:1449] provision "default/localpv-vol" class "local-ssd": volume "pvc-115f2690-dc12-49a5-bd12-adb1b5824f52" provisioned
    I0330 07:08:45.254744       1 controller.go:1462] provision "default/localpv-vol" class "local-ssd": succeeded
    I0330 07:08:45.254752       1 volume_store.go:212] Trying to save persistentvolume "pvc-115f2690-dc12-49a5-bd12-adb1b5824f52"
    I0330 07:08:45.257734       1 volume_store.go:219] persistentvolume "pvc-115f2690-dc12-49a5-bd12-adb1b5824f52" saved
    I0330 07:08:45.257856       1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"localpv-vol", UID:"115f2690-dc12-49a5-bd12-adb1b5824f52", APIVersion:"v1", ResourceVersion:"11761250", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-115f2690-dc12-49a5-bd12-adb1b5824f52

Anything else we need to know?:

Environment details:

  • OpenEBS version (use kubectl get po -n openebs --show-labels): 4.0.0
  • Kubernetes version (use kubectl version): v1.28.7+k3s1
  • Cloud provider or hardware configuration: AMD EPYC 7452 32-Core Processor / Intel D3-S4510 Series 480GB TLC SATA 6Gbps
  • OS (e.g: cat /etc/os-release): ubuntu 22.04
  • kernel (e.g: uname -a): 5.15.0-97-generic
  • others:

Include BlockDevice driveType as selector within cas.openebs.io/config

Describe the problem/challenge you have
I would like to be able to separate device types into separate storage classes, so that workloads can control which device type provides their storage by specifying the appropriate storage class.

Describe the solution you'd like
Allow the StorageClass annotation cas.openebs.io/config to filter on the driveType details coming from Node Device Manager via the BlockDevice CR. For instance:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-device
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: DeviceType
        value: SSD
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

The DeviceType value would be HDD, SSD, NVMe, etc., similar to the mechanism described in the docs for filtering devices based on a tag like openebs.io/block-device-tag=mongo.
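
The closest mechanism available today is the tag filter mentioned above (a sketch, assuming the BlockDeviceTag CAS-config option and BlockDevices labeled with openebs.io/block-device-tag=mongo):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-device-mongo
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "device"
      - name: BlockDeviceTag    # assumed option name
        value: "mongo"
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer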

Anything else you would like to add:
Slack discussion: https://kubernetes.slack.com/archives/CUAKPFU78/p1615567766054600

Environment:

  • OpenEBS version 2.6.0
  • Kubernetes version 1.17.0
  • Cloud provider or hardware configuration: HP Z8 G4
  • OS (e.g: cat /etc/os-release): ubuntu 18.04
  • kernel (e.g: uname -a): 5.3.0-28-generic

Upgrade the Issue Template to use GitHub Issue forms 📜

Introduction

GitHub has recently rolled out a public beta for their issue forms feature. This would allow us to create interactive issue templates and validate them.

OpenEBS currently uses the older issue template format. The task is to create GitHub issue forms for this repository.

Tasks summary:

  • Fork & clone this repository
  • Prepare bug report issue form in .github/ISSUE_TEMPLATE/bug.yaml
  • Prepare documentation issue form in .github/ISSUE_TEMPLATE/documentation.yaml
  • Prepare feature request issue form in .github/ISSUE_TEMPLATE/feature.yaml
  • Push changes to master and test issue forms on your fork
  • Submit pull request
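
A minimal sketch of what .github/ISSUE_TEMPLATE/bug.yaml could look like (field choices are illustrative):

name: Bug report
description: Report a bug in dynamic-localpv-provisioner
labels: ["bug"]
body:
  - type: textarea
    attributes:
      label: Describe the bug
      description: A clear and concise description of what the bug is.
    validations:
      required: true
  - type: textarea
    attributes:
      label: Steps to reproduce the bug
  - type: input
    attributes:
      label: OpenEBS version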

New Helm chart parameter for default path of the hostpath StorageClass

Describe the problem/challenge you have

I have mounted a disk in /mnt/data and I would like to change the default path (/var/openebs/local) of the hostpath StorageClass so OpenEBS Local PV hostpath volumes are created in /mnt/data/openebs/local.

Full discussion here https://kubernetes.slack.com/archives/CUAKPFU78/p1618156947021900

Describe the solution you'd like

The solution suggested by @niladrih (in Slack) consists of adding a new hostpathClass.basePath parameter to the Helm chart:
helm install openebs-localpv --set hostpathClass.basePath=/mnt/data/openebs/local openebs-localpv/localpv-provisioner -n openebs
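
The resulting StorageClass would then carry the custom path via CAS config (a sketch, mirroring the BasePath option shown elsewhere in this document):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/mnt/data/openebs/local"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete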

Environment:

  • OpenEBS version (use kubectl get po -n openebs --show-labels): openebs.io/version=1.3.0 chart=localpv-provisioner-2.7.0 chart=openebs-ndm-1.3.0
  • Kubernetes version (use kubectl version): 1.19.6
  • Cloud provider or hardware configuration: bare metal
  • OS (e.g: cat /etc/os-release): Ubuntu Server 18.04.5 LTS
  • kernel (e.g: uname -a): 5.4.0-70-generic
  • others:

Do not create log_file

Describe the bug: The log file is not created.

Expected behaviour: The log file is created at the configured location.

Steps to reproduce the bug:
1. Set the args --logtostderr=false and --log_file=/var/log/localpv.log
2. Start the container

The output of the following commands will help us better understand what's going on:

  • Exec into the container and observe that no localpv.log file is created at /var/log/

Anything else we need to know?:
Maybe vendor/github.com/openebs/maya/pkg/logs/logs.go is using klog v1, NOT v2.

Environment details:

  • OpenEBS version v3.0.1

Error: cannot find volume "data" to mount into container "local-path-init"

Describe the bug:
I installed the provisioner this way:

#!/usr/bin/env bash
echo "deploying openebs operator"
kubectl apply -f https://openebs.github.io/charts/openebs-operator-lite.yaml -f https://openebs.github.io/charts/openebs-lite-sc.yaml

I noticed the following events after some time:

openebs          3m25s       Warning   Failed                     pod/init-pvc-06b8282d-c265-4048-95b9-8efee0992acd                        Error: cannot find volume "data" to mount into container "local-path-init"
Expected behaviour:

Steps to reproduce the bug:

The output of the following commands will help us better understand what's going on:

  • kubectl get pods -n <openebs_namespace> --show-labels
 zangetsu@andromeda ~/proj/private/k8s-vagrant-multi-node $ kubectl get pods -n openebs --show-labels
NAME                                          READY   STATUS    RESTARTS   AGE   LABELS
openebs-localpv-provisioner-5984c9745-gxw8m   1/1     Running   0          15m   name=openebs-localpv-provisioner,openebs.io/component-name=openebs-localpv-provisioner,openebs.io/version=2.10.0,pod-template-hash=5984c9745
openebs-ndm-4rgs4                             1/1     Running   0          15m   controller-revision-hash=5f5c87449b,name=openebs-ndm,openebs.io/component-name=ndm,openebs.io/version=2.10.0,pod-template-generation=1
openebs-ndm-operator-7576d647bc-zb8nc         1/1     Running   0          15m   name=openebs-ndm-operator,openebs.io/component-name=ndm-operator,openebs.io/version=2.10.0,pod-template-hash=7576d647bc
openebs-ndm-z78hv                             1/1     Running   0          15m   controller-revision-hash=5f5c87449b,name=openebs-ndm,openebs.io/component-name=ndm,openebs.io/version=2.10.0,pod-template-generation=1

  • kubectl logs <upgrade_job_pod> -n <openebs_namespace>

Anything else we need to know?:

Environment details:

  • OpenEBS version (use kubectl get po -n openebs --show-labels): openebs.io/version=2.10.0
  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:52:14Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:53:14Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration:
  • OS (e.g: cat /etc/os-release): Gentoo
  • kernel (e.g: uname -a): 5.12.12-gentoo-x86_64
  • others:

Automate operations to be done after a node is removed from cluster

Describe the problem/challenge you have

When a node running a stateful pod with a Local PV goes out of the cluster, the pod gets into a pending state and remains there. The administrator or an automated operator will have to run some manual steps to bring the pod back online. The operations to be performed may vary depending on how storage is connected to the nodes. However, a few operations are common across different stateful operators. The general actions to be performed are:

  • Run checks to ensure PV is really dead.
    • The node on which PV was provisioned is gone. The node mentioned in the affinity is no longer available in the cluster.
    • The application to which PV was provisioned is deleted. The PV remained in the system due to some failure condition during delete operations.
    • The application is in a state to add new replica (delete old pv/replica and let stateful operator create new pv/replica)
    • other application-specific checks
  • Delete the PV and the application PVC/replica
    • If reclaimPolicy is delete, delete the PV
    • If reclaimPolicy is retain, remove references
  • Wait for a new PV to be created and run some post-operations specific to the application, like:
  • Run a command/API on the application to rebalance

Describe the solution you'd like
A Kubernetes operator that can be launched into the cluster with a ConfigMap(s) that can specify:

  • annotations or configuration in the PV spec that uniquely identify the PVs to be acted upon
  • enable/disable pre-checks
  • enable/disable post-hooks

Anything else you would like to add:
It should be possible to either run this operator independently or embed this controller into other stateful operators.

XFS quota fails if LVM based filesystem is not mounted with loop option

Describe the bug:

  • XFS Project quota is not set on LVM or LUKS encrypted LVM

Expected behaviour:

  • XFS Project quota is set

Steps to reproduce the bug:

  • Create an LVM or LUKS-encrypted LVM volume and mount it as an XFS filesystem with prjquota on /var/openebs/local
  • Attempt to set a quota; the logs say it succeeds, but it actually fails

The output of the following commands will help us better understand what's going on:

Current image(linux-utils:3.0.0)

# xfs_quota -x -c 'report -h' /data 
xfs_quota: cannot setup path for mount /data: No such device or address

Current image (linux-utils:3.0.0) + mount /dev from the host

# xfs_quota -x -c 'report -h' /data 
Project quota on /data (/dev/mapper/lv_enc_volume)
                        Blocks              
Project ID   Used   Soft   Hard Warn/Grace   
---------- --------------------------------- 
#0              0      0      0  00 [------]
#1           1.6T   1.8T   1.8T  00 [------]

Anything else we need to know?:

https://stackoverflow.com/questions/64525271/how-to-make-xfs-quotas-work-in-kubernetes-volumes-on-digitalocean

By mounting /dev from the host into the container, xfs_quota works as expected.
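
As a pod spec fragment, that workaround looks roughly like this (volume and container names are illustrative):

spec:
  volumes:
    - name: host-dev
      hostPath:
        path: /dev
  containers:
    - name: linux-utils
      image: openebs/linux-utils:3.0.0
      volumeMounts:
        - name: host-dev
          mountPath: /dev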

Environment details:

  • OpenEBS version (use kubectl get po -n openebs --show-labels): 3.0.0
  • Kubernetes version (use kubectl version): v1.21.7
  • Cloud provider or hardware configuration: Physical server
  • OS (e.g: cat /etc/os-release): Ubuntu 20.04.3 LTS
  • kernel (e.g: uname -a): 5.13.0-35-generic # 40~20.04.1-Ubuntu
  • others:
