openebs / openebsctl

`openebsctl` is a kubectl plugin to manage OpenEBS storage components.

License: Apache License 2.0


openebsctl's Introduction

Welcome to OpenEBS

OpenEBS Welcome Banner

OpenEBS is a modern Block-Mode storage platform, a Hyper-Converged software Storage System and virtual NVMe-oF SAN (vSAN) Fabric that integrates natively into the core of Kubernetes.

Try our Slack channel
If you have questions about using OpenEBS, please use the CNCF Kubernetes OpenEBS slack channel; it is open for anyone to ask a question.

Important

OpenEBS provides...

  • Stateful, persistent, dynamically provisioned storage volumes for Kubernetes
  • High Performance NVMe-oF & NVMe/RDMA storage transport optimized for All-Flash Solid State storage media
  • Block devices, LVM, ZFS, ext2/ext3/ext4, XFS, BTRFS...and more
  • 100% Cloud-Native K8s declarative storage platform
  • A cluster-wide vSAN block-mode fabric that provides containers/Pods with HA resilient access to storage across the entire cluster.
  • Node-local K8s PVs and n-way replicated K8s PVs
  • Deployable On-premise & in-cloud: (AWS EC2/EKS, Google GCP/GKE, Azure VM/AKS, Oracle OCI, IBM/RedHat OpenShift, Civo Cloud, Hetzner Cloud... and more)
  • Enterprise Grade data management capabilities such as snapshots, clones, replicated volumes, DiskGroups, Volume Groups, Aggregates, RAID

| Type | Storage Engine | Type of data services | Status | In OSS ver |
|------|----------------|-----------------------|--------|------------|
| Replicated PV (replicated data volumes in a cluster-wide vSAN block-mode fabric) | Replicated PV Mayastor | Mayastor for High Availability deployments, distributing & replicating volumes across the cluster | Stable, deployable in PROD (Releases) | v4.0.1 |
| Local PV (non-replicated node-local data volumes; Local PV has multiple variants, see below) | Local PV Hostpath | Local PV HostPath for integration with local node hostpath (e.g. /mnt/fs1) | Stable, deployable in PROD (Releases) | v4.0.1 |
| | Local PV ZFS | Local PV ZFS for integration with local ZFS storage deployments | Stable, deployable in PROD (Releases) | v4.0.1 |
| | Local PV LVM2 | Local PV LVM for integration with local LVM2 storage deployments | Stable, deployable in PROD (Releases) | v4.0.1 |
| | Local PV Rawfile | Local PV Rawfile for integration with loop-mounted raw device-file filesystems | Stable, deployable in PROD, undergoing evaluation & integration (release: v0.70) | v4.0.1 |

STANDARD is optimized for NVMe and SSD Flash storage media, and integrates modern, cutting-edge, high-performance storage technologies at its core...

☑️   It uses the high-performance SPDK storage stack - (SPDK is an open-source NVMe project initiated by Intel)
☑️   The hyper-modern io_uring Linux kernel async polling-mode I/O interface - (fastest kernel I/O mode possible)
☑️   Native abilities for RDMA and Zero-Copy I/O
☑️   NVMe-oF TCP Block storage Hyper-converged data fabric
☑️   Block layer volume replication
☑️   Logical volumes and Diskpool-based data management
☑️   A native high performance Blobstore
☑️   Native Block layer Thin provisioning
☑️   Native Block layer Snapshots and Clones

Get in touch with our team.

Vishnu Attur :octocat: @avishnu Admin, Maintainer
Abhinandan Purkait 😎 @Abhinandan-Purkait Maintainer
Niladri Halder 🚀 @niladrih Maintainer
Ed Robinson 🐶 @edrob999   CNCF Primary Liaison
Special Maintainer
Tiago Castro @tiagolobocastro   Admin, Maintainer
David Brace @orville-wright     Admin, Maintainer

Activity dashboard


Current status

Release Support Twitter/X Contrib License status CI Status
Releases Slack channel #openebs Twitter PRs Welcome FOSSA Status CII Best Practices

Read this in 🇩🇪 🇷🇺 🇹🇷 🇺🇦 🇨🇳 🇫🇷 🇧🇷 🇪🇸 🇵🇱 🇰🇷 other languages.

Deployment

  • In-cloud: (AWS EC2/EKS, Google GCP/GKE, Azure VM/AKS, Oracle OCI, IBM/RedHat OpenShift, Civo Cloud, Hetzner Cloud... and more)
  • On-premise: bare metal, or virtualized hypervisor infra using VMware ESXi, KVM/QEMU (K8s KubeVirt), Proxmox
  • Deployed as native K8s elements: Deployments, Containers, Services, StatefulSets, CRDs, Sidecars, Jobs and Binaries, all on K8s worker nodes.
  • Runs 100% in K8s userspace, so it's highly portable and runs across many OSes & platforms.

Roadmap (as of June 2024)



QUICKSTART : Installation

NOTE: Depending on which of the 5 storage engines you choose to deploy, there are pre-requisites that must be met. See the detailed quickstart docs...


  1. Set up the helm repository.
# helm repo add openebs https://openebs.github.io/openebs
# helm repo update

2a. Install the Full OpenEBS helm chart with default values.

  • This installs ALL OpenEBS Storage Engines* in the openebs namespace, with the chart name openebs:
    Local PV Hostpath, Local PV LVM, Local PV ZFS, Replicated Mayastor
# helm install openebs --namespace openebs openebs/openebs --create-namespace

2b. To install OpenEBS without the Replicated Mayastor storage engine, use the following command:

# helm install openebs --namespace openebs openebs/openebs --set engines.replicated.mayastor.enabled=false --create-namespace
  3. To view the chart
# helm ls -n openebs

Output:
NAME     NAMESPACE   REVISION  UPDATED                                   STATUS     CHART           APP VERSION
openebs  openebs     1         2024-03-25 09:13:00.903321318 +0000 UTC   deployed   openebs-4.0.1   4.0.1
  4. Verify installation
    • List the pods in namespace
    • Verify StorageClasses
# kubectl get pods -n openebs

Example Output:
NAME                                              READY   STATUS    RESTARTS   AGE
openebs-agent-core-674f784df5-7szbm               2/2     Running   0          11m
openebs-agent-ha-node-nnkmv                       1/1     Running   0          11m
openebs-agent-ha-node-pvcrr                       1/1     Running   0          11m
openebs-agent-ha-node-rqkkk                       1/1     Running   0          11m
openebs-api-rest-79556897c8-b824j                 1/1     Running   0          11m
openebs-csi-controller-b5c47d49-5t5zd             6/6     Running   0          11m
openebs-csi-node-flq49                            2/2     Running   0          11m
openebs-csi-node-k8d7h                            2/2     Running   0          11m
openebs-csi-node-v7jfh                            2/2     Running   0          11m
openebs-etcd-0                                    1/1     Running   0          11m
openebs-etcd-1                                    1/1     Running   0          11m
openebs-etcd-2                                    1/1     Running   0          11m
openebs-localpv-provisioner-6ddf7c7978-jsstg      1/1     Running   0          3m9s
openebs-lvm-localpv-controller-7b6d6b4665-wfw64   5/5     Running   0          3m9s
openebs-lvm-localpv-node-62lnq                    2/2     Running   0          3m9s
openebs-lvm-localpv-node-lhndx                    2/2     Running   0          3m9s
openebs-lvm-localpv-node-tlcqv                    2/2     Running   0          3m9s
openebs-zfs-localpv-controller-f78f7467c-k7ldb    5/5     Running   0          3m9s
...

# kubectl get sc

This lists the StorageClasses created by the chart (for example, openebs-hostpath and openebs-single-replica).

For more details, please refer to OpenEBS Documentation.

OpenEBS is a CNCF project and DataCore, Inc is a CNCF Silver member. DataCore supports CNCF extensively and has funded OpenEBS's participation in every KubeCon event since 2020. Our project team is managed under the CNCF Storage Landscape and we contribute to the CNCF CSI and TAG Storage project initiatives. We proudly support CNCF Cloud Native Community Groups initiatives.

For project updates, subscribe to OpenEBS Announcements.
For interacting with other OpenEBS users, subscribe to OpenEBS Users.


Container Storage Interface group Storage Technical Advisory Group     Cloud Native Community Groups

Commercial Offerings

Commercially supported deployments of OpenEBS are available via key companies. (Some provide services, funding, technology, infrastructure, and resources to the OpenEBS project.)

(OpenEBS OSS is a CNCF project. CNCF does not endorse any specific company.)

openebsctl's People

Contributors

abhilashshetty04, abhinandan-purkait, abhishek-kumar09, amishakumari544, avishnu, burntcarrot, daniel-shuy, fossabot, giovannitgl, kmova, mukulkolpe, parths007, prateekpandey14, shubham14bajpai, survivant, vaniisgh


openebsctl's Issues

Add support for LVM LocalPV

Potential scope

More details will be added here, as more info drops by

Some more fields are getting added to the LVM resources: openebs/lvm-localpv#115

  • Add a sub-command for getting volume-group (vg), as squishing this into the pool sub-command doesn't make good UX sense.
  • pool sub-command will be renamed to something generic like storage, so that it can mean a pool of resources (and not a pool as in a cStor pool or an LVM volume-group), as mentioned in recent talks (a cobra sketch of this wiring follows the list below)
  • CSI -> Get CR structures from https://github.com/openebs/lvm-localpv
  • Listing LVM volumes
  • Listing pools/LVM-nodes
  • Describing LVM volumes
  • Describing pools/LVM-nodes, etc
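A rough cobra sketch of the sub-command wiring proposed above (the command and flag names follow the proposal and are assumptions, not the shipped CLI):

```go
package cmd

import (
	"fmt"

	"github.com/spf13/cobra"
)

// NewStorageCmd sketches the proposed generic `storage` sub-command; the
// old `pool` name is kept as an alias so existing users aren't broken.
func NewStorageCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use:     "storage",
		Aliases: []string{"pool"},
		Short:   "List or describe a pool of resources (cStor pools, LVM volume-groups, ...)",
		RunE: func(cmd *cobra.Command, args []string) error {
			casType, err := cmd.Flags().GetString("cas-type")
			if err != nil {
				return err
			}
			// Real logic would dispatch to the engine-specific lister here.
			fmt.Printf("listing storage for cas-type=%q, args=%v\n", casType, args)
			return nil
		},
	}
	cmd.Flags().String("cas-type", "", "storage engine, e.g. lvm, zfs, cstor")
	return cmd
}
```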

Investigation: Not all tools of GolangCI lint are running in the CI

There is likely some misconfiguration in the CI which is preventing minor linters like gosimple, golint, etc. from raising red flags. E.g.

func awesome(something bool) (*Result, error) { // Result, value, err are illustrative
    // ... set up value and err ...
    if something {
        return nil, err
    } else { // gosimple/revive: redundant else after a return in the if branch
        return value, nil
    }
} // end of function

should raise warnings at the very least, but none are being raised currently.

Starting points for completion

  1. Take the Go-Tour if you aren't up to speed with Golang
  2. Read some basics about Continuous Integration and something about GolangCI Lint and what linters it supports.
  3. Enable some of the linters like revive, gofmt, errorlint, etc.
  4. Fix the linting issues

Add Jiva Volume describe feature similar to cstor volume describe

Scope

As of now, one can run kubectl openebs describe volume pvc-abcd-1234 to describe a volume (PersistentVolume), with needful information filled in by various other custom resources like CStorVolume, CStorVolumeConfig, CStorVolumeAttachment, StorageClass, etc.

Similarly, detailed information about a JivaVolume can be fetched from JivaVolumePolicy

TUI User Experience

$ kubectl openebs describe volume example-jiva-pv
# I'll add some more details viz what info should be shown

The List-By-Object feature should be preserved as far as possible, as this project aims to be the SINGLE key for a user to effectively & efficiently debug all-OpenEBS-things.

$ kubectl openebs describe volume example-jiva-pv example-cstor-pv example-local
....

$ kubectl openebs describe volume pvc-193844d7-3bef-45a3-8b7d-ed3991391b45
# jiva volume describe output
pvc-193844d7-3bef-45a3-8b7d-ed3991391b45 Details :
-----------------
NAME            : pvc-193844d7-3bef-45a3-8b7d-ed3991391b45
ACCESS MODE     : ReadWriteOnce
CSI DRIVER      : jiva.csi.openebs.io
STORAGE CLASS   : jiva-csi (user created PVC)
VOLUME PHASE    : Released
VERSION         : 2.9.0
JVP              : jiva-volume-policy-nam
SIZE            : 5.0 GiB
STATUS          : Ready (jv.Status.Phase)
REPLICA COUNT   : 1 (jv.Spec.Policy.Target.ReplicationFactor)

Portal Details :
------------------
IQN              :  iqn.2016-09.com.openebs.cstor:pvc-193844d7-3bef-45a3-8b7d-ed3991391b45 (jv.Spec.IscsiSpec.iqn)
VOLUME NAME      :  pvc-193844d7-3bef-45a3-8b7d-ed3991391b45 (jv.Metadata.Name)
TARGET NODE NAME :  node1-virtual-machine (jv.Labels["nodeID"] )
PORTAL           :  10.106.27.10:3260 (jv.spec.IscsiSpec.targetIP+jv.spec.IscsiSpec.targetPort)

Replica Details :
-----------------
# kubectl get pvc -n JIVA_NAMESPACE -l openebs.io/component=jiva-replica,openebs.io/persistent-volume={{.jv.Metadata.Name}}
NAME                                                          STATUS   VOLUME                                     CAPACITY     STORAGECLASS       AGE
openebs-pvc-478a8329-f02d-47e5-8288-0c28b582be25-jiva-rep-0   Bound    pvc-d94c2500-6ed4-44c2-ba5d-bc6aecd5cff7   4Gi                    openebs-hostpath   19d

Support command openebsctl pool describe

openebsctl pool describe [cspi-name] should provide the following details (one possible data shape is sketched after this list):

  • cStor Pool Instance Details
    • Name
    • Hostname
    • Capacity
    • Free Capacity
    • Status
    • Read-only Status
    • RAID Type
  • List of Block Devices
    • Device Name
    • Capacity
    • Status
  • List of Provisioned Replicas.
    • Name
    • PVC Name
    • Size
    • Status
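As a plain sketch, the data this command needs to gather could be grouped like this (type and field names are assumptions, not the openebs-api types):

```go
package pool

// CSPIDescribe groups the details listed above for rendering.
type CSPIDescribe struct {
	Name, HostName      string
	Capacity            string
	FreeCapacity        string
	Status              string
	ReadOnly            bool
	RAIDType            string
	BlockDevices        []BlockDeviceDetail
	ProvisionedReplicas []ReplicaDetail
}

// BlockDeviceDetail mirrors the "List of Block Devices" section.
type BlockDeviceDetail struct {
	Name, Capacity, Status string
}

// ReplicaDetail mirrors the "List of Provisioned Replicas" section.
type ReplicaDetail struct {
	Name, PVCName, Size, Status string
}
```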

Enhance: Missing details about Jiva Replicas

While describing a Jiva Volume, the following is displayed.

kubectl openebs describe volume --openebs-namespace=openebs pvc-1b21ac95-fd9f-466f-a39b-c1e1ab6e6cb5
pvc-1b21ac95-fd9f-466f-a39b-c1e1ab6e6cb5 Details :
-----------------
NAME            : pvc-1b21ac95-fd9f-466f-a39b-c1e1ab6e6cb5
ACCESS MODE     : ReadWriteOnce 
CSI DRIVER      : jiva.csi.openebs.io
STORAGE CLASS   : jiva-sc-mongo
VOLUME PHASE    : Bound
VERSION         : 2.11.0
JVP             : jiva-policy-mongo
SIZE            : 4.7 GiB
STATUS          : RW
REPLICA COUNT   : 1
Portal Details :
------------------
IQN              :  iqn.2016-09.com.openebs.jiva:pvc-1b21ac95-fd9f-466f-a39b-c1e1ab6e6cb5
VOLUME NAME      :  pvc-1b21ac95-fd9f-466f-a39b-c1e1ab6e6cb5
TARGET NODE NAME :  gke-kmova-helm-default-pool-30f2c6c6-qf2w
PORTAL           :  10.3.252.131:3260
Replica Details :
-----------------
NAME                                                          STATUS   VOLUME                                     CAPACITY   STORAGECLASS       AGE
openebs-pvc-1b21ac95-fd9f-466f-a39b-c1e1ab6e6cb5-jiva-rep-0   Bound    pvc-21b23326-6ec0-463e-9926-d71e467221f4   5.0 GiB    openebs-hostpath   35m

This is missing some critical information about a replica that is available in the JivaVolume CR, like:

  • Replica Pod Name
  • Replica Status (RW or WO or NA)
  • Replica Node Name (that is available within the PVC)

In fact, what is shown as Replica Details in the above output are actually "Replica Pod PVCs".

Enhance `openebs version` command with installed component versions

The command version is only displaying the cli version.

kubectl openebs version

Client Version: v0.2.0

It would be nice to enhance this with a few additional details, determined automatically (if the openebs namespace can be found) or manually specified, like:

kubectl openebs version --openebs-namespace openebs

Client Version: v0.2.0
OpenEBS Version: 2.11.0
OpenEBS cStor: 2.11.1
OpenEBS Jiva: 2.11.0
OpenEBS ZFS Local PV: <Not installed>
....

(Almost like microk8s plugin status command)
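A minimal sketch of how those component versions could be gathered, assuming the conventional openebs.io/component-name and openebs.io/version pod labels (the label names are assumptions to verify against each engine):

```go
package version

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// componentVersions maps each labeled OpenEBS component in the given
// namespace to its reported version.
func componentVersions(cs kubernetes.Interface, ns string) (map[string]string, error) {
	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
		metav1.ListOptions{LabelSelector: "openebs.io/version"})
	if err != nil {
		return nil, err
	}
	versions := map[string]string{}
	for _, p := range pods.Items {
		component := p.Labels["openebs.io/component-name"]
		versions[component] = p.Labels["openebs.io/version"]
	}
	return versions, nil
}
```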

Related: #36

Can the `describe pvc` help to determine - why a cStor Volume is not ready?

  • offline pools / replicas
  • offline targets

Case#1: PVC applied, PV in pending state(cas-type=cstor), CV doesn't exist

So there can be several reasons:

-    if the CVC does not exist, then something might be wrong with the csi plugin pods (controller & node) / wrong configuration in the SC / kube-apiserver not reachable / etc. It can be debugged by looking at the controller and node logs.
-    if only the CV is missing, then something is wrong with the CVC produced by the csi node: the version on the CVC does not match the CVC operator version / the CVC operator is down for some reason / wrong config in the CVC / etc. This can be debugged by looking at the CVC operator logs.
Credits: Shubham

The Phase 1 design is to be done in these iterations:

  • module for status of each component
  • module for checking pvc events
  • module for checking bdc events [optional]
  • module for checking cvc events
  • module for cspi and cspc events
  • module for CVR events
  • all module integration
  • Unit Testing of all modules and refactors

Ability to generate and raise GitHub issues with required troubleshooting information.

Background

Currently it's easy to see volumes & pools & their states (good/bad/etc.), but it's not super-easy & super-fast to get to the root cause behind their state, to help in faster debugging.

Scope

  • Capture relevant troubleshooting information about volumes, pools/vg-groups, etc. & send it as a GitHub issue to (say) the openebs/openebs repo, so that bugs don't get missed and time spent between follow-ups is reduced.

Unit testing for all packages

Potential Scope

  • Add unit testing for all the packages and the new packages to be added.
  • Code coverage integration

Packages

  • pkg/blockdevice
  • #67
  • pkg/persistentvolumeclaim
  • pkg/storage
  • pkg/util
  • #63

Add support for running tests & pushing test-coverage metrics

Code coverage is a metric that shows how much of the code (statements, branches, functions, etc.) runs when the tests are run. It is a good way to see how much of the business logic is tested (the expected & actual values are checked, assuming no other measurable side effect), and it helps ensure that existing functionality does not break when newer features are added or older ones are refactored. While it's not the absolute metric of code quality, higher code coverage usually implies well-written code with minimal side effects.

Currently tests are still to be written (tracked @ #49), so it'd be great

  • to run them on every Pull-Request & PR merge (I believe that'd be the push/commit events)
  • to have informational alerts about coverage going UP/DOWN when PRs are raised, so that people feel encouraged to write tests (automatically managed by CodeCov); as we are ramping up unit-testing, it might not be a very good idea for the CI to fail because code coverage is less than some X%
  • to have badges on README.md, badges are cool 🎉

Good starting research points:

  • CodeCov is a go-to tool for many projects, but are there any alternatives which might be better?
  • Is it better to use the CodeCov bash uploader or the CodeCov GitHub action, if CodeCov is the chosen tool?
  • Likewise, should running UTs be a workflow separate from the workflow which runs the code linters (GolangCI-Lint)?


Support creating a cStor Pool Cluster (CSPC) template

Setting up cStor storage is a cumbersome process that requires users to fetch the appropriate BDs, then create a CSPC yaml file to apply. This can lead to many misconfigurations.

It will be nice to have a cli command that would help the user automatically create a CSPC yaml file that can be reviewed and then applied to the cluster.

One way to implement this could be:

kubectl openebs template cspc --nodes=node1,node2,node3 [--number-of-devices=2 --raidtype=mirror]

  • provide a generic template subcommand that can be extended for other usecases
  • cspc can be the first template
  • nodes is a mandatory argument - can be one or more node names (obtained via kubectl get nodes).
  • optional: ask for number of block devices to be used (default to 1)
  • optional: ask for raidtype (default to striped)
  • Other optional parameters that can be taken up are: type of block device, min capacity of block device

Add support for zfs-localPV

The pool listing for zfs-pool can be clubbed into kubectl openebs get pool --cas-type=zfspool (but it'd lack the status field, which the cStor pool already has). As of now, we are not sure if zfs pool listing should go in the same sub-command (simpler UX for the CLI) or a new one (kubectl openebs get zfspool). A dynamic-client listing sketch follows the feature list below.

Features

  • Volume listing
  • Pool listing
  • Volume describe
  • Pool describe
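As a starting point, ZFSVolume CRs can be listed with the dynamic client before typed structs are wired in; the GVR below is inferred from the zfs-localpv CRDs, so treat it as an assumption to verify:

```go
package zfs

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// zfsVolGVR identifies the ZFSVolume custom resource served by zfs-localpv.
var zfsVolGVR = schema.GroupVersionResource{
	Group: "zfs.openebs.io", Version: "v1", Resource: "zfsvolumes",
}

// listZFSVolumeNames returns the names of all ZFSVolumes in a namespace.
func listZFSVolumeNames(dc dynamic.Interface, ns string) ([]string, error) {
	list, err := dc.Resource(zfsVolGVR).Namespace(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	names := make([]string, 0, len(list.Items))
	for _, item := range list.Items {
		names = append(names, item.GetName())
	}
	return names, nil
}
```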

Feature: Support command to upgrade Jiva Volumes

Once the OpenEBS control plane is upgraded to the latest version, the user/admin will have to initiate the upgrade of the jiva volumes by creating a Kubernetes Job YAML. This is a request to automate these steps by doing the following:

  • #83
  • #82
  • Enhance the Jiva Volume Describe command to auto-determine the openebs namespace where jiva volume-related artifacts are deployed.
  • Add support for a new command - openebs upgrade volume [--cas-type=jiva] [<volume-name(s)>] that will perform the following:
    • Check if the jiva volume is running a version older than the jiva operator version
    • If yes, then check if there is already an upgrade job in progress. If so, show the status of the upgrade process.
    • If specific volume name(s) is specified, verify it can be upgraded and launch the Upgrade Job
    • If cas-type=jiva is specified, get list of all jiva volumes and launch the Upgrade Jobs for the volumes that need to be upgraded.
    • Once the upgrade job is launched, show the status of the upgrade. The command can be exited, and the user can come back and run the same command to check the status.

The upgrade job yaml format is at: https://github.com/openebs/upgrade/blob/master/docs/upgrade.md#running-the-upgrade-job-2

Related: #43

Notes:

  • The command should be introduced in such a way that it can be extended to other engines as well.

[cleanup] [client/k8s.go] Merge similar functionality into one decent function

Problem statement

Currently the client/k8s.go file has nearly 530 LoC, a lot of which is just duplicated functionality or slightly similar functionality, e.g. GetCStorVolumeAttachmentMap(), GetCVA(string), etc.
There's a general trend in this file wherein a resource has a wrapper to

  • GET a single resource of a particular KIND
  • LIST all the resources of a particular KIND
  • LIST all resource and return a map of them by some criteria, viz resource-name ➝ resource, or resource-label-key ➝ resource
    All of this is helpful in different use-cases, but it sort of pollutes the methods this package exposes, and it'll likely keep increasing as support for more features is added; for example, the client package could have methods such as GetPV, GetPVs, GetPVMap, GetAllCStorPV. If a new resource is added (e.g. CV), the problem will get worse, as there might be 4 more methods per resource (assuming the same or similar requirements)

Potential solution(s)

Low Hanging fruit

  • It might help crunch the LoC by merging some of the GET & LIST functionality into one method, e.g. GetPV(pvName string) (*corev1.PersistentVolume, error) & GetPV() (*corev1.PersistentVolumeList, error) & GetPVs(pvNames []string) (*corev1.PersistentVolumeList, error) into ONE ListPV(vols []string) (*corev1.PersistentVolumeList, error) (sketched after the NOTE below)
  • The newer one can perform a
    • GET, if the vols list has exactly one element,
    • A LIST if vols list is nil
    • A LIST & filter out the vols PVs if a set of one or more volumes are specified

NOTE: Need to double-check if there'd be any possible dead-ends with this refactor, especially during the fetch of namespaced resources.
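A minimal sketch of the merged accessor described above, assuming the usual client-go plumbing (the K8sClient field name here is an assumption):

```go
package client

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// K8sClient wraps the typed clientset (field name is an assumption).
type K8sClient struct {
	K8sCS kubernetes.Interface
}

// ListPV merges the GET/LIST/map wrappers: nil or empty vols lists every
// PV, otherwise the result is filtered down to the requested names.
func (k *K8sClient) ListPV(vols []string) (*corev1.PersistentVolumeList, error) {
	pvList, err := k.K8sCS.CoreV1().PersistentVolumes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	if len(vols) == 0 {
		return pvList, nil
	}
	wanted := make(map[string]bool, len(vols))
	for _, name := range vols {
		wanted[name] = true
	}
	filtered := &corev1.PersistentVolumeList{}
	for _, pv := range pvList.Items {
		if wanted[pv.Name] {
			filtered.Items = append(filtered.Items, pv)
		}
	}
	return filtered, nil
}
```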

Add support for listing and describing hostpath volumes

Current Behaviour

 ❯ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                                 STORAGECLASS          REASON   AGE
pvc-69e44403-1b3c-4028-9dd8-d3db547240d6   4Gi        RWO            Delete           Bound    openebs/openebs-pvc-7f194739-16c8-4920-bb79-a28d63183b3e-jiva-rep-0   openebs-hostpath               47h
pvc-7f194739-16c8-4920-bb79-a28d63183b3e   4Gi        RWO            Delete           Bound    default/example-jiva-csi-pvc                                          openebs-jiva-csi-sc            47h

 ❯ ./k describe volume pvc-69e44403-1b3c-4028-9dd8-d3db547240d6
could not determine the underlying storage engine ns, please provide using '--openebs-namespace' flag

Expected Behaviour

It should describe the hostpath volume

Support for upgrading a cstor volume or a pool.

Add support for new commands for upgrading cstor volume and pools like:

  • openebs upgrade pool [--cas-type=cstor] [<pool-name(s)>] that will perform the following:

    • Check if the pool is running a version older than the cstor operator version
    • If yes, then check if there is already an upgrade job in progress. If so, show the status of the upgrade process.
    • If the pool can be upgraded, and an upgrade is not in progress, then launch the pool upgrade jobs - as outlined in https://github.com/openebs/upgrade/blob/master/docs/upgrade.md (a client-go sketch of launching such a Job follows below)
    • Show the status of the upgrade. The command can be exited, and the user can come back and run the same command to check the status.
  • openebs upgrade volume [--cas-type=cstor] [<volume-name(s)>] that will perform the following:

    • Similar to the above with an additional check that volume can be upgraded only if the pools are already upgraded.
    • If cas-type=cstor is specified, get list of all cstor volumes and launch the Upgrade Jobs for the volumes that need to be upgraded. The upgrade job yaml format is at: https://github.com/openebs/upgrade/blob/master/docs/upgrade.md
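For illustration, a hedged client-go sketch of launching such an upgrade Job programmatically; the image and args below are illustrative only, and the authoritative Job format is the upgrade.md document linked above:

```go
package upgrade

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// launchUpgradeJob creates an upgrade Job in-cluster instead of asking the
// user to hand-write YAML. The image and args are illustrative; see the
// openebs/upgrade docs for the authoritative Job spec.
func launchUpgradeJob(cs kubernetes.Interface, ns, resourceName, toVersion string) error {
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "upgrade-" + resourceName, Namespace: ns},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "upgrade",
						Image: "openebs/upgrade:" + toVersion, // illustrative image name
						Args:  []string{"cstor-cspc", "--to-version=" + toVersion, resourceName},
					}},
				},
			},
		},
	}
	_, err := cs.BatchV1().Jobs(ns).Create(context.TODO(), job, metav1.CreateOptions{})
	return err
}
```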

Add the ability to specify KUBECONFIG variable in openebsctl

Currently openebsctl looks in the user's home directory for the .kube/config file, and when it does not exist, exits with the following error:
failed to build Kubernetes clientset: Could not build config from flags: stat /home/chaasmadmin/.kube/config: no such file or directory

This feature request is to add a KUBECONFIG variable setting for openebsctl to point to the location of the kubectl config file.
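A sketch of the standard client-go resolution order that would satisfy this request (the function name is assumed): an explicit --kubeconfig flag wins, then the KUBECONFIG env var, then ~/.kube/config:

```go
package client

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// BuildClientset resolves the kubeconfig location in the conventional order.
func BuildClientset(kubeconfigFlag string) (*kubernetes.Clientset, error) {
	rules := clientcmd.NewDefaultClientConfigLoadingRules() // honors $KUBECONFIG
	if kubeconfigFlag != "" {
		rules.ExplicitPath = kubeconfigFlag // --kubeconfig wins over env/default
	}
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		rules, &clientcmd.ConfigOverrides{}).ClientConfig()
	if err != nil {
		return nil, err
	}
	return kubernetes.NewForConfig(cfg)
}
```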

bug: cas-type has no impact when jiva volumes are running.

$ kubectl openebs get volumes --cas-type hello
---

NAMESPACE   NAME                                       STATUS   VERSION   CAPACITY   STORAGE CLASS         ATTACHED   ACCESS MODE     ATTACHED NODE
openebs     pvc-7f194739-16c8-4920-bb79-a28d63183b3e   RW       2.12.2    4Gi        openebs-jiva-csi-sc   Bound      ReadWriteOnce   

The command shows the desired output when a wrong namespace is given:

$ kubectl openebs get volumes --cas-type hello --openebs-namespace wrongvalue
---
No output, which is desired at least up to this point of time

Add support to enlist component-versions & statuses of OpenEBS

Component-versions

Scope

As of now, there's a sub-command version which helps end users get info about the version of this kubectl-plugin; just like kubectl, it can be expanded to give more information about installed OpenEBS versions, etc. from component labels.

TUI UX

// need to discuss with the community for better usability & debug-ability
$ kubectl openebs cluster-info
CAS-TYPE       VERSION   STATUS
CStor          3.0       Good
LocalPV        2.9       Upgrade available
CStor-Legacy   1.4       Good

Much later, more functionality can be added which can raise more potential red flags besides quality information, like kubectl get cs or kubectl cluster-info; a good source could be the ERROR/WARNING events from the storage provisioners, CSI pods, etc., if it's not super easy to do so via kubectl get events --<flags>

Add unit tests functions in pkg/volume/local_hostpath.go

There are two functions in this file which do not have unit tests, unlike other similar files in the same package; it'd be good to add some unit tests like there are for LocalPV-zfs currently.

Pre-requisites:

  1. Basic working knowledge of Golang, Go Tour is a great place to start
  2. Basic idea of unit-testing, from some blogs and tutorials such as this one
  3. An idea of how tests are written in similar files for other storage engines and how they use the k8s' fake-client to sort of 'mock' the K8s-APIserver. There are really cool blogs about writing unit tests for programs consuming k8s API via client-go, do check them out.

Guidelines:

  1. Try to achieve 100% path coverage in the two functions.
  2. There has been no need of https://github.com/stretchr/testify in the library.
  3. If you face some problems do drop a question in the GitHub issue or in #openebs-dev in the Kubernetes Slack channel.
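A bare-bones example of the fake-clientset pattern the other engine tests use (the resource and assertion here are placeholders, not the actual local-hostpath functions):

```go
package volume_test

import (
	"context"
	"testing"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes/fake"
)

// The fake clientset stands in for the K8s APIserver: seed it with the
// objects the function under test expects, then assert on the output.
func TestGetHostpathVolume(t *testing.T) {
	pv := &corev1.PersistentVolume{ObjectMeta: metav1.ObjectMeta{Name: "pvc-1234"}}
	cs := fake.NewSimpleClientset(pv)

	got, err := cs.CoreV1().PersistentVolumes().Get(context.TODO(), "pvc-1234", metav1.GetOptions{})
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if got.Name != "pvc-1234" {
		t.Errorf("got %q, want %q", got.Name, "pvc-1234")
	}
}
```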

usability: message for no resources found

When the cluster doesn't have the required resources, display a helpful message instead of displaying nothing.

In my cluster that doesn't have any pools set up, I get the following:

kiran_mova_mayadata_io@kmova-dev:~$ kubectl openebs get storage
kiran_mova_mayadata_io@kmova-dev:~$ 

It would be nice to display a message like this:

kiran_mova_mayadata_io@kmova-dev:~$ kubectl get cspc
No resources found in default namespace.

Also, while at this, check the message displayed when there is no openebs installed in the cluster.
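A tiny sketch of the suggested fallback, with the message wording assumed to mirror kubectl:

```go
package storage

import (
	"fmt"
	"os"
)

// printPools prints a kubectl-style hint instead of empty output when
// nothing matches.
func printPools(poolNames []string, ns string) {
	if len(poolNames) == 0 {
		fmt.Fprintf(os.Stderr, "No resources found in %s namespace.\n", ns)
		return
	}
	for _, name := range poolNames {
		fmt.Fprintln(os.Stdout, name)
	}
}
```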

Make OpenEBS CLI easier to install via krew

Add the OpenEBS CLI to the krew index GitHub repo or set up a custom krew index

  • Decide the repo for krew index, viz: custom or default(raise a PR)
  • Add support for the kubectl-auth-plugin for localpv-{lvm,zfs}
  • Add a LICENSE file to the 0.3 release bundle & ensure the krew plugin spec for the plugin is updated via goreleaser
  • Implement that approach & its pre-requisites & raise a PR to the default krew index (WIP)
  • Update the installation shell script with the newer path of the binary
  • Update README.md with krew installation steps
  • Automatically update the krew index whenever a new release is pushed, #87

Add feature to view the jiva volume upgrade status

As per #115, the feature has been implemented to schedule the upgrade job to upgrade the Jiva volume data plane components to match the version of the jiva control plane. @Abhishek-kumar09 has suggested adding a watch functionality to either

  • see the upgrade status as the job runs to successful completion or failure
  • or to tail the logs while running an upgrade

Both of the above are great ideas to simplify the UX for launching upgrade jobs for the data plane; this issue will house the discussion for the same.

Continued from the discussions at #115

Unit testing & refactor for pkg/client

Currently the openebsctl project has a K8sClient, which encapsulates all required typed clients for reaching the k8s APIs. Due to the nature of the codebase & the huge number of options, this project sometimes needs a slice of objects and sometimes it needs the objects mapped to some key. It is the mapping part of the codebase which is repetitive; this can be avoided by casting to some generic values in such a way that the code outside pkg/client doesn't need modification, no interface{} or runtime.Object or metav1.Unstructured is used outside the client package, & possible panics are taken care of by strong UTs.

  • Methods with a single line do not need unit test coverage
  • But methods with mapping logic or some other business logic needs to be tested

convert object blocks into object list

  • PrintByTemplate may not be good UI/UX, as in the case of multiple BDs the above will generate something like this:
Block Device Details :
----------------
Name     : blockdevice-c21bc3b79a98c7e8508f47558cc94f36
Capacity : 107374182400
State   : Active
Block Device Details :
----------------
Name     : blockdevice-c21bc3b79a98c7e8508f47558cc94f36
Capacity : 107374182400
State   : Active

The above should have been like this, similar to a kubectl get list:

Block Device Details :
----------------
Name   Capacity  State
bd-1   107641    active 
bd-2   187261    active

You can refer to this code for the same:

if cvrs != nil && len(cvrs.Items) > 0 {

Originally posted by @Abhinandan-Purkait in #20 (comment)
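A small sketch of the kubectl-style list rendering suggested above, using text/tabwriter (the BlockDevice struct is a stand-in for the real NDM type):

```go
package render

import (
	"fmt"
	"os"
	"text/tabwriter"
)

// BlockDevice is a stand-in for the fields shown in the desired output.
type BlockDevice struct {
	Name, State string
	Capacity    uint64
}

// printBDList renders one row per device instead of one block per device.
func printBDList(bds []BlockDevice) {
	w := tabwriter.NewWriter(os.Stdout, 0, 8, 2, ' ', 0)
	fmt.Fprintln(w, "NAME\tCAPACITY\tSTATE")
	for _, bd := range bds {
		fmt.Fprintf(w, "%s\t%d\t%s\n", bd.Name, bd.Capacity, bd.State)
	}
	w.Flush()
}
```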

Add command to suggest the changes that need to be done to CSPC

In cases where a node is offline or gone out of the cluster and the disks from the old (gone) node are moved to a new node in the cluster, provide the user with an updated CSPC yaml with the required node selector change.

This can be a follow-up activity or a correction action - once the cstor debug command has identified that there is a problem to a given CSPI.

Add go linting tools to CI

Background

GolangCI-Lint is a popular linter for Go projects with support for a lot of popular linters like govet, golint, gofmt, etc., so it's a bit easier than having to write individual shell scripts & complex Makefiles. It's super configurable, so its warnings can be silenced, & it runs with the same toggles locally, i.e. the config file works well for both local development & the CI.

It has a host of tools like golint, gofmt, govet and a bunch of unofficial tools like dupl, depguard, etc which should be added. Per the docs, using the official golangci-lint action might turn out to be a better approach than adding it via the Makefile in some other action/job.

Add code-walkthrough to help onboard new contributors

VS Code, one of the most popular IDEs for beginners and professionals, has a cool extension which helps create a journey-based async introductory code-walkthrough; this will help us produce a low-cost (in terms of network & time) checked-in journey which can cleanly say look-here, then look-here-1, then look-here-understand-X & then understand-Y.

Add support for MayaStor

More details will be added here, as more info drops by

Initial work

  • Understand & setup MayaStor
  • Discover the Golang structures for the Custom Resources; we would likely have to write them, as it's primarily written in TS+Rust
  • Implement get volumes
  • Implement get pools
  • Implement describe volumes
  • Implement describe pools

add an installation script

Instead of downloading the release manually.

For now, I had to go to the releases page, find the release that I need, and run these commands:

wget https://github.com/openebs/openebsctl/releases/download/v0.1.0/kubectl-openebs_v0.1.0_Linux_x86_64.tar.gz

tar -xvf kubectl-openebs_v0.1.0_Linux_x86_64.tar.gz
cd kubectl-openebs_v0.1.0_Linux_x86_64
sudo mv kubectl-openebs /usr/local/bin/

We could have a script like the Helm installation script:

$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh

That way, the script should be able to download the latest release for the right Linux distribution.

error while extracting from the install script

This morning, I tried to install the plugin on one of my clusters and got this error message:

sudo: unable to resolve host test-pcl4014: Name or service not known

I'm executing the script as root, and running sudo as root caused an error:

root@test-pcl4014:~# wget https://raw.githubusercontent.com/openebs/openebsctl/develop/scripts/install-latest.sh -O - | bash
--2021-10-29 08:46:05--  https://raw.githubusercontent.com/openebs/openebsctl/develop/scripts/install-latest.sh
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.109.133, 185.199.108.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2340 (2.3K) [text/plain]
Saving to: ‘STDOUT’

-                                              100%[==================================================================================================>]   2.29K  --.-KB/s    in 0s

2021-10-29 08:46:05 (32.9 MB/s) - written to stdout [2340/2340]



Getting Latest Release ----->


Downloading Latest Release ----->


--2021-10-29 08:46:05--  https://github.com/openebs/openebsctl/releases/download/v0.4.0/kubectl-openebs_v0.4.0_Linux_x86_64.tar.gz
Resolving github.com (github.com)... 140.82.112.3
Connecting to github.com (github.com)|140.82.112.3|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-releases.githubusercontent.com/274539236/9ffeb711-13e0-4ef4-9b0d-8ffe8e728f4d?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20211029%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20211029T124606Z&X-Amz-Expires=300&X-Amz-Signature=6eb1e24bd7b7bc581d9c844fc1dde822a0e7184ce6238a46da226bc81d6a8014&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=274539236&response-content-disposition=attachment%3B%20filename%3Dkubectl-openebs_v0.4.0_Linux_x86_64.tar.gz&response-content-type=application%2Foctet-stream [following]
--2021-10-29 08:46:06--  https://github-releases.githubusercontent.com/274539236/9ffeb711-13e0-4ef4-9b0d-8ffe8e728f4d?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20211029%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20211029T124606Z&X-Amz-Expires=300&X-Amz-Signature=6eb1e24bd7b7bc581d9c844fc1dde822a0e7184ce6238a46da226bc81d6a8014&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=274539236&response-content-disposition=attachment%3B%20filename%3Dkubectl-openebs_v0.4.0_Linux_x86_64.tar.gz&response-content-type=application%2Foctet-stream
Resolving github-releases.githubusercontent.com (github-releases.githubusercontent.com)... 185.199.108.154, 185.199.111.154, 185.199.110.154, ...
Connecting to github-releases.githubusercontent.com (github-releases.githubusercontent.com)|185.199.108.154|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 18140670 (17M) [application/octet-stream]
Saving to: ‘openebsctl.tar.gz’

openebsctl.tar.gz                              100%[==================================================================================================>]  17.30M  61.4MB/s    in 0.3s

2021-10-29 08:46:06 (61.4 MB/s) - ‘openebsctl.tar.gz’ saved [18140670/18140670]

LICENSE
README.md
kubectl-openebs


Extracted Latest Release ----->
sudo: unable to resolve host test-pcl4014: Name or service not known


Copied Latest Release to usr/local/bin ----->


Cleaning things ----->


Done

Determine openebs ns from the CLI.

We might need to rethink our K8sClient structure; as of now there are two candidates for similar resource fields:

  • namespace i.e. resource namespace for user created resources like pvc, pod, etc
  • openebs-namespace i.e. the namespace where a specific Storage engine is installed

The -n flag is used to take the pvc namespace, and the same flag cannot simultaneously take the openebs namespace.
Determining the openebs ns from the cli would help here.

Use kubeconfig passed through cli or from environment KUBECONFIG variable

If a kubeconfig file is passed as a command line option or set in the environment variable, openebsctl still refers to the default config path, i.e. ~/.kube/config

It should refer to the cli option or the env variable KUBECONFIG before falling back to the default config file for accessing the k8s cluster

The following was the issue observed.
With environment variable set:

[root@localhost do]# export KUBECONFIG=$PWD/kubeconfig
[root@localhost do]# kubectl get node
NAME                   STATUS   ROLES    AGE    VERSION
pool-pe58fno14-89ta0   Ready    <none>   179m   v1.21.3
pool-pe58fno14-89tad   Ready    <none>   3h     v1.21.3
pool-pe58fno14-89tav   Ready    <none>   179m   v1.21.3
[root@localhost do]# kubectl openebs get bd
Error while getting block device: Get "https://35.224.186.93/apis/openebs.io/v1alpha1/blockdevices": dial tcp 35.224.186.93:443: i/o timeout
[root@localhost do]# export KUBECONFIG=/root/mayadata/do/kubeconfig
[root@localhost do]# kubectl openebs get bd
Error while getting block device: Get "https://35.224.186.93/apis/openebs.io/v1alpha1/blockdevices": dial tcp 35.224.186.93:443: i/o timeout
[root@localhost do]# echo $KUBECONFIG
/root/mayadata/do/kubeconfig
[root@localhost do]# ls $KUBECONFIG
/root/mayadata/do/kubeconfig

With command line option passed:

kubectl openebs get bd --kubeconfig=./kubeconfig
Error while getting block device: Get "https://35.224.186.93/apis/openebs.io/v1alpha1/blockdevices": dial tcp 35.224.186.93:443: i/o timeout
[root@localhost do]# kubectl  get bd -n openebs --kubeconfig=./kubeconfig
NAME                                           NODENAME               SIZE           CLAIMSTATE   STATUS   AGE
blockdevice-8d74ae095ad87e461541bd7c3f88ef0a   pool-pe58fno14-89tav   107373116928   Unclaimed    Active   15m
blockdevice-a93320e715e3b3e34362dad3a46e2497   pool-pe58fno14-89ta0   107373116928   Unclaimed    Active   15m
blockdevice-b70e08c5acd216a9fc524907d2493ffe   pool-pe58fno14-89tav   483328         Unclaimed    Active   15m
blockdevice-e36dae74636edae1e8b63a2a8ca6cd26   pool-pe58fno14-89tad   107373116928   Unclaimed    Active   15m
blockdevice-f4522bb459144611ed898b0fde19b143   pool-pe58fno14-89tad   483328         Unclaimed    Active   15m

It worked only after copying the config file to the default location

[root@localhost do]# cp ~/.kube/config  ~/.kube/config.bkp
[root@localhost do]# cp kubeconfig  ~/.kube/config
cp: overwrite '/root/.kube/config'? y
[root@localhost do]# kubectl openebs get bd
NAME                                             PATH        SIZE     CLAIMSTATE   STATUS     FSTYPE    MOUNTPOINT
pool-pe58fno14-89ta0
├─blockdevice-8ab74802a8660795b0a8b4f3c6c117b7   /dev/vdb    472KiB   Unclaimed    Inactive   iso9660
├─blockdevice-9aeb3048976d722b999c448e35352e93   /dev/vdb    472KiB   Unclaimed    Active     iso9660
└─blockdevice-a93320e715e3b3e34362dad3a46e2497   /dev/sda1   100GiB   Claimed      Active

pool-pe58fno14-89tav
├─blockdevice-8d74ae095ad87e461541bd7c3f88ef0a   /dev/sda1   100GiB   Claimed      Active
├─blockdevice-b70e08c5acd216a9fc524907d2493ffe   /dev/vdb    472KiB   Unclaimed    Inactive   iso9660
└─blockdevice-d702b5422005ebc658b308190e4f9e08   /dev/vdb    472KiB   Unclaimed    Active     iso9660

pool-pe58fno14-89tad
├─blockdevice-8e4f257d884b8026801490a76c802ec4   /dev/vdb    472KiB   Unclaimed    Active     iso9660
├─blockdevice-e36dae74636edae1e8b63a2a8ca6cd26   /dev/sda1   100GiB   Claimed      Active
└─blockdevice-f4522bb459144611ed898b0fde19b143   /dev/vdb    472KiB   Unclaimed    Inactive   iso9660

Fix gocritic & golint(revive) issues in the codebase

This issue is strictly for someone who has recently picked up Golang & would like to fix some code formatting issues.

I've recently fixed a small issue with GolangCI-lint (#105) in #145, & now the project has many warnings/suggestions that need fixing; currently those linters are disabled, but do enable them after fixing all the issues.

Steps

  1. Learn a bit about GolangCI-Lint from its official docs & a few blogs
  2. Uncomment the enabled linters in GolangCI-Lint
  3. Fix the code comment errors manually, or figure out if the revive tool can autofix things for you (recommended).
  4. Raise a PR & make sure to sign all your commits

Discussion: Folder Structure revision for openebsctl

Currently, tests and the codebase are written together with their test_data all in one folder, making it hard to dig through the files and folders once the project starts to mature.
Can we think of making a parent folder with e2e tests written
like:

|-cmd
|-tests
|-pkg

or we can keep writing tests within the same package as we do now, but extract all the test resources into their own folder, named something like tests:

|-cmd
 `|-client
   `|-test
     `|-test.go
      |-test_data.go
   `|k8s.go

incorrect display message in version subcommand, when using custom build.

When using a custom-built openebsctl, the version command shows the following:

You are using an older version (dev) of cli, latest available version is: v0.3.1

When using the dev version, the above message should change to:

You are using development version of cli, latest released version is: v0.3.1

distribute via brew?

It would be great if installation could be done with brew. Something to consider?

Add vulnerability scanning for Golang dependencies via GitHub Actions

It's said that one cannot build a great building on a weak foundation; well, the same applies to writing software, it's turtles all the way down to the zeros and ones & turtles down even below that.

Evaluate tools which can share details

  • about code & dependency vulnerabilities (Snyk does it nicely, per my knowledge)
  • good-to-upgrade dependencies (Dependabot does it nicely, per my knowledge; I believe this is already included via GitHub without configuration these days)

Put your findings in the comments and raise a PR for it.

PS: I'll create a relevant secret token with a specific environment key after a PR is raised and add that app to this repository, do reach out to us @ https://slack.k8s.io #openebs-dev & #openebs for further queries

Add support for CStor volume upgrade

Currently, in the v0.5.0 release, the CLI can upgrade the cStor pools, i.e. CSPCs, which must be done post control-plane upgrade. It might now be helpful to add an upgrade option for cStor volumes, to bring cStor to feature parity with Jiva volume upgrades (jiva lacks a pool concept in the true sense).

Dogfood OpenEBS E2E failures by capturing useful information

Questions

  1. What should be the goal of this tool? Should it just stick to pointing out troubling areas, or also dump data from those areas, and can that data be trusted at face value?
  2. How many logs should the tool collect, if at all? Just enough, or all of it, so that further debugging is done by grep-ing outputs in an editor of choice or some back-and-forth commands?
  3. What should be a baseline assumption for this tool (it's turtles all the way down; which turtle should be this tool's last one)? Is it a good idea to assume K8s is supposed to be healthy and is managed perfectly by the admin?

Background

  • Right now we have super preliminary support for debugging cStor volumes; it'd be good if we can think of something along the lines of debugging + creating a GitHub issue + dogfooding, etc.
  • Right now, the cStor volume debugging just points to places which seem off; it'd be good to plan and implement debugging in stages, i.e. narrowing down the search space by pointing out what's right, what isn't & what may not be
    • Identify a list of things, which needs to be checked(is the storage-engine replicated?, should NDM agents failing affect this volume/pool?)
    • K8s APIserver is up & healthy
    • K8s kube-system components are up, is kubelet container(for certain setups) up, how does node-heartbeat look like for concerned nodes(are they alive and kicking, do they have any X-Pressure)?
    • Networking isn't down(imp for replicated storage engines)
    • Relevant OpenEBS components are up(as identified in step-1)
  • There are some limitations to the tool, it might be hard to figure out(at first) if the application is failing because of storage or vice versa.

Goals

  • OpenEBSctl can already show, in a single shot, some of the information we generally ask our community users for while interacting with them, and we plan to help them automatically create a GitHub issue via #39.
  • It might be a good ask to think of using the same tool to collect useful information on cluster-destruction, which is likely what happens when an E2E test fails. It might be useful as a replacement of a bunch of kubectl & shell commands.
  • To-be-decided-and-updated

Pre-requisites issues for this task:

  1. #143
  2. #39

Update cstor volume describe replica details heading to accurately specify underlying properties in an easy-to-grok way

The terms Total and Used aren't super indicative of the space used by actual data or snapshots or metadata or at the application level, and it does pop eyeballs when Total < Used.

The Output

root@test-pcl4004:~# kubectl openebs describe volume pvc-7d5944e9-XXXXXXXXXXXX

pvc-7d5944e9-XXXXXXXXXXXX Details :
-----------------
NAME            : XXXXX
ACCESS MODE     : ReadWriteOnce
CSI DRIVER      : cstor.csi.openebs.io
STORAGE CLASS   : XXXX
VOLUME PHASE    : Bound
VERSION         : 2.10.0
CSPC            : XXXXXXXX
SIZE            : 2.0GiB
STATUS          : Healthy
REPLICA COUNT   : 1

Portal Details :
------------------
IQN              :  iqn.2016-09.com.openebs.cstor:XXXX
VOLUME NAME      :  XXXX
TARGET NODE NAME :  ABCD
PORTAL           :  XXXXX:3260
TARGET IP        :  XXXXX

Replica Details :
-----------------
NAME                                                            TOTAL     USED       STATUS    AGE
XXXXXXXXXXXXXXXXXXXXXXXXXSDJHFSHGHGHXXXXXXXXXXXXXXX-YYYYYYYZZZ   33.6MiB   126.8MiB   Healthy   67d1h

In the above output example, with the above headings, it is implied that Total = Free + Used, so in practice Used shouldn't exceed Total (unless Free is negative, which is absurd); but here, TOTAL means actual data written to disk & USED means how much data got logically referenced.

At the nuts-and-bolts level, TOTAL comes from cvr.spec.capacity.Total, which is the used property at the ZFS level, and USED comes from cvr.spec.capacity.Used, which is the logicalreferenced property at the ZFS level.

Tl; dr: What needs to be done?

  1. The underlying ZFS metrics need to be understood well enough (this isn't super straightforward).
  2. That understanding should then be conveyed via simple English phrases/words, such as space used by data with compression, metadata, etc., or something simpler such as actual size on disk and logical size written by tenants.
