
cubefs-csi's Introduction


CubeFS CSI Driver

Overview

CubeFS Container Storage Interface (CSI) plugins.

Prerequisite

  • Kubernetes 1.16.0
  • CSI spec version 1.1.0

Prepare on-premise CubeFS cluster

An on-premise CubeFS cluster can be deployed separately, or within the same Kubernetes cluster as applications which require persistent volumes. Please refer to cubefs-helm for more details on deployment using Helm.

Deploy

The CSI driver can be deployed with Helm or with raw YAML files.

For both methods, the first step is to label the Kubernetes nodes:

Add labels to Kubernetes node

Tag each Kubernetes node that should run the CubeFS CSI components with the label below. Both deploy/csi-controller-deployment.yaml and deploy/csi-node-daemonset.yaml contain a nodeSelector element that matches this label. If you want to run CubeFS CSI on every node in the cluster, you can delete the nodeSelector element instead.

kubectl label node <nodename> component.cubefs.io/csi=enabled
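You can confirm which nodes carry the label, and will therefore be matched by the CSI manifests:

kubectl get nodes -l component.cubefs.io/csi=enabled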

Direct Raw Files Deployment

Deploy the CSI driver

$ kubectl apply -f deploy/csi-rbac.yaml
$ kubectl apply -f deploy/csi-controller-deployment.yaml
$ kubectl apply -f deploy/csi-node-daemonset.yaml

Note: If your Kubernetes cluster uses a kubelet path other than the default /var/lib/kubelet (for example /data1/k8s/lib/kubelet), you must run the following commands to update the path in the manifests:

sed -i 's#/var/lib/kubelet#/data1/k8s/lib/kubelet#g' deploy/csi-controller-deployment.yaml

sed -i 's#/var/lib/kubelet#/data1/k8s/lib/kubelet#g' deploy/csi-node-daemonset.yaml
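If you are not sure which kubelet home path a node uses, one way to check is to inspect the kubelet process arguments on that node (this assumes kubelet was started with an explicit --root-dir flag; if the flag is absent, the default /var/lib/kubelet applies):

ps -ef | grep kubelet | grep -o -- '--root-dir=[^ ]*'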

Use Remote CubeFS Cluster as backend storage

Only three steps are needed before a remote CubeFS cluster can be used as the backing file system:

  1. Create StorageClass
  2. Create PVC (Persistent Volume Claim)
  3. Reference PVC in a Pod

Create StorageClass

An example storage class yaml file is shown below.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cfs-sc
provisioner: csi.cubefs.com
reclaimPolicy: Delete
parameters:
  masterAddr: "master-service.cubefs.svc.cluster.local:17010"
  consulAddr: "http://consul-service.cubefs.svc.cluster.local:8500"
  owner: "csiuser"
  logLevel: "debug"

Create it with:

$ kubectl create -f deploy/storageclass.yaml
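You can then verify that the StorageClass exists:

$ kubectl get storageclass cfs-sc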

Helm Deployment

Download the CubeFS-Helm project

git clone https://github.com/cubefs/cubefs-helm
cd cubefs-helm

Edit the values file

Create a values file, and edit it as below:

vi ~/cubefs.yaml

component:
  master: false
  datanode: false
  metanode: false
  objectnode: false
  client: false
  csi: true
  monitor: false
  ingress: false

image:
  # CSI related images
  csi_driver: ghcr.io/cubefs/cfs-csi-driver:3.2.0.150.0
  csi_provisioner: registry.k8s.io/sig-storage/csi-provisioner:v2.2.2
  csi_attacher: registry.k8s.io/sig-storage/csi-attacher:v3.4.0
  csi_resizer: registry.k8s.io/sig-storage/csi-resizer:v1.3.0
  driver_registrar: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.0

csi:
  driverName: csi.cubefs.com
  logLevel: error
  # If you changed the default kubelet home path, this
  # value needs to be modified accordingly
  kubeletPath: /var/lib/kubelet
  controller:
    tolerations: [ ]
    nodeSelector:
      "component.cubefs.io/csi": "enabled"
  node:
    tolerations: [ ]
    nodeSelector:
      "component.cubefs.io/csi": "enabled"
    resources:
      enabled: false
      requests:
        memory: "4048Mi"
        cpu: "2000m"
      limits:
        memory: "4048Mi"
        cpu: "2000m"
  storageClass:
    # Whether automatically set this StorageClass to default volume provisioner
    setToDefault: true
    # StorageClass reclaim policy, 'Delete' or 'Retain' is supported
    reclaimPolicy: "Delete"
    # Override the master address parameter to connect to an external cluster; if the cluster is deployed
    # in the same k8s cluster, it can be omitted.
    masterAddr: ""
    otherParameters:

Install

helm upgrade --install cubefs ./cubefs -f ~/cubefs.yaml -n cubefs --create-namespace
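Once the release is installed, you can check that the CSI controller and node pods in the cubefs namespace (the namespace used in the command above) come up:

kubectl get pods -n cubefs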

Verify

After the CSI driver is installed, we can create a PVC and mount it inside a Pod to verify that everything works.

Create PVC

An example pvc yaml file is shown below.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
  storageClassName: cfs-sc
Create it with:

$ kubectl create -f example/pvc.yaml

The field storageClassName refers to the StorageClass we already created.
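You can then watch the PVC until it is bound to an automatically provisioned PV; a PVC stuck in Pending usually means provisioning has not completed, and kubectl describe pvc cfs-pvc shows the related events:

$ kubectl get pvc cfs-pvc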

Use PVC in a Pod

An example deployment.yaml is shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cfs-csi-demo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cfs-csi-demo-pod
  template:
    metadata:
      labels:
        app: cfs-csi-demo-pod
    spec:
      containers:
        - name: cubefs-csi-demo
          image: nginx:1.17.9
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: cfs-pvc

The field claimName refers to the PVC created before.

$ kubectl create -f examples/deployment.yaml
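Once the Pod is running, a quick way to verify that the CubeFS volume is mounted and writable is to write through the mount path defined above (this assumes a kubectl version recent enough to accept a deployment name with kubectl exec; otherwise exec into the pod directly):

$ kubectl exec deploy/cfs-csi-demo -- sh -c 'echo hello > /usr/share/nginx/html/index.html && cat /usr/share/nginx/html/index.html'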

cubefs-csi's People

Contributors

1032120121, ahmedwaleedmalik, awzhgw, chengyu-l, dependabot[bot], heymingwei, huweicai, leonrayang, marccampbell, mervinkid, shuoranliu, xuxihao1


cubefs-csi's Issues

Move "owner" from StorageClass yaml to PVC yaml

The StorageClass YAML should contain cluster-specific information of ChubaoFS, while the PVC YAML should focus on volume-specific information of ChubaoFS.

So the "owner" field in the current StorageClass should be moved to the PVC.

docker pull unknown blob

(base) root@ubuntu:~# docker pull ghcr.io/cubefs/cfs-csi-driver:2.4.1.110.1
2.4.1.110.1: Pulling from cubefs/cfs-csi-driver
a8c7037c15e9: Pulling fs layer
ee77564112b3: Pulling fs layer
c9a8edccd926: Pulling fs layer
8ad28c0cab2b: Pulling fs layer
e7f6b46a46e9: Pulling fs layer
3ae92f87a760: Pulling fs layer
6d4b6ceb8d2a: Pulling fs layer
dc9b4b510e80: Pulling fs layer
error pulling image configuration: download failed after attempts=1: unknown blob

The vulnerability CVE-2023-30512 has been fixed, but no specific tag denotes the patched version.

Hello, we are a team researching the dependency management mechanism of Golang. During our analysis, we came across your project and noticed that you have fixed a vulnerability (snyk references, CVE: CVE-2023-30512, CWE: CWE-264, fix commit id: 97e6ade). However, we observed that you have not tagged the fixing commit or its subsequent commits. As a result, users are unable to obtain the patch version through Go tool ‘go list’.

We kindly request your assistance in addressing this issue. Tagging the fixing commit or its subsequent commits will greatly benefit users who rely on your project and are seeking the patched version to address the vulnerability.

We greatly appreciate your attention to this matter and collaboration in resolving it. Thank you for your time and for your valuable contributions to our research.

Failed to read from conn, req(ReqID(7)Op(OpMetaReadDirLimit)PartitionID(228)ResultCode(Unknown ResultCode(0)))

readdirlimit: packet() mp(PartitionID(228) Start(0) End(16777216) Members([~~~~]) LeaderAddr(.:10) Status(2)) req({pvtest 228 1 1024}) err(sendToMetaPartition failed: req(ReqID(7)Op(OpMetaReadDirLimit)PartitionID(228)ResultCode(Unknown ResultCode(0))) mp(PartitionID(228) Start(0) End(16777216) Members([.:10 .:10 .64:10]) LeaderAddr(.:10) Status(2)) errs(map[0:[conn.go 145] Failed to read from conn, req(ReqID(7)Op(OpMetaReadDirLimit)PartitionID(228)ResultCode(Unknown ResultCode(0))) :: read tcp :46808->.:10: i/o timeout 1:[conn.go 145] Failed to read from conn, req(ReqID(7)Op(OpMetaReadDirLimit)PartitionID(228)ResultCode(Unknown ResultCode(0))) :: read tcp :48510->.:10: i/o timeout 2:[conn.go 145] Failed to read from conn, req(ReqID(7)Op(OpMetaReadDirLimit)PartitionID(228)ResultCode(Unknown ResultCode(0))) :: read tcp :55182->:10: i/o timeout]) resp())
2022/09/26 10:08:36.483713 [ERROR] dir.go:360: readdirlimit: Readdir: ino(1) err(input/output error)

k8s version 1.22.14
cubefs-csi version 3.10
chubaofs version 2.40

Is this caused by a version incompatibility between the CSI plugin and chubaofs?

K8S CSI can not run on kunpeng arm 920 platform

After I applied -f csi.yaml on ARM Rocky Linux, the CSI node and controller pods could not run. I get the following error message from kubectl: Back-off restarting failed container cfs-driver in pod cfs-csi-controller-7f955f786f-r2c8g_cubefs

cfs-driver:
Container ID: docker://6eebb4ec17b5b2a54d6ce75b396ebc3cf06ec88763658d9d92d1d4f561a05782
Image: ghcr.io/cubefs/cfs-csi-driver:3.2.0.150.0
Image ID: docker-pullable://ghcr.io/cubefs/cfs-csi-driver@sha256:8723616a976a2a0278cb14ab5c2bb26ed7302603201202684934b68448c89f27
Port:
Host Port:
Args:
bash
-c
set -e
su -p -s /bin/bash -c "/cfs/bin/start.sh &"
su -p -s /bin/bash -c "sleep 9999999d"
State: Waiting
Reason: CrashLoopBackOff

Then I built the CSI source code with expert help; the CSI pod can run, but it cannot create a PVC.

Please provide the ARM CSI plugin. Thanks.

update CSI build script

The build process of the CubeFS client has already changed, so the CSI build script needs to be updated as well.

support NodeStageVolume

NodeStageVolume is currently an empty implementation, so the fuse mount happens every time a pod starts. A better way would be to first mount the volume to a global path on the node server, and then bind the global path to each pod's mount point (a rough sketch follows the list below). What's more, passing arguments on the fuse command line would be better.

  • implement NodeStageVolume
  • fuse command line start up
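A minimal shell sketch of the stage-then-bind idea, assuming hypothetical staging and pod paths (the real driver would issue the equivalent mount calls from its NodeStageVolume / NodePublishVolume handlers):

# Stage (once per volume per node): fuse-mount the CubeFS volume at a global staging path
STAGING=/var/lib/kubelet/plugins/csi.cubefs.com/staging/pvc-example
mkdir -p "$STAGING"
# ... start the CubeFS fuse client here with "$STAGING" as its mount point ...

# Publish (per pod): bind-mount the staged path into the pod's target path
TARGET=/var/lib/kubelet/pods/POD_UID/volumes/kubernetes.io~csi/pvc-example/mount
mkdir -p "$TARGET"
mount --bind "$STAGING" "$TARGET"

# Unpublish / unstage in reverse order when the pod and volume go away
umount "$TARGET"
umount "$STAGING"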

Why is kubeconfig needed, trying to use plugin inside nomad

Hi there,

I'm trying to get the csi plugin to work with nomad: https://www.nomadproject.io/docs/job-specification/csi_plugin

It seems that this is not that easy: the plugin apparently wants to fetch the kubeconfig.
Why is that needed, and can it work without it?

Currently I have created this entrypoint so the plugin does "something" inside Nomad:

mkdir -p /var/run/secrets/kubernetes.io/serviceaccount


echo -ne 'ALONGSTRING' > /var/run/secrets/kubernetes.io/serviceaccount/token

# just some dummy variables
export KUBERNETES_SERVICE_HOST=1.2.3.4
export KUBERNETES_SERVICE_PORT=1234

/cfs/bin/cfs-csi-driver -v=5 --endpoint=unix:///csi/csi.sock --nodeid=3249d90 --drivername=csi.chubaofs.com

Now the driver starts:

I1112 15:44:00.506035       8 driver.go:39] driverName:csi.chubaofs.com, version:1.0.0, nodeID:3249d90
E1112 15:44:00.506144       8 config.go:428] Expected to load root CA config from /var/run/secrets/kubernetes.io/serviceaccount/ca.crt, but got err: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory
I1112 15:44:00.506422       8 driver.go:85] Enabling controller service capability: CREATE_DELETE_VOLUME
I1112 15:44:00.506429       8 driver.go:97] Enabling volume access mode: SINGLE_NODE_WRITER
I1112 15:44:00.506433       8 driver.go:97] Enabling volume access mode: SINGLE_NODE_READER_ONLY
I1112 15:44:00.506436       8 driver.go:97] Enabling volume access mode: MULTI_NODE_READER_ONLY
I1112 15:44:00.506439       8 driver.go:97] Enabling volume access mode: MULTI_NODE_MULTI_WRITER
I1112 15:44:00.509129       8 server.go:108] Listening for connections on address: unix:///csi/csi.sock
I1112 15:44:00.528707       8 utils.go:79] GRPC request: /csi.v1.Identity/Probe body: {}
I1112 15:44:00.529121       8 utils.go:84] GRPC response: /csi.v1.Identity/Probe return: {}
I1112 15:44:00.531677       8 utils.go:79] GRPC request: /csi.v1.Identity/GetPluginInfo body: {}
I1112 15:44:00.531977       8 utils.go:84] GRPC response: /csi.v1.Identity/GetPluginInfo return: {"name":"csi.chubaofs.com","vendor_version":"1.0.0"}
I1112 15:44:00.532636       8 utils.go:79] GRPC request: /csi.v1.Identity/GetPluginCapabilities body: {}
I1112 15:44:00.533372       8 utils.go:84] GRPC response: /csi.v1.Identity/GetPluginCapabilities return: {"capabilities":[{"Type":{"Service":{"type":1}}}]}
I1112 15:44:00.534113       8 utils.go:79] GRPC request: /csi.v1.Identity/Probe body: {}
I1112 15:44:00.534325       8 utils.go:84] GRPC response: /csi.v1.Identity/Probe return: {}
I1112 15:44:00.534676       8 utils.go:79] GRPC request: /csi.v1.Controller/ControllerGetCapabilities body: {}
I1112 15:44:00.534930       8 utils.go:84] GRPC response: /csi.v1.Controller/ControllerGetCapabilities return: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
I1112 15:44:00.535548       8 utils.go:79] GRPC request: /csi.v1.Identity/GetPluginCapabilities body: {}
I1112 15:44:00.535785       8 utils.go:84] GRPC response: /csi.v1.Identity/GetPluginCapabilities return: {"capabilities":[{"Type":{"Service":{"type":1}}}]}

It seems that it's not working as expected; as you can see, the plugin does not return the supported capabilities.

CubeFS CSI UT Completion

This project currently does not have complete unit test coverage, and we need to strengthen it. Interested friends are welcome to join in.
Come on, submit your PR.

Deployed exactly as documented, but got: kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name csi.chubaofs.com not found in the list of registered CSI drivers

MountVolume.MountDevice failed for volume "pvc-7773ca39-b350-4f77-b771-4aa62119a3f6" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name csi.chubaofs.com not found in the list of registered CSI drivers

[screenshot]

The cfs CSI deployed in k8s:

[screenshot]

My local mount:

[screenshot]

Mounting it manually on the host works and the volume is accessible. The error above only appears in K8S; please help resolve it.

batch create pvc vol failed

I use a script to create PVC volumes serially. When creation finishes, some PVCs are stuck in Pending.
All of the volumes can be seen in the cubefs volume list, but when you run kubectl get pvc, some of them show a Pending status.

create pvc script:

#!/bin/sh
date
for i in {1..10};do
cat <<EOF | kubectl create -f -&
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cubefs-batch-pvc-qa-${i}
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: cfs-sc-with-consul-3.3
EOF
done
wait

echo "Script finished"

Pending PVC log: [screenshot]

bug: csi would mount the local disk to pod

Bug situation:

  1. When an error occurs during the CSI node stage phase, the target mount path is not removed;
  2. On the node stage phase retry, the path is found to exist and is treated as valid, so the request is passed through and OK is returned;
  3. In the end, the user pod mounts the local disk.

Solution:
The CSI node needs to remove the target mount path when an error occurs. (A quick way to detect the bad state from a node is sketched below.)
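A hedged way to detect the bad state from a node is to check what actually backs an existing target path before trusting it; a healthy CubeFS mount reports a fuse filesystem, while the local disk does not (the path below is a hypothetical example):

findmnt --target /var/lib/kubelet/pods/POD_UID/volumes/kubernetes.io~csi/pvc-example/mount -o TARGET,FSTYPE,SOURCE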

Cannot use Auth with CubeFSI CSI yet

Hi there,

I seemingly cannot use the Authnode functionality in this CSI yet:

wings:rifflabs-infrastructure/ (main✗) $ kubectl describe pvc cubefs-testclaim                                                                               [11:11:58]
Name:          cubefs-testclaim
Namespace:     default
StorageClass:  cubefs-dstcodex
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: csi.cubefs.com
               volume.kubernetes.io/storage-provisioner: csi.cubefs.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type     Reason                Age              From                                                                                     Message
  ----     ------                ----             ----                                                                                     -------
  Normal   Provisioning          3s (x3 over 6s)  csi.cubefs.com_cfs-csi-controller-6874d5988d-bhpfl_b22b925a-03ea-4a81-8256-b1fbae5204d5  External provisioner is provisioning volume for claim "default/cubefs-testclaim"
  Warning  ProvisioningFailed    3s (x3 over 6s)  csi.cubefs.com_cfs-csi-controller-6874d5988d-bhpfl_b22b925a-03ea-4a81-8256-b1fbae5204d5  failed to provision volume with StorageClass "cubefs-dstcodex": rpc error: code = Unknown desc = create volume failed: url(http://cubefs.per.riff.cc:17010/admin/createVol?name=pvc-8e73aa12-1e58-479e-8858-e4edf059faf0&capacity=5&owner=dstcodex&crossZone=&enableToken=&zoneName=) code=(40), msg: [operate_util.go 182] parameter clientIDKey not found
  Normal   ExternalProvisioning  2s (x2 over 6s)  persistentvolume-controller                                                              waiting for a volume to be created, either by external provisioner "csi.cubefs.com" or manually created by system administrator

I did set "clientKey" in the StorageClass, but it didn't seem to have an effect.

Thanks!
~ Benjamin

[Feature request] Please support the cfs client subdir parameter

This is very important for our business. It works fine on physical machines, but we found it has no effect on k8s.

 csi:
    driver: csi.chubaofs.com
    fsType: xfs
    volumeAttributes:
      masterAddr: 10.90.224.230:17010,10.90.224.231:17010,10.90.224.232:17010
      owner: tmax
      volName: tmax-gzailab-asr-zcola
      logDir: /cfs/logs/tmax-gzailab-asr-zcola-sge
      logLevel: info
      subdir: /sge

Arm64 Support

We need to add arm64 support, mainly by compiling the code for arm64 and pushing the image to Docker Hub. Ideally the whole process would be scripted; the simpler the steps, the better.

Support associating chubaofs volume with a specific PV

Today I was only able to create a PVC and have auto-provisioning of the PV and the associated chubaofs volume, with generated IDs as names.

In some pet set cases, it's necessary to manage PVs and/or chubao volumes more closely.

So the CSI driver should be able to:

  • create volumes from a defined PV name
  • associate a PV with an existing volume that was created on chubaofs (a rough sketch of what such a statically defined PV might look like follows this list)
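For reference, a hedged sketch of what a statically defined PV bound to a pre-existing CubeFS volume could look like; the driver name and the volumeAttributes keys (masterAddr, owner, volName) mirror other examples on this page, but whether cubefs-csi honors such a hand-written PV is exactly what this issue asks for:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-cubefs-vol            # hypothetical PV name chosen by the admin
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: cfs-sc
  csi:
    driver: csi.cubefs.com
    volumeHandle: existing-cubefs-vol  # name of the volume already created on CubeFS
    volumeAttributes:
      masterAddr: "master-service.cubefs.svc.cluster.local:17010"
      owner: "csiuser"
      volName: "existing-cubefs-vol"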

Community folks, who would like to work on this feature: client separation

Background

Currently the CSI node pod bundles the cubefs client. When a mount request arrives, a client process is started to mount the volume at the corresponding location, so in the current architecture the CSI mount-control process and the client processes live in the same pod.
This brings several problems, for example:

  • The CSI driver cannot be upgraded (an upgrade restarts the pod, so the client processes are killed and the node can no longer serve storage normally)
  • Client CPU/memory resources cannot be controlled precisely (all clients share one pod and each client needs different resources, so the resources of the whole pod cannot be estimated)
  • Failure domains cannot be isolated (since all clients are in one pod, once that pod fails, every workload on the node is affected)

If the client processes are separated out, all of the problems above can be handled much better.

PV mount to POD fails and csi.driver is not listed

Hi,
i have used helm charts to install chubaofs. Then i created PVC based on the example provided, but

  • now it fails to mount the PV to the Pod:
Warning  FailedMount  115s (x2760 over 3d21h)  kubelet  MountVolume.MountDevice failed for volume "pvc-bc87ba3e-7321-492a-a812-64ef33239fcb" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name csi.lab.com not found in the list of registered CSI drivers
  • also, should I or should I not be able to list the driver?
kubectl  get csidrivers.storage.k8s.io
No resources found
  • version used:
  server: chubaofs/cfs-server:2.2.2
  client: chubaofs/cfs-client:2.2.2
  csi_driver: chubaofs/cfs-csi-driver:2.2.2.110.0
  csi_provisioner: quay.io/k8scsi/csi-provisioner:v1.6.0
  driver_registrar: quay.io/k8scsi/csi-node-driver-registrar:v1.3.0
  csi_attacher: quay.io/k8scsi/csi-attacher:v2.0.0
  grafana: grafana/grafana:6.4.4
  prometheus: prom/prometheus:v2.13.1
  consul: consul:1.6.1
  • kubectl and kubernetes version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:10:43Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:02:01Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

Thank you
