cubefs / cubefs-csi
CubeFS Container Storage Interface (CSI) plugins.
License: Apache License 2.0
There is a new release of CubeFS; we need to build a new version of the CSI image.
We need to add arm64 support, mainly by compiling the code for arm64 and pushing the image to Docker Hub. Ideally the whole process can be scripted; the simpler the steps, the better.
The StorageClass yaml should contain cluster-specific information about ChubaoFS, while the PVC yaml focuses on volume-specific information.
So the "owner" field in the current StorageClass should be moved to the PVC.
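A sketch of the proposed split, hedged: the annotation key below is hypothetical, and the driver's CreateVolume would have to be taught to read per-PVC metadata for this to work (the external-provisioner does not pass arbitrary PVC fields into StorageClass parameters today).

```yaml
# StorageClass: cluster-wide ChubaoFS information only.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cfs-sc
provisioner: csi.chubaofs.com
parameters:
  masterAddr: "master1:17010,master2:17010"   # cluster-specific
---
# PVC: volume-specific information. "chubaofs.csi/owner" is a
# hypothetical annotation key, not an existing driver feature.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
  annotations:
    chubaofs.csi/owner: "myteam"
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: cfs-sc
  resources:
    requests:
      storage: 5Gi
```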
We are supporting v1.0 now. The supported CSI version in README should be updated accordingly.
Kubernetes Version: 1.22.2
csi-attacher Version: quay.io/k8scsi/csi-attacher:v2.0.0
github yaml commit: df3943c (branch master now)
Hi!
I am going to use CubeFS, but the version of my Kubernetes cluster is 1.22.5.
May I ask whether there is a plan to support Kubernetes 1.22+?
Hi there,
I'm trying to get the csi plugin to work with nomad: https://www.nomadproject.io/docs/job-specification/csi_plugin
It seems this is not that easy; the plugin apparently wants to fetch the kubeconfig.
Why is that needed? Can it work without it?
Currently I have created this entrypoint for the plugin to do "something" inside Nomad:
mkdir -p /var/run/secrets/kubernetes.io/serviceaccount
echo -ne 'ALONGSTRING' > /var/run/secrets/kubernetes.io/serviceaccount/token
# just some dummy variables
export KUBERNETES_SERVICE_HOST=1.2.3.4
export KUBERNETES_SERVICE_PORT=1234
/cfs/bin/cfs-csi-driver -v=5 --endpoint=unix:///csi/csi.sock --nodeid=3249d90 --drivername=csi.chubaofs.com
Now the driver starts:
I1112 15:44:00.506035 8 driver.go:39] driverName:csi.chubaofs.com, version:1.0.0, nodeID:3249d90
E1112 15:44:00.506144 8 config.go:428] Expected to load root CA config from /var/run/secrets/kubernetes.io/serviceaccount/ca.crt, but got err: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory
I1112 15:44:00.506422 8 driver.go:85] Enabling controller service capability: CREATE_DELETE_VOLUME
I1112 15:44:00.506429 8 driver.go:97] Enabling volume access mode: SINGLE_NODE_WRITER
I1112 15:44:00.506433 8 driver.go:97] Enabling volume access mode: SINGLE_NODE_READER_ONLY
I1112 15:44:00.506436 8 driver.go:97] Enabling volume access mode: MULTI_NODE_READER_ONLY
I1112 15:44:00.506439 8 driver.go:97] Enabling volume access mode: MULTI_NODE_MULTI_WRITER
I1112 15:44:00.509129 8 server.go:108] Listening for connections on address: unix:///csi/csi.sock
I1112 15:44:00.528707 8 utils.go:79] GRPC request: /csi.v1.Identity/Probe body: {}
I1112 15:44:00.529121 8 utils.go:84] GRPC response: /csi.v1.Identity/Probe return: {}
I1112 15:44:00.531677 8 utils.go:79] GRPC request: /csi.v1.Identity/GetPluginInfo body: {}
I1112 15:44:00.531977 8 utils.go:84] GRPC response: /csi.v1.Identity/GetPluginInfo return: {"name":"csi.chubaofs.com","vendor_version":"1.0.0"}
I1112 15:44:00.532636 8 utils.go:79] GRPC request: /csi.v1.Identity/GetPluginCapabilities body: {}
I1112 15:44:00.533372 8 utils.go:84] GRPC response: /csi.v1.Identity/GetPluginCapabilities return: {"capabilities":[{"Type":{"Service":{"type":1}}}]}
I1112 15:44:00.534113 8 utils.go:79] GRPC request: /csi.v1.Identity/Probe body: {}
I1112 15:44:00.534325 8 utils.go:84] GRPC response: /csi.v1.Identity/Probe return: {}
I1112 15:44:00.534676 8 utils.go:79] GRPC request: /csi.v1.Controller/ControllerGetCapabilities body: {}
I1112 15:44:00.534930 8 utils.go:84] GRPC response: /csi.v1.Controller/ControllerGetCapabilities return: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
I1112 15:44:00.535548 8 utils.go:79] GRPC request: /csi.v1.Identity/GetPluginCapabilities body: {}
I1112 15:44:00.535785 8 utils.go:84] GRPC response: /csi.v1.Identity/GetPluginCapabilities return: {"capabilities":[{"Type":{"Service":{"type":1}}}]}
It seems that it's not working as expected; as you can see, the plugin does not return the supported capabilities.
The build process of the CubeFS client has already changed; the CSI build script needs to be updated as well.
Part of the whole-project rename task, see cubefs/cubefs#1343
Volume "owner" should be specified in the storage class, and passed as a parameter to func "CreateVolume" in https://github.com/chubaofs/chubaofs-csi/blob/master/pkg/cfs/cfsnet.go
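A sketch of how `owner` could sit in the StorageClass `parameters` map and flow into `CreateVolume` (parameter keys other than `owner` mirror the deploy examples elsewhere in this document; treat the exact names as assumptions):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: chubaofs-sc
provisioner: csi.chubaofs.com
parameters:
  masterAddr: "master1:17010,master2:17010"
  owner: "myteam"   # passed through to CreateVolume as the volume owner
reclaimPolicy: Delete
```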
Currently the CSI node pod embeds the CubeFS client: when a mount request arrives, it starts a client to mount at the corresponding location, so in the current architecture the CSI mount-control process and the client process run in the same pod.
This causes some problems, for example:
If the client process were separated out instead, the problems above would all be better solved.
"NodeStageVolume" is currently an empty implementation, so the FUSE mount happens every time a pod starts. A better approach would be to first mount the volume to a global path on the node server, and then bind-mount that global path to the pod's mount point. In addition, passing arguments on the FUSE command line would be better.
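The staged-mount flow can be sketched in shell; the paths and the cfs-client invocation below are illustrative assumptions (run as root on a node), not the driver's actual commands:

```shell
# NodeStageVolume: FUSE-mount the volume once per node at a global path.
GLOBAL=/var/lib/kubelet/plugins/csi.chubaofs.com/globalmount/pvc-xyz
mkdir -p "$GLOBAL"
/cfs/bin/cfs-client -c /cfs/conf/pvc-xyz.json   # hypothetical invocation mounting at $GLOBAL

# NodePublishVolume: bind the single global mount into each pod's target path.
TARGET=/var/lib/kubelet/pods/POD_UID/volumes/kubernetes.io~csi/pvc-xyz/mount
mkdir -p "$TARGET"
mount --bind "$GLOBAL" "$TARGET"

# NodeUnpublishVolume / NodeUnstageVolume: tear down in reverse order.
umount "$TARGET"
umount "$GLOBAL"
```

With this split, N pods on a node using the same volume share one FUSE client instead of starting N clients.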
(base) root@ubuntu:~# docker pull ghcr.io/cubefs/cfs-csi-driver:2.4.1.110.1
2.4.1.110.1: Pulling from cubefs/cfs-csi-driver
a8c7037c15e9: Pulling fs layer
ee77564112b3: Pulling fs layer
c9a8edccd926: Pulling fs layer
8ad28c0cab2b: Pulling fs layer
e7f6b46a46e9: Pulling fs layer
3ae92f87a760: Pulling fs layer
6d4b6ceb8d2a: Pulling fs layer
dc9b4b510e80: Pulling fs layer
error pulling image configuration: download failed after attempts=1: unknown blob
This is very important for our business. It works fine on physical machines, but on k8s we found it has no effect.
csi:
  driver: csi.chubaofs.com
  fsType: xfs
  volumeAttributes:
    masterAddr: 10.90.224.230:17010,10.90.224.231:17010,10.90.224.232:17010
    owner: tmax
    volName: tmax-gzailab-asr-zcola
    logDir: /cfs/logs/tmax-gzailab-asr-zcola-sge
    logLevel: info
    subdir: /sge
The error codes used by CSI and CubeFS do not match.
After creating a PVC, a directory is created in ChubaoFS, but after deleting the PVC and PV, the created directory is not removed on the server.
reclaimPolicy: Delete is already set.
With nfs-provisioner, setting
parameters:
  archiveOnDelete: "false"
makes deletion work.
I don't see a similar parameter in https://github.com/chubaofs/chubaofs-csi/blob/master/deploy/storageclass-default.yaml either.
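For reference, a sketch of a StorageClass carrying `reclaimPolicy: Delete` (parameter names follow the deploy examples; whether the backing directory is actually removed still depends on the driver's DeleteVolume implementation, which is the behavior being reported as broken here):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cfs-sc-delete
provisioner: csi.chubaofs.com
parameters:
  masterAddr: "master1:17010,master2:17010"
  owner: "cfs"
# Delete tells Kubernetes to remove the PV on PVC deletion; the driver's
# DeleteVolume must then remove the backing ChubaoFS volume/directory.
reclaimPolicy: Delete
```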
Hi,
I have used Helm charts to install ChubaoFS. Then I created a PVC based on the example provided, but:
Warning FailedMount 115s (x2760 over 3d21h) kubelet MountVolume.MountDevice failed for volume "pvc-bc87ba3e-7321-492a-a812-64ef33239fcb" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name csi.lab.com not found in the list of registered CSI drivers
kubectl get csidrivers.storage.k8s.io
No resources found
server: chubaofs/cfs-server:2.2.2
client: chubaofs/cfs-client:2.2.2
csi_driver: chubaofs/cfs-csi-driver:2.2.2.110.0
csi_provisioner: quay.io/k8scsi/csi-provisioner:v1.6.0
driver_registrar: quay.io/k8scsi/csi-node-driver-registrar:v1.3.0
csi_attacher: quay.io/k8scsi/csi-attacher:v2.0.0
grafana: grafana/grafana:6.4.4
prometheus: prom/prometheus:v2.13.1
consul: consul:1.6.1
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:10:43Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:02:01Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Thank you
Volume snapshotting/restoring is an important feature for a production-ready filesystem. Is there any plan to implement it?
Bug situation:
Solution:
The CSI node needs to remove the target mount path when an error occurs.
CubeFS already supports two storage engines, replicas and EC (erasure coding); PVC creation needs to support both volume types.
Hi, I looked at the source code and did not find implementations of the CSI-spec volume snapshot and volume expansion features. Is this because ChubaoFS itself does not yet support volume snapshots and dynamic expansion?
After I applied csi.yaml on ARM Rocky Linux, the CSI node and controller pods cannot run. I get the following error message from kubectl logs: Back-off restarting failed container cfs-driver in pod cfs-csi-controller-7f955f786f-r2c8g_cubefs
-cfs-driver:
Container ID: docker://6eebb4ec17b5b2a54d6ce75b396ebc3cf06ec88763658d9d92d1d4f561a05782
Image: ghcr.io/cubefs/cfs-csi-driver:3.2.0.150.0
Image ID: docker-pullable://ghcr.io/cubefs/cfs-csi-driver@sha256:8723616a976a2a0278cb14ab5c2bb26ed7302603201202684934b68448c89f27
Port:
Host Port:
Args:
bash
-c
set -e
su -p -s /bin/bash -c "/cfs/bin/start.sh &"
su -p -s /bin/bash -c "sleep 9999999d"
State: Waiting
Reason: CrashLoopBackOff
Then I built the CSI source code with expert help; the CSI pod can run, but it cannot create a PVC.
Please provide the ARM CSI plugin. Thanks.
CubeFS already supports erasure coding, please refer to here for details.
https://cubefs.io/docs/master/user-guide/volume.html#create-erasure-coded-volume
We need to pass the erasure-coding-related parameters through the client so that the PVC is created in erasure-coding mode. We also need to add a StorageClass type that supports erasure-coding mode.
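A sketch of what such a StorageClass might look like. `volType` mirrors the parameter of the CubeFS master's createVol API (0 = replica, 1 = erasure coding, per the volume guide linked above), but treat the exact CSI parameter key as an assumption:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cfs-sc-ec
provisioner: csi.cubefs.com
parameters:
  masterAddr: "master1:17010,master2:17010"
  owner: "cfs"
  volType: "1"   # assumed key: 0 = replica, 1 = erasure coding
```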
This project currently lacks unit-test coverage; we need to strengthen it. Interested contributors are welcome to join.
Come on, submit your PR.
This problem is easy to reproduce:
So how can this situation be avoided, or how can we recover from it?
Release ChubaoFS CSI v2.2.2.110.0 to adapt to ChubaoFS v2.2.2 and CSI v1.1.0.
Prerequisite:
Kubernetes 1.16.0
CSI spec version 1.1.0
These tested versions are really old.
Although I set requests and limits in the PVC, there seems to be no limit enforced in CFS.
I use a script to create PVC volumes serially; when creation finishes, some PVCs are stuck in Pending status.
All of the volumes can be seen in the CubeFS volume list, but when you run kubectl get pvc, you can see that some PVC statuses are Pending.
create pvc script:
#!/bin/bash
date
for i in {1..10};do
cat <<EOF | kubectl create -f -&
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: cubefs-batch-pvc-qa-${i}
namespace: default
spec:
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: 1Gi
storageClassName: cfs-sc-with-consul-3.3
EOF
done
wait
echo "script finished"
Volumes mounted in pods crash after the csi-node pod shuts down.
readdirlimit: packet() mp(PartitionID(228) Start(0) End(16777216) Members([~~~~]) LeaderAddr(.:10) Status(2)) req({pvtest 228 1 1024}) err(sendToMetaPartition failed: req(ReqID(7)Op(OpMetaReadDirLimit)PartitionID(228)ResultCode(Unknown ResultCode(0))) mp(PartitionID(228) Start(0) End(16777216) Members([.:10 .:10 .64:10]) LeaderAddr(.:10) Status(2)) errs(map[0:[conn.go 145] Failed to read from conn, req(ReqID(7)Op(OpMetaReadDirLimit)PartitionID(228)ResultCode(Unknown ResultCode(0))) :: read tcp :46808->.:10: i/o timeout 1:[conn.go 145] Failed to read from conn, req(ReqID(7)Op(OpMetaReadDirLimit)PartitionID(228)ResultCode(Unknown ResultCode(0))) :: read tcp :48510->.:10: i/o timeout 2:[conn.go 145] Failed to read from conn, req(ReqID(7)Op(OpMetaReadDirLimit)PartitionID(228)ResultCode(Unknown ResultCode(0))) :: read tcp :55182->:10: i/o timeout]) resp())
2022/09/26 10:08:36.483713 [ERROR] dir.go:360: readdirlimit: Readdir: ino(1) err(input/output error)
k8s version 1.22.14
cubefs-csi version 3.10
chubaofs version 2.40
Is this caused by a version incompatibility between the CSI plugin and ChubaoFS?
Hi there,
I seemingly cannot use the Authnode functionality in this CSI yet:
wings:rifflabs-infrastructure/ (main✗) $ kubectl describe pvc cubefs-testclaim [11:11:58]
Name: cubefs-testclaim
Namespace: default
StorageClass: cubefs-dstcodex
Status: Pending
Volume:
Labels: <none>
Annotations: volume.beta.kubernetes.io/storage-provisioner: csi.cubefs.com
volume.kubernetes.io/storage-provisioner: csi.cubefs.com
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Provisioning 3s (x3 over 6s) csi.cubefs.com_cfs-csi-controller-6874d5988d-bhpfl_b22b925a-03ea-4a81-8256-b1fbae5204d5 External provisioner is provisioning volume for claim "default/cubefs-testclaim"
Warning ProvisioningFailed 3s (x3 over 6s) csi.cubefs.com_cfs-csi-controller-6874d5988d-bhpfl_b22b925a-03ea-4a81-8256-b1fbae5204d5 failed to provision volume with StorageClass "cubefs-dstcodex": rpc error: code = Unknown desc = create volume failed: url(http://cubefs.per.riff.cc:17010/admin/createVol?name=pvc-8e73aa12-1e58-479e-8858-e4edf059faf0&capacity=5&owner=dstcodex&crossZone=&enableToken=&zoneName=) code=(40), msg: [operate_util.go 182] parameter clientIDKey not found
Normal ExternalProvisioning 2s (x2 over 6s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "csi.cubefs.com" or manually created by system administrator
I did set "clientKey" in the StorageClass, but it didn't seem to have an effect.
Thanks!
~ Benjamin
Release ChubaoFS CSI v2.2.2.110.0 to adapt to ChubaoFS v2.2.2 and CSI v0.3.0.
Hello, we are a team researching the dependency management mechanism of Golang. During our analysis, we came across your project and noticed that you have fixed a vulnerability (snyk references, CVE: CVE-2023-30512, CWE: CWE-264, fix commit id: 97e6ade). However, we observed that you have not tagged the fixing commit or its subsequent commits. As a result, users are unable to obtain the patch version through Go tool ‘go list’.
We kindly request your assistance in addressing this issue. Tagging the fixing commit or its subsequent commits will greatly benefit users who rely on your project and are seeking the patched version to address the vulnerability.
We greatly appreciate your attention to this matter and collaboration in resolving it. Thank you for your time and for your valuable contributions to our research.
For example, support nomad-0.11.0, which added initial support for the CSI standard.
Today I was only able to create a PVC and get auto-provisioning of the PV and the associated ChubaoFS volume, with generated IDs as names.
In some pet-set cases, it is necessary to manage PVs and/or ChubaoFS volumes more closely.
So the CSI driver should be able to map volumes from a defined PV name to an existing volume that was created on ChubaoFS.