
Comments (11)

avestuk commented on May 28, 2024

Hi @hei-pa, we do not support zfs as an fsType. We support the following fsTypes: ext2, ext3, ext4, btrfs, and xfs.

https://docs.storageos.com/docs/concepts/volumes
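
For context, the filesystem for a StorageOS volume is chosen per StorageClass via its fsType parameter. A minimal sketch with one of the supported types (the class name is illustrative, and the parameter key shown assumes a CSI-based install):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    app: storageos
  name: fast-xfs
parameters:
  csi.storage.k8s.io/fstype: xfs   # any of: ext2, ext3, ext4, btrfs, xfs
  pool: default
provisioner: storageos
reclaimPolicy: Delete
volumeBindingMode: Immediate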


kaedwen commented on May 28, 2024

OK, thanks for the info.

ZFS can be mounted legacy-style like other filesystems, so mount -t zfs ... does work. What limitations prevent this from being supported?
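
For illustration, a dataset set to legacy mounting behaves like any other filesystem (pool and dataset names here are made up):

zfs create -o mountpoint=legacy tank/storageos
mount -t zfs tank/storageos /var/lib/storageos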


avestuk commented on May 28, 2024

@hei-pa could you perhaps share any logs you have about the crash with us? For instance, what did the PVC events log show? If it's easier, you can send the crash reports to [email protected]

With regard to which filesystems we support: we are agnostic about which filesystem is mounted on /var/lib/storageos. The fsType of the StorageClass has been designed to be pluggable, so if enough people show interest in using ZFS, we would support the creation of ZFS volumes with StorageOS.

Out of curiosity, I did try to create a volume with a storage class that had fsType set to zfs in my own cluster and I got the following error:

Warning    ProvisioningFailed  8s (x2 over 20s)  persistentvolume-controller  Failed to provision volume with StorageClass "zfs": API error (Server failed to process your request. Was the data correct?): couldnt process fs type: fs type not valid

That makes me wonder whether Kubernetes supports ZFS PVs at all. That's something I haven't yet been able to find an answer to so I will get back to you on that.


kaedwen commented on May 28, 2024

OK, I don't have very deep knowledge of Kubernetes and PVCs. I have set up a single master node to evaluate things a little.

My setup, with zpools as the root filesystem or data pools mounted to /var/lib/storageos, doesn't sound that uncommon.

Is it possible to delete the current StorageClass and create a new one without reinstalling the whole StorageOS part? Because the default setup of the storageos-operator creates a StorageClass with ext4.


Now, with a new cluster and a StorageClass with fsType zfs, I get the following; I don't know if something else is wrong here:

Name:          keeweb
Namespace:     keeweb
StorageClass:  fast
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-class: fast
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/storageos
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Events:
  Type       Reason              Age                 From                         Message
  ----       ------              ----                ----                         -------
  Warning    ProvisioningFailed  26s (x6 over 116s)  persistentvolume-controller  Failed to provision volume with StorageClass "fast": invalid node format: lookup : no such host
Mounted By:  <none>


avestuk commented on May 28, 2024

@hei-pa So using zpools or data pools as the file system mounted under /var/lib/storageos should not be an issue.

You can have multiple Storage Classes and specify which you wish to use when you create PVCs. You can create a new Storage Class and change the fsType parameter to one that you want. kubectl get sc fast -o yaml > /tmp/sc.yaml would save the storage class out to /tmp/sc.yaml so you could edit the file and then kubectl apply -f /tmp/sc.yaml to create a Storage Class that used zfs.
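
Roughly, the steps would look like this (the file path and new class name are just examples):

kubectl get sc fast -o yaml > /tmp/sc.yaml
# edit /tmp/sc.yaml: give the class a new name (e.g. "zfs") and set the fsType parameter you want
kubectl apply -f /tmp/sc.yaml
# then reference the new class from your PVCs via storageClassName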

If you have reinstalled, I'd suggest you create a Storage Class with an fsType of ext4 and test whether you can provision volumes that way before you try to create a Storage Class that uses zfs.


kaedwen commented on May 28, 2024

Yes, I deleted the sc fast and it was recreated by StorageOS.

With this one (which has ext4 as fsType) it looks good to me:

Name:          keeweb
Namespace:     keeweb
StorageClass:  fast
Status:        Bound
Volume:        pvc-430fd182-5169-11e9-9d91-001999e205fd
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-class: fast
               volume.beta.kubernetes.io/storage-provisioner: storageos
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Events:
  Type       Reason                 Age   From                                                                    Message
  ----       ------                 ----  ----                                                                    -------
  Normal     ExternalProvisioning   60s   persistentvolume-controller                                             waiting for a volume to be created, either by external provisioner "storageos" or manually created by system administrator
  Normal     Provisioning           60s   storageos_storageos-statefulset-0_1c021d7b-5169-11e9-81bd-0a580a00002c  External provisioner is provisioning volume for claim "keeweb/keeweb"
  Normal     ProvisioningSucceeded  59s   storageos_storageos-statefulset-0_1c021d7b-5169-11e9-81bd-0a580a00002c  Successfully provisioned volume pvc-430fd182-5169-11e9-9d91-001999e205fd
Mounted By:  <none>

Now using that PVC fails because the created storage cannot be mounted (obviously, because it is on a zpool):

root@srv-01:/var/lib/storageos# kubectl describe pod -n keeweb keeweb
Name:               keeweb
Namespace:          keeweb
Priority:           0
PriorityClassName:  <none>
Node:               srv-01/192.168.20.11
Start Time:         Thu, 28 Mar 2019 15:55:18 +0100
Labels:             app=keeweb
Annotations:        <none>
Status:             Pending
IP:                 
Containers:
  keeweb:
    Container ID:   
    Image:          antelle/keeweb
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/nginx/external from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-p4cdn (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  keeweb
    ReadOnly:   false
  default-token-p4cdn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-p4cdn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                 From                     Message
  ----     ------                  ----                ----                     -------
  Normal   Scheduled               116s                default-scheduler        Successfully assigned keeweb/keeweb to srv-01
  Normal   SuccessfulAttachVolume  116s                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-430fd182-5169-11e9-9d91-001999e205fd"
  Warning  FailedMount             45s (x8 over 110s)  kubelet, srv-01          MountVolume.SetUp failed for volume "pvc-430fd182-5169-11e9-9d91-001999e205fd" : rpc error: code = Unknown desc = mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o defaults /var/lib/storageos/volumes/376b7983-6a72-f59b-945a-be223b5d93b4 /var/lib/kubelet/pods/808b363e-5169-11e9-9d91-001999e205fd/volumes/kubernetes.io~csi/pvc-430fd182-5169-11e9-9d91-001999e205fd/mount
Output: mount: wrong fs type, bad option, bad superblock on /var/lib/storageos/volumes/376b7983-6a72-f59b-945a-be223b5d93b4,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.


avestuk commented on May 28, 2024

@hei-pa Could you show me what the StorageClass you are using looks like?

Also, could you try creating a volume using the StorageOS CLI with storageos volume create test, and then use the CLI to mount that volume on one of your nodes with storageos volume mount test /mnt?
https://docs.storageos.com/docs/reference/cli/


kaedwen commented on May 28, 2024

I now have a zfs StorageClass, after following your advice to extract the original with -o yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    app: storageos
  name: zfs
parameters:
  csi.storage.k8s.io/fstype: zfs
  pool: default
provisioner: storageos
reclaimPolicy: Delete
volumeBindingMode: Immediate

Now using this one in a pvc results in

  Type     Reason                  Age                 From                     Message
  ----     ------                  ----                ----                     -------
  Normal   Scheduled               45m                 default-scheduler        Successfully assigned keeweb/keeweb to srv-01
  Normal   SuccessfulAttachVolume  45m                 attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-f003cc31-5172-11e9-9d91-001999e205fd"
  Warning  FailedMount             14m (x23 over 45m)  kubelet, srv-01          MountVolume.SetUp failed for volume "pvc-f003cc31-5172-11e9-9d91-001999e205fd" : rpc error: code = Unknown desc = exec: "mkfs.zfs": executable file not found in $PATH

There is no mkfs.zfs, so it's not going to work that way.
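
That is expected: ZFS filesystems are not created by running an mkfs-style tool against an existing block device; they are created from pools, roughly like this (pool and device names are made up):

zpool create tank /dev/sdb   # pool built directly on a device
zfs create tank/data         # filesystem created inside the pool, no mkfs step involved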

Now trying the StorageOS CLI results in similar errors:

root@srv-01:~/kube/storage-os# storageos -D volume list
NAMESPACE/NAME                                   SIZE  MOUNT   SELECTOR  STATUS  REPLICAS  LOCATION
default/test                                     5GiB  srv-01            active  0/0       srv-01 (healthy)
root@srv-01:/# storageos -D volume mount test /mnt
DEBU[0000] StorageOS volume ready: /mnt                 
DEBU[0000] Mountpoint created: /mnt                     
ERRO[0000] fail to get output from command               args="[-t ext4 /var/lib/storageos/volumes/81c34da8-4012-7b3d-bbb6-47365daf5860 /mnt]" cmd=/bin/mount error="exit status 32"
ERRO[0000] Mount failed                                  error="exit status 32" fs_type=ext4 mount_point=/mnt path=/var/lib/storageos/volumes/81c34da8-4012-7b3d-bbb6-47365daf5860
failed to mount volume, beginning retry 1

It would be fine for me if StorageOS creates ext4 filesystem blobs, but the underlying filesystem has to be a zpool because I have no other.

If I have a look at the disk that should be mounted:

root@srv-01:~/kube/storage-os# fdisk /var/lib/storageos/volumes/81c34da8-4012-7b3d-bbb6-47365daf5860

Welcome to fdisk (util-linux 2.29.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x5ed3d878.

Command (m for help): p
Disk /var/lib/storageos/volumes/81c34da8-4012-7b3d-bbb6-47365daf5860: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x5ed3d878

There is no partition? It's the whole disk? Is that correct? Something like this cannot be mounted.


avestuk commented on May 28, 2024

@hei-pa I checked with colleagues: when using CSI, StorageOS actually does the formatting of the block device. However, we do not currently support ZFS filesystems, so this is not possible.

On the subject of partitions: we do not create partitions on our volumes; however, the lack of a partition table does not prevent the filesystem from being mounted. In the output below I have mounted the volume default/pvc-52e8d3e6-517f-11e9-99f1-0681640e8ccc on /mnt.

[root@alexv-rancher-nodes1 devices]# /usr/local/bin/storageos v inspect default/pvc-52e8d3e6-517f-11e9-99f1-0681640e8ccc
[
    {
        "id": "8e594ea5-2d12-5e2b-eb6e-255909917a50",
        "inode": 117071,
        "name": "pvc-52e8d3e6-517f-11e9-99f1-0681640e8ccc",
        "size": 5,
        "pool": "default",
        "fsType": "ext4",
        "description": "",
        "labels": {
            "fsType": "zfs",
            "storageos.com/presentation": "mounted"
        },
        "namespace": "default",
        "nodeSelector": "",
        "master": {
            "id": "5d51f16a-ef21-fe7a-6ae3-5d6e5fafe78e",
            "inode": 115676,
            "node": "cc07a8ea-3781-0ae7-4339-0cc9fd10476b",
            "nodeName": "alexv-rancher-nodes1",
            "health": "healthy",
            "status": "active",
            "createdAt": "2019-03-28T17:44:22.73482705Z"
        },
        "mounted": true,
        "mountDevice": "/var/lib/kubelet/volumeplugins/kubernetes.io~storageos/devices/8e594ea5-2d12-5e2b-eb6e-255909917a50",
        "mountpoint": "/mnt",
        "mountedAt": "2019-03-28T17:46:15.177183536Z",
        "mountedBy": "alexv-rancher-nodes1",
        "replicas": [],
        "health": "healthy",
        "status": "active",
        "statusMessage": "replica 5d51f16a-ef21-fe7a-6ae3-5d6e5fafe78e was synced with master at 2019-03-28 17:44:28.191259681 +0000 UTC m=+1150.641302991",
        "mkfsDone": true,
        "mkfsDoneAt": "2019-03-28T17:32:14.613382586Z",
        "createdAt": "2019-03-28T17:31:30.551878926Z",
        "createdBy": ""
    }
]
[root@alexv-rancher-nodes1 devices]# fdisk 8e594ea5-2d12-5e2b-eb6e-255909917a50 -l

Disk 8e594ea5-2d12-5e2b-eb6e-255909917a50: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
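
As a standalone illustration of the same point (paths here are made up), a plain file can carry a filesystem and be mounted directly, with no partition table involved:

dd if=/dev/zero of=/tmp/blob.img bs=1M count=1024   # 1 GiB backing file
mkfs.ext4 -F /tmp/blob.img                          # format the file directly
mount -o loop /tmp/blob.img /mnt                    # mount it via a loop device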

Could you try and do the following and share the output?
storageos volume create test2 -f ext4
storageos -D volume mount test2 /mnt


kaedwen commented on May 28, 2024

Ok got the partition thing.

Here is the output

root@srv-01:~/kube/storage-os# storageos volume create test2 -f ext4
default/test2
root@srv-01:~/kube/storage-os# storageos -D volume mount test2 /mnt
DEBU[0000] volume found: /var/lib/storageos/volumes/dfa755b3-d448-ff6a-3933-3691b3a82e64 
DEBU[0001] checking volume for existing filesystem: /var/lib/storageos/volumes/dfa755b3-d448-ff6a-3933-3691b3a82e64: output: /var/lib/storageos/volumes/dfa755b3-d448-ff6a-3933-3691b3a82e64: data 
DEBU[0001] volume /var/lib/storageos/volumes/dfa755b3-d448-ff6a-3933-3691b3a82e64 has fs type: raw 
DEBU[0001] creating ext4 filesystem on volume /var/lib/storageos/volumes/dfa755b3-d448-ff6a-3933-3691b3a82e64 
ERRO[0001] fail to get output from command               args="[-F -U dfa755b3-d448-ff6a-3933-3691b3a82e64 -b 4096 -E lazy_itable_init=1,lazy_journal_init=1 /var/lib/storageos/volumes/dfa755b3-d448-ff6a-3933-3691b3a82e64]" cmd=/sbin/mkfs.ext4 error="exit status 5"
WARN[0001] create filesystem failed, retrying in 1s      err="exit status 5" fstype=ext4 path=/var/lib/storageos/volumes/dfa755b3-d448-ff6a-3933-3691b3a82e64

So forcing an ext4 blob file does not work, for some reason.

I tried the command manually and got this error:

root@srv-01:~/kube/storage-os# mkfs.ext4 -F -U dfa755b3-d448-ff6a-3933-3691b3a82e64 -b 4096 -E lazy_itable_init=1,lazy_journal_init=1 /var/lib/storageos/volumes/dfa755b3-d448-ff6a-3933-3691b3a82e64
mke2fs 1.43.4 (31-Jan-2017)
Creating filesystem with 1310720 4k blocks and 327680 inodes
Filesystem UUID: dfa755b3-d448-ff6a-3933-3691b3a82e64
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information:      
Warning, had trouble writing out superblocks.

Edit

I tried creating a file with an ext4 signature in /var/lib/storageos (the zfs dataset mountpoint) and everything worked.

Then I did the same inside /var/lib/storageos/volumes and it did not work:

root@srv-01:/var/lib/storageos/volumes# dd if=/dev/zero of=blob bs=4k count=600
dd: failed to open 'blob': Function not implemented

What's wrong with the volumes directory? Is it something special?


OK, there is something mounted on volumes:

storageos on /var/lib/storageos/volumes type fuse.storageos (rw,nosuid,noexec,noatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
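
For reference, that mount can be spotted with something like:

findmnt /var/lib/storageos/volumes
# or
mount | grep storageos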


avestuk commented on May 28, 2024

@hei-pa Thanks for doing that. I've created a ticket internally for someone to recreate this issue; however, I do not know when the development team will be able to spend time working on it. In the meantime I'd suggest that you move /var/lib/storageos to a filesystem that is not ZFS-formatted.
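
One possible way to do that when every local disk already belongs to a zpool (pool name and size below are just examples) is to carve out a zvol, format it as ext4, and mount it at /var/lib/storageos:

zfs create -V 50G tank/storageos-backing   # exposes a block device at /dev/zvol/tank/storageos-backing
mkfs.ext4 /dev/zvol/tank/storageos-backing
mount /dev/zvol/tank/storageos-backing /var/lib/storageos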

If you've got any questions in the future, feel free to send me a message on our public Slack.

