A sample (non-production) CSI Driver that creates a local directory as a volume on a single node

License: Apache License 2.0

csi-driver-host-path's Introduction

CSI Hostpath Driver

This repository hosts the CSI Hostpath driver and all of its build and dependent configuration files to deploy the driver.


WARNING: This driver is a demo implementation used for CI testing. It contains many fake implementations and other deviations from best practices, and should not be used as an example of how to write a real driver.

Prerequisites

  • Kubernetes cluster
  • Running version 1.17 or later
  • Access to terminal with kubectl installed
  • VolumeSnapshot CRDs and Snapshot Controller must be installed as part of the cluster deployment (see Kubernetes 1.17+ deployment instructions)

Features

The driver can provide empty directories that are backed by the same filesystem as EmptyDir volumes. In addition, it can provide raw block volumes that are backed by a single file in that same filesystem and bound to a loop device.
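For illustration only, a file-backed raw block volume of this kind can be set up roughly as follows. This is a hedged sketch, not the driver's actual code: the helper name is made up, the losetup binary is assumed to be available on the node, and the "os", "os/exec", "fmt" and "strings" packages are assumed to be imported.

// Sketch: back a raw block volume with a sparse file and bind it to a loop device.
func createBackingLoopDevice(backingFile string, sizeBytes int64) (string, error) {
	f, err := os.OpenFile(backingFile, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return "", err
	}
	defer f.Close()
	// Allocate a sparse file of the requested size.
	if err := f.Truncate(sizeBytes); err != nil {
		return "", err
	}
	// Attach the file to the first free loop device and print the device name.
	out, err := exec.Command("losetup", "--find", "--show", backingFile).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("losetup failed: %v: %s", err, out)
	}
	return strings.TrimSpace(string(out)), nil // e.g. /dev/loop0
}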

Various command line parameters influence the behavior of the driver. This is relevant in particular for the end-to-end testing that this driver is used for in Kubernetes.

Usually, the driver implements all CSI operations itself. When deployed with the -proxy-endpoint parameter, it instead proxies all incoming connections to a CSI driver that is embedded inside the Kubernetes E2E test suite and used to mock a CSI driver with callbacks provided by certain tests.

Deployment

Deployment for Kubernetes 1.17 and later

Examples

The following examples assume that the CSI hostpath driver has been deployed and validated.

Building the binaries

If you want to build the driver yourself, you can do so with the following command from the root directory:

make

Development

Updating sidecar images

The deploy/ directory contains manifests for deploying the CSI hostpath driver for different Kubernetes versions.

If you want to update the image versions used in these manifests, you can do so with the following command from the root directory:

hack/bump-image-versions.sh

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

You can reach the maintainers of this project via the Kubernetes SIG Storage Slack channel and mailing list.

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

csi-driver-host-path's People

Contributors

aayushrangwala, avalluri, bertinatto, carlory, chakri-nelluri, chrishenzie, darkowlzz, dependabot[bot], edisonxiang, fengzixu, gnufied, humblec, jsafrane, k8s-ci-robot, leonardoce, lpabon, madhu-1, msau42, nixpanic, okartau, pohly, raunakshah, rootfs, saad-ali, sbezverk, spiffxp, sunnylovestiramisu, verult, vladimirvivien, xing-yang

csi-driver-host-path's Issues

Idempotency: repeated Node Unpublish operations cause failure

Background: in the pmem-csi driver we added more idempotency conformance checks.
To test this, repeated Node volume operations were added to csi-test.
While upstreaming that change, the CI tests that use the hostpath driver fail:
https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/kubernetes-csi_csi-test/229/pull-kubernetes-csi-csi-test/1185891170576764931

The hostpath driver currently does not account for the possibility of repeated requests, and there are no mutexes protecting against the related race conditions.
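As an illustration of the kind of change being asked for (not the driver's actual code: the nodeServer type, its mu field, and the use of the k8s.io/utils/mount helpers are assumptions), a repeated NodeUnpublishVolume could be treated as a no-op success:

// Sketch of an idempotent NodeUnpublishVolume.
func (ns *nodeServer) NodeUnpublishVolume(ctx context.Context, req *csi.NodeUnpublishVolumeRequest) (*csi.NodeUnpublishVolumeResponse, error) {
	targetPath := req.GetTargetPath()
	if req.GetVolumeId() == "" || targetPath == "" {
		return nil, status.Error(codes.InvalidArgument, "volume ID and target path are required")
	}

	// Serialize concurrent/repeated calls so two unpublish requests for the
	// same volume cannot race each other.
	ns.mu.Lock()
	defer ns.mu.Unlock()

	mounter := mount.New("")
	notMnt, err := mount.IsNotMountPoint(mounter, targetPath)
	if os.IsNotExist(err) {
		// Already unpublished by an earlier call: report success, not failure.
		return &csi.NodeUnpublishVolumeResponse{}, nil
	}
	if err != nil {
		return nil, status.Error(codes.Internal, err.Error())
	}
	if !notMnt {
		if err := mounter.Unmount(targetPath); err != nil {
			return nil, status.Error(codes.Internal, err.Error())
		}
	}
	// Removing an already-removed directory is a no-op, keeping the call idempotent.
	_ = os.RemoveAll(targetPath)
	return &csi.NodeUnpublishVolumeResponse{}, nil
}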

Create the csi-hostpath-provisioner.yaml failed

Hi,
When running deploy/deploy-hostpath.sh, the following happens:

#kubectl apply -f csi-hostpath-provisioner.yaml
# kubectl get statefulset
NAME                       READY   AGE
csi-hostpath-provisioner   0/1     33m
# kubectl describe statefulset csi-hostpath-provisioner
Warning  FailedCreate  1s (x13 over 11s)  statefulset-controller  create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: [spec.volumes[0].hostPath.pathPrefix: Invalid value: "/var/lib/kubelet/plugins/csi-hostpath": is not allowed to be used]

# kubectl get psp
NAME                    PRIV    CAPS   SELINUX    RUNASUSER          FSGROUP     SUPGROUP    READONLYROOTFS   VOLUMES
00-rook-ceph-operator   true    *      RunAsAny   RunAsAny           RunAsAny    RunAsAny    false            *
bcmt                    false   *      RunAsAny   RunAsAny           RunAsAny    RunAsAny    false            *
privileged              true    *      RunAsAny   RunAsAny           RunAsAny    RunAsAny    false            *
restricted              false          RunAsAny   MustRunAsNonRoot   MustRunAs   MustRunAs   false            configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim,hostPath

I have created the privileged PSP, so why does it always report the pod security issue?
What should I change to make it succeed?

Thanks for any comments!

Using `*` in cp commands is a bad idea

Passing cp -a /csi-data-dir/xxxx/* on some shells will attempt to find a file literally named * to copy. We should instead use the cp-specific syntax and append /. to the source path.
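For illustration (assuming the driver shells out to cp when copying volume data; the function and variable names here are hypothetical), the /. form copies the directory's contents without relying on shell globbing:

// Hypothetical sketch: copy the contents of srcDir into dstDir without shell globbing.
// "srcDir/." tells cp to copy the directory's contents, including hidden files,
// and works even when srcDir is empty (unlike "srcDir/*").
func copyVolumeData(srcDir, dstDir string) error {
	cmd := exec.Command("cp", "-a", srcDir+"/.", dstDir)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v: %s", err, out)
	}
	return nil
}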

Consolidate all the hostpath driver specs into one pod

I think we kept attacher separate so that we can easily test with or without it, but I don't see a reason why provisioner, resizer, snapshotter can't all be in the same Pod as the driver. Actually I think our attach required tests are using mock driver, not hostpath driver, so I think it should be safe to bundle attacher in the same pod as well.

Creation of a volume snapshot takes longer than a minute

External snapshotter: v2.0.1

I have the following sequence of events:

  • create a PVC
  • create a job that runs a pod that uses the PVC
  • then I watch for the job's events
  • get an event that the job succeeds
  • create a snapshot of the initial PVC
  • create a new PVC based on that snapshot

Creating the snapshot takes time to complete. In the events, I see the following

0: "2020-09-24T07:49:46Z	PersistentVolumeClaim	waiting for a volume to be created, either by external provisioner "hostpath.csi.k8s.io" or manually created by system administrator"
1: "2020-09-24T07:49:46Z	PersistentVolumeClaim	External provisioner is provisioning volume for claim "default/1996571692015078109""
2: "2020-09-24T07:49:46Z	Job	                    Created pod: 1996571692015078109-xz2p8"
3: "2020-09-24T07:49:46Z	PersistentVolumeClaim	failed to provision volume with StorageClass "csi-hostpath-sc": error getting handle for DataSource Type VolumeSnapshot by Name 2319726611692524402: snapshot 2319726611692524402 is not Ready"
4: "2020-09-24T07:49:46Z	Pod	                    error while running "VolumeBinding" filter plugin for pod "1996571692015078109-xz2p8": pod has unbound immediate PersistentVolumeClaims"
5: "2020-09-24T07:49:46Z	PersistentVolumeClaim	External provisioner is provisioning volume for claim "default/1996571692015078109""
6: "2020-09-24T07:49:46Z	PersistentVolumeClaim	failed to provision volume with StorageClass "csi-hostpath-sc": error getting handle for DataSource Type VolumeSnapshot by Name 2319726611692524402: snapshot 2319726611692524402 is not Ready"
7: "2020-09-24T07:49:47Z	PersistentVolumeClaim	External provisioner is provisioning volume for claim "default/1996571692015078109""
8: "2020-09-24T07:49:47Z	PersistentVolumeClaim	failed to provision volume with StorageClass "csi-hostpath-sc": error getting handle for DataSource Type VolumeSnapshot by Name 2319726611692524402: snapshot 2319726611692524402 is not Ready"
9: "2020-09-24T07:49:49Z	PersistentVolumeClaim	waiting for a volume to be created, either by external provisioner "hostpath.csi.k8s.io" or manually created by system administrator"
10: "2020-09-24T07:49:51Z	PersistentVolumeClaim	External provisioner is provisioning volume for claim "default/1996571692015078109""
11: "2020-09-24T07:49:51Z	PersistentVolumeClaim	failed to provision volume with StorageClass "csi-hostpath-sc": error getting handle for DataSource Type VolumeSnapshot by Name 2319726611692524402: snapshot 2319726611692524402 is not Ready"
12: "2020-09-24T07:49:59Z	PersistentVolumeClaim	External provisioner is provisioning volume for claim "default/1996571692015078109""
13: "2020-09-24T07:49:59Z	PersistentVolumeClaim	failed to provision volume with StorageClass "csi-hostpath-sc": error getting handle for DataSource Type VolumeSnapshot by Name 2319726611692524402: snapshot 2319726611692524402 is not Ready"
14: "2020-09-24T07:50:04Z	PersistentVolumeClaim	waiting for a volume to be created, either by external provisioner "hostpath.csi.k8s.io" or manually created by system administrator"
15: "2020-09-24T07:50:15Z	PersistentVolumeClaim	External provisioner is provisioning volume for claim "default/1996571692015078109""
16: "2020-09-24T07:50:15Z	PersistentVolumeClaim	failed to provision volume with StorageClass "csi-hostpath-sc": error getting handle for DataSource Type VolumeSnapshot by Name 2319726611692524402: snapshot 2319726611692524402 is not Ready"
17: "2020-09-24T07:50:19Z	PersistentVolumeClaim	waiting for a volume to be created, either by external provisioner "hostpath.csi.k8s.io" or manually created by system administrator"
18: "2020-09-24T07:50:34Z	PersistentVolumeClaim	waiting for a volume to be created, either by external provisioner "hostpath.csi.k8s.io" or manually created by system administrator"
19: "2020-09-24T07:50:45Z	Pod	                    error while running "VolumeBinding" filter plugin for pod "1996571692015078109-xz2p8": pod has unbound immediate PersistentVolumeClaims"
20: "2020-09-24T07:50:47Z	PersistentVolumeClaim	External provisioner is provisioning volume for claim "default/1996571692015078109""
21: "2020-09-24T07:50:47Z	PersistentVolumeClaim	Successfully provisioned volume pvc-26fec45d-a128-43be-beb5-6383597f65d8"

In the related fragment from the snapshotter logs, I see that there was an attempt to create the snapshot content (I guess..) and it failed because of "snapshot controller failed to update snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54 on API server: Operation cannot be fulfilled on volumesnapshotcontents.snapshot.storage.k8s.io \"snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54\": the object has been modified". And then there was a pause for a minute between retries:

I0924 07:49:45.982620       1 snapshot_controller_base.go:144] enqueued "snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54" for sync
I0924 07:49:45.982669       1 snapshot_controller_base.go:159] contentWorker[snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54]
I0924 07:49:45.982694       1 util.go:141] storeObjectUpdate: adding content "snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54", version 8718
I0924 07:49:45.982711       1 snapshot_controller.go:58] synchronizing VolumeSnapshotContent[snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54]
I0924 07:49:45.982753       1 snapshot_controller.go:569] Check if VolumeSnapshotContent[snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54] should be deleted.
I0924 07:49:45.982768       1 snapshot_controller.go:78] syncContent: Call CreateSnapshot for content snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54
I0924 07:49:45.982778       1 snapshot_controller.go:133] createSnapshot for content [snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54]: started
I0924 07:49:45.982798       1 snapshot_controller.go:112] scheduleOperation[create-snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54]
I0924 07:49:45.982817       1 snapshot_controller.go:305] createSnapshotWrapper: Creating snapshot for content snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54 through the plugin ...
I0924 07:49:45.982858       1 snapshot_controller.go:218] getCSISnapshotInput for content [snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54]
I0924 07:49:45.982874       1 snapshot_controller.go:477] getSnapshotClass: VolumeSnapshotClassName [csi-hostpath-snapclass]
I0924 07:49:45.982890       1 snapshot_controller.go:606] setAnnVolumeSnapshotBeingCreated: set annotation [snapshot.storage.kubernetes.io/volumesnapshot-being-created:yes] on content [snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54].
I0924 07:49:45.989291       1 snapshot_controller_base.go:144] enqueued "snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54" for sync
I0924 07:49:45.989360       1 snapshot_controller_base.go:159] contentWorker[snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54]
I0924 07:49:45.989390       1 util.go:169] storeObjectUpdate updating content "snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54" with version 8719
I0924 07:49:45.989416       1 snapshot_controller.go:58] synchronizing VolumeSnapshotContent[snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54]
I0924 07:49:45.989506       1 snapshot_controller.go:569] Check if VolumeSnapshotContent[snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54] should be deleted.
I0924 07:49:45.989570       1 snapshot_controller.go:78] syncContent: Call CreateSnapshot for content snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54
I0924 07:49:45.989587       1 snapshot_controller.go:133] createSnapshot for content [snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54]: started
I0924 07:49:45.989624       1 snapshot_controller.go:112] scheduleOperation[create-snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54]
I0924 07:49:45.990333       1 snapshot_controller.go:118] operation "create-snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54" is already running, skipping
I0924 07:49:45.991592       1 snapshot_controller.go:179] updateContentStatusWithEvent[snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54]
I0924 07:49:45.998228       1 snapshot_controller.go:200] updating VolumeSnapshotContent[snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54] error status failed Operation cannot be fulfilled on volumesnapshotcontents.snapshot.storage.k8s.io "snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54": the object has been modified; please apply your changes to the latest version and try again
E0924 07:49:45.998252       1 snapshot_controller.go:139] createSnapshot [create-snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54]: error occurred in createSnapshotWrapper: failed to add VolumeSnapshotBeingCreated annotation on the content snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54: "snapshot controller failed to update snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54 on API server: Operation cannot be fulfilled on volumesnapshotcontents.snapshot.storage.k8s.io \"snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54\": the object has been modified; please apply your changes to the latest version and try again"
E0924 07:49:45.998297       1 goroutinemap.go:150] Operation for "create-snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54" failed. No retries permitted until 2020-09-24 07:49:46.4982667 +0000 UTC m=+10174.504566801 (durationBeforeRetry 500ms). Error: "failed to add VolumeSnapshotBeingCreated annotation on the content snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54: \"snapshot controller failed to update snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54 on API server: Operation cannot be fulfilled on volumesnapshotcontents.snapshot.storage.k8s.io \\\"snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54\\\": the object has been modified; please apply your changes to the latest version and try again\""
I0924 07:50:45.369840       1 reflector.go:278] github.com/kubernetes-csi/external-snapshotter/pkg/client/informers/externalversions/factory.go:117: forcing resync
I0924 07:50:45.370226       1 snapshot_controller_base.go:144] enqueued "snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54" for sync
I0924 07:50:45.370454       1 snapshot_controller_base.go:159] contentWorker[snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54]
I0924 07:50:45.370493       1 util.go:169] storeObjectUpdate updating content "snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54" with version 8719
I0924 07:50:45.370529       1 snapshot_controller.go:58] synchronizing VolumeSnapshotContent[snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54]
I0924 07:50:45.370671       1 snapshot_controller.go:569] Check if VolumeSnapshotContent[snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54] should be deleted.
I0924 07:50:45.370698       1 snapshot_controller.go:78] syncContent: Call CreateSnapshot for content snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54
I0924 07:50:45.370712       1 snapshot_controller.go:133] createSnapshot for content [snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54]: started
I0924 07:50:45.370733       1 snapshot_controller.go:112] scheduleOperation[create-snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54]
I0924 07:50:45.371077       1 snapshot_controller.go:305] createSnapshotWrapper: Creating snapshot for content snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54 through the plugin ...
I0924 07:50:45.371138       1 snapshot_controller.go:218] getCSISnapshotInput for content [snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54]
I0924 07:50:45.371166       1 snapshot_controller.go:477] getSnapshotClass: VolumeSnapshotClassName [csi-hostpath-snapclass]
I0924 07:50:45.371193       1 snapshot_controller.go:606] setAnnVolumeSnapshotBeingCreated: set annotation [snapshot.storage.kubernetes.io/volumesnapshot-being-created:yes] on content [snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54].
I0924 07:50:45.376829       1 util.go:169] storeObjectUpdate updating content "snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54" with version 8792
I0924 07:50:45.376870       1 snapshot_controller.go:621] setAnnVolumeSnapshotBeingCreated: volume snapshot content &{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54 GenerateName: Namespace: SelfLink:/apis/snapshot.storage.k8s.io/v1beta1/volumesnapshotcontents/snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54 UID:1ea897fd-2b7c-428f-baf4-d5fec974f041 ResourceVersion:8792 Generation:1 CreationTimestamp:2020-09-24 07:49:45 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[snapshot.storage.kubernetes.io/volumesnapshot-being-created:yes] OwnerReferences:[] Finalizers:[snapshot.storage.kubernetes.io/volumesnapshotcontent-bound-protection] ClusterName: ManagedFields:[]} Spec:{VolumeSnapshotRef:{Kind:VolumeSnapshot Namespace:default Name:2319726611692524402 UID:db17e1de-4d21-4489-b14e-0ab1588d6a54 APIVersion:snapshot.storage.k8s.io/v1beta1 ResourceVersion:8715 FieldPath:} DeletionPolicy:Delete Driver:hostpath.csi.k8s.io VolumeSnapshotClassName:0xc000382380 Source:{VolumeHandle:0xc000382340 SnapshotHandle:<nil>}} Status:<nil>}
I0924 07:50:45.377581       1 snapshotter.go:56] CSI CreateSnapshot: snapshot-db17e1de-4d21-4489-b14e-0ab1588d6a54
I0924 07:50:45.377625       1 connection.go:182] GRPC call: /csi.v1.Identity/GetPluginInfo
I0924 07:50:45.377077       1 snapshot_controller_base.go:144] enqueued "snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54" for sync
I0924 07:50:45.377711       1 snapshot_controller_base.go:159] contentWorker[snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54]
I0924 07:50:45.377733       1 util.go:169] storeObjectUpdate updating content "snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54" with version 8792
I0924 07:50:45.377788       1 snapshot_controller.go:58] synchronizing VolumeSnapshotContent[snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54]
I0924 07:50:45.377887       1 snapshot_controller.go:569] Check if VolumeSnapshotContent[snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54] should be deleted.
I0924 07:50:45.378103       1 snapshot_controller.go:78] syncContent: Call CreateSnapshot for content snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54
I0924 07:50:45.378112       1 snapshot_controller.go:133] createSnapshot for content [snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54]: started
I0924 07:50:45.378132       1 snapshot_controller.go:112] scheduleOperation[create-snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54]
I0924 07:50:45.378154       1 snapshot_controller.go:118] operation "create-snapcontent-db17e1de-4d21-4489-b14e-0ab1588d6a54" is already running, skipping
I0924 07:50:45.377639       1 connection.go:183] GRPC request: {}
I0924 07:50:45.379860       1 connection.go:185] GRPC response: {"name":"hostpath.csi.k8s.io","vendor_version":"v1.4.0-rc2-0-gee6beeaf"}
I0924 07:50:45.380286       1 connection.go:186] GRPC error: <nil>
I0924 07:50:45.380302       1 connection.go:182] GRPC call: /csi.v1.Controller/CreateSnapshot
I0924 07:50:45.380317       1 connection.go:183] GRPC request: {"name":"snapshot-db17e1de-4d21-4489-b14e-0ab1588d6a54","source_volume_id":"812ac4d5-fe3a-11ea-8aac-0242ac110003"}
I0924 07:50:45.797433       1 connection.go:185] GRPC response: {"snapshot":{"creation_time":{"nanos":381623000,"seconds":1600933845},"ready_to_use":true,"size_bytes":5368709120,"snapshot_id":"a722334a-fe3a-11ea-8aac-0242ac110003","source_volume_id":"812ac4d5-fe3a-11ea-8aac-0242ac110003"}}
I0924 07:50:45.798221       1 connection.go:186] GRPC error: <nil>
I0924 07:50:45.798237       1 snapshotter.go:76] CSI CreateSnapshot: snapshot-db17e1de-4d21-4489-b14e-0ab1588d6a54 driver name [hostpath.csi.k8s.io] snapshot ID [a722334a-fe3a-11ea-8aac-0242ac110003] time stamp [&{1600933845 381623000 {} [] 0}] size [5368709120] readyToUse [true]
I0924 07:50:45.798268       1 snapshot_controller.go:340] Created snapshot: driver hostpath.csi.k8s.io, snapshotId a722334a-fe3a-11ea-8aac-0242ac110003, creationTime 2020-09-24 07:50:45.381623 +0000 UTC, size 5368709120, readyToUse true

Why can this happen? Am I doing something wrong? Is there a way to create the snapshot on the first retry (or to decrease the time between retries)?

deploy CSIDriver object

For 1.15 a CSIDriver object is optional, but it would be good practice to add it. It must not have the "mode" field.

For the upcoming 1.16, we definitely should deploy one and set "mode: persistent+ephemeral", because then we can test ephemeral mode with that deployment in the Prow CI.

Example not working on minikube 1.14

I came to this repo while reading https://kubernetes.io/blog/2019/03/07/raw-block-volume-support-to-beta/. I can't understand the purpose of mixing hostpath and raw block volumes. Can you please help me understand it?

  • Ran the script for 1.14
$ kubectl get pod
NAME                         READY   STATUS              RESTARTS   AGE
csi-hostpath-attacher-0      1/1     Running             0          6m2s
csi-hostpath-provisioner-0   1/1     Running             0          6m1s
csi-hostpath-snapshotter-0   1/1     Running             0          6m
csi-hostpath-socat-0         1/1     Running             0          5m59s
csi-hostpathplugin-0         1/3     RunContainerError   0          6m2s
  • The following errors are reported for the pod:
Events:
  Type     Reason     Age    From               Message
  ----     ------     ----   ----               -------
  Normal   Scheduled  5m7s   default-scheduler  Successfully assigned default/csi-hostpathplugin-0 to minikube
  Normal   Pulling    5m4s   kubelet, minikube  Pulling image "quay.io/k8scsi/csi-node-driver-registrar:v1.1.0"
  Normal   Pulled     4m54s  kubelet, minikube  Successfully pulled image "quay.io/k8scsi/csi-node-driver-registrar:v1.1.0"
  Normal   Created    4m53s  kubelet, minikube  Created container node-driver-registrar
  Normal   Started    4m53s  kubelet, minikube  Started container node-driver-registrar
  Normal   Pulling    4m53s  kubelet, minikube  Pulling image "quay.io/k8scsi/hostpathplugin:v1.1.0"
  Normal   Pulled     3m47s  kubelet, minikube  Successfully pulled image "quay.io/k8scsi/hostpathplugin:v1.1.0"
  Normal   Created    3m46s  kubelet, minikube  Created container hostpath
  Warning  Failed     3m46s  kubelet, minikube  Error: failed to start container "hostpath": Error response from daemon: OCI runtime create failed: open /var/run/docker/runtime-runc/moby/hostpath/state.json: no such file or directory: unknown
  Normal   Pulling    3m46s  kubelet, minikube  Pulling image "quay.io/k8scsi/livenessprobe:v1.1.0"
  Warning  Failed     3m45s  kubelet, minikube  Failed to pull image "quay.io/k8scsi/livenessprobe:v1.1.0": rpc error: code = Unknown desc = Error response from daemon: Get https://quay.io/v2/: dial tcp: lookup quay.io on [::1]:53: read udp [::1]:42931->[::1]:53: read: connection refused
  Warning  Failed     3m45s  kubelet, minikube  Error: ErrImagePull

Not able to mount volume to csi-app pod

While running the example application to check and validate the deployment, the csi-app pod gets stuck in the ContainerCreating state. Here is the description:

Name: my-csi-app
Namespace: default
Priority: 0
Node: minikube/192.168.99.100
Start Time: Wed, 12 Feb 2020 14:59:44 +0530
Labels:
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"my-csi-app","namespace":"default"},"spec":{"containers":[{"command":[...
Status: Pending
IP:
IPs:
Containers:
my-frontend:
Container ID:
Image: busybox
Image ID:
Port:
Host Port:
Command:
sleep
1000000
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
Mounts:
/data from my-csi-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-thlml (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
my-csi-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: csi-pvc
ReadOnly: false
default-token-thlml:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-thlml
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message


Normal Scheduled 21m default-scheduler Successfully assigned default/my-csi-app to minikube
Normal SuccessfulAttachVolume 21m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-efb22d61-2728-4e6b-8ed3-cc7554adaaa8"
Warning FailedMount 17m (x10 over 21m) kubelet, minikube MountVolume.MountDevice failed for volume "pvc-efb22d61-2728-4e6b-8ed3-cc7554adaaa8" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name hostpath.csi.k8s.io not found in the list of registered CSI drivers
Warning FailedMount 3m47s (x5 over 17m) kubelet, minikube Unable to attach or mount volumes: unmounted volumes=[my-csi-volume], unattached volumes=[my-csi-volume default-token-thlml]: timed out waiting for the condition
Warning FailedMount 93s (x4 over 19m) kubelet, minikube Unable to attach or mount volumes: unmounted volumes=[my-csi-volume], unattached volumes=[default-token-thlml my-csi-volume]: timed out waiting for the condition
Warning FailedMount 68s (x8 over 15m) kubelet, minikube MountVolume.SetUp failed for volume "pvc-efb22d61-2728-4e6b-8ed3-cc7554adaaa8" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = NotFound desc = volume id 344eeca7-4d7a-11ea-b921-0242ac110005 does not exit in the volumes list

Here is description of pv
Name: pvc-efb22d61-2728-4e6b-8ed3-cc7554adaaa8
Labels:
Annotations: pv.kubernetes.io/provisioned-by: hostpath.csi.k8s.io
Finalizers: [kubernetes.io/pv-protection]
StorageClass: csi-hostpath-sc
Status: Bound
Claim: default/csi-pvc
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity:
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: hostpath.csi.k8s.io
VolumeHandle: 344eeca7-4d7a-11ea-b921-0242ac110005
ReadOnly: false
VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1581499491410-8081-hostpath.csi.k8s.io
Events:

Here is description of pvc
Name: csi-pvc
Namespace: default
StorageClass: csi-hostpath-sc
Status: Bound
Volume: pvc-efb22d61-2728-4e6b-8ed3-cc7554adaaa8
Labels:
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"csi-pvc","namespace":"default"},"spec":{"accessMode...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: hostpath.csi.k8s.io
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: my-csi-app
Events:
Type Reason Age From Message


Normal ExternalProvisioning 24m persistentvolume-controller waiting for a volume to be created, either by external provisioner "hostpath.csi.k8s.io" or manually created by system administrator
Normal Provisioning 24m hostpath.csi.k8s.io_csi-snapshotter-0_56c59124-742c-49a0-9b43-7433d37f0584 External provisioner is provisioning volume for claim "default/csi-pvc"
Normal ProvisioningSucceeded 24m hostpath.csi.k8s.io_csi-snapshotter-0_56c59124-742c-49a0-9b43-7433d37f0584 Successfully provisioned volume pvc-efb22d61-2728-4e6b-8ed3-cc7554adaaa8

Can anyone suggest why the csi-app pod's volume mount failed? I am deploying the csi-hostpath driver on Kubernetes 1.17.
Thanks

Need to set VolumeContentSource if creating volume from snapshot

In CreateVolume, we should set VolumeContentSource after creating a volume from a snapshot succeeds.

return &csi.CreateVolumeResponse{
	Volume: &csi.Volume{
		VolumeId:      volumeID,
		CapacityBytes: req.GetCapacityRange().GetRequiredBytes(),
		VolumeContext: req.GetParameters(),
	},
}, nil
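A hedged sketch of the suggested change, using the VolumeContentSource types from the CSI Go bindings (volumeID and the surrounding CreateVolume context are as above; the exact placement inside the driver is an assumption):

// Sketch: echo the snapshot source back in the response when the volume was
// created from a snapshot.
volume := &csi.Volume{
	VolumeId:      volumeID,
	CapacityBytes: req.GetCapacityRange().GetRequiredBytes(),
	VolumeContext: req.GetParameters(),
}
if src := req.GetVolumeContentSource(); src.GetSnapshot() != nil {
	volume.ContentSource = &csi.VolumeContentSource{
		Type: &csi.VolumeContentSource_Snapshot{
			Snapshot: &csi.VolumeContentSource_SnapshotSource{
				SnapshotId: src.GetSnapshot().GetSnapshotId(),
			},
		},
	}
}
return &csi.CreateVolumeResponse{Volume: volume}, nil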

k8s 1.17: csi-hostpath-snapclass is not created during the driver deployment

I'm trying to deploy the driver on minikube with --kubernetes-version v1.17.9, so I have installed the CRDs and then run csi-driver-host-path/deploy/kubernetes-1.17/deploy.sh. The last line in the deploy log was "deploying snapshotclass based on snapshotter version". Then, when I tried to create a snapshot with examples/csi-snapshot-v1beta1.yaml, I got the following error for the new-snapshot-demo volume snapshot:

$ kubectl describe volumesnapshot                                                                
Name:         new-snapshot-demo
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"snapshot.storage.k8s.io/v1beta1","kind":"VolumeSnapshot","metadata":{"annotations":{},"name":"new-snapshot-demo","namespace...
API Version:  snapshot.storage.k8s.io/v1beta1
Kind:         VolumeSnapshot
Metadata:
  Creation Timestamp:  2020-09-14T10:21:34Z
  Generation:          1
  Resource Version:    59736
  Self Link:           /apis/snapshot.storage.k8s.io/v1beta1/namespaces/default/volumesnapshots/new-snapshot-demo
  UID:                 1acdcca0-2cbd-4825-b15d-1eb6edd793b8
Spec:
  Source:
    Persistent Volume Claim Name:  csi-pvc
  Volume Snapshot Class Name:      csi-hostpath-snapclass
Status:
  Error:
    Message:     Failed to get snapshot class with error failed to retrieve snapshot class csi-hostpath-snapclass from the informer: "volumesnapshotclass.snapshot.storage.k8s.io \"csi-hostpath-snapclass\" not found"
    Time:        2020-09-14T10:21:34Z
  Ready To Use:  false
Events:
  Type     Reason                  Age    From                 Message
  ----     ------                  ----   ----                 -------
  Warning  GetSnapshotClassFailed  4m23s  snapshot-controller  Failed to get snapshot class with error failed to retrieve snapshot class csi-hostpath-snapclass from the informer: "volumesnapshotclass.snapshot.storage.k8s.io \"csi-hostpath-snapclass\" not found"

In the "deploy-hostpath.sh" I see that the csi-hostpath-snapshotclass.yaml applied for k8s 1.16 only. Is it right? Am I doing something wrong?

master not a complete copy of the original code

When moving the hostpath driver, some changes were lost. It looks like an older version of the original code was copied, in particular this change here is missing: 6820279

git diff 6820279..660c357 pkg/hostpath/controllerserver.go

...
-       for _, cap := range caps {
-               if cap.GetBlock() != nil {
-                       return nil, status.Error(codes.Unimplemented, "Block Volume not supported")
-               }
-       }
...

The commit is there because the full history was spliced in, but the actual tree that it was spliced into was based on an older copy.

/assign

Hostpath driver fails to delete volumesnapshotcontent whose DeletionPolicy is patched to `Delete`

What happened:
A patched volumesnapshotcontent object does not get deleted.

These are the steps that I followed:

  1. Create a volumesnapshot using a volumesnapshotclass with DeletionPolicy Retain. This created the volumesnapshot and the volumesnapshotcontent object with DeletionPolicy: Retain
  2. Delete the volumesnapshot object created above. Now because the DeletionPolicy was Retain the volumesnapshotcontent object is preserved.
  3. Patch the volumesnapshotcontent object to set its DeletionPolicy to Delete
  4. Now delete the volumesnapshotcontent. At this step, the object's finalizer snapshot.storage.kubernetes.io/volumesnapshotcontent-bound-protection is not removed and the deletion just hangs at the terminal.

What I expected:
I expected the object to be deleted.

Flaky test: DeleteVolume fails due to device busy

https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/kubernetes-csi_csi-driver-host-path/95/pull-kubernetes-csi-csi-driver-host-path-1-13-on-kubernetes-1-13/1170365877112016899

Test fails:

PersistentVolume pvc-b5383c0a-d189-11e9-a013-0242ac110002 still exists within 5m0s

Provisioner logs:

I0907 16:09:21.681142       1 controller.go:192] GRPC error: rpc error: code = Internal desc = failed to delete volume b5755daf-d189-11e9-93ab-2ab9e465132f: unlinkat /csi-data-dir/b5755daf-d189-11e9-93ab-2ab9e465132f: device or resource busy
E0907 16:09:21.681197       1 controller.go:1120] delete "pvc-b5383c0a-d189-11e9-a013-0242ac110002": volume deletion failed: rpc error: code = Internal desc = failed to delete volume b5755daf-d189-11e9-93ab-2ab9e465132f: unlinkat /csi-data-dir/b5755daf-d189-11e9-93ab-2ab9e465132f: device or resource busy

Hostpath driver logs show stage, publish, unstage being called, but not unpublish. Unfortunately, we don't have k8s system logs to see what's going on in kubelet.

system:serviceaccount:cert-manager:csi-snapshotter cannot create resource ...

~/csi-driver-host-path/deploy/kubernetes-1.16$ ./deploy-hostpath.sh

Installation failed after a 5-minute timeout.

pod/csi-hostpath-snapshotter-0                0/1     CrashLoopBackOff   7          14m 
Status:"Failure", Message:"customresourcedefinitions.apiextensions.k8s.io is forbidden: User \"system:serviceaccount:cert-manager:csi-snapshotter\" cannot create resource \"customresourcedefinitions\" in API group \"apiextensions.k8s.io\" at the cluster scope", Reason:"Forbidden"
Linux Debian-911-stretch-64-minimal 4.9.0-11-amd64 #1 SMP Debian 4.9.189-3 (2019-09-02) x86_64 GNU/Linux
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

csi-snapshotter pod throws an error "volumeID is not exist"

While trying out VolumeSnapshot, I get this error intermittently:

Failed to create snapshot: failed to take snapshot of the volume, pvc-7cf67f78-a5b9-11ea-81ba-42010a8e0fc6: "rpc error: code = Internal desc = volumeID is not exist

The PVC pvc-7cf67f78-a5b9-11ea-81ba-42010a8e0fc6 definitely exists in the namespace and is accessible.

The csi-snapshotter and csi-hostpathplugin logs show the same errors.

k8s version: 1.14, 1.15, 1.16
CSI versions: 1.2.0, 1.3.0

The VolumeSnapshotClass and VolumeSnapshot are correct per the docs. It works some of the time with the same setups listed above, and sometimes it does not.

Remove service from hostpath specs

They're unnecessary and actually a random port is specified, which may not be desirable.

We got confirmation from sig-apps in kubernetes/kubernetes#69608 that this will be supported. StatefulSet is v1 now, so they cannot change the behavior without it being breaking change.

/help
/kind cleanup

"node affinity conflict" following Deployment instructions in README

With a microk8s v1.16.2 cluster on Fedora, I followed the instructions in the README, replacing 1.13 in the README with 1.16, i.e.,

./deploy/kubernetes-1.16/deploy-hostpath.sh

One problem encountered was that csi-hostpathplugin-0 stuck in ContainerCreating

    Warning  FailedMount  108s (x9 over 3m56s)  kubelet, localhost.localdomain  MountVolume.SetUp failed for volume "registration-dir" : hostPath type check failed: /var/lib/kubelet/plugins_registry is not a directory

which I got past by

sudo mkdir /var/lib/kubelet/plugins_registry

Maybe csi-hostpath-plugin.yaml should use type DirectoryOrCreate for that path?

However, the problem that I haven't worked around is that the consumer pod doesn't schedule:

kubectl describe pod my-csi-app
 ...
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) had volume node affinity conflict.

kubectl describe pv <ID>
...
Node Affinity:
  Required Terms:
    Term 0:        topology.hostpath.csi/node in [localhost.localdomain]
...

I haven't yet learned enough to resolve this (if it's my error) or suggest a correction (if the documentation or something else is wrong). I see that https://kubernetes-csi.github.io/docs/topology.html says that a CSINode object should be created, but

kubectl get csinode
No resources found.

According to the same source, https://kubernetes-csi.github.io/docs/csi-node-object.html node-driver-registrar is responsible for creating that object, but evidently it doesn't, even though there are no errors in the log. I've found no evidence in grepping the node-driver-registrar source code that it does or should.

Deployment by README steps fails in 4-node cluster

Deployment as shown in the README will succeed only if the attacher, provisioner, hostpathplugin, and my-csi-app all run on the same node. When I try the steps in a 4-node cluster, the pod states are:

NAME                         READY   STATUS              RESTARTS   AGE     IP            NODE   
csi-hostpath-attacher-0      1/1     Running             0          3m56s   10.244.1.59   host-1
csi-hostpath-provisioner-0   1/1     Running             0          3m56s   10.244.2.11   host-2 
csi-hostpathplugin-8knjt     2/2     Running             0          3m56s   192.168.8.4   host-1
csi-hostpathplugin-bw855     2/2     Running             0          3m56s   192.168.8.8   host-3
csi-hostpathplugin-dv8x7     2/2     Running             0          3m56s   192.168.8.6   host-2
my-csi-app                   0/1     ContainerCreating   0          3m3s    <none>        host-3

my-csi-app does not reach Running state as it can't mount:

mount: mounting /tmp/94131d40-35c8-11e9-946e-deadbeef0102 on
/var/lib/kubelet/pods/e884bdff-367e-11e9-8553-deadbeef0100/volumes/kubernetes.io~csi/pvc-8d479cfd-35c8-11e9-8553-deadbeef0100/mount
failed: No such file or directory

Failed to run hack/e2e-hostpath.sh with latest csi-test v3.0.0

I updated the version from v1.0.0-rc2 to v3.0.0 in hack/get-sanity.sh.
Running ./hack/e2e-hostpath.sh then failed with the error below:

./hack/e2e-hostpath.sh 
Downloading csi-test from https://github.com/kubernetes-csi/csi-test/releases/download/v3.0.0/csi-sanity-v3.0.0.linux.amd64.tar.gz

gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
cp: cannot stat '/tmp/csi-sanity/csi-sanity': No such file or directory
make: *** No rule to make target 'hostpath'.  Stop.
sudo: _output/hostpathplugin: command not found
sudo: /home/mrajanna/workspace/bin/csi-sanity: command not found
kill: sending signal to 3643838 failed: No such process

The csi-test release doesn't contain a linux-amd64 tar file, and the make hostpath target is also not present in csi-driver-host-path.

map access needs lock

All hostPathVolumes and hostPathSnapshots map updates/reads need a lock.
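A minimal sketch of what that could look like, assuming the maps live on a hostPath struct (the field and method names below are illustrative only, not the driver's actual layout):

// Illustrative sketch: guard the volume map with a sync.RWMutex.
type hostPath struct {
	mu              sync.RWMutex
	hostPathVolumes map[string]hostPathVolume
}

func (hp *hostPath) getVolume(volID string) (hostPathVolume, bool) {
	hp.mu.RLock()
	defer hp.mu.RUnlock()
	vol, ok := hp.hostPathVolumes[volID]
	return vol, ok
}

func (hp *hostPath) addVolume(volID string, vol hostPathVolume) {
	hp.mu.Lock()
	defer hp.mu.Unlock()
	hp.hostPathVolumes[volID] = vol
}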

/help
/good-first-issue

implement optional ControllerPublishVolume

The csi-driver-host-path deployment with external-attacher is used to test that external-attacher works. However, because currently csi-driver-host-path doesn't actually implement ControllerPublishVolume, external-attacher uses a simplified code path (#78 (comment)).

To test the actual code path for drivers that support attach, csi-driver-host-path should get a boolean command line flag -controllerPublishVolume that, if set, enables an implementation of ControllerPublishVolume. That implementation doesn't need to do anything besides ensuring that ControllerPublishVolume and ControllerUnpublishVolume are called correctly. The node operations must check that ControllerPublishVolume was called, which can be done by returning some publish_context that then gets checked.
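As a rough sketch of the idea only (flag handling omitted; the publishInfoKey constant and the controllerServer receiver are made up for illustration), ControllerPublishVolume could record a marker in PublishContext that the node side then verifies:

// Illustrative sketch: a do-nothing ControllerPublishVolume that still lets
// the node side verify it was called.
const publishInfoKey = "hostpath.csi.k8s.io/attached" // hypothetical key

func (cs *controllerServer) ControllerPublishVolume(ctx context.Context, req *csi.ControllerPublishVolumeRequest) (*csi.ControllerPublishVolumeResponse, error) {
	if req.GetVolumeId() == "" || req.GetNodeId() == "" {
		return nil, status.Error(codes.InvalidArgument, "volume ID and node ID are required")
	}
	// Nothing to attach for a hostpath volume; just hand back a marker.
	return &csi.ControllerPublishVolumeResponse{
		PublishContext: map[string]string{publishInfoKey: req.GetVolumeId()},
	}, nil
}

// On the node side, NodeStageVolume/NodePublishVolume would check
// req.GetPublishContext()[publishInfoKey] and fail if it is missing.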

topology support

The hostpath driver should support topology on Kubernetes 1.14. Then applications no longer need to be scheduled manually onto the right node and we can remove that workaround from the E2E testing in Kubernetes.

It also might make the driver complete enough to serve as default storage driver in a KinD cluster.
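For illustration, topology support typically means that NodeGetInfo advertises a node-scoped topology segment, which the external-provisioner can then turn into node affinity for the created volume. The key below matches the topology.hostpath.csi/node label seen elsewhere in these issues; the nodeServer type and its nodeID field are assumptions, not the driver's actual code:

// Illustrative sketch: advertise per-node topology from NodeGetInfo.
const topologyKey = "topology.hostpath.csi/node"

func (ns *nodeServer) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (*csi.NodeGetInfoResponse, error) {
	return &csi.NodeGetInfoResponse{
		NodeId: ns.nodeID, // assumed field holding the node name
		AccessibleTopology: &csi.Topology{
			Segments: map[string]string{topologyKey: ns.nodeID},
		},
	}, nil
}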

Cleanup readme

The README should contain some basic information like:

  • compatibility with K8s releases and CSI spec versions
  • features implemented in the driver
  • how to deploy the driver

More advanced information like development, testing, how to use with snapshots and inline volumes can go in a separate docs/ folder.

/help

hostpath.csi.k8s.io not found in the list of registered CSI drivers

While running through the example with MicroK8s, I keep running into this error when trying to start the my-csi-app pod:

Warning FailedMount 5s (x6 over 22s) kubelet, rkimmel-dt MountVolume.MountDevice failed for volume "pvc-4d70c760-a4b0-11e9-8376-1866da0e836f" : driver name hostpath.csi.k8s.io not found in the list of registered CSI drivers

Looks like for some reason the hostpath csi driver is not getting registered but I'm not sure where to look to fix that or find out why it's happening.

Any help with this would be greatly appreciated

1.16 deployment doesn't work on older clusters

New CI jobs testing the 1.16 deployment on older K8s versions are failing because of the dependency on the new volumeLifecycleModes field:

https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-kubernetes-csi-1-16-on-kubernetes-1-15/1191821907750555651

error: error validating "STDIN": error validating data: ValidationError(CSIDriver.spec): unknown field "volumeLifecycleModes" in io.k8s.api.storage.v1beta1.CSIDriverSpec; if you choose to ignore these errors, turn validation off with --validate=false

I think this is working as intended since the 1.16 deployment depends on new APIs added in K8s 1.16. I think that means we should remove any CI jobs that test newer deployments against older versions. @pohly wdyt?

/kind failing-test

Hostpath across multiple nodes

Hi folks

I am trying to use the hostpath csi across multiple nodes but am running into an issue

This is the error I see when I create my app:

`
Events:
Type Reason Age From Message


Normal Scheduled 14m default-scheduler Successfully assigned default/my-csi-app to node4
Normal SuccessfulAttachVolume 14m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-1227aa63-6dee-11e9-af3e-02e1d85c598e"
Warning FailedMount 73s (x6 over 12m) kubelet, node4 Unable to mount volumes for pod "my-csi-app_default(3a1766c5-6dee-11e9-af3e-02e1d85c598e)": timeout expired waiting for volumes to attach or mount for pod "default"/"my-csi-app". list of unmounted volumes=[my-csi-volume]. list of unattached volumes=[my-csi-volume default-token-pglw4]
Warning FailedMount 8s (x15 over 14m) kubelet, node4 MountVolume.SetUp failed for volume "pvc-1227aa63-6dee-11e9-af3e-02e1d85c598e" : rpc error: code = NotFound desc = volume id 12311aad-6dee-11e9-aece-42010a960fc8 does not exit in the volumes list
`

I've changed the plugin deployment YAML to use a DaemonSet instead; here is what it looks like:

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-hostpathplugin
spec:
  selector:
    matchLabels:
      app: csi-hostpathplugin
  template:
    metadata:
      labels:
        app: csi-hostpathplugin
    spec:
      hostNetwork: true
      containers:
        - name: node-driver-registrar
          image: quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock"]
          args:
            - --v=5
            - --csi-address=/csi/csi.sock
            - --kubelet-registration-path=/var/lib/kubelet/plugins/csi-hostpath/csi.sock
          securityContext:
            privileged: true
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
            - mountPath: /registration
              name: registration-dir
            - mountPath: /csi-data-dir
              name: csi-data-dir
        - name: hostpath
          image: quay.io/k8scsi/hostpathplugin:v1.1.0
          args:
            - "--v=5"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--nodeid=$(KUBE_NODE_NAME)"
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          securityContext:
            privileged: true
          ports:
            - containerPort: 9898
              name: healthz
              protocol: TCP
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: healthz
            initialDelaySeconds: 10
            timeoutSeconds: 3
            periodSeconds: 2
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
            - mountPath: /var/lib/kubelet/pods
              mountPropagation: Bidirectional
              name: mountpoint-dir
            - mountPath: /var/lib/kubelet/plugins
              mountPropagation: Bidirectional
              name: plugins-dir
            - mountPath: /csi-data-dir
              name: csi-data-dir
        - name: liveness-probe
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
          image: quay.io/k8scsi/livenessprobe:v1.1.0
          args:
            - --csi-address=/csi/csi.sock
            - --connection-timeout=3s
            - --health-port=9898
      volumes:
        - hostPath:
            path: /var/lib/kubelet/plugins/csi-hostpath
            type: DirectoryOrCreate
          name: socket-dir
        - hostPath:
            path: /var/lib/kubelet/pods
            type: DirectoryOrCreate
          name: mountpoint-dir
        - hostPath:
            path: /var/lib/kubelet/plugins_registry
            type: Directory
          name: registration-dir
        - hostPath:
            path: /var/lib/kubelet/plugins
            type: Directory
          name: plugins-dir
        - hostPath:
            path: /storage/
            type: DirectoryOrCreate
          name: csi-data-dir

I have 2 master and 2 worker nodes. When the app is created on one of the nodes, it works fine, but on the second one it gives me the error above.

NAME READY STATUS RESTARTS AGE
csi-hostpath-attacher-0 1/1 Running 0 19m
csi-hostpath-provisioner-0 1/1 Running 0 19m
csi-hostpath-snapshotter-0 1/1 Running 0 19m
csi-hostpath-socat-0 1/1 Running 0 19m
csi-hostpathplugin-75pwx 3/3 Running 0 19m
csi-hostpathplugin-xx2hj 3/3 Running 0 19m
my-csi-app 0/1 ContainerCreating 0 17m

Also, all nodes have the same file system and directory structure

Any help would be much appreciated

Thanks!

Restore from snapshot is always empty

Restore from snapshot creates a new volume; however, the data of the original volume is not restored.

Source PVC with data inside:
[root@k8smaster examples]# kubectl exec -it my-csi-app /bin/sh
/ #
/ #
/ # cd /data
/data # ls -lrt
total 4
-rw-r--r-- 1 root root 7 Feb 18 12:54 TESTFi
/data #
/data # cat TESTFi
FDFDSD
/data #

Created snapshot
[root@k8smaster csi-driver-host-path]# kubectl apply -f examples/csi-snapshot.yaml
volumesnapshot.snapshot.storage.k8s.io/new-snapshot-demo created
[root@k8smaster csi-driver-host-path]#

[root@k8smaster examples]# kubectl get volumesnapshot
NAME AGE
new-snapshot-demo 17m
[root@k8smaster examples]#
[root@k8smaster examples]# kubectl describe volumesnapshotcontent
Name: snapcontent-b42fa9dd-9405-4f1e-8044-4ef252c7703f
Namespace:
Labels:
Annotations:
API Version: snapshot.storage.k8s.io/v1alpha1
Kind: VolumeSnapshotContent
Metadata:
Creation Timestamp: 2020-02-18T12:55:33Z
Finalizers:
snapshot.storage.kubernetes.io/volumesnapshotcontent-protection
Generation: 1
Resource Version: 1179
Self Link: /apis/snapshot.storage.k8s.io/v1alpha1/volumesnapshotcontents/snapcontent-b42fa9dd-9405-4f1e-8044-4ef252c7703f
UID: 2ece1990-7397-467c-abcd-328aede2d570
Spec:
Csi Volume Snapshot Source:
Creation Time: 1582030533340217899
Driver: hostpath.csi.k8s.io
Restore Size: 1073741824
Snapshot Handle: f323fa2e-524d-11ea-a162-2e85f875df77
Deletion Policy: Delete
Persistent Volume Ref:
API Version: v1
Kind: PersistentVolume
Name: pvc-cdbe2464-e081-4614-b29c-f11fece38654
Resource Version: 809
UID: 61e0d65e-a4b6-44ef-8ecb-6993852b0c8c
Snapshot Class Name: csi-hostpath-snapclass
Volume Snapshot Ref:
API Version: snapshot.storage.k8s.io/v1alpha1
Kind: VolumeSnapshot
Name: new-snapshot-demo
Namespace: default
Resource Version: 1175
UID: b42fa9dd-9405-4f1e-8044-4ef252c7703f
Events:
[root@k8smaster examples]#

Restoring from snapshot
[root@k8smaster csi-driver-host-path]# kubectl apply -f examples/csi-restore.yaml
persistentvolumeclaim/hpvc-restore created
[root@k8smaster csi-driver-host-path]#

[root@k8smaster examples]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-pvc Bound pvc-cdbe2464-e081-4614-b29c-f11fece38654 1Gi RWO csi-hostpath-sc 22m
hpvc-restore Bound pvc-6cff0d9c-785e-4f92-acb6-e4a013d120f0 1Gi RWO csi-hostpath-sc 17m
[root@k8smaster examples]#

However, the restored volume is empty.

  • Tried multiple times with the same result.
    [root@k8smaster examples]# kubectl exec -it $(kubectl get pods --selector app=csi-hostpathplugin -o jsonpath='{.items[*].metadata.name}') -c hostpath /bin/sh
    / #
    / #
    / # cd /csi-data-dir/
    /csi-data-dir #
    /csi-data-dir # ls -lrt
    total 6
    drwxr-xr-x 2 root root 20 Feb 18 12:54 56c65c8a-524d-11ea-a162-2e85f875df77
    -rw-r--r-- 1 root root 135 Feb 18 12:55 f323fa2e-524d-11ea-a162-2e85f875df77.tgz
    drwxr-xr-x 2 root root 6 Feb 18 12:55 fcbe5c00-524d-11ea-a162-2e85f875df77
    /csi-data-dir # ls -l 56c65c8a-524d-11ea-a162-2e85f875df77
    total 4
    -rw-r--r-- 1 root root 7 Feb 18 12:54 TESTFi
    /csi-data-dir #
    /csi-data-dir # tar tvfz f323fa2e-524d-11ea-a162-2e85f875df77.tgz
    drwxr-xr-x root/root 0 2020-02-18 12:54:48 ./
    -rw-r--r-- root/root 7 2020-02-18 12:54:48 ./TESTFi
    /csi-data-dir #
    /csi-data-dir # ls -l fcbe5c00-524d-11ea-a162-2e85f875df77
    total 0
    /csi-data-dir #
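For reference, the listing above shows the snapshot stored as a .tgz file under /csi-data-dir, so a restore essentially means "untar that archive into the new volume's directory". A hedged sketch of what that could look like inside the driver (the function name and error handling are illustrative, not the actual implementation):

// Illustrative sketch: populate a freshly created volume directory from a snapshot archive.
func restoreFromSnapshot(snapshotPath, volumePath string) error {
	// e.g. snapshotPath = "/csi-data-dir/<snapshot-id>.tgz", volumePath = "/csi-data-dir/<volume-id>"
	cmd := exec.Command("tar", "zxvf", snapshotPath, "-C", volumePath)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("restoring snapshot %s failed: %v: %s", snapshotPath, err, out)
	}
	return nil
}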

PreStop hook always fails

The csi-hostpath manifests here and in kubernetes/kubernetes contain a PreStop hook that removes the driver socket before the container dies.

Since the node-driver-registrar container is distroless, the hook always fails:

I0330 08:31:17.836238 1362 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ephemeral-5231", Name:"csi-hostpathplugin-0", UID:"ad6006d7-6425-4661-829e-8106640183fb", APIVersion:"v1", ResourceVersion:"1676", FieldPath:"spec.containers{node-driver-registrar}"}): type: 'Warning' reason: 'FailedPreStopHook' Exec lifecycle hook ([/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock]) for Container "node-driver-registrar" in Pod "csi-hostpathplugin-0_ephemeral-5231(ad6006d7-6425-4661-829e-8106640183fb)" failed - error: command '/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock' exited with 126: , message: "OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused \"exec: \\\"/bin/sh\\\": stat /bin/sh: no such file or directory\": unknown\r\n"

Sadly, events are not available in Prow job artifacts; this is visible only in the kubelet logs.

Clone support

With the addition of clone support in kubernetes and in progress for the external-provisioner side car we should support clone operations in the hostpath driver.

Add support for volume resizing.

In preparation for volume resizing going beta, we should add support for it in the hostpath driver and make sure the tests run with it.
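As illustration only (the hostpath driver tracks volume sizes itself, so "expanding" is mostly bookkeeping; the controllerServer receiver, getVolumeByID, updateVolume, and VolSize names below are assumptions), a minimal ControllerExpandVolume could look like:

// Illustrative sketch of ControllerExpandVolume for a hostpath-style driver.
func (cs *controllerServer) ControllerExpandVolume(ctx context.Context, req *csi.ControllerExpandVolumeRequest) (*csi.ControllerExpandVolumeResponse, error) {
	volID := req.GetVolumeId()
	if volID == "" {
		return nil, status.Error(codes.InvalidArgument, "volume ID is required")
	}
	capacity := req.GetCapacityRange().GetRequiredBytes()

	// Hypothetical accessors: look up the tracked volume and record its new size.
	vol, err := cs.getVolumeByID(volID)
	if err != nil {
		return nil, status.Error(codes.NotFound, err.Error())
	}
	if vol.VolSize < capacity {
		vol.VolSize = capacity
		if err := cs.updateVolume(volID, vol); err != nil {
			return nil, status.Error(codes.Internal, err.Error())
		}
	}
	return &csi.ControllerExpandVolumeResponse{
		CapacityBytes:         capacity,
		NodeExpansionRequired: false,
	}, nil
}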

Error deploying csi-hostpath-driverinfo.yaml (STDIN: can't find kind CSIDriver) on 1.15, 1.16 and 1.17 K8s

Hello.

I have just cloned the repo for the host-path-csi-project:
https://github.com/kubernetes-csi/csi-driver-host-path

I have tried with 1.15, 1.16, and now 1.17.4 K8s (which is what this log is from).

When I do the deployment (again, I'm using the 1.17+ K8s instructions here, but I got stuck in both places) I get the same error when I run the command:

deploy/kubernetes-latest/deploy.sh

as outlined here: https://github.com/kubernetes-csi/csi-driver-host-path/blob/master/docs/deploy-1.17-and-later.md

ERROR during deployment:
deploying hostpath components
deploy/kubernetes-latest/hostpath/csi-hostpath-attacher.yaml
using image: quay.io/k8scsi/csi-attacher:v3.0.0-rc1
service/csi-hostpath-attacher unchanged
statefulset.apps/csi-hostpath-attacher unchanged
deploy/kubernetes-latest/hostpath/csi-hostpath-driverinfo.yaml
error: unable to recognize "STDIN": no matches for kind "CSIDriver" in version "storage.k8s.io/v1"
modified version of deploy/kubernetes-latest/hostpath/csi-hostpath-driverinfo.yaml:
apiVersion: storage.k8s.io/v1

I have verified that the CRDs are installed, and I can see that the API "storage.k8s.io/v1" has a CSIDriver kind (see the output of kubectl api-resources below).

[dasm@ip-0 csi-driver-host-path]$ kubectl api-resources
volumesnapshotclasses snapshot.storage.k8s.io false VolumeSnapshotClass
volumesnapshotcontents snapshot.storage.k8s.io false VolumeSnapshotContent
volumesnapshots snapshot.storage.k8s.io true VolumeSnapshot
csidrivers storage.k8s.io false CSIDriver
csinodes storage.k8s.io false CSINode
storageclasses sc storage.k8s.io false StorageClass
volumeattachments storage.k8s.io false VolumeAttachment

Here is my K8s version information:

[dasm@ip-0 csi-driver-host-path]$ kubectl version --short
Client Version: v1.18.3
Server Version: v1.17.4
[dasm@ip-0 csi-driver-host-path]$

Some tips for new users to understand this project.

  1. Do not use an Alpine base image; use Ubuntu. Otherwise your own Go binary may fail to run due to an incomplete environment.

  2. There are two ways to use the host resources: Mount and Block. The essence of the first is a mount -o bind, which connects two different directories on the host. The second requires the container to have privileged access so it can operate a loop device on the host.

Doc error: link and DaemonSet

In the README:

The Hostpath driver is configured to create new volumes under /tmp inside the hostpath container that is specified in the plugin DaemonSet found here. This path persist as long as the DaemonSet pod is up and running.

  1. The link gives a 404. I think it should point to e.g. https://github.com/kubernetes-csi/csi-driver-host-path/blob/master/deploy/kubernetes-1.17/hostpath/csi-hostpath-plugin.yaml

  2. The description talks about the "plugin DaemonSet" and "DaemonSet pod", but the deployment creates a StatefulSet. I can't find any reference to DaemonSet elsewhere in the repo.

Not working after reboot on NixOS

Hello, I tried the deployment on a NixOS VM (unstable) which packages Kubernetes 1.14.1. I had to change some paths because the kubelet is run with --root-dir=/var/lib/kubernetes, so I changed:

diff -ru upstream/csi-driver-host-path/deploy/kubernetes-1.14/hostpath/csi-hostpath-attacher.yaml applied/csi-driver-host-path/deploy/kubernetes-1.14/hostpath/csi-hostpath-attacher.yaml
--- upstream/csi-driver-host-path/deploy/kubernetes-1.14/hostpath/csi-hostpath-attacher.yaml	2019-06-03 10:50:28.171532741 +0200
+++ applied/csi-driver-host-path/deploy/kubernetes-1.14/hostpath/csi-hostpath-attacher.yaml	2019-06-03 18:30:31.886449660 +0200
@@ -50,6 +50,6 @@
 
       volumes:
         - hostPath:
-            path: /var/lib/kubelet/plugins/csi-hostpath
+            path: /var/lib/kubernetes/plugins/csi-hostpath
             type: DirectoryOrCreate
           name: socket-dir
diff -ru upstream/csi-driver-host-path/deploy/kubernetes-1.14/hostpath/csi-hostpath-plugin.yaml applied/csi-driver-host-path/deploy/kubernetes-1.14/hostpath/csi-hostpath-plugin.yaml
--- upstream/csi-driver-host-path/deploy/kubernetes-1.14/hostpath/csi-hostpath-plugin.yaml	2019-06-03 10:50:28.171532741 +0200
+++ applied/csi-driver-host-path/deploy/kubernetes-1.14/hostpath/csi-hostpath-plugin.yaml	2019-06-04 10:35:27.617058110 +0200
@@ -45,7 +45,7 @@
           args:
             - --v=5
             - --csi-address=/csi/csi.sock
-            - --kubelet-registration-path=/var/lib/kubelet/plugins/csi-hostpath/csi.sock
+            - --kubelet-registration-path=/var/lib/kubernetes/plugins/csi-hostpath/csi.sock
           securityContext:
             privileged: true
           env:
@@ -114,19 +114,19 @@
 
       volumes:
         - hostPath:
-            path: /var/lib/kubelet/plugins/csi-hostpath
+            path: /var/lib/kubernetes/plugins/csi-hostpath
             type: DirectoryOrCreate
           name: socket-dir
         - hostPath:
-            path: /var/lib/kubelet/pods
+            path: /var/lib/kubernetes/pods
             type: DirectoryOrCreate
           name: mountpoint-dir
         - hostPath:
-            path: /var/lib/kubelet/plugins_registry
+            path: /var/lib/kubernetes/plugins_registry
             type: Directory
           name: registration-dir
         - hostPath:
-            path: /var/lib/kubelet/plugins
+            path: /var/lib/kubernetes/plugins
             type: Directory
           name: plugins-dir
         - hostPath:
diff -ru upstream/csi-driver-host-path/deploy/kubernetes-1.14/hostpath/csi-hostpath-provisioner.yaml applied/csi-driver-host-path/deploy/kubernetes-1.14/hostpath/csi-hostpath-provisioner.yaml
--- upstream/csi-driver-host-path/deploy/kubernetes-1.14/hostpath/csi-hostpath-provisioner.yaml	2019-06-03 10:50:28.171532741 +0200
+++ applied/csi-driver-host-path/deploy/kubernetes-1.14/hostpath/csi-hostpath-provisioner.yaml	2019-06-03 18:30:32.274451542 +0200
@@ -50,6 +50,6 @@
               name: socket-dir
       volumes:
         - hostPath:
-            path: /var/lib/kubelet/plugins/csi-hostpath
+            path: /var/lib/kubernetes/plugins/csi-hostpath
             type: DirectoryOrCreate
           name: socket-dir
diff -ru upstream/csi-driver-host-path/deploy/kubernetes-1.14/hostpath/csi-hostpath-snapshotter.yaml applied/csi-driver-host-path/deploy/kubernetes-1.14/hostpath/csi-hostpath-snapshotter.yaml
--- upstream/csi-driver-host-path/deploy/kubernetes-1.14/hostpath/csi-hostpath-snapshotter.yaml	2019-06-03 10:50:28.171532741 +0200
+++ applied/csi-driver-host-path/deploy/kubernetes-1.14/hostpath/csi-hostpath-snapshotter.yaml	2019-06-03 18:30:32.573452992 +0200
@@ -50,6 +50,6 @@
               name: socket-dir
       volumes:
         - hostPath:
-            path: /var/lib/kubelet/plugins/csi-hostpath
+            path: /var/lib/kubernetes/plugins/csi-hostpath
             type: DirectoryOrCreate
           name: socket-dir
diff -ru upstream/csi-driver-host-path/deploy/kubernetes-1.14/hostpath/csi-hostpath-testing.yaml applied/csi-driver-host-path/deploy/kubernetes-1.14/hostpath/csi-hostpath-testing.yaml
--- upstream/csi-driver-host-path/deploy/kubernetes-1.14/hostpath/csi-hostpath-testing.yaml	2019-06-03 10:50:28.171532741 +0200
+++ applied/csi-driver-host-path/deploy/kubernetes-1.14/hostpath/csi-hostpath-testing.yaml	2019-06-03 18:30:32.900454577 +0200
@@ -54,6 +54,6 @@
             name: socket-dir
       volumes:
         - hostPath:
-            path: /var/lib/kubelet/plugins/csi-hostpath
+            path: /var/lib/kubernetes/plugins/csi-hostpath
             type: DirectoryOrCreate
           name: socket-dir
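
For anyone else running the kubelet with a non-default --root-dir, the same substitution can be applied in one go from a checkout of this repo, e.g.:

sed -i 's|/var/lib/kubelet|/var/lib/kubernetes|g' deploy/kubernetes-1.14/hostpath/*.yaml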

I've used the script in /deploy to install it. Everything works well until the VM is restarted; after that, the pods that use the persistent volumes all show problems with volume attachment. In the log, this comes up among the tons of lines:

[ 2125.003419] kubelet[1099]: E0604 09:30:32.883832    1099 csi_mounter.go:259] kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = NotFound desc = volume id c5e23cb5-86a4-11e9-bb16-525400123456 does not exit in the volumes list
[ 2125.005744] kubelet[1099]: E0604 09:30:32.883912    1099 csi_mounter.go:433] kubernetes.io/csi: failed to remove dir [/var/lib/kubernetes/pods/c5c528d7-86a4-11e9-b379-525400123456/volumes/kubernetes.io~csi/pvc-c5bb7508-86a4-11e9-b379-525400123456/mount]: remove /var/lib/kubernetes/pods/c5c528d7-86a4-11e9-b379-525400123456/volumes/kubernetes.io~csi/pvc-c5bb7508-86a4-11e9-b379-525400123456/mount: directory not empty
[ 2125.009638] kubelet[1099]: E0604 09:30:32.883953    1099 csi_mounter.go:261] kubernetes.io/csi: mounter.SetupAt failed to remove mount dir after a NodePublish() error [/var/lib/kubernetes/pods/c5c528d7-86a4-11e9-b379-525400123456/volumes/kubernetes.io~csi/pvc-c5bb7508-86a4-11e9-b379-525400123456/mount]: remove /var/lib/kubernetes/pods/c5c528d7-86a4-11e9-b379-525400123456/volumes/kubernetes.io~csi/pvc-c5bb7508-86a4-11e9-b379-525400123456/mount: directory not empty
[ 2125.013371] kubelet[1099]: E0604 09:30:32.884067    1099 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/csi/csi-hostpath^c5e23cb5-86a4-11e9-bb16-525400123456\"" failed. No retries permitted until 2019-06-04 09:32:34.884042183 +0000 UTC m=+2222.101952104 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"pvc-c5bb7508-86a4-11e9-b379-525400123456\" (UniqueName: \"kubernetes.io/csi/csi-hostpath^c5e23cb5-86a4-11e9-bb16-525400123456\") pod \"influxdb-0\" (UID: \"c5c528d7-86a4-11e9-b379-525400123456\") : rpc error: code = NotFound desc = volume id c5e23cb5-86a4-11e9-bb16-525400123456 does not exit in the volumes list"
[ 2125.490992] kubelet[1099]: E0604 09:30:33.419623    1099 kubelet.go:1657] Unable to mount volumes for pod "dashboard-54b754889b-rf8kj_default(c6339c4d-86a4-11e9-b379-525400123456)": timeout expired waiting for volumes to attach or mount for pod "default"/"dashboard-54b754889b-rf8kj". list of unmounted volumes=[grafana-var]. list of unattached volumes=[grafana-var global-config grafana-config default-token-7ngqg]; skipping pod

Anyway, even though the volumes seem to have disappeared, this is a listing of the contents of the /var/lib/csi-hostpath-data directory that stores the volumes:

drwxr-xr-x 2 root root 4096  4 giu 08.43 c5e23cb5-86a4-11e9-bb16-525400123456
drwxr-xr-x 2 root root 4096  4 giu 08.43 cb0b01ec-86a4-11e9-bb16-525400123456
drwxr-xr-x 2 root root 4096  4 giu 08.43 cb0cb49b-86a4-11e9-bb16-525400123456

and the data seems to be there... Any clue?

define resource limits to avoid eviction

Pods without a resource specification are the first to get evicted when a node runs out of resources. All of our deployments should specify the resources they require.
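
As a stopgap, resources can also be patched onto objects that are already deployed, for example the attacher StatefulSet from the deployment output above (the values below are placeholders, not tuned numbers):

kubectl set resources statefulset/csi-hostpath-attacher \
  --requests=cpu=10m,memory=32Mi \
  --limits=cpu=100m,memory=128Mi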

Perhaps there's also something else that can be done to prevent removal of a CSI driver instance from a node?

Allow configurable root directory on the host

Currently the volume root directory is defined as:

	provisionRoot      = "/csi-data-dir"

But IIRC, it used to be /tmp before. I think we need this path to be stable in e2e, so that we don't have to update the YAMLs whenever the code changes. The best way to do this is perhaps to make the root directory configurable, so that the e2e tests can keep using the paths they have been using while other users can define a new path.
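
A quick search over a checkout of the repo shows how many places currently reference the hard-coded path and would have to change together, which is exactly the coupling a configurable root directory would remove:

grep -rn "csi-data-dir" .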

Unable to provision `VolumeMode: Block` PVC

When creating a block PVC with the hostpath driver, it stays in the Pending state forever. The plugin log shows the following:

I0802 07:57:56.276473       1 server.go:117] GRPC call: /csi.v1.Controller/CreateVolume
I0802 07:57:56.276489       1 server.go:118] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-3a3698ee-b4fb-11e9-b29c-02160dfd70a0","volume_capabilities":[{"AccessType":{"Block":{}},"access_mode":{"mode":1}}]}
I0802 07:57:56.281536       1 volume_path_handler_linux.go:41] Creating device for path: /csi-data-dir/3ce27dde-b4fb-11e9-927d-02c40ff6a830
I0802 07:57:56.709467       1 volume_path_handler_linux.go:75] Failed device create command for path: /csi-data-dir/3ce27dde-b4fb-11e9-927d-02c40ff6a830 exit status 1 losetup: /csi-data-dir/3ce27dde-b4fb-11e9-927d-02c40ff6a830: failed to set up loop device: No such file or directory
E0802 07:57:56.709513       1 controllerserver.go:160] failed to attach device: exit status 1
E0802 07:57:56.709634       1 controllerserver.go:163] failed to cleanup block file /csi-data-dir/3ce27dde-b4fb-11e9-927d-02c40ff6a830: <nil>
E0802 07:57:56.709647       1 server.go:121] GRPC error: rpc error: code = Internal desc = failed to attach device: exit status 1

The hostpath driver was deployed via deploy-hostpath.sh from the deploy/kubernetes-1.14 directory.

Env Details:

[root@localhost kubernetes-1.14]# uname -a
Linux localhost.localdomain 5.1.6-200.fc29.x86_64 #1 SMP Mon Jun 3 17:20:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost kubernetes-1.14]# cat /etc/redhat-release 
Fedora release 29 (Twenty Nine)
[root@localhost kubernetes-1.14]# lsmod |grep loop
loop                   36864  4
[root@localhost kubernetes-1.14]# ll /dev/loop
loop0         loop1         loop2         loop8         loop-control  
[root@localhost kubernetes-1.14]# 
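
For comparison, what the driver is attempting at this point boils down to roughly the following manual steps (the volume id is a placeholder; this is a sketch, not the driver's exact code path):

# create the backing file for the requested 1Gi capacity ...
truncate -s 1G /csi-data-dir/<volume-id>
# ... and attach it to the first free loop device
losetup -f --show /csi-data-dir/<volume-id>

If the backing file is missing, or not visible inside the plugin container, losetup fails with exactly this "failed to set up loop device: No such file or directory" error.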

Raw block volume snapshot failure

Raw block volume snapshot fails with an error like:

 Warning  SnapshotCreationFailed  6s    csi-snapshotter csi-hostpath  Failed to create snapshot: failed to take snapshot of the volume, pvc-cb8d96b6-5d40-11e9-bcc0-0236f32917fe: "rpc error: code = Internal desc = failed create snapshot: exit status 1: tar: can't change directory to '/csi-data-dire6fc6eb8-5d40-11e9-97b2-0236f32917fe': Not a directory\n"

There are two reasons for this:
1. The following constants are defined incorrectly:

provisionRoot      = "/csi-data-dir"
snapshotRoot       = "/csi-data-dir"

They should be:

provisionRoot      = "/csi-data-dir/"
snapshotRoot       = "/csi-data-dir/"

2. In the case of raw block volumes, the volume content is a single file, as opposed to a directory in the case of a filesystem volume, so the tar command with the -C option won't work. Instead, the tar file should be created directly from the file corresponding to the raw block volume. Something like:

tar czf <tar_file_name> <file_for_raw_block_volume>
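
In other words, something along these lines for the two cases (paths are placeholders):

# filesystem volume: archive the contents of the per-volume directory
tar czf <tar_file_name> -C /csi-data-dir/<volume-id> .

# raw block volume: the volume is a single file, so archive that file directly
tar czf <tar_file_name> -C /csi-data-dir <volume-id>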
