
Velero plugin for backup/restore of OpenEBS cStor volumes

Home Page: https://docs.openebs.io

License: Apache License 2.0


velero-plugin's Introduction

Velero-plugin for OpenEBS CStor volume

Velero is a utility to back up and restore your Kubernetes resources and persistent volumes.

To back up and restore OpenEBS cStor volumes through the Velero utility, you need to install and configure the OpenEBS velero-plugin.


Compatibility matrix

Velero-plugin Version   OpenEBS/Maya Release   Velero Version   Codebase
0.9.0                   >= 0.9                 >= v0.11.0       v0.9.x
1.0.0-velero_1.0.0      >= 1.0.0               >= v1.0.0        1.0.0-velero_1.0.0
1.1.0-velero_1.0.0      >= 1.0.0               >= v1.0.0        v1.1.x
1.2.0-velero_1.0.0      >= 1.0.0               >= v1.0.0        v1.2.x
1.3.0-velero_1.0.0      >= 1.0.0               >= v1.0.0        v1.3.x
1.4.0-velero_1.0.0      >= 1.0.0               >= v1.0.0        v1.4.x
1.5.0-velero_1.0.0      >= 1.0.0               >= v1.0.0        v1.5.x
1.6.0-velero_1.0.0      >= 1.0.0               >= v1.0.0        v1.6.x
1.7.0-velero_1.0.0      >= 1.0.0               >= v1.0.0        v1.7.x
1.8.0-velero_1.0.0      >= 1.0.0               >= v1.0.0        v1.8.x
1.9.0                   >= 1.0.0               >= v1.0.0        v1.9.x
1.10.0                  >= 1.0.0               >= v1.0.0        v1.10.x
>= 1.11.0               >= 1.0.0               >= v1.0.0        v1.11.x

Note:

OpenEBS versions < 0.9 are not supported by velero-plugin.

Velero-plugin versions < 1.11.0 do not support cStor v1 volumes.

If you want to use the plugin image from the development branch (develop), use the ci tag.

Multiarch (amd64/arm64) plugin images are available at Docker Hub.

Prerequisite for velero-plugin

A specific version of Velero needs to be installed, as per the compatibility matrix with OpenEBS versions.

For installation steps of Velero, visit https://velero.io.

For installation steps of OpenEBS, visit https://github.com/openebs/openebs/releases.
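For reference, here is a minimal sketch of installing Velero together with an object-store plugin and the OpenEBS velero-plugin in one step (the same shape as the velero install command shown in one of the issues further down this page). The bucket name, credentials file and MinIO URL are placeholders; pick the plugin tag from the compatibility matrix:

# Install the Velero server, an object-store plugin and the OpenEBS plugin.
# Bucket, secret file and s3Url values are assumptions for illustration.
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.2.1,openebs/velero-plugin:1.9.0 \
    --bucket velero \
    --secret-file ./credentials-velero \
    --use-volume-snapshots=true \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000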

Installation of velero-plugin

Run the following command to install the OpenEBS velero-plugin (pick the image tag as per the compatibility matrix above; the ci tag tracks the development branch):

velero plugin add openebs/velero-plugin:1.9.0

This command will add an init container to the Velero deployment to install the OpenEBS velero-plugin.
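To verify that the plugin was registered, you can list the plugins known to the Velero server; the openebs.io/cstor-blockstore provider should appear in the output once the init container has finished:

# List plugins registered with the Velero server.
velero plugin get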

Developer Guide

To build the plugin binary:

make build

To build the docker image for velero-plugin:

make container IMAGE=<REPO NAME>

To push the image to the repo:

make deploy-image IMAGE=<REPO NAME>
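For example, a typical local build-and-push flow chains the targets above (the repository name below is a placeholder):

# Build the binary, package it into an image, then push it.
# "myrepo/velero-plugin" is a hypothetical repository name.
make build
make container IMAGE=myrepo/velero-plugin
make deploy-image IMAGE=myrepo/velero-plugin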

Local Backup/Restore

For a local backup, the velero-plugin creates a snapshot of the cStor volume.

Configuring snapshot location

To take a local backup of a cStor volume, configure a VolumeSnapshotLocation with the provider openebs.io/cstor-blockstore and set local to "true". A sample YAML file for the volumesnapshotlocation can be found at example/06-local-volumesnapshotlocation.yaml.

Sample Spec for volumesnapshotlocation:

spec:
  provider: openebs.io/cstor-blockstore
  config:
    namespace: <OPENEBS_NAMESPACE>
    local: "true"

If you have multiple installations of OpenEBS, then you need to add spec.config.namespace: <OPENEBS_NAMESPACE>.
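Putting it together, a complete manifest for a local snapshot location might look like the sketch below; the metadata name and the namespaces are assumptions, and the authoritative sample remains example/06-local-volumesnapshotlocation.yaml:

# A minimal local VolumeSnapshotLocation, modelled on the sample spec above.
# The names "default", "velero" and "openebs" are placeholders.
kubectl apply -f - <<EOF
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/cstor-blockstore
  config:
    namespace: openebs
    local: "true"
EOF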

Creating a backup

Once the volumesnapshotlocation is configured, you can create a backup of your CStor persistent storage volume.

To back up data of all your applications in the default namespace, run the following command:

velero backup create localbackup --include-namespaces=default --snapshot-volumes --volume-snapshot-locations=<SNAPSHOT_LOCATION>

SNAPSHOT_LOCATION should be the same as the one you configured using example/06-local-volumesnapshotlocation.yaml.

You can check the status of backup using the following command:

velero backup get

The above command will list all the backups you created. Sample output of the above command is shown below:

NAME                STATUS      CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
localbackup         Completed   2019-05-09 17:08:41 +0530 IST   26d       gcp                <none>

Once the backup is completed you should see the backup marked as Completed.

Creating a restore

To restore local backup, run the following command:

velero restore create --from-backup backup_name --restore-volumes=true --namespace-mappings source_ns:destination_ns

Note:

  • Restore from a local backup can be done only in the cluster where the local backup was created; within that cluster, it can be restored into a different namespace

Limitation:

  • Restore of a PV whose storageClass has volumeBindingMode set to WaitForFirstConsumer won't work as expected

Creating a scheduled backup

To create a scheduled backup, run the following command

velero create schedule newschedule  --schedule="*/5 * * * *" --snapshot-volumes --include-namespaces=default --volume-snapshot-locations=<SNAPSHOT_LOCATION>

SNAPSHOT_LOCATION should be the same as the one you configured using example/06-local-volumesnapshotlocation.yaml.

You can check the status of the schedule using the following command:

velero schedule get

It will list all the schedules you created. Sample output of the above command is shown below:

NAME            STATUS    CREATED                         SCHEDULE      BACKUP TTL   LAST BACKUP   SELECTOR
newschedule     Enabled   2019-05-13 15:15:39 +0530 IST   */5 * * * *   720h0m0s     2m ago        <none>

Creating a restore from scheduled backup

To restore from any scheduled backup, refer to Creating a restore.

Remote Backup/Restore

For a remote backup, the velero-plugin creates a snapshot of the cStor volume and uploads it to remote storage.

Configuring snapshot location for remote backup

To take a remote backup of a cStor volume snapshot to cloud or S3-compatible storage, configure a VolumeSnapshotLocation with the provider openebs.io/cstor-blockstore. A sample YAML file for the volumesnapshotlocation can be found at example/06-volumesnapshotlocation.yaml.

Sample Spec for volumesnapshotlocation:

spec:
  provider: openebs.io/cstor-blockstore
  config:
    bucket: <YOUR_BUCKET>
    prefix: <PREFIX_FOR_BACKUP_NAME>
    backupPathPrefix: <PREFIX_FOR_BACKUP_PATH>
    provider: <GCP_OR_AWS>
    region: <AWS_REGION>

If you have multiple installations of OpenEBS, then you need to add spec.config.namespace: <OPENEBS_NAMESPACE>.

Note:

  • prefix is for the backup file name.

    If prefix is set to cstor, then the snapshot will be stored as bucket/backups/backup_name/cstor-PV_NAME-backup_name.

  • backupPathPrefix is for the backup path.

    If backupPathPrefix is set to newcluster, then the snapshot will be stored at bucket/newcluster/backups/backup_name/prefix-PV_NAME-backup_name.

    To store the backup metadata and snapshots at the same location, BackupStorageLocation.prefix and the VolumeSnapshotLocation backupPathPrefix should be the same.

You can configure a backup storage location (BackupStorageLocation) similarly. Currently supported cloud providers for velero-plugin are AWS, GCP and MinIO.
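As a sketch, a matching BackupStorageLocation for an S3-compatible (MinIO) endpoint could look like the following; the bucket, region and s3Url values are placeholders, and the config keys follow the usual velero-plugin-for-aws conventions:

# A hedged BackupStorageLocation example for an S3-compatible endpoint.
# Keep spec.objectStorage.prefix equal to the VolumeSnapshotLocation
# backupPathPrefix if you want metadata and snapshots at the same path.
kubectl apply -f - <<EOF
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: velero
    prefix: newcluster
  config:
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
EOF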

Creating a remote backup

To back up data of all your applications in the default namespace, run the following command:

velero backup create defaultbackup --include-namespaces=default --snapshot-volumes --volume-snapshot-locations=<SNAPSHOT_LOCATION>

SNAPSHOT_LOCATION should be the same as the one you configured using example/06-volumesnapshotlocation.yaml.

You can check the status of backup using the following command:

velero backup get

The above command will list all the backups you created. Sample output of the above command is shown below:

NAME                STATUS      CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
defaultbackup       Completed   2019-05-09 17:08:41 +0530 IST   26d       gcp                <none>

Once the backup is completed you should see the backup marked as Completed.

Note:

  • If the backup name ends with a timestamp in the "-20190513104034" format, then it is considered part of a scheduled backup

Creating a restore for remote backup

To restore data from remote backup, run the following command:

velero restore create --from-backup backup_name --restore-volumes=true

With the above command, the plugin will create a CStor volume and the data from backup will be restored on this newly created volume.

You can check the status of restore using the following command:

velero restore get

The above command will list all the restores you created. Sample output of the above command is shown below:

NAME                           BACKUP          STATUS      WARNINGS   ERRORS    CREATED                         SELECTOR
defaultbackup-20190513113453   defaultbackup   Completed   0          0         2019-05-13 11:34:55 +0530 IST   <none>

Once the restore is completed you should see the restore marked as Completed.

To restore in different namespace, run the following command:

velero restore create --from-backup backup_name --restore-volumes=true --namespace-mappings source_ns:destination_ns

The plugin will create the destination_ns if it doesn't exist.

Once the restore for a remote backup is completed, you need to set targetip in the relevant replicas. Refer to Setting targetip in replica.

Setting targetip in replica

After the restore for a remote backup is completed, you need to set the target-ip for the volume in the pool pods. If the restore is from a local snapshot, then you don't need to update the target-ip.

  • Fetch the targetip for the replica using the command below.
kubectl get svc -n openebs <PV_NAME> -ojsonpath='{.spec.clusterIP}'

PV_NAME is the name of the restored PV.

  • After getting the targetip, you need to set it in all the replicas of the restored PV using the following commands:
kubectl exec -it <POOL_POD> -c cstor-pool -n openebs -- bash
zfs set io.openebs:targetip=<TARGET_IP> <POOL_NAME/VOLUME_NAME>
  • Using bash script
# set the correct pool pod name here
pool_pod=POOL_POD

# List every pvc dataset in the pool that carries the io.openebs:targetip
# property, skipping snapshot entries (names containing '@').
for pool_pvc in $(kubectl exec $pool_pod -c cstor-pool -n openebs -- bash -c "zfs get io.openebs:targetip" | grep io.openebs:targetip | grep pvc | grep -v '@' | cut -d" " -f1)
do
  # Derive the target service name from the dataset name and look up its cluster IP.
  svc=$(echo $pool_pvc | cut -d/ -f2)
  ip=$(kubectl get svc -n openebs $svc -ojsonpath='{.spec.clusterIP}')
  # Set the targetip property on the replica dataset.
  kubectl exec $pool_pod -c cstor-pool -n openebs -- bash -c "zfs set io.openebs:targetip=$ip $pool_pvc"
done
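Afterwards you can verify the property on the replica datasets using the same command the script relies on (the pool pod name is a placeholder):

# Verify that targetip has been set on the replica datasets of the restored PV.
kubectl exec -it <POOL_POD> -c cstor-pool -n openebs -- bash -c "zfs get io.openebs:targetip"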

You can automate this process by setting the config parameter autoSetTargetIP to "true" in the volumesnapshotlocation. Note that restoreAllIncrementalSnapshots=true implies autoSetTargetIP=true.

apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  ...
spec:
  config:
    ...
    ...
    autoSetTargetIP: "true"

Creating a scheduled remote backup

OpenEBS velero-plugin provides incremental remote backup support for cStor persistent volumes through scheduled backups. This means the first backup of a schedule includes a snapshot of all volume data, and subsequent backups include a snapshot of only the data modified since the previous backup.

To create an incremental backup (i.e. a scheduled backup), run the following command:

velero create schedule newschedule  --schedule="*/5 * * * *" --snapshot-volumes --include-namespaces=default --volume-snapshot-locations=<SNAPSHOT_LOCATION>

SNAPSHOT_LOCATION should be the same as the one you configured using example/06-volumesnapshotlocation.yaml.

You can check the status of the schedule using the following command:

velero schedule get

It will list all the schedules you created. Sample output of the above command is shown below:

NAME            STATUS    CREATED                         SCHEDULE      BACKUP TTL   LAST BACKUP   SELECTOR
newschedule     Enabled   2019-05-13 15:15:39 +0530 IST   */5 * * * *   720h0m0s     2m ago        <none>

During the first backup iteration of a schedule, the full data of the volume is backed up. In later backup iterations of the schedule, only data modified or added since the previous iteration is backed up. Since Velero backups come with a retain policy, you may need to update the retain policy using the --ttl argument while creating a schedule. Since scheduled backups are incremental, if the first backup (the base backup) expires, you won't be able to restore from that schedule.
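For example, a schedule with a longer retain period could be created like this (the schedule name, cron expression and TTL value are illustrative):

# Keep scheduled backups for 90 days (2160h) instead of the default 30 days,
# so that the base (full) backup does not expire before its increments.
velero create schedule dailyschedule --schedule="0 1 * * *" --ttl 2160h0m0s --snapshot-volumes --include-namespaces=default --volume-snapshot-locations=<SNAPSHOT_LOCATION>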

Note:

  • If the backup name ends with a timestamp in the "-20190513104034" format, then it is considered part of a scheduled backup

Creating a restore from scheduled remote backup

Backups generated by a schedule are incremental backups. The first backup of the schedule includes a snapshot of all volume data, and subsequent backups include a snapshot of only the data modified since the previous backup. In older versions of velero-plugin (< 2.2.0), you need to create a restore for every backup, from the base backup up to the required backup; refer to Restoring the scheduled backup without restoreAllIncrementalSnapshots.

You can automate this process by setting the config parameter restoreAllIncrementalSnapshots to "true" in volumesnapshotlocation.

apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  ...
spec:
  config:
    ...
    ...
    restoreAllIncrementalSnapshots: "true"

To create a restore from a schedule, run the following command:

velero restore create --from-schedule schedule_name --restore-volumes=true

The above command will create the cStor volume and restore all the snapshots backed up in that schedule.

To restore a specific backup from a schedule, run the following command:

velero restore create --from-backup backup_name --restore-volumes=true

The above command will create the cStor volume and restore all the snapshots backed up from the base backup up to the given backup (backup_name).

Here, the base backup means the first backup created by the schedule. To restore from scheduled backups, the base backup must be available.
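To see which backups belong to a given schedule (and to confirm that the base backup has not expired), you can filter on the schedule-name label that Velero adds to backups created from a schedule; the label key and the --selector flag below are the standard Velero ones, but treat them as assumptions here:

# List all backups created by a schedule; the oldest one is the base backup.
velero backup get --selector velero.io/schedule-name=<SCHEDULE_NAME>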

You can restore a scheduled remote backup to a different namespace using the --namespace-mappings argument while creating a restore. The plugin will create the destination namespace if it doesn't exist.

Once the restore for a remote scheduled backup is completed, you need to set targetip in the relevant replicas. Refer to Setting targetip in replica.

If you are not setting the restoreAllIncrementalSnapshots parameter in the volumesnapshotlocation, then follow the section below to restore from scheduled backups.

Restoring the scheduled backup without restoreAllIncrementalSnapshots

Since the backups taken for a schedule are incremental, the order of restoring data is very important. You need to restore data in the order in which the backups were created.

The first restore must be created from the first completed backup of the schedule.

For example, below are the available backups for a schedule:

NAME                   STATUS      CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
sched-20190513104034   Completed   2019-05-13 16:10:34 +0530 IST   29d       gcp                <none>
sched-20190513103534   Completed   2019-05-13 16:05:34 +0530 IST   29d       gcp                <none>
sched-20190513103034   Completed   2019-05-13 16:00:34 +0530 IST   29d       gcp                <none>

Restore of data needs to be done in the following way:

velero restore create --from-backup sched-20190513103034 --restore-volumes=true
velero restore create --from-backup sched-20190513103534 --restore-volumes=true
velero restore create --from-backup sched-20190513104034 --restore-volumes=true
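If a schedule has many increments, the same ordering can be scripted; a rough sketch using the backups above is shown below. List the backup names oldest first; the --wait flag (a standard velero option, assumed here) blocks until each restore completes so the order stays strict:

# Restore every backup of a schedule in creation order (base backup first).
for b in sched-20190513103034 sched-20190513103534 sched-20190513104034
do
  velero restore create --from-backup "$b" --restore-volumes=true --wait
done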

You can restore a scheduled remote backup to a different namespace using the --namespace-mappings argument while creating a restore.

Once the restore for a remote scheduled backup is completed, you need to set targetip in the relevant replicas. Refer to Setting targetip in replica.

Note: Velero cleans up backups according to the retain policy. The default retain policy is 30 days, so you need to set the retain policy for scheduled remote/cloud backups accordingly.

License

velero-plugin is licensed under the Apache License 2.0.


velero-plugin's Issues

openebs velero-plugin and helm chart installation

Describe the problem/challenge you have
Openebs is installed by helm chart
Velero is installed by helm chart
Does a way exist to use the Helm chart to also install the openebs/velero-plugin, so that the installation can be fully automated?

Environment:

  • Cloud provider or hardware configuration:

  • OS (e.g. from /etc/os-release):

Question related to the requirement of S3 compatible bucket for backup

Hi @mynktl

I have a quick question based on the slides I read from here [ especially slide-8 ]
https://www.slideshare.net/OpenEBS/thoughts-on-heptios-ark-contributors-meet-21st-sept-2018

Backup/Restore via OpenEBS ARK Plugin (slide 8)
  - Create an OpenEBS ARK Plugin that will implement the Block-Store API exposed by ARK
  - Backup Operation
    - ARK will invoke the Plugin-Snapshot (Backup) method.
    - Plugin will call the maya-apiserver backup API on a given volume.
    - Maya-apiserver backup will call the volume's (jiva/cstor) backup API.
    - The (jiva/cstor) volume controller will take a snapshot and pass the request to one of the replicas to push the snapshot data to a remote backup location (say S3-compatible, as passed via the ark plugin, or a custom backup location on mayaonline, or maybe an NFS server that openebs supports). The code to actually push the data to the backup location can make use of restic. We are putting it at the jiva/cstor level for getting access to snapshot/incremental snapshot data.
  - Restore Operation
    - ARK will invoke the Plugin-VolumeFromSnapshot (Restore) method.
    - Plugin will invoke maya-apiserver to create a new PV/PVC and restore the data from backup.
    - ARK will launch the application with the PV/PVC.

Is this the same plugin mentioned in the slide? And is there a possibility of using an NFS location for backup rather than S3-compatible object storage?

I'm a little bit confused, as Velero mentions S3-compatible object storage as a mandatory requirement for backup.

How different is it to use Ark with Restic to back up OpenEBS cStor block volumes compared to using this plugin? Both would serve the same purpose, I presume.

How to combine snapshots (incremental backups) with backup deletion because of TTL?

Describe the problem/challenge you have
I created a velero Schedule earlier that would perform a full volume backup every day, with a TTL of 3 days (72 hours).

This has been running for quite a while, but I just found out that all three stored backups are much smaller than the volumes themselves.

I found out that this is because a Backup created from a Schedule is incremental by default.

This means that my backups are useless: after 72 hours, the initial full backup is deleted by Velero, and the subsequent snapshots are worthless. In my opinion, this (default!) behaviour isn't clearly documented even now that I know about it.

How can I configure my Backups, even those created from a Schedule, to be non-incremental, or how can I force the plugin to create a new full backup before the last full backup is deleted because of TTL?

Describe the solution you'd like
A method to create a Schedule for which the Backups are full, not incremental. Or even better, a configurable max age of the last full backup after which a new full backup is created instead of an incremental one.

For now, I will work around this by creating a CronJob in the cluster that simply runs "velero backup --wait", without using the schedules feature.
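As a rough sketch of that workaround, the command run by such a CronJob could be a standalone backup like the one below; because its name does not carry the 14-digit schedule timestamp suffix, the plugin should treat it as a full (non-incremental) backup. The backup name, namespace and snapshot location are placeholders:

# One-off full backup, run from a CronJob instead of a velero Schedule.
velero backup create full-$(date +%Y%m%d) --include-namespaces=default --snapshot-volumes --volume-snapshot-locations=default --wait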

Anything else you would like to add:

Environment:

  • Velero version (use velero version): Client v1.5.1, Server v.1.4.0
  • Velero features (use velero client config get features):
  • Velero-plugin version: ?
  • OpenEBS version: v1.11.0
  • Kubernetes version (use kubectl version): Client 1.19.2, Server 1.18.3+k3s1
  • Kubernetes installer & version: k3os
  • Cloud provider or hardware configuration: local arm64
  • OS (e.g. from /etc/os-release): k3os

support for multiple s3 profile

By default, the plugin uses the default profile for S3-based remote storage. This issue is to extend the plugin functionality to support multiple S3 profiles, if the user has multiple volumesnapshotlocations configured with different S3 profiles.

OpenEBS Local PV Hostpath

Hello,
could you please create a plugin for OpenEBS Local PV Hostpath?

I saw that Velero is supported, but I can't find the correct OpenEBS provider for Velero.

Remove PVC annotation `openebs.io/created-through` once restore completes

What steps did you take and what happened:
Refer openebs-archive/maya#1689

What did you expect to happen:
The plugin expects that the CVR shouldn't have an IP address set, in order to restore the snapshot into the zfs dataset. To achieve that, the plugin adds the annotation to the PVC. Once the PVC is bound and the CVRs are created and updated, the plugin should remove this annotation.

The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other Pastebin is fine.)

  • kubectl logs deployment/velero -n velero
  • kubectl logs deployment/maya-apiserver -n openebs
  • velero backup describe <backupname> or kubectl get backup/<backupname> -n velero -o yaml
  • velero backup logs <backupname>
  • velero restore describe <restorename> or kubectl get restore/<restorename> -n velero -o yaml
  • velero restore logs <restorename>

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • Velero version (use velero version):
  • Velero features (use velero client config get features):
  • Velero-plugin version
  • OpenEBS version
  • Kubernetes version (use kubectl version):
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):

Velero Backup is PartiallyFailed because no Snapshot ID for Jiva volumes

Hi Team,

We are using both cStor and Jiva volumes in our environment.

When we take a backup, the Snapshot ID for the Jiva volume is blank, and because of that our backups are partially failed.

daily-k8stest-backup-20190723211027   PartiallyFailed   2019-07-23 17:10:27 -0400 EDT   9d        default            !openebs.io/controller,!openebs.io/replica
daily-k8stest-backup-20190723135827   PartiallyFailed   2019-07-23 09:58:27 -0400 EDT   9d        default            !openebs.io/controller,!openebs.io/replica
daily-k8stest-backup-20190723084422   PartiallyFailed   2019-07-23 04:44:22 -0400 EDT   9d        default            !openebs.io/controller,!openebs.io/replica
daily-k8stest-backup-20190723082311   PartiallyFailed   2019-07-23 04:23:11 -0400 EDT   29d       default            !openebs.io/controller,!openebs.io/replica
daily-k8stest-backup-20190723072722   PartiallyFailed   2019-07-23 03:27:22 -0400 EDT   29d       default            !openebs.io/controller,!openebs.io/replica
pvc-a4ffc73e-a87b-11e9-bac5-0050569bd3e7:
  Snapshot ID:        pvc-a4ffc73e-a87b-11e9-bac5-0050569bd3e7-velero-bkp-daily-k8stest-backup-20190723084422
  Type:               cstor-snapshot
  Availability Zone:  
  IOPS:               <N/A>
pvc-5a536bae-a4b7-11e9-b7b7-0050569bab7f:
  Snapshot ID:        
  Type:               cstor-snapshot
  Availability Zone:  
  IOPS:               <N/A>
pvc-67e50e1f-a7bd-11e9-bac5-0050569bd3e7:
  Snapshot ID:        pvc-67e50e1f-a7bd-11e9-bac5-0050569bd3e7-velero-bkp-daily-k8stest-backup-20190723084422
  Type:               cstor-snapshot
  Availability Zone:  
  IOPS:               <N/A>
[test]$ kubectl get pv | grep pvc-5a536bae-a4b7-11e9-b7b7-0050569bab7f
pvc-5a536bae-a4b7-11e9-b7b7-0050569bab7f   50G        RWO            Retain           Bound    openebs/openebspvc                             openebs-jiva-retain                  11d
[test ~]$ 

pvc-5a536bae-a4b7-11e9-b7b7-0050569bab7f is our Jiva volume.

Incremental backups and CStorBackup/CStorCompletedBackup resources

Hi there!

Thank you for building this plugin, it's great!

Through my testing I ended up with two questions:

  • How do I perform a restore when the first backup (the initial full) reaches its TTL and is cleaned up/deleted by Velero? I'm referring to the incremental backup section of the README.

  • In my limited testing it doesn't look like the plugin cleans up the CStorBackup/CStorCompletedBackup resources in k8s (I'm not sure how to check whatever objects remain within OpenEBS, but I'm assuming they are not cleaned up either). Is this a planned feature or do I need to clean these up manually/via script?

Question related to BackupStorageLocation for on-prem

Hi @mynktl,

At the moment, the supported BackupStorageLocation states that there is only support for AWS and GCP. Do you have any plans to support an on-prem BackupStorageLocation, such as an S3 Object Store provided by Minio? In other words, could the OpenEBS CStor plugin send the volume backup blocks to the same location (on-prem) that Velero sends the metadata about the Pods/PVs/PVCs/STS etc to?

Thanks

[feat request] native zfs snapshots during backup

This is feature request for my original issue #175.

I would love to have the ability to automate local native ZFS snapshots for the PVCs that Velero is backing up. It allows for simple and quick ZFS-style rollbacks if ever needed on a single volume. It would also allow ZFS snapshots to be taken more gracefully than using the underlying OS to take snapshots. In addition, it would be great to have snapshots pruned according to the TTL in the backup schedule.

(Bug)(cStor): Restore is partially failed when autoSetTargetIP is enabled

  • Took a Velero backup of an application that has the target affinity label openebs.io/target-affinity and tried to restore it. After the restore, the application's CVRs are in OFFLINE state and the target pod is stuck in Pending state. After some time the restore status becomes PartiallyFailed, then the target pod reaches Running state and the CVRs become Healthy.
Volume: pvc-dab009d8-4b50-4c77-b4f7-06d9b72a9d34
CStorRestore is Done
VeleroRestore is Partially failed
2020-12-09/05:28:15.594 main              :3106: m#162805696.8       : istgt:0.5.20121028:02:19:49:Nov 19 2020: starting
The target pod started at time(05:28:15.594)
2020-12-09/05:19:30.378 ERROR target IP address is empty for cstor-1b4089d7-2cbb-42c1-9fd9-b56eca09ca8d/pvc-dab009d8-4b50-4c77-b4f7-06d9b72a9d34
From the above log VolumeReplica created at approximately at time(05:19:30:378)
2020-12-09/05:19:49.619 Instantiating zvol cstor-1b4089d7-2cbb-42c1-9fd9-b56eca09ca8d/pvc-dab009d8-4b50-4c77-b4f7-06d9b72a9d34
2020-12-09/05:20:05.046 ERROR Failed to connect to 172.22.117.135:6060 fd(19)
TargetIp was set at time(05:19:49.619) ---------->  which means restore was done
Connected to target at a time (~05:28:23.178)
2020-12-09/05:28:23.178 [tgt 172.22.117.135:6060:8]: Handshake command for zvol pvc-dab009d8-4b50-4c77-b4f7-06d9b72a9d34
Become Healthy at time (5:28:34)
2020-12-09/05:28:34.968 zvol cstor-1b4089d7-2cbb-42c1-9fd9-b56eca09ca8d/pvc-dab009d8-4b50-4c77-b4f7-06d9b72a9d34 status change: DEGRADED -> HEALTHY
  Cluster:  error executing PVAction for persistentvolumes/pvc-44309d6e-d4ba-4df4-bf0e-1a655dd632df: rpc error: code = Unknown desc = Error setting targetip on CVR, need to set it manually. Refer: https://github.com/openebs/velero-plugin#setting-targetip-in-replica: CVR for volume{pvc-dab009d8-4b50-4c77-b4f7-06d9b72a9d34} are not ready!
  Namespaces: <none>

pvc yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: openebs.io/provisioner-iscsi
  creationTimestamp: "2020-12-09T05:19:29Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    openebs.io/target-affinity: percona
  name: percona-mysql-claim
  namespace: backup-percona-cstor
  resourceVersion: "2717673"
  selfLink: /api/v1/namespaces/backup-percona-cstor/persistentvolumeclaims/percona-mysql-claim
  uid: dab009d8-4b50-4c77-b4f7-06d9b72a9d34
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: openebs-cstor-disk
  volumeMode: Filesystem
  volumeName: pvc-dab009d8-4b50-4c77-b4f7-06d9b72a9d34
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 5Gi
  phase: Bound

cstor-istgt.log
cstor-mgmt.log
pool1.log
pool2.log
pool3.log

Possibility to rename api server service

Hi!

When I try to do a backup of a testing deployment (MySQL server with PV), the following error occurs:

time="2019-07-07T12:46:03Z" level=info msg="Initializing velero plugin for CStor map[s3Url:http://minio.velero.svc:9000 DisableSSL:true bucket:velero-test prefix: provider:aws region:minio s3ForcePathStyle:true]" backup=velero/defaultbackup7 cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/snapshot/snap.go:36" pluginName=velero-blockstore-cstor
time="2019-07-07T12:46:03Z" level=error msg="Error getting IP Address for service{maya-apiserver-service} : services \"maya-apiserver-service\" not found" backup=velero/defaultbackup7 cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:139" pluginName=velero-blockstore-cstor
time="2019-07-07T12:46:03Z" level=error msg="Error getting volume snapshotter for volume snapshot location" backup=velero/defaultbackup7 error="rpc error: code = Unknown desc = Error fetching OpenEBS rest client address" error.file="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:166" error.function="github.com/openebs/velero-plugin/pkg/cstor.(*Plugin).Init" group=v1 logSource="pkg/backup/item_backupper.go:410" name=pvc-6319d404-a0ab-11e9-9f37-000c298d4a86 namespace=default persistentVolume=pvc-6319d404-a0ab-11e9-9f37-000c298d4a86 resource=pods volumeSnapshotLocation=default
time="2019-07-07T12:46:03Z" level=info msg="Persistent volume is not a supported volume type for snapshots, skipping." backup=velero/defaultbackup7 group=v1 logSource="pkg/backup/item_backupper.go:430" name=pvc-6319d404-a0ab-11e9-9f37-000c298d4a86 namespace=default persistentVolume=pvc-6319d404-a0ab-11e9-9f37-000c298d4a86 resource=pods

I am using K8s on-premises and successfully installed OpenEBS using the Helm chart. It looks like the name of the maya api server is fixed to maya-apiserver-service, while in the Helm chart it is set to {{ template "openebs.fullname" . }}-apiservice. So is there a possibility to set the name in the Velero plugin manually without creating a renamed copy of the api server service?

msg="Error getting volume snapshotter for volume snapshot location"

I've set up MinIO and it's working correctly doing backups, just not the volumes - the PVC data isn't getting backed up.

time="2022-11-16T01:41:39Z" level=error msg="Error getting volume snapshotter for volume snapshot location" backup=velero/npm error="rpc error: code = Unknown desc = Error fetching OpenEBS rest client address" error.file="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:192" error.function="github.com/openebs/velero-plugin/pkg/cstor.(*Plugin).Init" logSource="pkg/backup/item_backupper.go:470" name=pvc-bb7f6461-315f-4976-8fdd-100e5139fb21 namespace= persistentVolume=pvc-bb7f6461-315f-4976-8fdd-100e5139fb21 resource=persistentvolumes volumeSnapshotLocation=default
time="2022-11-16T01:41:39Z" level=error msg="Error getting volume snapshotter for volume snapshot location" backup=velero/npm error="rpc error: code = Unknown desc = Error fetching OpenEBS rest client address" error.file="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:192" error.function="github.com/openebs/velero-plugin/pkg/cstor.(*Plugin).Init" logSource="pkg/backup/item_backupper.go:470" name=pvc-e383c200-6da8-416b-95c0-a6365eaa7961 namespace= persistentVolume=pvc-e383c200-6da8-416b-95c0-a6365eaa7961 resource=persistentvolumes volumeSnapshotLocation=default
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.2.1,openebs/velero-plugin:1.9.0 \
    --bucket velero \
    --secret-file ./credentials-velero \
    --use-volume-snapshots=true \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://<IP OF MINIO>:9000

My snapshot location follows the example 06-volumesnapshotlocation.yaml, so I applied the below to the cluster:

---
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/cstor-blockstore
  config:
    bucket: velero
    provider: aws
    region: minio
    namespace: openebs
    restoreAllIncrementalSnapshots: "false"
    autoSetTargetIP: "true"
    restApiTimeout: 1m

Then I try to take a backup:

velero create backup npm --include-namespaces npm --snapshot-volumes
velero backup logs npm|grep error
time="2022-11-16T02:12:43Z" level=error msg="Error getting volume snapshotter for volume snapshot location" backup=velero/npm error="rpc error: code = Unknown desc = Error fetching OpenEBS rest client address" error.file="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:192" error.function="github.com/openebs/velero-plugin/pkg/cstor.(*Plugin).Init" logSource="pkg/backup/item_backupper.go:470" name=pvc-bb7f6461-315f-4976-8fdd-100e5139fb21 namespace= persistentVolume=pvc-bb7f6461-315f-4976-8fdd-100e5139fb21 resource=persistentvolumes volumeSnapshotLocation=default
time="2022-11-16T02:12:43Z" level=error msg="Error getting volume snapshotter for volume snapshot location" backup=velero/npm error="rpc error: code = Unknown desc = Error fetching OpenEBS rest client address" error.file="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:192" error.function="github.com/openebs/velero-plugin/pkg/cstor.(*Plugin).Init" logSource="pkg/backup/item_backupper.go:470" name=pvc-e383c200-6da8-416b-95c0-a6365eaa7961 namespace= persistentVolume=pvc-e383c200-6da8-416b-95c0-a6365eaa7961 resource=persistentvolumes volumeSnapshotLocation=default

Only the metadata is backed up - which is amazing, but I need the data too!
(Screenshot attached: Screen Shot 2022-11-16 at 1 15 03 pm)

I installed OpenEBS cStor from the Helm chart with these values:

# Default values for cstor-operators.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

release:
  version: "3.4.0"

# If false, openebs NDM sub-chart will not be installed
openebsNDM:
  enabled: true

rbac:
  # rbac.create: `true` if rbac resources should be created
  create: true
  # rbac.pspEnabled: `true` if PodSecurityPolicy resources should be created
  pspEnabled: false

imagePullSecrets:
# - name: "image-pull-secret"

cspcOperator:
  componentName: cspc-operator
  poolManager:
    image:
      registry:
      repository: openebs/cstor-pool-manager
      tag: 3.4.0
  cstorPool:
    image:
      registry:
      repository: openebs/cstor-pool
      tag: 3.4.0
  cstorPoolExporter:
    image:
      registry:
      repository: openebs/m-exporter
      tag: 3.4.0
  image:
    # Make sure that registry name end with a '/'.
    # For example : quay.io/ is a correct value here and quay.io is incorrect
    registry:
    repository: openebs/cspc-operator
    pullPolicy: IfNotPresent
    # Overrides the image tag whose default is the chart appVersion.
    tag: 3.4.0
  annotations: {}
  resyncInterval: "30"
  podAnnotations: {}
  podLabels: {}
  nodeSelector: {}
  tolerations: []
  resources: {}
  securityContext: {}
  baseDir: "/var/openebs"
  sparseDir: "/var/openebs/sparse"

cvcOperator:
  componentName: cvc-operator
  target:
    image:
      registry:
      repository: openebs/cstor-istgt
      tag: 3.4.0
  volumeMgmt:
    image:
      registry:
      repository: openebs/cstor-volume-manager
      tag: 3.4.0
  volumeExporter:
    image:
      registry:
      repository: openebs/m-exporter
      tag: 3.4.0
  image:
    # Make sure that registry name end with a '/'.
    # For example : quay.io/ is a correct value here and quay.io is incorrect
    registry:
    repository: openebs/cvc-operator
    pullPolicy: IfNotPresent
    # Overrides the image tag whose default is the chart appVersion.
    tag: 3.4.0
  annotations: {}
  resyncInterval: "30"
  podAnnotations: {}
  podLabels: {}
  nodeSelector: {}
  tolerations: []
  resources: {}
  securityContext: {}
  baseDir: "/var/openebs"
  logLevel: "2"

csiController:
  priorityClass:
    create: true
    name: cstor-csi-controller-critical
    value: 900000000
  componentName: "openebs-cstor-csi-controller"
  logLevel: "5"
  resizer:
    name: "csi-resizer"
    image:
      # Make sure that registry name end with a '/'.
      # For example : quay.io/ is a correct value here and quay.io is incorrect
      registry: k8s.gcr.io/
      repository: sig-storage/csi-resizer
      pullPolicy: IfNotPresent
      # Overrides the image tag whose default is the chart appVersion.
      tag: v1.2.0
  snapshotter:
    name: "csi-snapshotter"
    image:
      # Make sure that registry name end with a '/'.
      # For example : quay.io/ is a correct value here and quay.io is incorrect
      registry: k8s.gcr.io/
      repository: sig-storage/csi-snapshotter
      pullPolicy: IfNotPresent
      # Overrides the image tag whose default is the chart appVersion.
      tag: v3.0.3
  snapshotController:
    name: "snapshot-controller"
    image:
      # Make sure that registry name end with a '/'.
      # For example : quay.io/ is a correct value here and quay.io is incorrect
      registry: k8s.gcr.io/
      repository: sig-storage/snapshot-controller
      pullPolicy: IfNotPresent
      # Overrides the image tag whose default is the chart appVersion.
      tag: v3.0.3
  attacher:
    name: "csi-attacher"
    image:
      # Make sure that registry name end with a '/'.
      # For example : quay.io/ is a correct value here and quay.io is incorrect
      registry: k8s.gcr.io/
      repository: sig-storage/csi-attacher
      pullPolicy: IfNotPresent
      # Overrides the image tag whose default is the chart appVersion.
      tag: v3.1.0
  provisioner:
    name: "csi-provisioner"
    image:
      # Make sure that registry name end with a '/'.
      # For example : quay.io/ is a correct value here and quay.io is incorrect
      registry: k8s.gcr.io/
      repository: sig-storage/csi-provisioner
      pullPolicy: IfNotPresent
      # Overrides the image tag whose default is the chart appVersion.
      tag: v3.0.0
  annotations: {}
  podAnnotations: {}
  podLabels: {}
  nodeSelector: {}
  tolerations: []
  resources: {}
  securityContext: {}

cstorCSIPlugin:
  name: cstor-csi-plugin
  image:
    # Make sure that registry name end with a '/'.
    # For example : quay.io/ is a correct value here and quay.io is incorrect
    registry:
    repository: openebs/cstor-csi-driver
    pullPolicy: IfNotPresent
    # Overrides the image tag whose default is the chart appVersion.
    tag: 3.4.0
  remount: "true"

csiNode:
  priorityClass:
    create: true
    name: cstor-csi-node-critical
    value: 900001000
  componentName: "openebs-cstor-csi-node"
  driverRegistrar:
    name: "csi-node-driver-registrar"
    image:
      registry: k8s.gcr.io/
      repository: sig-storage/csi-node-driver-registrar
      pullPolicy: IfNotPresent
      # Overrides the image tag whose default is the chart appVersion.
      tag: v2.3.0
  logLevel: "5"
  updateStrategy:
    type: RollingUpdate
  annotations: {}
  podAnnotations: {}
  resources: {}
  # limits:
  #   cpu: 10m
  #   memory: 32Mi
  # requests:
  #   cpu: 10m
  #   memory: 32Mi
  ## Labels to be added to openebs-cstor-csi-node pods
  podLabels: {}
  # kubeletDir path can be configured to run on various different k8s distributions like
  # microk8s where kubelet root dir is not (/var/lib/kubelet/). For example microk8s,
  # we need to change the kubelet directory to `/var/snap/microk8s/common/var/lib/kubelet/`
  kubeletDir: "/var/lib/kubelet/"
  nodeSelector: {}
  tolerations: []
  securityContext: {}

csiDriver:
  create: true
  podInfoOnMount: true
  attachRequired: false

admissionServer:
  componentName: cstor-admission-webhook
  image:
    # Make sure that registry name end with a '/'.
    # For example : quay.io/ is a correct value here and quay.io is incorrect
    registry:
    repository: openebs/cstor-webhook
    pullPolicy: IfNotPresent
    # Overrides the image tag whose default is the chart appVersion.
    tag: 3.4.0
  failurePolicy: "Fail"
  annotations: {}
  podAnnotations: {}
  podLabels: {}
  nodeSelector: {}
  tolerations: []
  resources: {}
  securityContext: {}

serviceAccount:
  # Annotations to add to the service account
  annotations: {}
  cstorOperator:
    create: true
    name: openebs-cstor-operator
  csiController:
    # Specifies whether a service account should be created
    create: true
    name: openebs-cstor-csi-controller-sa
  csiNode:
    # Specifies whether a service account should be created
    create: true
    name: openebs-cstor-csi-node-sa

analytics:
  enabled: true
  # Specify in hours the duration after which a ping event needs to be sent.
  pingInterval: "24h"

cleanup:
  image:
    # Make sure that registry name end with a '/'.
    # For example : quay.io/ is a correct value here and quay.io is incorrect
    registry:
    repository: bitnami/kubectl
    tag:

kubectl get VolumeSnapshotClass -o yaml

apiVersion: v1
items:
- apiVersion: snapshot.storage.k8s.io/v1
  deletionPolicy: Delete
  driver: cstor.csi.openebs.io
  kind: VolumeSnapshotClass
  metadata:
    annotations:
      meta.helm.sh/release-name: openebs-cstor
      meta.helm.sh/release-namespace: openebs
      snapshot.storage.kubernetes.io/is-default-class: "true"
    creationTimestamp: "2022-11-15T11:53:35Z"
    generation: 1
    labels:
      app.kubernetes.io/managed-by: Helm
    name: csi-cstor-snapshotclass
    resourceVersion: "4242"
    uid: 57d21003-068b-4fe5-87bc-a3b4f4118db0
kind: List
metadata:
  resourceVersion: ""

Cannot restore backups of zfs-localpv PVs to encrypted ZFS pools

I can't get Velero to restore PV backups from the snapshots it creates; please help me debug this.

Backs it up like a champ

$ velero backup create --include-namespaces redis-test redis-test
Backup request "redis-test" submitted successfully.
Run `velero backup describe redis-test` or `velero backup logs redis-test` for more details.

$ velero backup describe redis-test --details
Name:         redis-test
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/source-cluster-k8s-gitversion=v1.20.2+k3s1
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=20

Phase:  Completed

Errors:    0
Warnings:  0

Namespaces:
  Included:  redis-test
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  <none>

Storage Location:  default

Velero-Native Snapshot PVs:  auto

TTL:  720h0m0s

Hooks:  <none>

Backup Format Version:  1.1.0

Started:    2021-02-05 10:16:18 +0200 EET
Completed:  2021-02-05 10:16:32 +0200 EET

Expiration:  2021-03-07 10:16:18 +0200 EET


Total items to be backed up:  25
Items backed up:              25

Resource List:
  apps/v1/ControllerRevision:
    - redis-test/redis-745597d796
  apps/v1/StatefulSet:
    - redis-test/redis
  discovery.k8s.io/v1beta1/EndpointSlice:
    - redis-test/redis-qlfmj
    - redis-test/redis-t7hsx
  v1/ConfigMap:
    - redis-test/kube-root-ca.crt
    - redis-test/redis-config
  v1/Endpoints:
    - redis-test/redis
  v1/Event:
    - redis-test/redis-0.1660cbfef6e8b1e3
    - redis-test/redis-0.1660cbff2f156fe0
    - redis-test/redis-0.1660cbff6cd7a348
    - redis-test/redis-0.1660cbff70a70c30
    - redis-test/redis-0.1660cbff77e06f00
    - redis-test/redis-storage-redis-0.1660cbfeb681af2d
    - redis-test/redis-storage-redis-0.1660cbfeba9f91cf
    - redis-test/redis-storage-redis-0.1660cbfebaec74eb
    - redis-test/redis-storage-redis-0.1660cbfece7abbb0
    - redis-test/redis.1660cbfeb6195847
    - redis-test/redis.1660cbfeb89a7981
  v1/Namespace:
    - redis-test
  v1/PersistentVolume:
    - pvc-51791429-c865-4439-8c76-387d335c8cd3
  v1/PersistentVolumeClaim:
    - redis-test/redis-storage-redis-0
  v1/Pod:
    - redis-test/redis-0
  v1/Secret:
    - redis-test/default-token-sv8cj
  v1/Service:
    - redis-test/redis
  v1/ServiceAccount:
    - redis-test/default

Velero-Native Snapshots:
  pvc-51791429-c865-4439-8c76-387d335c8cd3:
    Snapshot ID:        pvc-51791429-c865-4439-8c76-387d335c8cd3..redis-test
    Type:               zfs-localpv
    Availability Zone:
    IOPS:               <N/A>

Backups are really there, and the snapshot is a working one. I even tested it by importing it with zfs recv

$ mcli ls cube/velero-con/backups/redis-test
[2021-02-05 10:16:32 EET]    29B redis-test-csi-volumesnapshotcontents.json.gz
[2021-02-05 10:16:32 EET]    29B redis-test-csi-volumesnapshots.json.gz
[2021-02-05 10:16:32 EET] 3.8KiB redis-test-logs.gz
[2021-02-05 10:16:32 EET]    29B redis-test-podvolumebackups.json.gz
[2021-02-05 10:16:32 EET]   403B redis-test-resource-list.json.gz
[2021-02-05 10:16:32 EET]   240B redis-test-volumesnapshots.json.gz
[2021-02-05 10:16:32 EET]  10KiB redis-test.tar.gz
[2021-02-05 10:16:32 EET] 2.1KiB velero-backup.json
[2021-02-05 10:16:26 EET]  46KiB zfs-pvc-51791429-c865-4439-8c76-387d335c8cd3-redis-test
[2021-02-05 10:16:20 EET]   806B zfs-pvc-51791429-c865-4439-8c76-387d335c8cd3-redis-test.zfsvol

$ mcli cp cube/velero-con/backups/redis-test/zfs-pvc-51791429-c865-4439-8c76-387d335c8cd3-redis-test /tmp/ 
...-8c76-387d335c8cd3-redis-test:  46.20 KiB / 46.20 KiB ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ 1.18 MiB/s 0s
$ file /tmp/zfs-pvc-51791429-c865-4439-8c76-387d335c8cd3-redis-test
/tmp/zfs-pvc-51791429-c865-4439-8c76-387d335c8cd3-redis-test: ZFS shapshot (little-endian machine), version 33554449, type: ZFS, destination GUID: FA 8C 94 12 7F 4D 8D 67, name: 'data/k3s/pv/pvc-51791429-c865-4439-8c76-387d335c8cd3@redis-test'

Now I delete the namespace, wait a bit, and make sure that the PV and the ZFS volume are gone.

$ kubectl delete namespace redis-test
namespace "redis-test" deleted

When I restore it, everything is ok apart from the PV

$ velero restore create --from-backup redis-test
Restore request "redis-test-20210205104711" submitted successfully.
Run `velero restore describe redis-test-20210205104711` or `velero restore logs redis-test-20210205104711` for more details.

$ velero restore describe redis-test-20210205104711                                                                            
Name:         redis-test-20210205104711
Namespace:    velero
Labels:       <none>
Annotations:  <none>

Phase:  PartiallyFailed (run 'velero restore logs redis-test-20210205104711' for more information)

Started:    2021-02-05 10:47:12 +0200 EET
Completed:  2021-02-05 10:47:24 +0200 EET

Errors:
  Velero:     <none>
  Cluster:  error executing PVAction for persistentvolumes/pvc-51791429-c865-4439-8c76-387d335c8cd3: rpc error: code = Unknown desc = zfs: error in restoring pvc-51791429-c865-4439-8c76-387d335c8cd3.redis-test, status:{Failed}
  Namespaces: <none>

Backup:  redis-test

Namespaces:
  Included:  all namespaces found in the backup
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io
  Cluster-scoped:  auto

Namespace mappings:  <none>

Label selector:  <none>

Restore PVs:  auto

Judging by the logs, velero downloads the snap successfully, but fails to "CreateVolumeFromSnapshot"

$ velero restore logs redis-test-20210205104711
time="2021-02-05T08:47:12Z" level=info msg="starting restore" logSource="pkg/controller/restore_controller.go:467" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:12Z" level=info msg="Starting restore of backup velero/redis-test" logSource="pkg/restore/restore.go:363" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:12Z" level=info msg="Restoring cluster level resource 'persistentvolumes'" logSource="pkg/restore/restore.go:726" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:12Z" level=info msg="Getting client for /v1, Kind=PersistentVolume" logSource="pkg/restore/restore.go:768" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:12Z" level=info msg="Restoring persistent volume from snapshot." logSource="pkg/restore/restore.go:922" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:12Z" level=info msg="zfs: Initializing velero plugin for ZFS-LocalPV" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/zfs/snapshot/snap.go:36" pluginName=velero-blockstore-openebs restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:12Z" level=info msg="Reading from {backups/redis-test/zfs-pvc-51791429-c865-4439-8c76-387d335c8cd3-redis-test.zfsvol} with provider{aws} to bucket{velero-con}" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/clouduploader/operation.go:138" pluginName=velero-blockstore-openebs restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:12Z" level=info msg="successfully read object{backups/redis-test/zfs-pvc-51791429-c865-4439-8c76-387d335c8cd3-redis-test.zfsvol} to {aws}" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/clouduploader/operation.go:146" pluginName=velero-blockstore-openebs restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:17Z" level=info msg="Client{1} operation completed.. completed count{0}" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/clouduploader/server_utils.go:178" pluginName=velero-blockstore-openebs restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:22Z" level=error msg="zfs: restore failed vol pvc-51791429-c865-4439-8c76-387d335c8cd3 snap redis-test err: zfs: error in restoring pvc-51791429-c865-4439-8c76-387d335c8cd3.redis-test, status:{Failed}" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/zfs/plugin/restore.go:280" pluginName=velero-blockstore-openebs restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:22Z" level=info msg="Transfer done.. closing the server" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/clouduploader/server.go:311" pluginName=velero-blockstore-openebs restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:22Z" level=info msg="successfully restored object{backups/redis-test/zfs-pvc-51791429-c865-4439-8c76-387d335c8cd3-redis-test} from {aws}" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/clouduploader/operation.go:106" pluginName=velero-blockstore-openebs restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:22Z" level=error msg="zfs: error doRestore returning snap pvc-51791429-c865-4439-8c76-387d335c8cd3..redis-test err zfs: error in restoring pvc-51791429-c865-4439-8c76-387d335c8cd3.redis-test, status:{Failed}" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/zfs/plugin/restore.go:359" pluginName=velero-blockstore-openebs restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:22Z" level=error msg="zfs: error CreateVolumeFromSnapshot returning snap pvc-51791429-c865-4439-8c76-387d335c8cd3..redis-test err zfs: error in restoring pvc-51791429-c865-4439-8c76-387d335c8cd3.redis-test, status:{Failed}" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/zfs/plugin/zfs.go:133" pluginName=velero-blockstore-openebs restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Restoring resource 'persistentvolumeclaims' into namespace 'redis-test'" logSource="pkg/restore/restore.go:724" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Getting client for /v1, Kind=PersistentVolumeClaim" logSource="pkg/restore/restore.go:768" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Executing item action for persistentvolumeclaims" logSource="pkg/restore/restore.go:1002" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Executing AddPVFromPVCAction" cmd=/velero logSource="pkg/restore/add_pv_from_pvc_action.go:44" pluginName=velero restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Adding PV pvc-51791429-c865-4439-8c76-387d335c8cd3 as an additional item to restore" cmd=/velero logSource="pkg/restore/add_pv_from_pvc_action.go:66" pluginName=velero restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Skipping persistentvolumes/pvc-51791429-c865-4439-8c76-387d335c8cd3 because it's already been restored." logSource="pkg/restore/restore.go:866" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Executing item action for persistentvolumeclaims" logSource="pkg/restore/restore.go:1002" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Executing ChangePVCNodeSelectorAction" cmd=/velero logSource="pkg/restore/change_pvc_node_selector.go:65" pluginName=velero restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Done executing ChangePVCNodeSelectorAction" cmd=/velero logSource="pkg/restore/change_pvc_node_selector.go:128" pluginName=velero restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Executing item action for persistentvolumeclaims" logSource="pkg/restore/restore.go:1002" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Executing ChangeStorageClassAction" cmd=/velero logSource="pkg/restore/change_storageclass_action.go:65" pluginName=velero restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Done executing ChangeStorageClassAction" cmd=/velero logSource="pkg/restore/change_storageclass_action.go:76" pluginName=velero restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Attempting to restore PersistentVolumeClaim: redis-storage-redis-0" logSource="pkg/restore/restore.go:1107" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Restoring resource 'secrets' into namespace 'redis-test'" logSource="pkg/restore/restore.go:724" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Getting client for /v1, Kind=Secret" logSource="pkg/restore/restore.go:768" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Attempting to restore Secret: default-token-sv8cj" logSource="pkg/restore/restore.go:1107" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Restoring resource 'configmaps' into namespace 'redis-test'" logSource="pkg/restore/restore.go:724" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Getting client for /v1, Kind=ConfigMap" logSource="pkg/restore/restore.go:768" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Attempting to restore ConfigMap: kube-root-ca.crt" logSource="pkg/restore/restore.go:1107" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Restore of ConfigMap, kube-root-ca.crt skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1164" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Attempting to restore ConfigMap: redis-config" logSource="pkg/restore/restore.go:1107" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Restoring resource 'serviceaccounts' into namespace 'redis-test'" logSource="pkg/restore/restore.go:724" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Getting client for /v1, Kind=ServiceAccount" logSource="pkg/restore/restore.go:768" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Executing item action for serviceaccounts" logSource="pkg/restore/restore.go:1002" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Executing ServiceAccountAction" cmd=/velero logSource="pkg/restore/service_account_action.go:47" pluginName=velero restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Done executing ServiceAccountAction" cmd=/velero logSource="pkg/restore/service_account_action.go:78" pluginName=velero restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Attempting to restore ServiceAccount: default" logSource="pkg/restore/restore.go:1107" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Restoring resource 'pods' into namespace 'redis-test'" logSource="pkg/restore/restore.go:724" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Getting client for /v1, Kind=Pod" logSource="pkg/restore/restore.go:768" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:1002" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Executing AddPVCFromPodAction" cmd=/velero logSource="pkg/restore/add_pvc_from_pod_action.go:44" pluginName=velero restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Adding PVC redis-test/redis-storage-redis-0 as an additional item to restore" cmd=/velero logSource="pkg/restore/add_pvc_from_pod_action.go:58" pluginName=velero restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Skipping persistentvolumeclaims/redis-test/redis-storage-redis-0 because it's already been restored." logSource="pkg/restore/restore.go:866" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:1002" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Executing InitRestoreHookPodAction" cmd=/velero logSource="pkg/restore/init_restorehook_pod_action.go:49" pluginName=velero restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Pod redis-test/redis-0 has no init.hook.restore.velero.io/container-image annotation, no initRestoreHook in annotation" cmd=/velero logSource="internal/hook/item_hook_handler.go:350" pluginName=velero restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Handling InitRestoreHooks from RestoreSpec" cmd=/velero logSource="internal/hook/item_hook_handler.go:138" pluginName=velero restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Returning pod redis-test/redis-0 with 0 init container(s)" cmd=/velero logSource="internal/hook/item_hook_handler.go:157" pluginName=velero restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:1002" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Returning from InitRestoreHookPodAction" cmd=/velero logSource="pkg/restore/init_restorehook_pod_action.go:57" pluginName=velero restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:1002" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Executing ResticRestoreAction" cmd=/velero logSource="pkg/restore/restic_restore_action.go:71" pluginName=velero restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Done executing ResticRestoreAction" cmd=/velero logSource="pkg/restore/restic_restore_action.go:94" pluginName=velero restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Attempting to restore Pod: redis-0" logSource="pkg/restore/restore.go:1107" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Restoring resource 'controllerrevisions.apps' into namespace 'redis-test'" logSource="pkg/restore/restore.go:724" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Getting client for apps/v1, Kind=ControllerRevision" logSource="pkg/restore/restore.go:768" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Attempting to restore ControllerRevision: redis-745597d796" logSource="pkg/restore/restore.go:1107" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Restoring resource 'endpoints' into namespace 'redis-test'" logSource="pkg/restore/restore.go:724" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Getting client for /v1, Kind=Endpoints" logSource="pkg/restore/restore.go:768" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Attempting to restore Endpoints: redis" logSource="pkg/restore/restore.go:1107" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Restoring resource 'endpointslices.discovery.k8s.io' into namespace 'redis-test'" logSource="pkg/restore/restore.go:724" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Getting client for discovery.k8s.io/v1beta1, Kind=EndpointSlice" logSource="pkg/restore/restore.go:768" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Attempting to restore EndpointSlice: redis-qlfmj" logSource="pkg/restore/restore.go:1107" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Attempting to restore EndpointSlice: redis-t7hsx" logSource="pkg/restore/restore.go:1107" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Skipping restore of resource because the restore spec excludes it" logSource="pkg/restore/restore.go:416" resource=events restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Restoring resource 'services' into namespace 'redis-test'" logSource="pkg/restore/restore.go:724" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Getting client for /v1, Kind=Service" logSource="pkg/restore/restore.go:768" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:1002" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Attempting to restore Service: redis" logSource="pkg/restore/restore.go:1107" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Restoring resource 'statefulsets.apps' into namespace 'redis-test'" logSource="pkg/restore/restore.go:724" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Getting client for apps/v1, Kind=StatefulSet" logSource="pkg/restore/restore.go:768" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Attempting to restore StatefulSet: redis" logSource="pkg/restore/restore.go:1107" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Waiting for all restic restores to complete" logSource="pkg/restore/restore.go:488" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Done waiting for all restic restores to complete" logSource="pkg/restore/restore.go:504" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Waiting for all post-restore-exec hooks to complete" logSource="pkg/restore/restore.go:508" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="Done waiting for all post-restore exec hooks to complete" logSource="pkg/restore/restore.go:516" restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:23Z" level=info msg="restore completed" logSource="pkg/controller/restore_controller.go:482" restore=velero/redis-test-20210205104711

$ kubectl logs -n velero velero-6d56d7bc6-zlcx8 | grep -B 3 error
--
time="2021-02-05T08:47:12Z" level=info msg="Reading from {backups/redis-test/zfs-pvc-51791429-c865-4439-8c76-387d335c8cd3-redis-test.zfsvol} with provider{aws} to bucket{velero-con}" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/clouduploader/operation.go:138" pluginName=velero-blockstore-openebs restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:12Z" level=info msg="successfully read object{backups/redis-test/zfs-pvc-51791429-c865-4439-8c76-387d335c8cd3-redis-test.zfsvol} to {aws}" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/clouduploader/operation.go:146" pluginName=velero-blockstore-openebs restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:17Z" level=info msg="Client{1} operation completed.. completed count{0}" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/clouduploader/server_utils.go:178" pluginName=velero-blockstore-openebs restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:22Z" level=error msg="zfs: restore failed vol pvc-51791429-c865-4439-8c76-387d335c8cd3 snap redis-test err: zfs: error in restoring pvc-51791429-c865-4439-8c76-387d335c8cd3.redis-test, status:{Failed}" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/zfs/plugin/restore.go:280" pluginName=velero-blockstore-openebs restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:22Z" level=info msg="Transfer done.. closing the server" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/clouduploader/server.go:311" pluginName=velero-blockstore-openebs restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:22Z" level=info msg="successfully restored object{backups/redis-test/zfs-pvc-51791429-c865-4439-8c76-387d335c8cd3-redis-test} from {aws}" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/clouduploader/operation.go:106" pluginName=velero-blockstore-openebs restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:22Z" level=error msg="zfs: error doRestore returning snap pvc-51791429-c865-4439-8c76-387d335c8cd3..redis-test err zfs: error in restoring pvc-51791429-c865-4439-8c76-387d335c8cd3.redis-test, status:{Failed}" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/zfs/plugin/restore.go:359" pluginName=velero-blockstore-openebs restore=velero/redis-test-20210205104711
time="2021-02-05T08:47:22Z" level=error msg="zfs: error CreateVolumeFromSnapshot returning snap pvc-51791429-c865-4439-8c76-387d335c8cd3..redis-test err zfs: error in restoring pvc-51791429-c865-4439-8c76-387d335c8cd3.redis-test, status:{Failed}" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/zfs/plugin/zfs.go:133" pluginName=velero-blockstore-openebs restore=velero/redis-test-20210205104711

Environment:

  • Velero version (use velero version):
Client:
        Version: v1.5.3
        Git commit: 123109a3bcac11dbb6783d2758207bac0d0817cb
Server:
        Version: v1.5.3
  • Velero features (use velero client config get features):
    features: <NOT SET>
  • Velero-plugin version
    openebs/velero-plugin:2.5.0
  • OpenEBS version
    openebs.io/version: 1.3.0
  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"archive", BuildDate:"2020-11-25T13:19:56Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2+k3s1", GitCommit:"1d4adb0301b9a63ceec8cabb11b309e061f43d5f", GitTreeState:"clean", BuildDate:"2021-01-14T23:52:37Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes installer & version:
k3s version v1.20.2+k3s1 (1d4adb03)
go version go1.15.5
  • Cloud provider or hardware configuration:
    Three KVM VMs (4 vCPU / 8 GB RAM / 190 GB ZFS pool each) as k3s nodes in multi-master mode.

  • OS (e.g. from /etc/os-release):
    PRETTY_NAME="Ubuntu 20.04.2 LTS"

Automate manual steps to set target-ip

Describe the problem/challenge you have

After restoring a backup you need to update the target-ip of every restored volume. This can be time-consuming when there are a lot of PVCs, and it is easy to make a mistake.

Describe the solution you'd like

It would be perfect if these steps could be included in the restore process, but I'm not sure whether that is possible. Otherwise, it would be nice if there were some kind of script that could do this automatically.
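For illustration only, here is a rough sketch of what such a script might look like, assuming the documented manual step of setting io.openebs:targetip on each pool replica after a restore. The namespace, label selectors, and container name below are assumptions and may need adjusting for a given install:

#!/bin/sh
# Hypothetical helper: set the cStor target IP for one restored volume.
# Assumes the OpenEBS namespace is "openebs", the target service carries the
# label openebs.io/persistent-volume=<pv-name>, and pool pods carry app=cstor-pool.
PV="$1"        # restored PV name (placeholder)
NS="openebs"

# Cluster IP of the volume's target service
TARGET_IP=$(kubectl get svc -n "$NS" -l "openebs.io/persistent-volume=$PV" \
  -o jsonpath='{.items[0].spec.clusterIP}')

# Apply it to every ZFS dataset of this volume inside the pool pods
for pod in $(kubectl get pods -n "$NS" -l app=cstor-pool -o name); do
  kubectl exec -n "$NS" "$pod" -c cstor-pool -- sh -c \
    "zfs list -H -o name | grep $PV | xargs -r -n1 zfs set io.openebs:targetip=$TARGET_IP"
done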

Unable to install velero plugin

Hi,
I have set up OpenEBS as per the documentation. Now I am trying to install the velero-plugin as per the documentation, using the following command:
velero install \
  --provider aws \
  --plugins openebs/velero-plugin:ci \
  --bucket busybox \
  --secret-file secret-file \
  --use-volume-snapshots=false \
  --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://<ipaddress:port>,publicUrl=http://<ipaddress:port>

Getting the following error:
level=info msg="Checking that all backup storage locations are valid" logSource="pkg/cmd/server/server.go:413" An error occurred: some backup storage locations are invalid: error getting backup store for location "default": unable to locate ObjectStore plugin named velero.io/openebs

Velero version: 1.2
OpenEBS: 1.6
OS: Ubuntu 18.04

Please help.

[Question] Creating encrypted backups from encrypted ZFS pools

What steps did you take and what happened:

I'm using OpenEBS ZFS-localPV

  1. Added a new zpool with sudo zpool create -o ashift=12 -o feature@encryption=enabled -O encryption=on -O keylocation=file:///root/zfs-encrypt.key -O keyformat=raw encrypted-pool `sudo losetup -f /tmp/zfs-encrypted.img --show`
  2. Created a new StorageClass to create PVCs for this pool
  3. Setup a new PVC from the storage class and wrote some plain data into it
  4. Ran a Velero backup velero backup create encrypted-test --snapshot-volumes --include-namespaces=apps --volume-snapshot-locations=default --storage-location=default
  5. The backup completed successfully and the data is found on my S3 storage
  6. Downloaded the zfs-pvc-0828badb-1386-4869-a475-00f9795d262d-encrypted-test file from the S3 bucket (UUID matches my PVC on the cluster)
  7. Ran strings zfs-pvc-0828badb-1386-4869-a475-00f9795d262d-encrypted-test | grep find_me and found the contents of the file on the encrypted PVC

What did you expect to happen:

That the strings command would not print the contents of the file backed up from the encrypted pool.

The output of the following commands will help us better understand what's going on:

$ kubectl get storageclass/openebs-zfs-encrypted -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfs-encrypted
  uid: 6a79fea8-7bcc-4ea0-a609-162b0489a25c
parameters:
  dedup: "off"
  fstype: zfs
  poolname: encrypted-pool
provisioner: zfs.csi.openebs.io
reclaimPolicy: Delete
volumeBindingMode: Immediate

$ zfs get -p encryption,keystatus encrypted-pool
NAME            PROPERTY    VALUE        SOURCE
encrypted-pool  encryption  aes-256-gcm  -
encrypted-pool  keystatus   available    -

$ zfs get -p encryption,keystatus encrypted-pool/pvc-0828badb-1386-4869-a475-00f9795d262d@encrypted-test
NAME                                                                    PROPERTY    VALUE        SOURCE
encrypted-pool/pvc-0828badb-1386-4869-a475-00f9795d262d@encrypted-test  encryption  aes-256-gcm  -
encrypted-pool/pvc-0828badb-1386-4869-a475-00f9795d262d@encrypted-test  keystatus   available    -

$ kubectl -n apps get pvc/encrypted-storage
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS            AGE
encrypted-storage   Bound    pvc-0828badb-1386-4869-a475-00f9795d262d   1Gi        RWO            openebs-zfs-encrypted   53m

Anything else you would like to add:

Since there is no specific documentation on this subject in either this repository or the driver's repository, I'm not sure whether I have simply misunderstood or misconfigured something.

What I'm trying to do is have encrypted ZFS filesystems backing my PVCs on the actual disk AND have the backup encrypted in the cloud as well, meaning it should not be possible to (fully) restore a backup without the encryption key from the host (specified when creating the zpool), and the data on my PVCs is encrypted at rest.
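As a point of reference, the ZFS-level distinction that matters here can be shown with the dataset and marker string from this report: a plain zfs send of an encrypted dataset decrypts the data into the stream, while a raw send (-w) keeps the blocks encrypted, so the stream cannot be read without the key. This is only a sketch of ZFS behaviour on the node, not a statement about what the plugin does internally:

# Plain send: data from the encrypted dataset is decrypted into the stream
zfs send encrypted-pool/pvc-0828badb-1386-4869-a475-00f9795d262d@encrypted-test \
  | strings | grep -c find_me       # marker shows up

# Raw send (-w): blocks stay encrypted; unreadable without the key
zfs send -w encrypted-pool/pvc-0828badb-1386-4869-a475-00f9795d262d@encrypted-test \
  | strings | grep -c find_me       # marker does not show up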

Environment:

  • Velero version (use velero version): 1.9.0
  • Velero features (use velero client config get features): NOT SET
  • Velero-plugin version: 3.3.0
  • OpenEBS version: 2.1.0
  • Kubernetes version (use kubectl version): v1.23.6
  • Kubernetes installer & version: v1.24.3+k3s1
  • Cloud provider or hardware configuration: Raspberry Pi 4
  • OS (e.g. from /etc/os-release): Ubuntu 20

Backup partially fails: error taking snapshot of volume: error reading from server: EOF

What steps did you take and what happened:
I followed the steps from the Velero documentation and this repo's documentation to create a remote snapshot location pointing to a MinIO instance running on a machine outside the cluster.
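For context, a remote snapshot location for MinIO generally looks like the sketch below; the bucket name, prefix, and URL are placeholders, and the config keys mirror the ones used elsewhere for the openebs.io/cstor-blockstore provider:

cat <<'EOF' | kubectl apply -f -
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/cstor-blockstore
  config:
    bucket: velero                  # placeholder bucket
    prefix: cstor
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://<minio-host>:9000
EOF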

I deployed a couple of services and tried to take a backup, but unfortunately I am facing an error. The backup reports as PartiallyFailed, and the logs show that certain volumes couldn't be backed up, with the following error:

time="2022-10-13T12:42:00Z" level=warning msg="Epoll wait failed : interrupted system call" backup=velero/test3 cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/clouduploader/server.go
:302" pluginName=velero-blockstore-openebs

<... can be 50+ similar log entries and then ...>

time="2022-10-13T12:42:00Z" level=info msg="1 errors encountered backup up item" backup=velero/test3 logSource="pkg/backup/backup.go:413" name=registry-server-7d5466494d-84fj5
time="2022-10-13T12:42:00Z" level=error msg="Error backing up item" backup=velero/test3 error="error taking snapshot of volume: rpc error: code = Unavailable desc = error reading from server: EOF" logSource="pkg/backup/backup.go:4
17" name=registry-server-7d5466494d-84fj5

What did you expect to happen:
I expect the backup to be successful

The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other Pastebin is fine.)

  • kubectl logs deployment/velero -n velero -> here
  • kubectl logs deployment/maya-apiserver -n openebs -> N/A
  • velero backup describe <backupname>
    Name:         test3
    Namespace:    velero
    Labels:       velero.io/storage-location=local-backup
    Annotations:  velero.io/source-cluster-k8s-gitversion=v1.25.2
                  velero.io/source-cluster-k8s-major-version=1
                  velero.io/source-cluster-k8s-minor-version=25
    
    Phase:  PartiallyFailed (run `velero backup logs test3` for more information)
    
    Errors:    4
    Warnings:  165
    
    Namespaces:
      Included:  *
      Excluded:  <none>
    
    Resources:
      Included:        *
      Excluded:        <none>
      Cluster-scoped:  auto
    
    Label selector:  <none>
    
    Storage Location:  local-backup
    
    Velero-Native Snapshot PVs:  auto
    
    TTL:  720h0m0s
    
    Hooks:  <none>
    
    Backup Format Version:  1.1.0
    
    Started:    2022-10-13 13:41:29 +0100 BST
    Completed:  2022-10-13 13:42:38 +0100 BST
    
    Expiration:  2022-11-12 12:41:28 +0000 GMT
    
    Total items to be backed up:  767
    Items backed up:              767
    
    Velero-Native Snapshots:  1 of 5 snapshots completed successfully (specify --details for more information)
    
    CSI Volume Snapshots: <none included>
    
  • velero backup logs <backupname> -> here

Anything else you would like to add:
Currently migrating a 25-node cluster to k8s. This is the initial setup/test before the definitive migration: a self-hosted 5-node microk8s cluster running cStor.

Environment:

  • Velero version (use velero version):
    Client:
            Version: v1.9.2
            Git commit: -
    Server:
            Version: v1.9.2
    
  • Velero features (use velero client config get features): features: EnableCSI
  • Velero-plugin version: 3.3.0
  • OpenEBS version: 3.3.0
  • Kubernetes version (use kubectl version): 1.25
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration: 5 nodes, home-built from consumer parts bought over the years, so the nodes differ slightly
    • Intel i7 or i9 from 8th to 10th Gen
    • 16GB to 32GB
    • 2 nodes have 3x 1TB SSD for cStor
  • OS (e.g. from /etc/os-release):
PRETTY_NAME="Ubuntu 22.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.1 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

Backup of large volume fails

I have a volume of about 150 GB, and its backup fails.

What steps did you take and what happened:

Using this VolumeSnapshotLocation:

apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/cstor-blockstore
  config:
    bucket: velero
    prefix: cstor
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://10.0.1.221:9000
    restoreAllIncrementalSnapshots: "true"
    autoSetTargetIP: "true"
velero create backup backup-test-cstor-2

The backup is created, but the upload fails with: TotalPartsExceeded: exceeded total allowed configured MaxUploadParts (10000).

velero backup logs backup-test-cstor-2
time="2023-05-19T05:02:27Z" level=warning msg="Failed to close file interface : blob (code=Unknown): MultipartUpload: upload multipart failed\n\tupload id: YjQ1ZWE0ODAtN2Q5MS00ZDkyLTg5NDgtMjU5MDZiY2YzMjE0LmJhMDkzODUxLWEzM2ItNDRjYi1hOTdjLWVlMDMxMGEyNTVhNQ\ncaused by: TotalPartsExceeded: exceeded total allowed configured MaxUploadParts (10000). Adjust PartSize to fit in this limit" backup=velero/backup-test-cstor-3 cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/clouduploader/conn.go:322" pluginName=velero-blockstore-openebs
time="2023-05-19T05:02:37Z" level=error msg="Error backing up item" backup=velero/backup-test-cstor-3 error="error taking snapshot of volume: rpc error: code = Unknown desc = Failed to upload snapshot, status:{Failed}" logSource="pkg/backup/backup.go:435" name=influxdb-influxdb2-0

This is strange since I thought the multiPartChunkSize was calculated from the file size.
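Back-of-the-envelope arithmetic for the limit in the error above (taking the 10,000-part cap straight from the message): a ~150 GiB stream needs parts of at least about 16 MiB, so a fixed multiPartChunkSize of 64Mi stays comfortably under the cap and covers streams up to roughly 625 GiB:

# minimum part size for a 150 GiB stream under a 10,000-part cap
echo $(( 150 * 1024 / 10000 )) MiB     # => 15 MiB, so 16Mi or larger is needed
# maximum stream size with multiPartChunkSize: 64Mi
echo $(( 64 * 10000 / 1024 )) GiB      # => 625 GiB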

Then I tried defining the multiPartChunkSize.

apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/cstor-blockstore
  config:
    bucket: velero
    prefix: cstor
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://10.0.1.221:9000
    multiPartChunkSize: 64Mi
    restoreAllIncrementalSnapshots: "true"
    autoSetTargetIP: "true"
velero create backup backup-test-cstor-3

But with this the backup just fails with another error that is not very informative.

velero backup logs backup-test-cstor-3
time="2023-05-18T09:20:03Z" level=info msg="1 errors encountered backup up item" backup=velero/backup-test-cstor-2 logSource="pkg/backup/backup.go:431" name=influxdb-influxdb2-0
time="2023-05-18T09:20:03Z" level=error msg="Error backing up item" backup=velero/backup-test-cstor-2 error="error taking snapshot of volume: rpc error: code = Unavailable desc = error reading from server: EOF" logSource="pkg/backup/backup.go:435" name=influxdb-influxdb2-0

Is there anything that I'm missing to make the backup of large volumes work?

What did you expect to happen:
Backup to succeed and upload successfully.

Anything else you would like to add:
I'm also receiving a lot of these warnings and I'm not sure what they are or how to fix them.

time="2023-05-18T09:20:03Z" level=warning msg="Epoll wait failed : interrupted system call" backup=velero/backup-test-cstor-2 cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/clouduploader/server.go:302" pluginName=velero-blockstore-openebs

Environment:

  • Velero version (use velero version):
Client:
	Version: v1.11.0
	Git commit: -
Server:
	Version: v1.11.0
  • Velero features (use velero client config get features):
features: <NOT SET>
  • Velero-plugin version
v3.4.0
  • OpenEBS version
NAME   	NAMESPACE	REVISION	UPDATED                             	STATUS  	CHART        	APP VERSION
openebs	openebs  	3       	2023-03-01 13:21:27.563122 +0000 UTC	deployed	openebs-3.4.1	3.4.0 
  • Kubernetes version (use kubectl version):
Client Version: v1.27.1
Kustomize Version: v5.0.1
Server Version: v1.26.4
  • Kubernetes installer & version:
MicroK8s v1.26.4 revision 5219
  • OS (e.g. from /etc/os-release):
PRETTY_NAME="Ubuntu 22.04.2 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.2 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

Optimize restore for scheduled backups

Describe the problem/challenge you have
To restore scheduled backups, the user needs to create a restore for every backup in the schedule. If the schedule has a Restic base backup for any volume, the restore also requires additional configuration to avoid restoring the Restic snapshots.

Describe the solution you'd like
A single restore of a scheduled backup should restore all the required snapshots, from the base snapshot up to the targeted backup snapshot. With this approach the user can restore Restic/cStor snapshots with a single restore.
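For illustration, the requested workflow could look like the sketch below, assuming a hypothetical schedule name and a snapshot location that already sets restoreAllIncrementalSnapshots; this describes the desired behaviour, not something the plugin currently does in one step:

# Desired: a single restore that replays snapshots from the base up to the chosen backup
velero restore create --from-schedule daily-cstor-schedule    # schedule name is hypothetical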

velero-plugin 3.5.0 failed to restore from minio: PVC{nginx-example/nginx-logs} is not bounded!

What steps did you take and what happened:
[A clear and concise description of what the bug is, and what commands you ran.]

STATUS=PartiallyFailed

What did you expect to happen:
STATUS=Completed

The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other Pastebin is fine.)

  • kubectl logs deployment/velero -n velero
  • kubectl logs deployment/maya-apiserver -n openebs
  • velero backup describe <backupname> or kubectl get backup/<backupname> -n velero -o yaml
  • velero backup logs <backupname>
  • velero restore describe <restorename> or kubectl get restore/<restorename> -n velero -o yaml
  • velero restore logs <restorename>

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    component: velero
  name: velero
  namespace: velero
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    component: velero
  name: velero
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: velero
    namespace: velero
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    component: velero
  name: velero
  namespace: velero
spec:
  replicas: 1
  selector:
    matchLabels:
      deploy: velero
  template:
    metadata:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: "8085"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        component: velero
        deploy: velero
    spec:
      containers:
      - args:
        - server
        command:
        - /velero
        env:
        - name: AWS_SHARED_CREDENTIALS_FILE
          value: /credentials/cloud
        - name: VELERO_SCRATCH_DIR
          value: /scratch
        - name: VELERO_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: LD_LIBRARY_PATH
          value: /plugins
        image: velero/velero:v1.11.1
        imagePullPolicy: Always
        name: velero
        ports:
        - containerPort: 8085
          name: metrics
          protocol: TCP
        resources:
          limits:
            cpu: "1"
            memory: 256Mi
          requests:
            cpu: 500m
            memory: 128Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /plugins
          name: plugins
        - mountPath: /credentials
          name: cloud-credential
        - mountPath: /scratch
          name: scratch
      dnsPolicy: ClusterFirst
      initContainers:
      - image: velero/velero-plugin-for-aws:v1.7.1
        imagePullPolicy: Always
        name: velero-plugin-for-aws
        resources: {}
        volumeMounts:
        - mountPath: /target
          name: plugins
      - image: openebs/velero-plugin:3.5.0
        imagePullPolicy: IfNotPresent
        name: openebs-velero-plugin
        resources: {}
        volumeMounts:
        - mountPath: /target
          name: plugins
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      volumes:
      - emptyDir: {}
        name: plugins
      - name: cloud-credential
        secret:
          defaultMode: 420
          secretName: cloud-credential
      - emptyDir: {}
        name: scratch

disaster simulation

kubectl delete ns nginx-example

check backups

kubectl exec -n velero $(kubectl get po -n velero -l component=velero -oname | head -n 1)  -it -- /velero backup get

output

NAME                       STATUS            ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION      SELECTOR
inc-nginx-backup-with-pv   Completed         0        0          2023-09-21 08:42:03 +0000 UTC   29d       default-local-minio   <none>
nginx-backup-with-pv       Completed         0        1          2023-09-21 08:30:45 +0000 UTC   29d       default-local-minio   <none>

restore

kubectl exec -n velero $(kubectl get po -n velero -l component=velero -oname | head -n 1)  -it -- /velero restore create --from-backup inc-nginx-backup-with-pv

check result

kubectl exec -n velero $(kubectl get po -n velero -l component=velero -oname | head -n 1)  -it -- /velero restore get  inc-nginx-backup-with-pv-20230921085147

output

NAME                                      BACKUP                     STATUS            STARTED                         COMPLETED                       ERRORS   WARNINGS   CREATED                         SELECTOR
inc-nginx-backup-with-pv-20230921085147   inc-nginx-backup-with-pv   PartiallyFailed   2023-09-21 08:51:47 +0000 UTC   2023-09-21 09:00:08 +0000 UTC   1        2          2023-09-21 08:51:47 +0000 UTC   <none>
kubectl exec -n velero $(kubectl get po -n velero -l component=velero -oname | head -n 1)  -it -- /velero restore logs inc-nginx-backup-with-pv-20230921085147

output

time="2023-09-21T08:51:47Z" level=info msg="starting restore" logSource="pkg/controller/restore_controller.go:458" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Starting restore of backup velero/inc-nginx-backup-with-pv" logSource="pkg/restore/restore.go:396" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Resource 'persistentvolumes' will be restored at cluster scope" logSource="pkg/restore/restore.go:2030" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Resource 'persistentvolumeclaims' will be restored into namespace 'nginx-example'" logSource="pkg/restore/restore.go:2028" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Resource 'serviceaccounts' will be restored into namespace 'nginx-example'" logSource="pkg/restore/restore.go:2028" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Resource 'configmaps' will be restored into namespace 'nginx-example'" logSource="pkg/restore/restore.go:2028" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Resource 'pods' will be restored into namespace 'nginx-example'" logSource="pkg/restore/restore.go:2028" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Resource 'replicasets.apps' will be restored into namespace 'nginx-example'" logSource="pkg/restore/restore.go:2028" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Skipping restore of resource because it cannot be resolved via discovery" logSource="pkg/restore/restore.go:1941" resource=clusterclasses.cluster.x-k8s.io restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Resource 'endpoints' will be restored into namespace 'nginx-example'" logSource="pkg/restore/restore.go:2028" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Resource 'services' will be restored into namespace 'nginx-example'" logSource="pkg/restore/restore.go:2028" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Resource 'deployments.apps' will be restored into namespace 'nginx-example'" logSource="pkg/restore/restore.go:2028" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Resource 'endpointslices.discovery.k8s.io' will be restored into namespace 'nginx-example'" logSource="pkg/restore/restore.go:2028" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Skipping restore of resource because the restore spec excludes it" logSource="pkg/restore/restore.go:1958" resource=events restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Skipping restore of resource because it cannot be resolved via discovery" logSource="pkg/restore/restore.go:1941" resource=clusterbootstraps.run.tanzu.vmware.com restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Skipping restore of resource because it cannot be resolved via discovery" logSource="pkg/restore/restore.go:1941" resource=clusters.cluster.x-k8s.io restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Skipping restore of resource because it cannot be resolved via discovery" logSource="pkg/restore/restore.go:1941" resource=clusterresourcesets.addons.cluster.x-k8s.io restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Getting client for /v1, Kind=PersistentVolume" logSource="pkg/restore/restore.go:918" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Restoring persistent volume from snapshot." logSource="pkg/restore/restore.go:1104" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Initializing velero plugin for CStor" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/snapshot/snap.go:36" pluginName=velero-blockstore-openebs restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Ip address of velero-plugin server: 10.42.0.51" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:208" pluginName=velero-blockstore-openebs restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Setting restApiTimeout to 1m0s" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:286" pluginName=velero-blockstore-openebs restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Restoring remote snapshot{inc-nginx-backup-with-pv} for volume:pvc-a117021e-6232-4e85-8e4f-133114466a24" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:538" pluginName=velero-blockstore-openebs restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Reading from {ones-backup/backups/inc-nginx-backup-with-pv/ones-pvc-a117021e-6232-4e85-8e4f-133114466a24-inc-nginx-backup-with-pv.pvc} with provider{aws} to bucket{velero}" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/clouduploader/operation.go:138" pluginName=velero-blockstore-openebs restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="successfully read object{ones-backup/backups/inc-nginx-backup-with-pv/ones-pvc-a117021e-6232-4e85-8e4f-133114466a24-inc-nginx-backup-with-pv.pvc} to {aws}" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/clouduploader/operation.go:146" pluginName=velero-blockstore-openebs restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Creating namespace=nginx-example" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/cstor/pvc_operation.go:338" pluginName=velero-blockstore-openebs restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T08:51:47Z" level=info msg="Creating PVC for volumeID:pvc-a117021e-6232-4e85-8e4f-133114466a24 snapshot:inc-nginx-backup-with-pv in namespace=nginx-example" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/cstor/pvc_operation.go:131" pluginName=velero-blockstore-openebs restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=error msg="CreatePVC returned error=PVC{nginx-example/nginx-logs} is not bounded!" cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/cstor/pv_operation.go:205" pluginName=velero-blockstore-openebs restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Restored 1 items out of an estimated total of 10 (estimate will change throughout the restore)" logSource="pkg/restore/restore.go:669" name=pvc-a117021e-6232-4e85-8e4f-133114466a24 namespace= progress= resource=persistentvolumes restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Getting client for /v1, Kind=PersistentVolumeClaim" logSource="pkg/restore/restore.go:918" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="restore status includes excludes: <nil>" logSource="pkg/restore/restore.go:1189" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing item action for persistentvolumeclaims" logSource="pkg/restore/restore.go:1196" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing AddPVFromPVCAction" cmd=/velero logSource="pkg/restore/add_pv_from_pvc_action.go:44" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Adding PV pvc-a117021e-6232-4e85-8e4f-133114466a24 as an additional item to restore" cmd=/velero logSource="pkg/restore/add_pv_from_pvc_action.go:66" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Skipping persistentvolumes/pvc-a117021e-6232-4e85-8e4f-133114466a24 because it's already been restored." logSource="pkg/restore/restore.go:1028" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing item action for persistentvolumeclaims" logSource="pkg/restore/restore.go:1196" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing ChangePVCNodeSelectorAction" cmd=/velero logSource="pkg/restore/change_pvc_node_selector.go:66" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Done executing ChangePVCNodeSelectorAction" cmd=/velero logSource="pkg/restore/change_pvc_node_selector.go:138" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing item action for persistentvolumeclaims" logSource="pkg/restore/restore.go:1196" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing ChangeStorageClassAction" cmd=/velero logSource="pkg/restore/change_storageclass_action.go:68" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Done executing ChangeStorageClassAction" cmd=/velero logSource="pkg/restore/change_storageclass_action.go:79" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Attempting to restore PersistentVolumeClaim: nginx-logs" logSource="pkg/restore/restore.go:1337" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Restored 2 items out of an estimated total of 10 (estimate will change throughout the restore)" logSource="pkg/restore/restore.go:669" name=nginx-logs namespace=nginx-example progress= resource=persistentvolumeclaims restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Getting client for /v1, Kind=ServiceAccount" logSource="pkg/restore/restore.go:918" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="restore status includes excludes: <nil>" logSource="pkg/restore/restore.go:1189" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing item action for serviceaccounts" logSource="pkg/restore/restore.go:1196" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing ServiceAccountAction" cmd=/velero logSource="pkg/restore/service_account_action.go:47" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Done executing ServiceAccountAction" cmd=/velero logSource="pkg/restore/service_account_action.go:78" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Attempting to restore ServiceAccount: default" logSource="pkg/restore/restore.go:1337" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Restored 3 items out of an estimated total of 10 (estimate will change throughout the restore)" logSource="pkg/restore/restore.go:669" name=default namespace=nginx-example progress= resource=serviceaccounts restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Getting client for /v1, Kind=ConfigMap" logSource="pkg/restore/restore.go:918" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="restore status includes excludes: <nil>" logSource="pkg/restore/restore.go:1189" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Attempting to restore ConfigMap: kube-root-ca.crt" logSource="pkg/restore/restore.go:1337" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Restored 4 items out of an estimated total of 10 (estimate will change throughout the restore)" logSource="pkg/restore/restore.go:669" name=kube-root-ca.crt namespace=nginx-example progress= resource=configmaps restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Getting client for /v1, Kind=Pod" logSource="pkg/restore/restore.go:918" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="restore status includes excludes: <nil>" logSource="pkg/restore/restore.go:1189" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:1196" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing AddPVCFromPodAction" cmd=/velero logSource="pkg/restore/add_pvc_from_pod_action.go:44" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Adding PVC nginx-example/nginx-logs as an additional item to restore" cmd=/velero logSource="pkg/restore/add_pvc_from_pod_action.go:58" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Skipping persistentvolumeclaims/nginx-example/nginx-logs because it's already been restored." logSource="pkg/restore/restore.go:1028" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:1196" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing ChangeImageNameAction" cmd=/velero logSource="pkg/restore/change_image_name_action.go:68" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Done executing ChangeImageNameAction" cmd=/velero logSource="pkg/restore/change_image_name_action.go:81" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:1196" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing InitRestoreHookPodAction" cmd=/velero logSource="pkg/restore/init_restorehook_pod_action.go:49" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Pod nginx-example/nginx-deployment-79bcd4b657-wq6t7 has no init.hook.restore.velero.io/container-image annotation, no initRestoreHook in annotation" cmd=/velero logSource="internal/hook/item_hook_handler.go:387" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Handling InitRestoreHooks from RestoreSpec" cmd=/velero logSource="internal/hook/item_hook_handler.go:139" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Returning pod nginx-example/nginx-deployment-79bcd4b657-wq6t7 with 0 init container(s)" cmd=/velero logSource="internal/hook/item_hook_handler.go:180" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Returning from InitRestoreHookPodAction" cmd=/velero logSource="pkg/restore/init_restorehook_pod_action.go:61" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:1196" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:1196" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing PodVolumeRestoreAction" cmd=/velero logSource="pkg/restore/pod_volume_restore_action.go:71" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Done executing PodVolumeRestoreAction" cmd=/velero logSource="pkg/restore/pod_volume_restore_action.go:103" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Attempting to restore Pod: nginx-deployment-79bcd4b657-wq6t7" logSource="pkg/restore/restore.go:1337" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="the managed fields for nginx-example/nginx-deployment-79bcd4b657-wq6t7 is patched" logSource="pkg/restore/restore.go:1522" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Restored 5 items out of an estimated total of 10 (estimate will change throughout the restore)" logSource="pkg/restore/restore.go:669" name=nginx-deployment-79bcd4b657-wq6t7 namespace=nginx-example progress= resource=pods restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Getting client for apps/v1, Kind=ReplicaSet" logSource="pkg/restore/restore.go:918" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="restore status includes excludes: <nil>" logSource="pkg/restore/restore.go:1189" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing item action for replicasets.apps" logSource="pkg/restore/restore.go:1196" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing ChangeImageNameAction" cmd=/velero logSource="pkg/restore/change_image_name_action.go:68" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Done executing ChangeImageNameAction" cmd=/velero logSource="pkg/restore/change_image_name_action.go:81" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Attempting to restore ReplicaSet: nginx-deployment-79bcd4b657" logSource="pkg/restore/restore.go:1337" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="the managed fields for nginx-example/nginx-deployment-79bcd4b657 is patched" logSource="pkg/restore/restore.go:1522" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Restored 6 items out of an estimated total of 10 (estimate will change throughout the restore)" logSource="pkg/restore/restore.go:669" name=nginx-deployment-79bcd4b657 namespace=nginx-example progress= resource=replicasets.apps restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Getting client for /v1, Kind=Endpoints" logSource="pkg/restore/restore.go:918" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="restore status includes excludes: <nil>" logSource="pkg/restore/restore.go:1189" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Attempting to restore Endpoints: my-nginx" logSource="pkg/restore/restore.go:1337" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="the managed fields for nginx-example/my-nginx is patched" logSource="pkg/restore/restore.go:1522" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Restored 7 items out of an estimated total of 10 (estimate will change throughout the restore)" logSource="pkg/restore/restore.go:669" name=my-nginx namespace=nginx-example progress= resource=endpoints restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Getting client for /v1, Kind=Service" logSource="pkg/restore/restore.go:918" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="restore status includes excludes: <nil>" logSource="pkg/restore/restore.go:1189" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:1196" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Attempting to restore Service: my-nginx" logSource="pkg/restore/restore.go:1337" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="the managed fields for nginx-example/my-nginx is patched" logSource="pkg/restore/restore.go:1522" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Restored 8 items out of an estimated total of 10 (estimate will change throughout the restore)" logSource="pkg/restore/restore.go:669" name=my-nginx namespace=nginx-example progress= resource=services restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Getting client for apps/v1, Kind=Deployment" logSource="pkg/restore/restore.go:918" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="restore status includes excludes: <nil>" logSource="pkg/restore/restore.go:1189" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing item action for deployments.apps" logSource="pkg/restore/restore.go:1196" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Executing ChangeImageNameAction" cmd=/velero logSource="pkg/restore/change_image_name_action.go:68" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Done executing ChangeImageNameAction" cmd=/velero logSource="pkg/restore/change_image_name_action.go:81" pluginName=velero restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Attempting to restore Deployment: nginx-deployment" logSource="pkg/restore/restore.go:1337" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="the managed fields for nginx-example/nginx-deployment is patched" logSource="pkg/restore/restore.go:1522" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Restored 9 items out of an estimated total of 10 (estimate will change throughout the restore)" logSource="pkg/restore/restore.go:669" name=nginx-deployment namespace=nginx-example progress= resource=deployments.apps restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Getting client for discovery.k8s.io/v1, Kind=EndpointSlice" logSource="pkg/restore/restore.go:918" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="restore status includes excludes: <nil>" logSource="pkg/restore/restore.go:1189" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Attempting to restore EndpointSlice: my-nginx-6tswg" logSource="pkg/restore/restore.go:1337" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="the managed fields for nginx-example/my-nginx-6tswg is patched" logSource="pkg/restore/restore.go:1522" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Restored 10 items out of an estimated total of 10 (estimate will change throughout the restore)" logSource="pkg/restore/restore.go:669" name=my-nginx-6tswg namespace=nginx-example progress= resource=endpointslices.discovery.k8s.io restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Waiting for all pod volume restores to complete" logSource="pkg/restore/restore.go:551" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Done waiting for all pod volume restores to complete" logSource="pkg/restore/restore.go:567" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Waiting for all post-restore-exec hooks to complete" logSource="pkg/restore/restore.go:571" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="Done waiting for all post-restore exec hooks to complete" logSource="pkg/restore/restore.go:579" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=error msg="Cluster resource restore error: error executing PVAction for persistentvolumes/pvc-a117021e-6232-4e85-8e4f-133114466a24: rpc error: code = Unknown desc = Failed to read PVC for volumeID=pvc-a117021e-6232-4e85-8e4f-133114466a24 snap=inc-nginx-backup-with-pv: PVC{nginx-example/nginx-logs} is not bounded!" logSource="pkg/controller/restore_controller.go:494" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=warning msg="Namespace nginx-example, resource restore warning: could not restore, PersistentVolumeClaim \"nginx-logs\" already exists. Warning: the in-cluster version is different than the backed-up version." logSource="pkg/controller/restore_controller.go:509" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=warning msg="Namespace nginx-example, resource restore warning: could not restore, ConfigMap \"kube-root-ca.crt\" already exists. Warning: the in-cluster version is different than the backed-up version." logSource="pkg/controller/restore_controller.go:509" restore=velero/inc-nginx-backup-with-pv-20230921085147
time="2023-09-21T09:00:08Z" level=info msg="restore completed" logSource="pkg/controller/restore_controller.go:512" restore=velero/inc-nginx-backup-with-pv-20230921085147
# kubectl describe po -n nginx-example nginx-deployment-79bcd4b657-wq6t7
  Warning  FailedMount  49s                 kubelet, ubuntu.local  Unable to attach or mount volumes: unmounted volumes=[nginx-logs], unattached volumes=[kube-api-access-q8wjp nginx-logs]: timed out waiting for the condition
# kubectl get pvc -n nginx-example
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
nginx-logs   Bound    pvc-b0e72129-dcc5-48ac-b891-b052bd38ad74   50Mi       RWO            cstor-csi-disk   33m
#  kubectl get pv pvc-b0e72129-dcc5-48ac-b891-b052bd38ad74
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS     REASON   AGE
pvc-b0e72129-dcc5-48ac-b891-b052bd38ad74   50Mi       RWO            Delete           Bound    nginx-example/nginx-logs   cstor-csi-disk            26m
# kubectl get pv pvc-a117021e-6232-4e85-8e4f-133114466a24
Error from server (NotFound): persistentvolumes "pvc-a117021e-6232-4e85-8e4f-133114466a24" not found

The PV (pvc-b0e72129-dcc5-48ac-b891-b052bd38ad74) was dynamically created by the volume provisioner, and its name differs from the one Velero expects (pvc-a117021e-6232-4e85-8e4f-133114466a24). Is that mismatch what causes the restore to fail?
But all the bindings look correct, so why does the pod still fail to start because its volume cannot be mounted?
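
One way to see the mismatch directly is to print the PV name the restored claim is actually bound to and compare it with the name in Velero's error above:

# PV the restored claim is actually bound to (dynamically provisioned)
kubectl get pvc nginx-logs -n nginx-example -o jsonpath='{.spec.volumeName}{"\n"}'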

# kubectl get VolumeSnapshotLocation -n velero
NAME    AGE
minio   179m
# kubectl get VolumeSnapshotLocation -n velero minio -o yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: minio
  namespace: velero
spec:
  config:
    autoSetTargetIP: "true"
    backupPathPrefix: ones-backup
    bucket: velero
    multiPartChunkSize: 64Mi
    namespace: openebs
    prefix: ones
    provider: aws
    region: minio
    restApiTimeout: 1m
    restoreAllIncrementalSnapshots: "true"
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
  credential:
    key: cloud
    name: cloud-credential
  provider: openebs.io/cstor-blockstore

Note that autoSetTargetIP is set to "true" in the volumesnapshotlocation config above.

Environment:

  • Velero version (use velero version):
    velero 1.11.1
  • Velero features (use velero client config get features):
  • Velero-plugin version
    3.5.0
  • OpenEBS version
    openebs/cspc-operator:3.5.0
  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.7", GitCommit:"be3d344ed06bff7a4fc60656200a93c74f31f9a4", GitTreeState:"clean", BuildDate:"2020-02-11T19:34:02Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.12+k3s1", GitCommit:"7515237f85851b6876f6f9085931641f3b1269ef", GitTreeState:"clean", BuildDate:"2023-07-28T00:20:47Z", GoVersion:"go1.20.6", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes installer & version:
    k3s v1.25.12+k3s1
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
# kubectl get no -o wide
NAME           STATUS   ROLES                       AGE    VERSION         INTERNAL-IP     EXTERNAL-IP     OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k3s-node1      Ready    <none>                      3d5h   v1.25.12+k3s1   192.168.56.32   192.168.56.32   Ubuntu 18.04.6 LTS      4.15.0-213-generic       containerd://1.7.1-k3s1
k3s-node2      Ready    <none>                      3d5h   v1.25.12+k3s1   192.168.56.33   192.168.56.33   CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   containerd://1.7.1-k3s1
k3s-node3      Ready    <none>                      3d5h   v1.25.12+k3s1   192.168.56.34   192.168.56.34   CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   containerd://1.7.1-k3s1
ubuntu.local   Ready    control-plane,etcd,master   3d5h   v1.25.12+k3s1   192.168.56.1    192.168.56.1    Ubuntu 20.04.4 LTS      5.15.0-78-generic        containerd://1.7.1-k3s1

BDD to verify cstor local snapshot

Cases:

  • Add a BDD test to create a backup with local-snapshot
  • Add a BDD test to verify restore of the above backup in a different namespace (a manual sketch of both cases follows)
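
A minimal manual sketch of the two cases, assuming a local volumesnapshotlocation named cstor-local and an application namespace app-ns (both placeholders, not the actual BDD fixtures):

# Case 1: backup with a local snapshot
velero backup create local-bkp --include-namespaces=app-ns --snapshot-volumes --volume-snapshot-locations=cstor-local

# Case 2: restore the above backup into a different namespace
velero restore create local-rst --from-backup local-bkp --namespace-mappings app-ns:app-ns-restore --restore-volumes=true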

Plugin for local PV to restore in a different cluster

Restore of a local PV in the same cluster works fine, but if we restore into a different cluster that has different nodes, the restore won't work. To fix this behavior we need a restore-item plugin that can remove the node annotation from the PVC during dynamic provisioning.
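
As a stop-gap until such a plugin exists, one hedged manual workaround (assuming the node pinning comes from the standard volume.kubernetes.io/selected-node annotation that the scheduler sets on the PVC) is to clear that annotation on the restored claim so dynamic provisioning can pick a node in the new cluster:

# The trailing '-' removes the annotation; PVC name and namespace are placeholders
kubectl annotate pvc <PVC_NAME> -n <NAMESPACE> volume.kubernetes.io/selected-node-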

[feat request] zfs snapshots

I've been digging through all your documentation and haven't found an answer. What I was hoping to achieve is having the velero plugin take a ZFS snapshot when it backs up to minio, and even better, have the plugin clear the snapshots out according to the TTL in the schedule (see the example schedule after the attached YAML). This is just to be able to do a quick rollback if needed. Attaching the snapshot and backup locations I'm currently using.

---
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: zfspv
  namespace: velero
spec:
  config:
    bucket: velero
    insecureSkipTLSVerify: "true"
    namespace: openebs
    prefix: zfs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: https://minio.minio:443
  provider: openebs.io/zfspv-blockstore
---
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  config:
    insecureSkipTLSVerify: "true"
    region: minio
    s3ForcePathStyle: "true"
    s3Url: https://minio.minio:443
  default: true
  objectStorage:
    bucket: velero
  provider: aws
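
For the TTL part of the request, Velero's own schedule TTL already controls how long each backup is kept; whether the plugin also cleans up the underlying ZFS snapshot on expiry is exactly what this issue is asking about. A sketch, assuming the zfspv volumesnapshotlocation above and a placeholder namespace:

# Nightly backup at 02:00, kept for 72 hours
velero schedule create zfspv-nightly --schedule="0 2 * * *" --ttl 72h --include-namespaces=<NAMESPACE> --snapshot-volumes --volume-snapshot-locations=zfspv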

Support to backup/restore cstor V1

Describe the problem/challenge you have
As of now, the plugin supports backup/restore of cStor v1alpha1 volumes only. This issue is to add support for backup/restore of cStor v1 volumes.
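
To tell which flavor a cluster is running, one can check which CRD group serves the volumes; a hedged check, assuming the v1 resources live under the cstor.openebs.io group and the v1alpha1 ones under openebs.io:

# v1alpha1 cStor volumes (what the plugin currently supports)
kubectl get cstorvolumes.openebs.io -A
# v1 cStor volumes (what this issue asks to support)
kubectl get cstorvolumes.cstor.openebs.io -A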

Describe the solution you'd like
[A clear and concise description of what you want to happen.]

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • Velero version (use velero version):
  • Velero features (use velero client config get features):
  • Velero-plugin version
  • OpenEBS version
  • Kubernetes version (use kubectl version):
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):

nfs provisioner and openebs velero plugin

Trying to take a snapshot of an NFS-provisioned persistent volume using the OpenEBS velero plugin.

This results in the error Persistent volume is not a supported volume type for snapshots.

--------------------------
time="2023-05-16T07:49:13Z" level=info msg="Initializing velero plugin for CStor" backup=velero/backuptwo-16052023 cmd=/plugins/velero-blockstore-openebs logSource="/go/src/github.com/openebs/velero-plugin/pkg/snapshot/snap.go:36" pluginName=velero-blockstore-openebs
time="2023-05-16T07:49:13Z" level=error msg="Error getting volume snapshotter for volume snapshot location" backup=velero/backuptwo-16052023 error="rpc error: code = Unknown desc = failed to get address for maya-apiserver/cvc-server service" error.file="/go/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:259" error.function="github.com/openebs/velero-plugin/pkg/cstor.(*Plugin).Init" logSource="pkg/backup/item_backupper.go:524" name=pvc-xxxx-xxxx-xxxx namespace= persistentVolume=pvc-xxxx-xxxx-xxxx resource=persistentvolumes volumeSnapshotLocation=cstor-sp-velero
time="2023-05-16T07:49:13Z" level=info msg="Persistent volume is not a supported volume type for snapshots, skipping." backup=velero/backuptwo-16052023 logSource="pkg/backup/item_backupper.go:544" name=pvc-xxxx-xxxx-xxxx  namespace= persistentVolume=pvc-xxxx-xxxx-xxxx  resource=persistentvolumes
------------------------
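
The openebs.io/cstor-blockstore snapshotter only handles cStor volumes, which is presumably why the NFS-provisioned PV is skipped as unsupported (the plugin cannot even locate the maya-apiserver/cvc-server service here). A hedged alternative for such volumes is Velero's restic/file-system backup, opted in per pod via an annotation; pod and volume names below are placeholders:

kubectl -n <NAMESPACE> annotate pod <POD_NAME> backup.velero.io/backup-volumes=<VOLUME_NAME>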

Environment:

  • Velero version : Version: v1.11.0
  • OpenEBS version: nfs-provisioner version: 0.9.0
  • Kubernetes version (use kubectl version): 1.26
  • Cloud provider or hardware configuration: Digital Ocean

Backup sometimes fails with "Connection refused" from cstor-pool-mgmt to velero

What steps did you take and what happened:
I have a nightly backup that backs up 9 PVCs. It occasionally fully succeeds, but usually I end up with some PartiallyFailed backups. It used to be that most succeeded and some occasionally PartiallyFailed. I'm not sure what changed, but what's definitely different is that there is more data now, and a Rebuild is going on, which may cause increased load. (This is running on Raspberry Pis, so load can quickly become too high.)

Last night, this backup occurred at 04:37:19 UTC:

NAME                           STATUS            ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
volumes-full-20210226-043717   PartiallyFailed   8        6194       2021-02-26 05:37:19 +0100 CET   2d        default            <none>
Started:    2021-02-26 05:37:19 +0100 CET
Completed:  2021-02-26 06:57:24 +0100 CET

Expiration:  2021-03-01 05:37:19 +0100 CET

Total items to be backed up:  1185
Items backed up:              1185

Velero-Native Snapshots:  1 of 9 snapshots completed successfully (specify --details for more information)

It is important to note that one PVC did succeed. The 8 errors in the logs are all the same; the following error appears once for each of the 8 failed PVCs:

time="2021-02-26T04:38:34Z" level=info msg="1 errors encountered backup up item" backup=velero/volumes-full-20210226-043717 logSource="pkg/backup/backup.go:427" name=openvpn-0
time="2021-02-26T04:38:34Z" level=error msg="Error backing up item" backup=velero/volumes-full-20210226-043717 error="error taking snapshot of volume: rpc error: code = Unknown desc = Failed to send backup request: Error calling REST api: Error when connecting to maya-apiserver : Post \"http://10.43.17.162:5656/latest/backups/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" logSource="pkg/backup/backup.go:431" name=openvpn-0

Here, openvpn-0 is one of my Pods; it uses the PVC openvpn-files-new, with ID pvc-cbb3262b-fa2e-430a-8805-9774e77622ec. It's only 10 GB and has three replicas, all of which are Healthy. 10.43.17.162 is the service IP of maya-apiserver-service, behind which is the pod maya-apiserver-69849d6b87-bvnhf running on node kathleen. The maya-apiserver logs around that time show that creating a snapshot takes a bit more than a minute:

I0226 04:37:34.349365       7 backup_endpoint_v1alpha1.go:160] Creating backup snapshot volumes-full-20210226-043717 for volume "pvc-cbb3262b-fa2e-430a-8805-9774e77622ec"
I0226 04:38:34.789750       7 backup_endpoint_v1alpha1.go:167] Backup snapshot:'volumes-full-20210226-043717' created successfully for volume:pvc-cbb3262b-fa2e-430a-8805-9774e77622ec
I0226 04:38:34.973999       7 backup_endpoint_v1alpha1.go:160] Creating backup snapshot volumes-full-20210226-043717 for volume "pvc-60c6a231-8b85-43c4-8031-32bd422f81f0"
I0226 04:38:35.750616       7 backup_endpoint_v1alpha1.go:238] LastBackup resource created for backup:volumes-full-20210226-043717 volume:pvc-cbb3262b-fa2e-430a-8805-9774e77622ec
I0226 04:38:35.750828       7 backup_endpoint_v1alpha1.go:134] Creating backup volumes-full-20210226-043717 for volume "pvc-cbb3262b-fa2e-430a-8805-9774e77622ec" poolUUID:67623196-d6e5-4956-9eb4-db81259c9089
I0226 04:38:35.818222       7 backup_endpoint_v1alpha1.go:144] Backup resource:'volumes-full-20210226-043717-pvc-cbb3262b-fa2e-430a-8805-9774e77622ec' created successfully

However, the same node, kathleen, is also performing a repair currently (related to openebs/openebs#3346), so it has a pretty high load. Its cstor-pool-mgmt logs around that time indicate it can't connect to Velero:

I0226 04:38:35.836346       6 new_backup_controller.go:223] CStorBackup event added: volumes-full-20210226-043717-pvc-cbb3262b-fa2e-430a-8805-9774e77622ec, a2e8a546-1a0f-459f-959f-1c41182dc269
I0226 04:38:35.844944       6 event.go:281] Event(v1.ObjectReference{Kind:"CStorBackup", Namespace:"default", Name:"volumes-full-20210226-043717-pvc-cbb3262b-fa2e-430a-8805-9774e77622ec", UID:"a2e8a546-1a0f-459f-959f-1c41182dc269", APIVersion:"openebs.io/v1alpha1", ResourceVersion:"146088151", FieldPath:""}): type: 'Normal' reason: 'Synced' Received Resource create event
I0226 04:38:35.851346       6 handler.go:40] Sync handler called for key:default/volumes-full-20210226-043717-pvc-cbb3262b-fa2e-430a-8805-9774e77622ec with op:add
I0226 04:38:35.955855       6 handler.go:76] Completed operation:add for backup:volumes-full-20210226-043717-pvc-cbb3262b-fa2e-430a-8805-9774e77622ec, status:Init
I0226 04:38:35.956169       6 run_backup_controller.go:109] Successfully synced 'default/volumes-full-20210226-043717-pvc-cbb3262b-fa2e-430a-8805-9774e77622ec' for operation: add
I0226 04:38:35.961765       6 new_backup_controller.go:241] CStorBackup Modify event : volumes-full-20210226-043717-pvc-cbb3262b-fa2e-430a-8805-9774e77622ec, a2e8a546-1a0f-459f-959f-1c41182dc269
I0226 04:38:35.962352       6 event.go:281] Event(v1.ObjectReference{Kind:"CStorBackup", Namespace:"default", Name:"volumes-full-20210226-043717-pvc-cbb3262b-fa2e-430a-8805-9774e77622ec", UID:"a2e8a546-1a0f-459f-959f-1c41182dc269", APIVersion:"openebs.io/v1alpha1", ResourceVersion:"146088153", FieldPath:""}): type: 'Normal' reason: 'Synced' Received Resource modify event
I0226 04:38:35.968852       6 handler.go:40] Sync handler called for key:default/volumes-full-20210226-043717-pvc-cbb3262b-fa2e-430a-8805-9774e77622ec with op:Sync
I0226 04:38:36.205785       6 volumereplica.go:332] Backup Command for volume: pvc-cbb3262b-fa2e-430a-8805-9774e77622ec created, Cmd: [zfs send cstor-67623196-d6e5-4956-9eb4-db81259c9089/pvc-cbb3262b-fa2e-430a-8805-9774e77622ec@volumes-full-20210226-043717 | nc -w 3 10.42.3.2 9001]
I0226 04:38:36.206349       6 new_backup_controller.go:241] CStorBackup Modify event : volumes-full-20210226-043717-pvc-cbb3262b-fa2e-430a-8805-9774e77622ec, a2e8a546-1a0f-459f-959f-1c41182dc269
I0226 04:38:36.206769       6 event.go:281] Event(v1.ObjectReference{Kind:"CStorBackup", Namespace:"default", Name:"volumes-full-20210226-043717-pvc-cbb3262b-fa2e-430a-8805-9774e77622ec", UID:"a2e8a546-1a0f-459f-959f-1c41182dc269", APIVersion:"openebs.io/v1alpha1", ResourceVersion:"146088156", FieldPath:""}): type: 'Normal' reason: 'Synced' Received Resource modify event
E0226 04:38:41.587916       6 volumereplica.go:337] Unable to start backup pvc-cbb3262b-fa2e-430a-8805-9774e77622ec. error : (UNKNOWN) [10.42.3.2] 9001 (?) : Connection refused
WARNING: could not send cstor-67623196-d6e5-4956-9eb4-db81259c9089/pvc-cbb3262b-fa2e-430a-8805-9774e77622ec@volumes-full-20210226-043717: does not exist
 retry:0 :exit status 1
E0226 04:38:46.691668       6 volumereplica.go:337] Unable to start backup pvc-cbb3262b-fa2e-430a-8805-9774e77622ec. error : (UNKNOWN) [10.42.3.2] 9001 (?) : Connection refused
WARNING: could not send cstor-67623196-d6e5-4956-9eb4-db81259c9089/pvc-cbb3262b-fa2e-430a-8805-9774e77622ec@volumes-full-20210226-043717: does not exist
 retry:1 :exit status 1
E0226 04:38:52.163753       6 volumereplica.go:337] Unable to start backup pvc-cbb3262b-fa2e-430a-8805-9774e77622ec. error : (UNKNOWN) [10.42.3.2] 9001 (?) : Connection refused
WARNING: could not send cstor-67623196-d6e5-4956-9eb4-db81259c9089/pvc-cbb3262b-fa2e-430a-8805-9774e77622ec@volumes-full-20210226-043717: does not exist
 retry:2 :exit status 1
E0226 04:38:57.872741       6 volumereplica.go:337] Unable to start backup pvc-cbb3262b-fa2e-430a-8805-9774e77622ec. error : (UNKNOWN) [10.42.3.2] 9001 (?) : Connection refused
WARNING: could not send cstor-67623196-d6e5-4956-9eb4-db81259c9089/pvc-cbb3262b-fa2e-430a-8805-9774e77622ec@volumes-full-20210226-043717: does not exist
 retry:3 :exit status 1
E0226 04:39:03.482621       6 volumereplica.go:337] Unable to start backup pvc-cbb3262b-fa2e-430a-8805-9774e77622ec. error : (UNKNOWN) [10.42.3.2] 9001 (?) : Connection refused
WARNING: could not send cstor-67623196-d6e5-4956-9eb4-db81259c9089/pvc-cbb3262b-fa2e-430a-8805-9774e77622ec@volumes-full-20210226-043717: does not exist
 retry:4 :exit status 1
E0226 04:39:08.941898       6 volumereplica.go:337] Unable to start backup pvc-cbb3262b-fa2e-430a-8805-9774e77622ec. error : (UNKNOWN) [10.42.3.2] 9001 (?) : Connection refused
WARNING: could not send cstor-67623196-d6e5-4956-9eb4-db81259c9089/pvc-cbb3262b-fa2e-430a-8805-9774e77622ec@volumes-full-20210226-043717: does not exist
 retry:5 :exit status 1
E0226 04:39:14.029835       6 volumereplica.go:337] Unable to start backup pvc-cbb3262b-fa2e-430a-8805-9774e77622ec. error : (UNKNOWN) [10.42.3.2] 9001 (?) : Connection refused
WARNING: could not send cstor-67623196-d6e5-4956-9eb4-db81259c9089/pvc-cbb3262b-fa2e-430a-8805-9774e77622ec@volumes-full-20210226-043717: does not exist
 retry:6 :exit status 1
E0226 04:39:20.732106       6 volumereplica.go:337] Unable to start backup pvc-cbb3262b-fa2e-430a-8805-9774e77622ec. error : (UNKNOWN) [10.42.3.2] 9001 (?) : Connection refused
WARNING: could not send cstor-67623196-d6e5-4956-9eb4-db81259c9089/pvc-cbb3262b-fa2e-430a-8805-9774e77622ec@volumes-full-20210226-043717: does not exist
 retry:7 :exit status 1
E0226 04:39:25.797779       6 volumereplica.go:337] Unable to start backup pvc-cbb3262b-fa2e-430a-8805-9774e77622ec. error : (UNKNOWN) [10.42.3.2] 9001 (?) : Connection refused
WARNING: could not send cstor-67623196-d6e5-4956-9eb4-db81259c9089/pvc-cbb3262b-fa2e-430a-8805-9774e77622ec@volumes-full-20210226-043717: does not exist
 retry:8 :exit status 1
E0226 04:39:30.894068       6 volumereplica.go:337] Unable to start backup pvc-cbb3262b-fa2e-430a-8805-9774e77622ec. error : (UNKNOWN) [10.42.3.2] 9001 (?) : Connection refused
WARNING: could not send cstor-67623196-d6e5-4956-9eb4-db81259c9089/pvc-cbb3262b-fa2e-430a-8805-9774e77622ec@volumes-full-20210226-043717: does not exist
 retry:9 :exit status 1
2021-02-26T04:39:35.901Z	ERROR	volumereplica/volumereplica.go:345		{"eventcode": "cstor.volume.backup.create.failure", "msg": "Failed to create backup CStor volume", "rname": "pvc-cbb3262b-fa2e-430a-8805-9774e77622ec"}
github.com/openebs/maya/cmd/cstor-pool-mgmt/volumereplica.CreateVolumeBackup
	/home/travis/gopath/src/github.com/openebs/maya/cmd/cstor-pool-mgmt/volumereplica/volumereplica.go:345
github.com/openebs/maya/cmd/cstor-pool-mgmt/controller/backup-controller.(*BackupController).syncEventHandler
	/home/travis/gopath/src/github.com/openebs/maya/cmd/cstor-pool-mgmt/controller/backup-controller/handler.go:116
github.com/openebs/maya/cmd/cstor-pool-mgmt/controller/backup-controller.(*BackupController).eventHandler
	/home/travis/gopath/src/github.com/openebs/maya/cmd/cstor-pool-mgmt/controller/backup-controller/handler.go:90
github.com/openebs/maya/cmd/cstor-pool-mgmt/controller/backup-controller.(*BackupController).syncHandler
	/home/travis/gopath/src/github.com/openebs/maya/cmd/cstor-pool-mgmt/controller/backup-controller/handler.go:52
github.com/openebs/maya/cmd/cstor-pool-mgmt/controller/backup-controller.(*BackupController).processNextWorkItem.func1
	/home/travis/gopath/src/github.com/openebs/maya/cmd/cstor-pool-mgmt/controller/backup-controller/run_backup_controller.go:103
github.com/openebs/maya/cmd/cstor-pool-mgmt/controller/backup-controller.(*BackupController).processNextWorkItem
	/home/travis/gopath/src/github.com/openebs/maya/cmd/cstor-pool-mgmt/controller/backup-controller/run_backup_controller.go:111
github.com/openebs/maya/cmd/cstor-pool-mgmt/controller/backup-controller.(*BackupController).runWorker
	/home/travis/gopath/src/github.com/openebs/maya/cmd/cstor-pool-mgmt/controller/backup-controller/run_backup_controller.go:64
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
	/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
	/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
E0226 04:39:35.922494       6 handler.go:119] Failed to create backup(volumes-full-20210226-043717-pvc-cbb3262b-fa2e-430a-8805-9774e77622ec): exit status 1
E0226 04:39:35.922589       6 handler.go:58] exit status 1

Here, 10.42.3.2 is indeed the IP of the Velero Pod.

My hypothesis:

  • At 04:37:34, snapshot creation starts. Velero waits a minute for this to complete. (Is this the POST http://10.43.17.162:5656/latest/backups/ call?)
  • At 04:38:34, a minute has passed, Velero times out and stops backing up the volume with an error.
  • A fraction of a second later, snapshot creation succeeds. It took quite long, perhaps because of high load.
  • At 04:38:36, cstor-pool-mgmt tries sending the ZFS snapshot to Velero, but at that point, it is not listening anymore.

Perhaps a simple fix would be to allow a longer timeout for the snapshot to be created. I could patch my Velero plugin to set the timeout to 10 minutes, for example, to see whether that resolves the issue. (Edit: the snapshot-create requests almost all take a bit more than 60 seconds, which is suspicious; I think it would be better to look into why that is the case.)
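
If the plugin version in use supports it, the REST-call timeout can also be raised through the volumesnapshotlocation config instead of patching the code; a restApiTimeout key appears in an earlier VolumeSnapshotLocation example in this document, so treat this as a sketch that may not apply to plugin 2.6.0:

spec:
  provider: openebs.io/cstor-blockstore
  config:
    namespace: openebs
    restApiTimeout: 10m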

The output of the following commands will help us better understand what's going on:

Environment:

  • Velero version (use velero version): v1.5.3
  • Velero features (use velero client config get features):
  • Velero-plugin version: 2.6.0
  • OpenEBS version: 2.1.0
  • Kubernetes version (use kubectl version): v1.18.9+k3s1
  • Kubernetes installer & version: k3os v0.11.1
  • Cloud provider or hardware configuration: Bare-metal 4x raspberry pi 4
  • OS (e.g. from /etc/os-release): k3os v0.11.1

After a restore, cStorVolume remains in Init, CVRs remain in Error state

What steps did you take and what happened:

I wanted to test a full restore of a namespace (a single application). I tested this on a namespace called "vikunja", which contains two Deployments, two Services, two Pods, one Ingress and one PVC, backed by a cStorVolume and three cStorVolumeReplicas that are all healthy. The restore succeeded, however, in the new namespace the application never came up because its cStorVolume remains in Init state and its three cStorVolumeReplicas remain in Error state.

This is the Restore command I ran:

velero restore create vikunja-test-restore --from-backup volumes-full-20201011-043725 --include-namespaces 'vikunja' --namespace-mappings 'vikunja:vikunja-restore' --restore-volumes=true

Where volumes-full is a (non-Schedule) backup that contains a full copy of most of the volumes in my cluster.

The restore finished successfully in approximately one minute. Indeed, after this, the vikunja-restore namespace contains two Deployments, two Services, two Pods, one Ingress and one PVC. I modified the Ingress so that the endpoint URI is different from the original namespace, and visited the application at the new URI expecting to find the same application, but in the state from the backup instead of the most recent state. However, the application does not load, because the PVC cannot be mounted as the iSCSI target is not ready.

$ kubectl get cstorvolume -n openebs | grep 9e222
pvc-9e222d37-984f-45ad-a42b-5fac7892b51f   Init      7m48s   50Gi
$ kubectl describe cstorvolume -n openebs pvc-9e222d37-984f-45ad-a42b-5fac7892b51f 
[...]
Events:
  Type    Reason   Age    From                                                             Message
  ----    ------   ----   ----                                                             -------
  Normal  Updated  6m32s  pvc-9e222d37-984f-45ad-a42b-5fac7892b51f-target-5cf9bbd767288st  Updated resize conditions
  Normal  Updated  6m32s  pvc-9e222d37-984f-45ad-a42b-5fac7892b51f-target-5cf9bbd767288st  successfully resized volume from 0 to 50Gi
$ kubectl get cvr -n openebs | grep 9e222
pvc-9e222d37-984f-45ad-a42b-5fac7892b51f-cstor-disk-pool-eiq0   32.6M   1.76M       Error     7m55s
pvc-9e222d37-984f-45ad-a42b-5fac7892b51f-cstor-disk-pool-una0   32.6M   1.76M       Error     7m55s
pvc-9e222d37-984f-45ad-a42b-5fac7892b51f-cstor-disk-pool-7it6   32.6M   1.75M       Error     7m55s
$ kubectl describe cvr -n openebs pvc-9e222d37-984f-45ad-a42b-5fac7892b51f-cstor-disk-pool-eiq0
[...]
Events:
  Type     Reason      Age                    From                Message
  ----     ------      ----                   ----                -------
  Normal   Synced      8m55s                  CStorVolumeReplica  Received Resource create event
  Normal   Created     8m51s                  CStorVolumeReplica  Resource created successfully
  Normal   Synced      8m51s (x2 over 8m55s)  CStorVolumeReplica  Received Resource modify event
  Warning  SyncFailed  26s (x18 over 8m51s)   CStorVolumeReplica  failed to sync CVR error: unable to update snapshot list details in CVR: failed to get the list of snapshots: Output: failed listsnap command for cstor-67623196-d6e5-4956-9eb4-db81259c9089/pvc-9e222d37-984f-45ad-a42b-5fac7892b51f with err 2

I tried to let the situation resolve overnight but to no avail.

What did you expect to happen:

I expected the volume and its replicas to eventually become Healthy with the same contents as the volume had during the restored backup.

The output of the following commands will help us better understand what's going on:

  • maya-apiserver logs (restore is around 21:24):
I1011 06:18:29.198928       7 backup_endpoint_v1alpha1.go:501] Deleting backup snapshot volumes-full-20201011-043725 for volume "pvc-1c329d55-f024-4306-8489-0efbd65f58c5"
I1011 06:18:29.531233       7 backup_endpoint_v1alpha1.go:507] Snapshot:'volumes-full-20201011-043725' deleted successfully for volume:pvc-1c329d55-f024-4306-8489-0efbd65f58c5
I1011 21:24:35.125935       7 volume_endpoint_v1alpha1.go:78] received cas volume request: http method {GET}
I1011 21:24:35.126216       7 volume_endpoint_v1alpha1.go:171] received volume read request: pvc-9e222d37-984f-45ad-a42b-5fac7892b51f
W1011 21:24:35.557971       7 task.go:433] notfound error at runtask {readlistsvc}: error {target service not found}
W1011 21:24:35.560904       7 runner.go:166] nothing to rollback: no rollback tasks were found
2020/10/11 21:24:35.561160 [ERR] http: Request GET /latest/volumes/pvc-9e222d37-984f-45ad-a42b-5fac7892b51f
failed to read volume: volume {pvc-9e222d37-984f-45ad-a42b-5fac7892b51f} not found in namespace {vikunja-restore}
I1011 21:24:35.574866       7 volume_endpoint_v1alpha1.go:78] received cas volume request: http method {POST}
I1011 21:24:35.575543       7 volume_endpoint_v1alpha1.go:135] received volume create request
I1011 21:24:38.119247       7 select.go:154] Overprovisioning restriction policy not added as overprovisioning is enabled on spc cstor-disk-pool
I1011 21:24:39.041828       7 volume_endpoint_v1alpha1.go:165] volume 'pvc-9e222d37-984f-45ad-a42b-5fac7892b51f' created successfully
I1011 21:24:39.058360       7 volume_endpoint_v1alpha1.go:78] received cas volume request: http method {GET}
I1011 21:24:39.058460       7 volume_endpoint_v1alpha1.go:171] received volume read request: pvc-9e222d37-984f-45ad-a42b-5fac7892b51f
I1011 21:24:42.053882       7 volume_endpoint_v1alpha1.go:226] volume 'pvc-9e222d37-984f-45ad-a42b-5fac7892b51f' read successfully
I1011 21:24:47.994475       7 restore_endpoint_v1alpha1.go:105] Restore volume 'pvc-9e222d37-984f-45ad-a42b-5fac7892b51f' created successfully 
I1011 21:24:48.937246       7 restore_endpoint_v1alpha1.go:143] Restore:volumes-full-20201011-043725-61650654-92c8-4fa3-ad5d-326c3f5a553c created for volume "pvc-9e222d37-984f-45ad-a42b-5fac7892b51f" poolUUID:223b4546-5ed3-4e1e-b8b7-7f42c4502dbb
I1011 21:24:49.503239       7 restore_endpoint_v1alpha1.go:143] Restore:volumes-full-20201011-043725-0359f610-7763-428c-b0fc-a92c2f5463c6 created for volume "pvc-9e222d37-984f-45ad-a42b-5fac7892b51f" poolUUID:30381e2c-0392-4b82-b552-ca843fdaf6d7
I1011 21:24:50.294236       7 restore_endpoint_v1alpha1.go:143] Restore:volumes-full-20201011-043725-44813e28-d02f-454e-8891-dd2b65d31a66 created for volume "pvc-9e222d37-984f-45ad-a42b-5fac7892b51f" poolUUID:67623196-d6e5-4956-9eb4-db81259c9089
I1011 21:24:55.476539       7 restore_endpoint_v1alpha1.go:246] Restore:volumes-full-20201011-043725-61650654-92c8-4fa3-ad5d-326c3f5a553c status is InProgress
I1011 21:25:00.685293       7 restore_endpoint_v1alpha1.go:246] Restore:volumes-full-20201011-043725-61650654-92c8-4fa3-ad5d-326c3f5a553c status is InProgress
I1011 21:25:06.459184       7 restore_endpoint_v1alpha1.go:246] Restore:volumes-full-20201011-043725-44813e28-d02f-454e-8891-dd2b65d31a66 status is InProgress
I1011 21:25:12.062433       7 restore_endpoint_v1alpha1.go:246] Restore:volumes-full-20201011-043725-0359f610-7763-428c-b0fc-a92c2f5463c6 status is InProgress
I1011 21:25:17.099452       7 restore_endpoint_v1alpha1.go:246] Restore:volumes-full-20201011-043725-61650654-92c8-4fa3-ad5d-326c3f5a553c status is Done
I1011 21:25:17.099542       7 restore_endpoint_v1alpha1.go:246] Restore:volumes-full-20201011-043725-44813e28-d02f-454e-8891-dd2b65d31a66 status is Done
I1011 21:25:17.099581       7 restore_endpoint_v1alpha1.go:246] Restore:volumes-full-20201011-043725-0359f610-7763-428c-b0fc-a92c2f5463c6 status is Done
I1012 04:38:52.017035       7 backup_endpoint_v1alpha1.go:160] Creating backup snapshot volumes-full-20201012-043837 for volume "pvc-318b2f41-40db-4ee7-8564-4c8677665bbf"
I1012 04:38:53.195149       7 backup_endpoint_v1alpha1.go:167] Backup snapshot:'volumes-full-20201012-043837' created successfully for volume:pvc-318b2f41-40db-4ee7-8564-4c8677665bbf
  • velero restore logs vikunja-test-restore | grep -v 'Skipping namespace': https://pastebin.com/vh9ACfAn
  • velero restore describe vikunja-test-restore:
velero restore describe vikunja-test-restore
Name:         vikunja-test-restore
Namespace:    velero
Labels:       <none>
Annotations:  <none>

Phase:  Completed

Started:    2020-10-11 23:24:29 +0200 CEST
Completed:  2020-10-11 23:25:28 +0200 CEST

Warnings:
  Velero:     <none>
  Cluster:  could not restore, persistentvolumes "pvc-9e222d37-984f-45ad-a42b-5fac7892b51f" already exists. Warning: the in-cluster version is different than the backed-up version.
  Namespaces:
    vikunja-restore:  could not restore, persistentvolumeclaims "storage" already exists. Warning: the in-cluster version is different than the backed-up version.

Backup:  volumes-full-20201011-043725

Namespaces:
  Included:  vikunja
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io
  Cluster-scoped:  auto

Namespace mappings:  vikunja=vikunja-restore

Label selector:  <none>

Restore PVs:  true
  • Logs of the cStor disk pool pod:
$ kubectl logs cstor-disk-pool-7it6-657f7f5897-2tg4b -n openebs cstor-pool | grep -C3 10-11/21:24 
2020-10-11/06:17:47.588 [tgt 10.43.208.76:6060:17]: Destroy snapshot command for cstor-30381e2c-0392-4b82-b552-ca843fdaf6d7/pvc-2ef2eddc-c59f-4222-a112-6fb7ebfccdb3@volumes-full-20201011-043725
2020-10-11/06:17:50.336 [tgt 10.43.42.69:6060:16]: Create snapshot command for cstor-30381e2c-0392-4b82-b552-ca843fdaf6d7/pvc-1c329d55-f024-4306-8489-0efbd65f58c5@volumes-full-20201011-043725
2020-10-11/06:18:29.476 [tgt 10.43.42.69:6060:16]: Destroy snapshot command for cstor-30381e2c-0392-4b82-b552-ca843fdaf6d7/pvc-1c329d55-f024-4306-8489-0efbd65f58c5@volumes-full-20201011-043725
2020-10-11/21:24:42.125 zvol cstor-30381e2c-0392-4b82-b552-ca843fdaf6d7/pvc-9e222d37-984f-45ad-a42b-5fac7892b51f status change: DEGRADED -> DEGRADED
2020-10-11/21:24:42.125 zvol cstor-30381e2c-0392-4b82-b552-ca843fdaf6d7/pvc-9e222d37-984f-45ad-a42b-5fac7892b51f rebuild status change: INIT -> INIT
2020-10-11/21:24:42.126 ERROR target IP address is empty for cstor-30381e2c-0392-4b82-b552-ca843fdaf6d7/pvc-9e222d37-984f-45ad-a42b-5fac7892b51f
2020-10-12/04:38:52.176 [tgt 10.43.120.72:6060:28]: Create snapshot command for cstor-30381e2c-0392-4b82-b552-ca843fdaf6d7/pvc-318b2f41-40db-4ee7-8564-4c8677665bbf@volumes-full-20201012-043837
2020-10-12/04:41:13.316 [tgt 10.43.120.72:6060:28]: Destroy snapshot command for cstor-30381e2c-0392-4b82-b552-ca843fdaf6d7/pvc-318b2f41-40db-4ee7-8564-4c8677665bbf@volumes-full-20201012-043837
2020-10-12/04:41:17.452 [tgt 10.43.80.233:6060:18]: Create snapshot command for cstor-30381e2c-0392-4b82-b552-ca843fdaf6d7/pvc-20bffc04-703d-44b0-8820-e124a9c0bc51@volumes-full-20201012-043837
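
The "target IP address is empty" line at 21:24:42 looks suspicious and may explain why the CVRs stay in Error: the restored replica apparently never learns its target's IP. A hedged way to check this from inside a pool pod, assuming cStor records the target IP in the io.openebs:targetip ZFS property (pod, pool, and volume names taken from the logs above):

kubectl exec -n openebs cstor-disk-pool-7it6-657f7f5897-2tg4b -c cstor-pool -- zfs get io.openebs:targetip cstor-30381e2c-0392-4b82-b552-ca843fdaf6d7/pvc-9e222d37-984f-45ad-a42b-5fac7892b51f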

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • Velero version (use velero version): v1.5.1
  • Velero features (use velero client config get features):
  • Velero-plugin version: openebs/velero-plugin-arm64:2.1.0
  • OpenEBS version: 2.1.0
  • Kubernetes version (use kubectl version): client 1.19.2, server 1.18.3
  • Kubernetes installer & version: k3os
  • Cloud provider or hardware configuration: arm64
  • OS (e.g. from /etc/os-release): k3os

Heptio Ark is now Velero

Hi! Thanks for creating a plugin for Ark.

I wanted to give you a heads up that today we announced the rename of the project: https://blogs.vmware.com/cloudnative/2019/02/05/welcoming-heptio-open-source-projects-to-vmware/.

The Velero team is working on a 0.11 release which will finish the code- and plugin-related changes for the rebranding.

We'd also be happy to add you to the list of community plugins when you're ready to announce your plugin to the larger Ark community.

Let me know if you have any questions!

Issue in restoring multiple volumes

During a restore of multiple volumes, the restore gets stuck on the second volume.

logs:

time="2019-05-29T15:30:42Z" level=info msg="New volume(&{pvc-b686969f-81ef-11e9-9fcc-005056934f66 cstorminio test 3vol-minio-1  }) created" cmd=/plugins/velero-blockstore-cstor logSource="/home/mayank/go/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:405" pluginName=velero-blockstore-cstor restore=velero/3vol-minio-1-20190529210027

Restic REST Remote Destination

This would help with backing up to more endpoints, for example Google Drive, by using an rclone restic server.

Currently we are restricted to S3 or GCP Filestore, which is not good for scalability and not cost-effective.

Restore of minio failed

What steps did you take and what happened:

  • Backed up minio to an S3 bucket successfully.
  • When I tried restoring it, the restore failed.
    [A clear and concise description of what the bug is, and what commands you ran.]

What did you expect to happen:

  • The restore should complete successfully.

The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other Pastebin is fine.)

root@gitlab-k8s-master:~# velero restore describe gitlab-backup-minio-20200424171858
Name:         gitlab-backup-minio-20200424171858
Namespace:    velero
Labels:       <none>
Annotations:  <none>

Phase:  PartiallyFailed (run 'velero restore logs gitlab-backup-minio-20200424171858' for more information)

Warnings:
  Velero:     <none>
  Cluster:    <none>
  Namespaces:
    default:  not restored: persistentvolumeclaims "gitlab-minio" already exists and is different from backed up version.

Errors:
  Velero:     <none>
  Cluster:  error executing PVAction for persistentvolumes/pvc-6a4665f5-3457-11e9-816f-0050569876a2: rpc error: code = Aborted desc = entry{16} not found in ClientList
  Namespaces: <none>

Backup:  gitlab-backup-minio

Namespaces:
  Included:  *
  Excluded:  <none>

Resources:
  Included:        persistentvolumeclaims, persistentvolumes
  Excluded:        nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io
  Cluster-scoped:  auto

Namespace mappings:  <none>

Label selector:  <none>

Restore PVs:  true
  • velero restore logs <restorename>
root@gitlab-k8s-master:~# velero restore logs gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="starting restore" logSource="pkg/controller/restore_controller.go:450" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=namespaces logSource="pkg/restore/restore.go:116" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=storageclasses.storage.k8s.io logSource="pkg/restore/restore.go:116" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=secrets logSource="pkg/restore/restore.go:116" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=configmaps logSource="pkg/restore/restore.go:116" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=serviceaccounts logSource="pkg/restore/restore.go:116" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=limitranges logSource="pkg/restore/restore.go:116" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=pods logSource="pkg/restore/restore.go:116" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=replicasets.apps logSource="pkg/restore/restore.go:116" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=customresourcedefinitions.apiextensions.k8s.io logSource="pkg/restore/restore.go:116" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=services logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=events logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=resourcequotas logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=replicationcontrollers logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=namespaces logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=pods logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=secrets logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=serviceaccounts logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=limitranges logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=nodes logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=endpoints logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=podtemplates logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=configmaps logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=apiservices.apiregistration.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=replicasets.apps logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=statefulsets.apps logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=daemonsets.apps logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=deployments.apps logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=controllerrevisions.apps logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=events.events.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=horizontalpodautoscalers.autoscaling logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=jobs.batch logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=cronjobs.batch logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=certificatesigningrequests.certificates.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=networkpolicies.networking.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=ingresses.networking.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=ingressclasses.networking.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=podsecuritypolicies.policy logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=poddisruptionbudgets.policy logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=roles.rbac.authorization.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=clusterrolebindings.rbac.authorization.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=clusterroles.rbac.authorization.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=rolebindings.rbac.authorization.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=volumeattachments.storage.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=storageclasses.storage.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=csidrivers.storage.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=csinodes.storage.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=validatingwebhookconfigurations.admissionregistration.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=mutatingwebhookconfigurations.admissionregistration.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=customresourcedefinitions.apiextensions.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=priorityclasses.scheduling.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=leases.coordination.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=runtimeclasses.node.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=endpointslices.discovery.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=networkpolicies.crd.projectcalico.org logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=ipamconfigs.crd.projectcalico.org logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=clusterinformations.crd.projectcalico.org logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=bgppeers.crd.projectcalico.org logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=globalnetworksets.crd.projectcalico.org logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=hostendpoints.crd.projectcalico.org logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=ipamhandles.crd.projectcalico.org logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=bgpconfigurations.crd.projectcalico.org logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=ipamblocks.crd.projectcalico.org logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=networksets.crd.projectcalico.org logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=blockaffinities.crd.projectcalico.org logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=felixconfigurations.crd.projectcalico.org logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=globalnetworkpolicies.crd.projectcalico.org logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=ippools.crd.projectcalico.org logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=volumesnapshotlocations.velero.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=deletebackuprequests.velero.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=restores.velero.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=podvolumerestores.velero.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=podvolumebackups.velero.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=backupstoragelocations.velero.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=resticrepositories.velero.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=schedules.velero.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=backups.velero.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=serverstatusrequests.velero.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=downloadrequests.velero.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=volumesnapshots.volumesnapshot.external-storage.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=volumesnapshotdatas.volumesnapshot.external-storage.k8s.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=litmusresults.litmus.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=cstorpools.openebs.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=blockdevices.openebs.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=cstorvolumes.openebs.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=cstorrestores.openebs.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=upgradetasks.openebs.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=runtasks.openebs.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=storagepools.openebs.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=cstorbackups.openebs.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=cstorvolumepolicies.openebs.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=cstorcompletedbackups.openebs.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=blockdeviceclaims.openebs.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=disks.openebs.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=cstorpoolinstances.openebs.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=cstorvolumereplicas.openebs.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=castemplates.openebs.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=storagepoolclaims.openebs.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=cstorvolumeclaims.openebs.io logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Not including resource" groupResource=ingresses.extensions logSource="pkg/restore/restore.go:136" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Starting restore of backup velero/gitlab-backup-minio" logSource="pkg/restore/restore.go:377" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Restoring cluster level resource 'persistentvolumes' from: /tmp/063784103/resources/persistentvolumes/cluster" logSource="pkg/restore/restore.go:726" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Getting client for /v1, Kind=PersistentVolume" logSource="pkg/restore/restore.go:772" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Restoring persistent volume from snapshot." logSource="pkg/restore/restore.go:888" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Initializing velero plugin for CStor" cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/snapshot/snap.go:36" pluginName=velero-blockstore-cstor restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Ip address of velero-plugin server: 192.168.32.194" cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:156" pluginName=velero-blockstore-cstor restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Restoring cloud snapshot{gitlab-backup-minio} for volume:pvc-6a4665f5-3457-11e9-816f-0050569876a2" cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:415" pluginName=velero-blockstore-cstor restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:01Z" level=info msg="Reading from {backups/gitlab-backup-minio/-pvc-6a4665f5-3457-11e9-816f-0050569876a2-gitlab-backup-minio.pvc} with provider{gcp} to bucket{e2e-gitlab-backup}" cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/clouduploader/operation.go:118" pluginName=velero-blockstore-cstor restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:02Z" level=info msg="successfully read object{backups/gitlab-backup-minio/-pvc-6a4665f5-3457-11e9-816f-0050569876a2-gitlab-backup-minio.pvc} to {gcp}" cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/clouduploader/operation.go:126" pluginName=velero-blockstore-cstor restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:02Z" level=info msg="Creating PVC for volumeID:pvc-6a4665f5-3457-11e9-816f-0050569876a2 snapshot:gitlab-backup-minio" cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/pvc_operation.go:120" pluginName=velero-blockstore-cstor restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:08Z" level=info msg="PVC(gitlab-minio) created.." cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/pvc_operation.go:147" pluginName=velero-blockstore-cstor restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:08Z" level=info msg="Generated PV name is pvc-517ffc06-1f2c-4faf-8164-7f7527f8f277" cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:506" pluginName=velero-blockstore-cstor restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:14Z" level=info msg="Client{16} operation completed.. completed count{1}" cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/clouduploader/server_utils.go:160" pluginName=velero-blockstore-cstor restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:14Z" level=info msg="Client{17} operation completed.. completed count{2}" cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/clouduploader/server_utils.go:160" pluginName=velero-blockstore-cstor restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:22Z" level=info msg="Client{15} operation completed.. completed count{3}" cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/clouduploader/server_utils.go:160" pluginName=velero-blockstore-cstor restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:25Z" level=info msg="Client{16} operation completed.. completed count{4}" cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/clouduploader/server_utils.go:160" pluginName=velero-blockstore-cstor restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:25Z" level=error msg="entry{16} not found in ClientList" cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/clouduploader/server_utils.go:51" pluginName=velero-blockstore-cstor restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:25Z" level=info msg="Restoring resource 'persistentvolumeclaims' into namespace 'default' from: /tmp/063784103/resources/persistentvolumeclaims/namespaces/default" logSource="pkg/restore/restore.go:724" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:25Z" level=info msg="Getting client for /v1, Kind=PersistentVolumeClaim" logSource="pkg/restore/restore.go:772" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:25Z" level=info msg="Executing item action for persistentvolumeclaims" logSource="pkg/restore/restore.go:933" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:25Z" level=info msg="Executing AddPVFromPVCAction" cmd=/velero logSource="pkg/restore/add_pv_from_pvc_action.go:44" pluginName=velero restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:25Z" level=info msg="Adding PV pvc-6a4665f5-3457-11e9-816f-0050569876a2 as an additional item to restore" cmd=/velero logSource="pkg/restore/add_pv_from_pvc_action.go:66" pluginName=velero restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:25Z" level=info msg="Skipping persistentvolumes/pvc-6a4665f5-3457-11e9-816f-0050569876a2 because it's already been restored." logSource="pkg/restore/restore.go:859" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:25Z" level=info msg="Executing item action for persistentvolumeclaims" logSource="pkg/restore/restore.go:933" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:25Z" level=info msg="Executing ChangeStorageClassAction" cmd=/velero logSource="pkg/restore/change_storageclass_action.go:63" pluginName=velero restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:26Z" level=info msg="Done executing ChangeStorageClassAction" cmd=/velero logSource="pkg/restore/change_storageclass_action.go:74" pluginName=velero restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:26Z" level=info msg="Attempting to restore PersistentVolumeClaim: gitlab-minio" logSource="pkg/restore/restore.go:1031" restore=velero/gitlab-backup-minio-20200424171858
time="2020-04-24T11:49:26Z" level=info msg="restore completed" logSource="pkg/controller/restore_controller.go:465" restore=velero/gitlab-backup-minio-20200424171858

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • Velero version (use velero version): v1.1.0
  • Velero features (use velero client config get features):
  • Velero-plugin version: 1.9.0
  • OpenEBS version: 1.9.0
  • Kubernetes version (use kubectl version): 1.8
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release): Ubuntu 18.04

velero plugin should create namespace for PV/PVC

I want to restore a backup after deleting my namespace. If I restore the backup without re-creating the namespace first, I get these errors:

 Cluster:  error executing PVAction for persistentvolumes/pvc-4c81e2c5-c4ae-4417-af5b-3cb7cefa8e90: rpc error: code = Unknown desc = Failed to read PVC for volumeID=pvc-4c81e2c5-c4ae-4417-af5b-3cb7cefa8e90 snap=before-disaster2: failed to create PVC=rocketchat/rocketchat-rocketchat: namespaces "rocketchat" not found
            error executing PVAction for persistentvolumes/pvc-87ffb873-3a5b-49e6-8877-d3c2e488f01a: rpc error: code = Unknown desc = Failed to read PVC for volumeID=pvc-87ffb873-3a5b-49e6-8877-d3c2e488f01a snap=before-disaster2: failed to create PVC=rocketchat/datadir-rocketchat-mongodb-primary-0: namespaces "rocketchat" not found
  Namespaces: <none>

If I restore a backup that doesn't contain a PV/PVC, Velero creates the namespace itself.

I think the plugin should be able to create the namespace if it's not found; creating the namespace manually shouldn't be a prerequisite for restoring a backup.
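A minimal workaround sketch until the plugin handles this itself: pre-create the target namespace, then run the restore (the backup name below is a placeholder).

# re-create the namespace the PVCs belong to, then restore
kubectl create namespace rocketchat
velero restore create --from-backup <BACKUP_NAME> --restore-volumes=true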

Restore of PVC fails with error - plugin panicked: runtime error: invalid memory address or nil pointer dereference

What steps did you take and what happened:
I was trying to back up and restore a namespace. A busybox pod is running in the namespace and has a PVC attached.
PVC was provisioned by following the steps on https://github.com/openebs/cstor-operators/blob/master/docs/tutorial/volumes/snapshot.md

Steps followed for backup/restore

Create a namespace and deploy a pod.

root@rohan-virtual-machine:/home/rohan/openebs/cstor-operators# kubectl create ns test-backup
namespace/test-backup created

root@rohan-virtual-machine:/home/rohan/openebs/cstor-operators# kubectl get pvc -n test-backup
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
demo-cstor-vol   Bound    pvc-d85191ca-0b87-4dbd-a8ff-57503d4ef8d4   5Gi        RWO            cstor-csi-stripe   52s

root@rohan-virtual-machine:/home/rohan/openebs/cstor-operators# kubectl get all -n test-backup
NAME          READY   STATUS    RESTARTS   AGE
pod/busybox   1/1     Running   0          14s

The VolumeSnapshotLocation used for the local backup:

apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  # name -- volumeSnapshotLocation Name (local-default...)
  name: openebs-local
  namespace: velero
spec:
  provider: openebs.io/cstor-blockstore
  config:
    namespace: openebs
    local: "true"

Create backup job

root@rohan-virtual-machine:/home/rohan/openebs/cstor-operators# velero backup create test-localbackup --include-namespaces=test-backup --snapshot-volumes --volume-snapshot-locations=openebs-local --storage-location k8s-view-01
Backup request "test-localbackup" submitted successfully.
Run `velero backup describe test-localbackup` or `velero backup logs test-localbackup` for more details.


root@rohan-virtual-machine:/home/rohan/openebs/cstor-operators# velero backup describe test-localbackup --details
Name:         test-localbackup
Namespace:    velero
Labels:       velero.io/storage-location=k8s-view-01
Annotations:  velero.io/source-cluster-k8s-gitversion=v1.18.3
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=18

Phase:  Completed

Namespaces:
  Included:  test-backup
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  <none>

Storage Location:  k8s-view-01

Velero-Native Snapshot PVs:  true

TTL:  720h0m0s

Hooks:  <none>

Backup Format Version:  1

Started:    2020-08-20 00:49:46 -0700 PDT
Completed:  2020-08-20 00:49:49 -0700 PDT

Expiration:  2020-09-19 00:49:46 -0700 PDT

Total items to be backed up:  15
Items backed up:              15

Resource List:
  v1/Event:
    - test-backup/busybox.162cea6ae5e0cb59
    - test-backup/busybox.162cea6aec12f771
    - test-backup/busybox.162cea6ceb3cdf70
    - test-backup/busybox.162cea6d3bd1e3cc
    - test-backup/busybox.162cea6d40cbb003
    - test-backup/busybox.162cea6d4dc38d2f
    - test-backup/demo-cstor-vol.162cea641ca39795
    - test-backup/demo-cstor-vol.162cea641ce77755
    - test-backup/demo-cstor-vol.162cea641f5590fd
  v1/Namespace:
    - test-backup
  v1/PersistentVolume:
    - pvc-d85191ca-0b87-4dbd-a8ff-57503d4ef8d4
  v1/PersistentVolumeClaim:
    - test-backup/demo-cstor-vol
  v1/Pod:
    - test-backup/busybox
  v1/Secret:
    - test-backup/default-token-rwzlf
  v1/ServiceAccount:
    - test-backup/default

Velero-Native Snapshots:
  pvc-d85191ca-0b87-4dbd-a8ff-57503d4ef8d4:
    Snapshot ID:        pvc-d85191ca-0b87-4dbd-a8ff-57503d4ef8d4-velero-bkp-test-localbackup
    Type:               cstor-snapshot
    Availability Zone:
    IOPS:               <N/A>

Restore from that backup

root@rohan-virtual-machine:/home/rohan/openebs/cstor-operators# velero restore create --from-backup test-localbackup --restore-volumes=true --namespace-mappings test-backup:restored-test-backup
Restore request "test-localbackup-20200820005458" submitted successfully.
Run `velero restore describe test-localbackup-20200820005458` or `velero restore logs test-localbackup-20200820005458` for more details.


root@rohan-virtual-machine:/home/rohan/openebs/cstor-operators# velero restore describe test-localbackup-20200820005458
Name:         test-localbackup-20200820005458
Namespace:    velero
Labels:       <none>
Annotations:  <none>

Phase:  PartiallyFailed (run 'velero restore logs test-localbackup-20200820005458' for more information)

Errors:
  Velero:     <none>
  Cluster:  error executing PVAction for persistentvolumes/pvc-d85191ca-0b87-4dbd-a8ff-57503d4ef8d4: rpc error: code = Aborted desc = plugin panicked: runtime error: invalid memory address or nil pointer dereference
  Namespaces: <none>

Backup:  test-localbackup

Namespaces:
  Included:  all namespaces found in the backup
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io
  Cluster-scoped:  auto

Namespace mappings:  test-backup=restored-test-backup

Label selector:  <none>

Restore PVs:  true

PVC details

root@rohan-virtual-machine:/home/rohan/openebs/cstor-operators# kubectl get pvc -A
NAMESPACE              NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
restored-test-backup   demo-cstor-vol   Lost     pvc-d85191ca-0b87-4dbd-a8ff-57503d4ef8d4   0                         cstor-csi-stripe   36s
test-backup            demo-cstor-vol   Bound    pvc-d85191ca-0b87-4dbd-a8ff-57503d4ef8d4   5Gi        RWO            cstor-csi-stripe   7m33s


root@rohan-virtual-machine:/home/rohan/openebs/cstor-operators# kubectl -n restored-test-backup describe pvc demo-cstor-vol
Name:          demo-cstor-vol
Namespace:     restored-test-backup
StorageClass:  cstor-csi-stripe
Status:        Lost
Volume:        pvc-d85191ca-0b87-4dbd-a8ff-57503d4ef8d4
Labels:        velero.io/backup-name=test-localbackup
               velero.io/restore-name=test-localbackup-20200820005458
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: cstor.csi.openebs.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      0
Access Modes:
VolumeMode:    Filesystem
Mounted By:    busybox
Events:
  Type     Reason         Age   From                         Message
  ----     ------         ----  ----                         -------
  Warning  ClaimMisbound  113s  persistentvolume-controller  Two claims are bound to the same volume, this one is bound incorrectly

What did you expect to happen:

Restore should have been successful and the PVC should have been attached to the pod.

Environment:

  • Velero version (use velero version): v1.4.0
  • Velero features (use velero client config get features):
  • Velero-plugin version
  • OpenEBS version: 1.11
  • Kubernetes version (use kubectl version): 1.18
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="20.04 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

Velero Restore not possible because of psp.

As per the Velero restore steps, we need to exec into the cStor pool pods to set the service IP.
In our infrastructure, PSP is enabled and it does not allow exec into privileged containers.

With the current process we can't restore a Velero backup.

Error:

kubectl -n openebs exec -it cstor-01w9-6d4b8f8cf4-t5rgf -c cstor-pool -- bash
Error from server (Forbidden): pods "cstor-01w9-6d4b8f8cf4-t5rgf" is forbidden: cannot exec into or attach to a privileged container
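
A hedged sketch of a possible way around the manual exec, assuming the plugin version in use supports the autoSetTargetIP option in the volumesnapshotlocation config (please verify against the plugin docs; bucket/provider/region values are placeholders):

spec:
  provider: openebs.io/cstor-blockstore
  config:
    bucket: <BUCKET>
    provider: <aws|gcp>
    region: <REGION>
    # assumption: if supported, the plugin sets the replica target IP during restore itself,
    # so no 'kubectl exec' into the privileged cstor-pool container is needed
    autoSetTargetIP: "true"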

cannot take volumesnapshot

time="2022-07-18T19:30:43Z" level=error msg="Error getting volume snapshotter for volume snapshot location" backup=velero/mysql-backup error="rpc error: code = Unknown desc = Error fetching OpenEBS rest client address" error.file="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:192" error.function="github.com/openebs/velero-plugin/pkg/cstor.(*Plugin).Init" logSource="pkg/backup/item_backupper.go:453" name=pvc-5df7dc7e-2f9f-4566-9a76-ac5a278893aa namespace= persistentVolume=pvc-5df7dc7e-2f9f-4566-9a76-ac5a278893aa resource=persistentvolumes volumeSnapshotLocation=default

change storage class restore item action

I just wanted to report that your plugin does not handle the change-storage-class restore item action. The restored datasets are never zfs send|receive'd to the new storage class's pool, although the PVCs and PVs reflect the new storage class. This is on the same node. I was just trying to restore a backup from a spinning-drive ZFS pool to an SSD pool. I had to manually run zfs send|receive and edit the zfsvolumes afterwards (see the sketch after the ConfigMap below).

Using velero/velero-plugin-for-aws:v1.3.0 and openebs/velero-plugin:3.0.0 on Velero v1.7.0.

---
apiVersion: v1
kind: ConfigMap
metadata:
  # any name can be used; Velero uses the labels (below)
  # to identify it rather than the name
  name: change-storage-class-config
  # must be in the velero namespace
  namespace: velero
  # the below labels should be used verbatim in your
  # ConfigMap.
  labels:
    # this value-less label identifies the ConfigMap as
    # config for a plugin (i.e. the built-in restore item action plugin)
    velero.io/plugin-config: ""
    # this label identifies the name and kind of plugin
    # that this ConfigMap is for.
    velero.io/change-storage-class: RestoreItemAction
data:
  # add 1+ key-value pairs here, where the key is the old
  # storage class name and the value is the new storage
  # class name.
  openebs-zfspv-rust: openebs-zfspv-ssd
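
For completeness, a hedged sketch of the manual step described above; the pool and dataset names are placeholders, not taken from this setup:

# replicate the dataset from the old (spinning-disk) pool to the new (SSD) pool
zfs snapshot rust-pool/<PVC_DATASET>@migrate
zfs send rust-pool/<PVC_DATASET>@migrate | zfs receive ssd-pool/<PVC_DATASET>
# then point the corresponding ZFSVolume resource at the new pool
kubectl -n openebs edit zfsvolume <PVC_DATASET>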

How to take incremental backup of persistent volumes in AWS using Velero

We have currently set up Velero to take a daily backup of our PVs, but the snapshots are costing us too much.

Is there any way to take incremental backups using Velero, similar to an AWS Data Lifecycle Manager policy?

Also, we need help retaining each snapshot for a certain period of time, after which it should be deleted automatically.

Thanks in advance !!!
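
For the retention part, a hedged sketch of the Velero-native approach using a schedule with a TTL (the schedule name and cron expression are placeholders):

# daily backup; Velero garbage-collects each backup, and its snapshots, once the TTL expires
velero schedule create daily-pv-backup \
  --schedule="0 1 * * *" \
  --snapshot-volumes \
  --ttl 168h0m0s

Whether the snapshots themselves are incremental depends on the snapshot provider: EBS snapshots are incremental at the storage layer, and for cStor volumes backed up through this plugin, scheduled backups are intended to be incremental (check the schedule section of the plugin docs).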

NFS volume backups are failing

NFS volume backups are failing; below is the error.

[nginx]$ cat bb | grep error
time="2019-07-24T14:53:23Z" level=error msg="Insufficient info for PV : &PersistentVolume{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:pvc-06331c19-ae22-11e9-9027-0050569bab7f,GenerateName:,Namespace:,SelfLink:/api/v1/persistentvolumes/pvc-06331c19-ae22-11e9-9027-0050569bab7f,UID:0c677d05-ae22-11e9-9027-0050569bab7f,ResourceVersion:4854034,Generation:0,CreationTimestamp:2019-07-24 14:48:07 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{EXPORT_block: \nEXPORT\n{\n\tExport_Id = 1;\n\tPath = /export/pvc-06331c19-ae22-11e9-9027-0050569bab7f;\n\tPseudo = /export/pvc-06331c19-ae22-11e9-9027-0050569bab7f;\n\tAccess_Type = RW;\n\tSquash = no_root_squash;\n\tSecType = sys;\n\tFilesystem_id = 1.1;\n\tFSAL {\n\t\tName = VFS;\n\t}\n}\n,Export_Id: 1,Project_Id: 0,Project_block: ,Provisioner_Id: 88071de9-a4b7-11e9-94bf-5a1395f146fb,kubernetes.io/createdby: nfs-dynamic-provisioner,pv.kubernetes.io/provisioned-by: openebs.io/nfs,volume.beta.kubernetes.io/mount-options: vers=4.1,},OwnerReferences:[],Finalizers:[kubernetes.io/pv-protection],ClusterName:,Initializers:nil,},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{2 9} {<nil>} 2G DecimalSI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:&NFSVolumeSource{Server:10.102.43.10,Path:/export/pvc-06331c19-ae22-11e9-9027-0050569bab7f,ReadOnly:false,},RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:nil,},AccessModes:[ReadWriteMany],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:openebs,Name:nginx-pv-claim-velero,UID:06331c19-ae22-11e9-9027-0050569bab7f,APIVersion:v1,ResourceVersion:4854025,FieldPath:,},PersistentVolumeReclaimPolicy:Delete,StorageClassName:openebs-nfs,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:nil,},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,},}" backup=velero/daily-k8stest-backup-nfs1 cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:222" pluginName=velero-blockstore-cstor
time="2019-07-24T14:53:23Z" level=error msg="Error attempting to get volume ID for persistent volume" backup=velero/daily-k8stest-backup-nfs1 error="rpc error: code = Unknown desc = Insufficient info for PV" error.file="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:223" error.function="github.com/openebs/velero-plugin/pkg/cstor.(*Plugin).GetVolumeID" group=v1 logSource="pkg/backup/item_backupper.go:415" name=pvc-06331c19-ae22-11e9-9027-0050569bab7f namespace=openebs persistentVolume=pvc-06331c19-ae22-11e9-9027-0050569bab7f resource=pods volumeSnapshotLocation=default
[nginx]$
[nginx]$ velero backup describe daily-k8stest-backup-nfs1 --details
Name:         daily-k8stest-backup-nfs1
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  <none>

Phase:  PartiallyFailed

Namespaces:
  Included:  openebs
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  !openebs.io/controller,!openebs.io/replica

Storage Location:  default

Snapshot PVs:  true

TTL:  720h0m0s

Hooks:  <none>

Backup Format Version:  1

Started:    2019-07-24 10:53:22 -0400 EDT
Completed:  2019-07-24 10:53:25 -0400 EDT

Expiration:  2019-08-23 10:53:22 -0400 EDT

Validation errors:  <none>

Persistent Volumes: <none included>
[nginx]$

Support for Restore from Backup taken from old schema.

Describe the problem/challenge you have
This issue is about restoring a backup, taken from cStor v1alpha1 volumes, to cStor v1 volumes.

Describe the solution you'd like

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • Velero version (use velero version):
  • Velero features (use velero client config get features):
  • Velero-plugin version
  • OpenEBS version
  • Kubernetes version (use kubectl version):
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):

DisableSSL option not working

What steps did you take and what happened:

Our S3 bucket has a self-signed certificate. We have the CA, but we are not able to supply it (I don't see any option to do this). The DisableSSL option that seems to be available also doesn't appear to do anything.

What did you expect to happen:

Expected that the volumes would be backed up; instead we got the following error:

time="2020-09-11T12:23:04Z" level=error msg="Failed to close cloud conn : blob (key \"backups/backuptest/cstor-pvc-bcd7025b-a0ba-11ea-b95b-005056ab875a-backuptest.pvc\") (code=Unknown): RequestError: send request failed\ncaused by: Put \"https://OURBUCKET\": x509: certificate signed by unknown authority" backup=velero/backuptest cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/clouduploader/operation.go:108" pluginName=velero-blockstore-cstor
time="2020-09-11T12:23:04Z" level=info msg="1 errors encountered backup up item" backup=velero/backuptest logSource="pkg/backup/backup.go:444"

Environment:

  • Velero version (use velero version): 1.4.2
  • Velero-plugin version: 1.12
  • OpenEBS version: 1.10.0
  • Kubernetes version (use kubectl version): 3.11

Large backup in aws without setting multiPartChunkSize stops at 50Gi

What steps did you take and what happened:
I just tried to back up a ZFS volume bigger than 50Gi and the backup failed with:

caused by: TotalPartsExceeded: exceeded total allowed configured MaxUploadParts (10000). Adjust PartSize to fit in this limit"

What did you expect to happen:
With multiPartChunkSize not set, it should calculate the part size automatically, but it seems to have used the default value of 5Mi.

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
I fixed it by setting it explicitly to a bigger value (see the sketch below).
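
The arithmetic behind the ~50Gi ceiling: S3 multipart uploads allow at most 10,000 parts, and 10,000 x 5Mi is roughly 48.8Gi. A hedged sketch of the fix, to merge into the existing volumesnapshotlocation (bucket/region values are placeholders):

spec:
  config:
    bucket: <BUCKET>
    region: <REGION>
    # 10,000 parts x 64Mi ~= 625Gi upper bound, comfortably above the 50Gi failure point
    multiPartChunkSize: 64Mi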

Environment:

  • Velero version (use velero version):
velero version
Client:
	Version: v1.13.0
	Git commit: 76670e940c52880a18dbbc59e3cbee7b94cd3352
Server:
	Version: v1.13.0
  • Velero features (use velero client config get features):
velero client config get features
features: <NOT SET>
PRETTY_NAME="Ubuntu 22.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

Panic when deleting snapshot that failed

What steps did you take and what happened:

  • velero backup create monitor --include-namespaces=monitor --snapshot-volumes
  • wait for backup to complete (snapshot fails)
  • velero backup delete monitor

What did you expect to happen:
The plugin doesn't panic and allows Velero to clean up resources.

The output of the following commands will help us better understand what's going on:

This is the info about the backup after trying to delete:

Name:         monitor
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  <none>

Phase:  Deleting

Namespaces:
  Included:  monitor
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  <none>

Storage Location:  default

Snapshot PVs:  true

TTL:  720h0m0s

Hooks:  <none>

Backup Format Version:  1

Started:    2020-05-05 23:29:22 +0200 CEST
Completed:  2020-05-05 23:31:10 +0200 CEST

Expiration:  2020-06-04 23:29:22 +0200 CEST

Persistent Volumes:  0 of 1 snapshots completed successfully (specify --details for more information)

Deletion Attempts (1 failed):
  2020-05-05 23:34:43 +0200 CEST: Processed
  Errors:
    error deleting snapshot : rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range

The logs for this particular backup are not available, as running velero backup logs monitor returns:

Logs for backup "monitor" are not available until it's finished processing. Please wait until the backup has a phase of Completed or Failed and try again.

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • Velero version 1.3.2
  • Velero features
  • Velero-plugin version 1.9.0
  • OpenEBS version 1.9.1
  • Kubernetes version 1.17.0
  • Kubernetes installer & version: kubeadm
  • Cloud provider or hardware configuration: bare metal
  • OS (e.g. from /etc/os-release): Ubuntu 18
