
utkuozdemir / pv-migrate


CLI tool to easily migrate Kubernetes persistent volumes

License: Apache License 2.0

Go 95.70% Shell 1.82% HCL 0.89% Dockerfile 0.42% Smarty 1.17%
kubernetes persistent-volumes persistent-volume-claims migration

pv-migrate's Introduction

pv-migrate


pv-migrate is a CLI tool/kubectl plugin to easily migrate the contents of one Kubernetes PersistentVolumeClaim to another.


⚠️ Maintenance status: I get that it can be frustrating not to hear back about the stuff you've brought up or the changes you've suggested. But honestly, for over a year now, I've hardly had any time to keep up with my personal open-source projects, including this one. I am still committed to keeping this tool working and slowly moving it forward, but please bear with me if I can't tackle your fixes or check out your code for a while. Thanks for your understanding.


Demo

pv-migrate demo GIF

Introduction

On Kubernetes, if you need to rename a resource (like a Deployment) or to move it to a different namespace, you can simply create a copy of its manifest with the new namespace and/or name and apply it.

However, it is not as simple with PersistentVolumeClaim resources: they are not just metadata; the actual data lives in the underlying storage backend.

In these cases, moving the data stored in the PVC can become a problem, making migrations more difficult.

Use Cases

➡️ You have a database that has a PersistentVolumeClaim db-data of size 50Gi.
Your DB grew over time, and you need more space for it.
You cannot resize the PVC because its StorageClass doesn't support volume expansion.
Simply create a new, bigger PVC db-data-v2 and use pv-migrate to copy data from db-data to db-data-v2.
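
A minimal sketch of that flow, assuming the db-data / db-data-v2 names from above; the size and storage class are placeholders to adjust:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data-v2
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Gi
EOF

pv-migrate migrate db-data db-data-v2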

➡️ You need to move PersistentVolumeClaim my-pvc from namespace ns-a to namespace ns-b.
Simply create the PVC with the same name and manifest in ns-b and use pv-migrate to copy its content.
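
For example, assuming the PVC keeps the name my-pvc in both namespaces:

pv-migrate migrate \
  --source-namespace ns-a \
  --dest-namespace ns-b \
  my-pvc my-pvc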

➡️ You are moving from one cloud provider to another, and you need to move the data from one Kubernetes cluster to the other.
Just use pv-migrate to copy the data securely over the internet.
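
A sketch of a cross-cluster run; the kubeconfig paths, context names and PVC names are placeholders:

pv-migrate migrate \
  --source-kubeconfig ~/.kube/old-cluster.yaml \
  --source-context old-cluster \
  --source-namespace prod \
  --dest-kubeconfig ~/.kube/new-cluster.yaml \
  --dest-context new-cluster \
  --dest-namespace prod \
  my-pvc my-pvc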

➡️ You need to change the StorageClass of a volume, for instance, from a ReadWriteOnce one like local-path to a ReadWriteMany one like NFS. As the storageClass is not editable, you can use pv-migrate to transfer the data from the old PVC to a new one with the desired StorageClass.
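
As a sketch, assuming an NFS-backed StorageClass named nfs-client (a placeholder):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-nfs
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: nfs-client
  resources:
    requests:
      storage: 10Gi
EOF

pv-migrate migrate my-pvc my-pvc-nfs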

Highlights

  • Supports in-namespace, in-cluster, as well as cross-cluster migrations
  • Uses rsync over SSH with a freshly generated Ed25519 or RSA key pair each time to migrate the files securely
  • Allows full customization of the manifests (e.g. specifying your own docker images for rsync and sshd, configuring affinity etc.; see the example after this list)
  • Supports multiple migration strategies to do the migration efficiently, falling back to other strategies when needed
  • Customizable strategy order
  • Supports arm32v7 (Raspberry Pi etc.) and arm64 architectures as well as amd64
  • Supports completion for popular shells: bash, zsh, fish, powershell
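
A sketch of that kind of customization, combining flags that appear in the issues further down this page; the image names and values file are placeholders:

pv-migrate migrate \
  --sshd-image my-registry.example.com/pv-migrate-sshd:1.1.0 \
  --rsync-image my-registry.example.com/pv-migrate-rsync:1.0.0 \
  --helm-set sshd.service.loadBalancerIP=10.99.0.171 \
  --helm-values my-values.yaml \
  old-pvc new-pvc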

Installation

See INSTALL.md for various installation methods and shell completion configuration.

Usage

See USAGE.md for the CLI reference and examples.
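
The most basic invocation, as also seen in the issues below, copies one PVC to another in the current namespace:

pv-migrate migrate old-pvc new-pvc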

Star History

Star History Chart

Contributing

See CONTRIBUTING for details.

pv-migrate's People

Contributors

0dragosh, alex-vmw, cgroschupp, dependabot[bot], fl42, frankkkkk, goreleaserbot, hwiese1980, jellyfrog, jtackaberry, mattmattox, mertcangurcan, renovate-bot, renovate[bot], slushysnowman, utkuozdemir


pv-migrate's Issues

x86_64 CLI doesn't work

Describe the bug
The CLI for x86_64 Linux does not work.
To Reproduce
Steps to reproduce the behavior:

  1. Run the pv-migrate binary for x86_64 Linux on a Linux machine with an AMD64 processor

Expected behavior
for it to work
Console output
-bash: /usr/local/bin/pv-migrate: cannot execute binary file
Version
0.5.8

All strategies are failing

Describe the bug
All strategies are failing

Expected behavior
At least one should work.

Console output

🚀  Starting migration
💡  PVC wordembedding-resources-pvc is mounted to node aks-heavyram-33180665-vmss000000, ignoring...
💭  Will attempt 3 strategies: mnt2, svc, lbsvc
🚁  Attempting strategy: mnt2
🦊  Strategy 'mnt2' cannot handle this migration, will try the next one
🚁  Attempting strategy: svc
🔑  Generating SSH key pair
creating 7 resource(s)
beginning wait for 7 resources with timeout of 1m0s
Deployment is not ready: default/pv-migrate-ff8y3-sshd. 0 out of 1 expected pods are ready
📂  Copying data...   0% |                                                                                 | () [0s:0s]
🧹  Cleaning up
uninstall: Deleting pv-migrate-ff8y3
Starting delete for "pv-migrate-ff8y3-sshd" Service
Starting delete for "pv-migrate-ff8y3-rsync" Job
Starting delete for "pv-migrate-ff8y3-sshd" Deployment
Starting delete for "pv-migrate-ff8y3-sshd" Secret
Starting delete for "pv-migrate-ff8y3-rsync" Secret
Starting delete for "pv-migrate-ff8y3-sshd" ServiceAccount
Starting delete for "pv-migrate-ff8y3-rsync" ServiceAccount
beginning wait for 7 resources to be deleted with timeout of 1m0s
purge requested for pv-migrate-ff8y3
✨  Cleanup done
🔶  Migration failed with this strategy, will try with the remaining strategies
🚁  Attempting strategy: lbsvc
🔑  Generating SSH key pair
creating 4 resource(s)
beginning wait for 4 resources with timeout of 1m0s
Service does not have load balancer ingress IP address: default/pv-migrate-1ei5r-sshd
Service does not have load balancer ingress IP address: default/pv-migrate-1ei5r-sshd
Service does not have load balancer ingress IP address: default/pv-migrate-1ei5r-sshd
Service does not have load balancer ingress IP address: default/pv-migrate-1ei5r-sshd
Service does not have load balancer ingress IP address: default/pv-migrate-1ei5r-sshd
Service does not have load balancer ingress IP address: default/pv-migrate-1ei5r-sshd
Service does not have load balancer ingress IP address: default/pv-migrate-1ei5r-sshd
Service does not have load balancer ingress IP address: default/pv-migrate-1ei5r-sshd
Service does not have load balancer ingress IP address: default/pv-migrate-1ei5r-sshd
🧹  Cleaning up
uninstall: Deleting pv-migrate-1ei5r
Starting delete for "pv-migrate-1ei5r-sshd" Service
Starting delete for "pv-migrate-1ei5r-sshd" Deployment
Starting delete for "pv-migrate-1ei5r-sshd" Secret
Starting delete for "pv-migrate-1ei5r-sshd" ServiceAccount
beginning wait for 4 resources to be deleted with timeout of 1m0s
purge requested for pv-migrate-1ei5r
✨  Cleanup done
🔶  Migration failed with this strategy, will try with the remaining strategies
❌  Error: all strategies have failed

Version

  • AKS 1.20
  • Source and destination container runtimes [e.g. containerd://1.4.4-k3s2, docker://19.3.6]
  • pv-migrate version 0.7.2 (commit: 766fddb)
  • Installation method binary download
  • ReadWriteOnce (if I shut down the service that mounts it, it still does not work)

Run as non-root user

Not all clusters allow running as root. If possible, we should run the sshd/rsync pods with the least privilege. If not possible, evaluate other protocols.

Another option is trying to create PSP/role/rolebinding/serviceaccount to get root authorization.

Adding support for default NetworkPolicy if needed

Is your feature request related to a problem? Please describe.
Our cluster denies all traffic by default; to use pv-migrate, I have to deploy an extra network policy first.

Describe the solution you'd like
It would be great if pv-migrate were able to deploy a small network policy which allows traffic between both pods.
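
Until such an option exists, a manual workaround could look like the sketch below. Assumptions: opening port 22 to every pod in the migration namespace is acceptable, and <namespace> is a placeholder; narrow the podSelector to pv-migrate's pod labels if you prefer.

kubectl apply -n <namespace> -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-pv-migrate-ssh
spec:
  podSelector: {}          # all pods in the namespace; tighten if needed
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 22         # the sshd pod listens for rsync-over-SSH here
EOF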

Panic

Hi there. I just tried to use pv-migrate for some tests. I'm using the binary release for x86_64 and it exploded with a panic. I don't know any Golang at all so I haven't tried to troubleshoot this. Thanks

[jonathan@zeus kubernetes]$ ~/bin/linux/pv-migrate --source-namespace camerahub-preprod --source media --dest-namespace camerahub-preprod --dest media2
INFO[0000] Both claims exist and bound, proceeding...   
INFO[0000] Creating sshd pod                             podName=pv-migrate-sshd-k51ey
INFO[0000] Waiting for pod to start running              podName=pv-migrate-sshd-k51ey
INFO[0004] sshd pod running                              podName=pv-migrate-sshd-k51ey
INFO[0004] Creating rsync job                            jobName=pv-migrate-rsync-k51ey
INFO[0004] Waiting for rsync job to finish               jobName=pv-migrate-rsync-k51ey
PANI[0009] Job failed, exiting                           jobName=pv-migrate-rsync-k51ey podName=pv-migrate-rsync-k51ey-j5vm9
E1130 20:32:21.167374 3459660 runtime.go:66] Observed a panic: &logrus.Entry{Logger:(*logrus.Logger)(0xc000082240), Data:logrus.Fields{"jobName":"pv-migrate-rsync-k51ey", "podName":"pv-migrate-rsync-k51ey-j5vm9"}, Time:time.Time{wall:0xbfe9739549f22e73, ext:9484508853, loc:(*time.Location)(0x1eb9b20)}, Level:0x0, Caller:(*runtime.Frame)(nil), Message:"Job failed, exiting", Buffer:(*bytes.Buffer)(nil), err:""} (&{0xc000082240 map[jobName:pv-migrate-rsync-k51ey podName:pv-migrate-rsync-k51ey-j5vm9] 2020-11-30 20:32:21.166866547 +0000 GMT m=+9.484508853 panic <nil> Job failed, exiting <nil> })
/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/Cellar/go/1.12/libexec/src/runtime/panic.go:522
/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/github.com/sirupsen/logrus/entry.go:227
/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/github.com/sirupsen/logrus/entry.go:256
/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/github.com/sirupsen/logrus/entry.go:294
/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/main.go:378
/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/client-go/tools/cache/controller.go:202
/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/client-go/tools/cache/shared_informer.go:552
/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:203
/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/client-go/tools/cache/shared_informer.go:548
/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/client-go/tools/cache/shared_informer.go:546
/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71
/usr/local/Cellar/go/1.12/libexec/src/runtime/asm_amd64.s:1337
panic: (*logrus.Entry) (0x1227a80,0xc0003324e0) [recovered]
	panic: (*logrus.Entry) (0x1227a80,0xc0003324e0)

goroutine 45 [running]:
github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x105
panic(0x1227a80, 0xc0003324e0)
	/usr/local/Cellar/go/1.12/libexec/src/runtime/panic.go:522 +0x1b5
github.com/utkuozdemir/pv-migrate/vendor/github.com/sirupsen/logrus.Entry.log(0xc000082240, 0xc00036e120, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/github.com/sirupsen/logrus/entry.go:227 +0x2ce
github.com/utkuozdemir/pv-migrate/vendor/github.com/sirupsen/logrus.(*Entry).Log(0xc000332480, 0xc000000000, 0xc000572918, 0x1, 0x1)
	/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/github.com/sirupsen/logrus/entry.go:256 +0xe4
github.com/utkuozdemir/pv-migrate/vendor/github.com/sirupsen/logrus.(*Entry).Panic(0xc000332480, 0xc0005ad918, 0x1, 0x1)
	/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/github.com/sirupsen/logrus/entry.go:294 +0x55
main.createJobWaitTillCompleted.func1(0x1237160, 0xc000325c00, 0x1237160, 0xc000062a80)
	/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/main.go:378 +0x310
github.com/utkuozdemir/pv-migrate/vendor/k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnUpdate(...)
	/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/client-go/tools/cache/controller.go:202
github.com/utkuozdemir/pv-migrate/vendor/k8s.io/client-go/tools/cache.(*processorListener).run.func1.1(0x42ac1d, 0x0, 0x0)
	/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/client-go/tools/cache/shared_informer.go:552 +0x1dd
github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff(0x989680, 0x3ff0000000000000, 0x3fb999999999999a, 0x5, 0xc0005ade38, 0x42a77f, 0xc0000c4080)
	/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:203 +0xde
github.com/utkuozdemir/pv-migrate/vendor/k8s.io/client-go/tools/cache.(*processorListener).run.func1()
	/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/client-go/tools/cache/shared_informer.go:548 +0x89
github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000061f68)
	/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005adf68, 0xdf8475800, 0x0, 0x13e1101, 0xc0000a1920)
	/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xf8
github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
github.com/utkuozdemir/pv-migrate/vendor/k8s.io/client-go/tools/cache.(*processorListener).run(0xc000642500)
	/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/client-go/tools/cache/shared_informer.go:546 +0x9c
github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0000f6a00, 0xc00063ecc0)
	/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x4f
created by github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/Users/utku/go/src/github.com/utkuozdemir/pv-migrate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:69 +0x62

Don't make logs a hard dependency

There are systems where tailing logs is not permitted by RBAC, but creating/getting pods is.

Being able to read logs or not does not affect the end result; pv-migrate can fall back to a spinner-style progress bar if reading the logs fails.

More flexible and wide tolerations on migration pod

Is your feature request related to a problem? Please describe.
The tool is amazing, but I found some issues running it in a cluster configured with taints on "scheduling" instead of on "running".

Describe the solution you'd like
pv-migrate pod can tolerate taints on both "running" and "scheduling".

I suggest making the tolerations a bit more flexible:

    - effect: NoExecute
      operator: Exists
    - effect: NoSchedule
      operator: Exists

Describe alternatives you've considered
I did a little trick on pod startup, adding additional tolerations to it

`--sshd-image`/`--rsync-image` not working

Describe the bug
Hi,

On pv-migrate built from commit 2a19f03363820bd89be9f3aba6bc1c5248b02283, the options --sshd-image/--rsync-image are not taken into account and the default image is always used.

To Reproduce
Steps to reproduce the behavior:
Launch pv-migrate with different images: pv-migrate migrate XXX --strategies lbsvc --rsync-image img-test:2 --sshd-image test-img:1 pvc1 pvc2

Describe the new launched pod:

Containers:
  rsync:
    Container ID:  
    Image:         docker.io/utkuozdemir/pv-migrate-rsync:1.0.0
---
  Normal  Pulling                 31s   kubelet                  Pulling image "docker.io/utkuozdemir/pv-migrate-rsync:1.0.0"

Expected behavior
The images specified in the argument should be used instead

Console output
n/a

Version

  • Source and destination Kubernetes versions 1.21
  • pv-migrate version and architecture see above

Thanks and cheers !

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Pending Branch Automerge

These updates await pending status checks before automerging. Click on a checkbox to abort the branch automerge, and create a PR instead.

  • chore(deps): update helm release metallb to v6

Detected dependencies

dockerfile
Dockerfile
  • alpine 3.19.1
docker/rsync/Dockerfile
  • alpine 3.19.1
docker/sshd/Dockerfile
  • alpine 3.19.1
github-actions
.github/workflows/build-debug.yml
  • actions/checkout v4.1.3@1d96c772d19495a3b5c517cd2bc0cb401ea0529f
  • actions/setup-go v5.0.0
  • ubuntu 22.04
.github/workflows/build-rsync-image.yml
  • actions/checkout v4.1.3@1d96c772d19495a3b5c517cd2bc0cb401ea0529f
  • docker/setup-qemu-action v3.0.0
  • docker/setup-buildx-action v3.3.0
  • docker/login-action v3.1.0
  • docker/build-push-action v5.3.0
  • ubuntu 22.04
.github/workflows/build-sshd-image.yml
  • actions/checkout v4.1.3@1d96c772d19495a3b5c517cd2bc0cb401ea0529f
  • docker/setup-qemu-action v3.0.0
  • docker/setup-buildx-action v3.3.0
  • docker/login-action v3.1.0
  • docker/build-push-action v5.3.0
  • ubuntu 22.04
.github/workflows/build.yml
  • actions/checkout v4.1.3@1d96c772d19495a3b5c517cd2bc0cb401ea0529f
  • actions/setup-go v5.0.0
  • golangci/golangci-lint-action v4.0.0
  • goreleaser/goreleaser-action v5.0.0
  • actions/checkout v4.1.3@1d96c772d19495a3b5c517cd2bc0cb401ea0529f
  • actions/setup-go v5.0.0
  • helm/kind-action v1.9.0
  • helm/kind-action v1.9.0
  • codecov/codecov-action v4.3.0
  • ubuntu 22.04
  • ubuntu 22.04
.github/workflows/release-rsync-image.yml
  • actions/checkout v4.1.3@1d96c772d19495a3b5c517cd2bc0cb401ea0529f
  • docker/setup-qemu-action v3.0.0
  • docker/setup-buildx-action v3.3.0
  • docker/login-action v3.1.0
  • docker/build-push-action v5.3.0
  • ubuntu 22.04
.github/workflows/release-sshd-image.yml
  • actions/checkout v4.1.3@1d96c772d19495a3b5c517cd2bc0cb401ea0529f
  • docker/setup-qemu-action v3.0.0
  • docker/setup-buildx-action v3.3.0
  • docker/login-action v3.1.0
  • docker/build-push-action v5.3.0
  • ubuntu 22.04
.github/workflows/release.yml
  • actions/checkout v4.1.3@1d96c772d19495a3b5c517cd2bc0cb401ea0529f
  • actions/setup-go v5.0.0
  • docker/login-action v3.1.0
  • goreleaser/goreleaser-action v5.0.0
  • rajatjindal/krew-release-bot v0.0.46
  • ubuntu 22.04
gomod
go.mod
  • go 1.22.2
  • github.com/forPelevin/gomoji v1.2.0
  • github.com/hashicorp/go-multierror v1.1.1
  • github.com/schollz/progressbar/v3 v3.14.2
  • github.com/sirupsen/logrus v1.9.3
  • github.com/spf13/cobra v1.8.0
  • github.com/spf13/pflag v1.0.5
  • github.com/stretchr/testify v1.9.0
  • golang.org/x/crypto v0.22.0
  • gopkg.in/yaml.v3 v3.0.1
  • helm.sh/helm/v3 v3.14.4
  • k8s.io/api v0.30.0
  • k8s.io/apimachinery v0.30.0
  • k8s.io/cli-runtime v0.30.0
  • k8s.io/client-go v0.30.0
  • k8s.io/utils v0.0.0-20240310230437-4693a0247e57@4693a0247e57
helm-values
helm/pv-migrate/values.yaml
  • docker.io/utkuozdemir/pv-migrate-sshd 1.1.0
  • docker.io/utkuozdemir/pv-migrate-rsync 1.0.0
regex
.github/workflows/build.yml
  • golangci/golangci-lint v1.57.2
  • goreleaser/goreleaser v1.25.1
  • kyoh86/richgo v0.3.12
  • cilium/cilium-cli v0.16.4
  • kubernetes-sigs/kind v0.22.0
  • metallb 5.0.3
  • kubernetes-sigs/kind v0.22.0
.github/workflows/release.yml
  • goreleaser/goreleaser v1.25.1
go.mod
  • golang/go 1.22.2

  • Check this box to trigger a request for Renovate to run again on this repository

Allow subpath migration

Sometimes you just want to copy a subdirectory, not the whole PVC content. Feature can be provided via flags.
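
A purely hypothetical CLI shape for such a feature (these flag names are illustrative only, not an existing interface):

pv-migrate migrate \
  --source-path data/subdir \
  --dest-path data/subdir \
  old-pvc new-pvc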

Ability to provide a pod/job template as YAML

There is a never-ending stream of requests to be able to customize the generated pods: adding tolerations, labels, nodeSelector and so on.

Adding a flag for each of these options is not feasible; it would simply be reinventing the pod YAML. Instead, we can provide an external YAML with the customizations, so that all future customization requests will be met.

pv-migrate would have a default pod template YAML which it outputs with a new command. Example:

pv-migrate print-template rsync
pv-migrate migrate --rsync-template my-custom-rsync.yaml ...

pv-migrate stays forever in "Connecting to the rsync server" status

Describe the bug
When I execute any pv-migrate command, the process stays forever at the "Connecting to the rsync server" step and never moves on to the next one.

To Reproduce
Steps to reproduce the behavior:

  1. Run command pv-migrate -i old_pvc new_pvc
  2. See error

Expected behavior
Pass all the steps and migrate the PVCs.

Console output
pol.guasch@vdi-linux-sis-2:~$ pv-migrate migrate -i --source-namespace spservicelevels --dest-namespace spservicelevels mssql-linux-master mssql-linux-master-new
🚀 Starting migration
💡 PVC mssql-linux-master is mounted to node kubernetes-worker-05, ignoring...
💭 Will attempt 3 strategies: mnt2,svc,lbsvc
🚁 Attempting strategy: mnt2
🦊 Strategy 'mnt2' cannot handle this migration, will try the next one
🚁 Attempting strategy: svc
🔑 Generating SSH key pair
🔑 Creating secret for the public key
🚀 Creating sshd pod
⏳ Waiting for the sshd pod to start running
🚀 Sshd pod started
🔑 Creating secret for the private key
🔗 Connecting to the rsync server
:warn: Cannot tail logs to display progress


Version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.20", GitCommit:"1f3e19b7beb1cc0110255668c4238ed63dadb7ad", GitTreeState:"clean", BuildDate:"2021-06-16T12:51:17Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.21) and server (1.18) exceeds the supported minor version skew of +/-1

  • pv-migrate version and architecture [e.g. v0.5.10 - linux_x86_64]
  • Installation method [ binary download]

Additional context
I can't access the pod to see what happened.

pol.guasch@vdi-linux-sis-2:~$ k exec -it pv-migrate-rsync-phqvx-rv6d6 -c app sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: unable to upgrade connection: container not found ("app")

helm-set fullnameOverride - deployment not ready

Describe the bug
Copying between 2 clusters. When we define fullnameOverride, the source deployment is never detected as ready by pv-migrate, although in Kubernetes it looks OK.

When we remove the fullnameOverride parameter, it continues with copying.

To Reproduce
Steps to reproduce the behavior:

kubectl pv-migrate migrate redisldaipvc redisldaipvc \
> --helm-values /home/uporabnik/myAppMediaserver/deploy/scripts/kuberneteswip/kubetest1/x-openebs-local/pvmigration.yaml \
> --source-kubeconfig ~/.kube/config-1220.yaml \
> --source-context lju1 \
> --source-namespace default \
> --dest-kubeconfig ~/.kube/config-2135.yaml \
> --dest-context lju2 \
> --dest-namespace default \
> --dest-delete-extraneous-files \
> --log-level debug \
> --ignore-mounted \
> --helm-set fullnameOverride=pvmig23455

Expected behavior
When we override full name deployment getting ready should still be detected.

Console output
WITH fullnameOverride

uporabnik@ubuntu:~$ kubectl pv-migrate migrate redisldaipvc redisldaipvc \
> --helm-values /home/uporabnik/myAppMediaserver/deploy/scripts/kuberneteswip/kubetest1/x-openebs-local/pvmigration.yaml \
> --source-kubeconfig ~/.kube/config-1220.yaml \
> --source-context lju1 \
> --source-namespace default \
> --dest-kubeconfig ~/.kube/config-2135.yaml \
> --dest-context lju2 \
> --dest-namespace default \
> --dest-delete-extraneous-files \
> --log-level debug \
> --ignore-mounted \
> --helm-set fullnameOverride=pvmig23455
🚀  Starting migration
❕  Extraneous files will be deleted from the destination
💡  PVC redisldaipvc is mounted to node lju1062.lju1, ignoring...
💡  PVC redisldaipvc is mounted to node lju2135.lju2, ignoring...
💭  Will attempt 3 strategies: mnt2, svc, lbsvc
🚁  Attempting strategy: mnt2
🦊  Strategy 'mnt2' cannot handle this migration, will try the next one
🚁  Attempting strategy: svc
🦊  Strategy 'svc' cannot handle this migration, will try the next one
🚁  Attempting strategy: lbsvc
🔑  Generating SSH key pair
creating 4 resource(s)
beginning wait for 4 resources with timeout of 1m0s
Deployment is not ready: default/pvmig23455-sshd. 0 out of 1 expected pods are ready
🧹  Cleaning up
uninstall: Deleting pv-migrate-bdacd-src
Starting delete for "pvmig23455-sshd" Service
Starting delete for "pvmig23455-sshd" Deployment
Starting delete for "pvmig23455-sshd" Secret
Starting delete for "pvmig23455-sshd" ServiceAccount
beginning wait for 4 resources to be deleted with timeout of 1m0s
purge requested for pv-migrate-bdacd-src
✨  Cleanup done
🔶  Migration failed with this strategy, will try with the remaining strategies
Error: all strategies failed

WITH REMOVED fullnameOverride

uporabnik@ubuntu:~$ kubectl pv-migrate migrate redisldaipvc redisldaipvc \
> --helm-values /home/uporabnik/myAppMediaserver/deploy/scripts/kuberneteswip/kubetest1/x-openebs-local/pvmigration.yaml \
> --source-kubeconfig ~/.kube/config-1220.yaml \
> --source-context lju1 \
> --source-namespace default \
> --dest-kubeconfig ~/.kube/config-2135.yaml \
> --dest-context lju2 \
> --dest-namespace default \
> --dest-delete-extraneous-files \
> --log-level debug \
> --ignore-mounted
🚀  Starting migration
❕  Extraneous files will be deleted from the destination
💡  PVC redisldaipvc is mounted to node lju1062.lju1, ignoring...
💡  PVC redisldaipvc is mounted to node lju2135.lju2, ignoring...
💭  Will attempt 3 strategies: mnt2, svc, lbsvc
🚁  Attempting strategy: mnt2
🦊  Strategy 'mnt2' cannot handle this migration, will try the next one
🚁  Attempting strategy: svc
🦊  Strategy 'svc' cannot handle this migration, will try the next one
🚁  Attempting strategy: lbsvc
🔑  Generating SSH key pair
creating 4 resource(s)
beginning wait for 4 resources with timeout of 1m0s
Deployment is not ready: default/pv-migrate-cdcca-src-sshd. 0 out of 1 expected pods are ready
creating 3 resource(s)
beginning wait for 3 resources with timeout of 1m0s
📂  Copying data...   0% |

Version

  • Source and destination Kubernetes versions [e.g. v1.23.5+k3s1]

  • pv-migrate version and architecture [e.g. v0.5.5 - darwin_x86_64]:
    kubectl pv-migrate -v
    pv-migrate version 0.12.1 (commit: 8460071) (build date: 2022-04-11T21:42:07Z)

  • Installation method [e.g. homebrew, binary download]: krew

  • Source and destination PVC type, size and accessModes [e.g. ReadWriteMany, 8G, kubernetes.io/gce-pd -> ReadWriteOnce, N/A, rancher.io/local-path ]:

  • static predefined PV, PVC with type: "local"

Action Required: Fix Renovate Configuration

There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.

Location: renovate.json
Error type: The renovate configuration file contains some invalid settings
Message: Invalid regExp for regexManagers[0].fileMatch: '.github/workflows/**'

Simple migration

Have you considered simple migrations such as Helm chart renames (Prometheus operator) or a cluster rebuild, where no data actually has to move?

I've done this manually on AWS in the past by creating the new PV and PVC, getting their YAML, deleting the new PV and PVC, and then recreating them with references to the old EBS volumes substituted in.
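
A sketch of that manual "no data movement" approach, assuming old-pv / old-pvc / new-pvc are placeholders and the new claim's attributes (size, access modes, storage class) match the existing PV:

# Make sure the PV survives the deletion of its claim.
kubectl patch pv old-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
# Delete the old PVC; the PV becomes Released but keeps its data.
kubectl delete pvc old-pvc -n old-ns
# Clear the stale claim reference so the PV can be bound again.
kubectl patch pv old-pv -p '{"spec":{"claimRef":null}}'
# Create a new PVC that binds to the same PV.
kubectl apply -n new-ns -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: new-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 50Gi            # placeholder; must not exceed the PV's capacity
  storageClassName: gp2        # placeholder; must match the PV
  volumeName: old-pv
EOF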

How to use your tool without internet?

Is your feature request related to a problem? Please describe.
We have a cluster without internet access.
Describe the solution you'd like
Allow using the tool without downloading images from the internet.

Describe alternatives you've considered
Add an image path variable to change the default path.

all strategies fail if the pvc's name is longer than 63 characters

Describe the bug
all strategies fail if the pvc's name is longer than 63 characters
spec.template.spec.volumes[0].name: Invalid value: "dev-cms-coremedia-delivery-unit1-persistent-commerce-cache-cae-live-unit1-0": must be no more than 63 characters

To Reproduce
Steps to reproduce the behavior:

  1. Run command ' pv-migrate migrate dev-cms-coremedia-delivery-unit1-persistent-commerce-cache-cae-live-unit1-0 commerce-cache-cae-live-unit1 --ignore-mounted --log-format json --log-level debug'
  2. See error:
  3. {"dest":"dev-cms-coremedia-delivery-unit1-persistent-commerce-cache-cae-live-unit1-new-0","dest_ns":"","error":"Deployment.apps "pv-migrate-h070t-src-sshd" is invalid: [spec.template.spec.volumes[0].name: Invalid value: "dev-cms-coremedia-delivery-unit1-persistent-commerce-cache-cae-live-unit1-0": must be no more than 63 characters, spec.template.spec.containers[0].volumeMounts[0].name: Not found: "dev-cms-coremedia-delivery-unit1-persistent-commerce-cache-cae-live-unit1-0"]","id":"h070t","level":"warning","msg":":large_orange_diamond: Migration failed with this strategy, will try with the remaining strategies","source":"dev-cms-coremedia-delivery-unit1-persistent-commerce-cache-cae-live-unit1-0","source_ns":"","strategy":"lbsvc","time":"2022-02-03T14:56:48+01:00"}
    Error: all strategies have failed

Expected behavior
Migrate data even if names are longer than 63 characters.

Console output
Add the error logs and/or the output to help us diagnose the problem.

Version

  • Source and destination Kubernetes versions [e.g. v1.17.14-gke.1600, v1.21.1+k3s1]
    v1.20.7
  • Source and destination container runtimes [e.g. containerd://1.4.4-k3s2, docker://19.3.6]
  • pv-migrate version and architecture [e.g. v0.5.5 - darwin_x86_64]
    pv-migrate version 0.10.1 (commit: fdc11ad) (build date: 2022-01-02T14:04:16Z)
  • Installation method [e.g. homebrew, binary download]
  • Source and destination PVC type, size and accessModes [e.g. ReadWriteMany, 8G, kubernetes.io/gce-pd -> ReadWriteOnce, N/A, rancher.io/local-path ]

Additional context
Add any other context about the problem here.

ssh: port 22: Connection refused with svc strategy

Is your feature request related to a problem? Please describe.
It looks like for the svc strategy, it tries to connect with:

sshRemoteHost := releaseName + "-sshd." + sourceNs

In my case, I'm having this issue while trying to launch the command with this strategy:

ssh: connect to host pv-migrate-wjl9r-sshd.************** port 22: Connection refused
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(228) [Receiver=3.2.3]
rsync attempt 1/10 failed, waiting 5 seconds before trying again

I think I'm having this issue because releaseName + "-sshd." + sourceNs can't be resolved on my cluster. The default resolv.conf search list is usually: search <namespace>.svc.cluster.local svc.cluster.local cluster.local

Here is my question:

  • Shouldn't we use svcName.sourceNs.svc.cluster.local as it used to be (see the k8s docs)?

The service pv-migrate-wjl9r-sshd (ClusterIP *********** <none> 22/TCP) is created correctly.

Describe the solution you'd like
I may be wrong, but I don't think appending svc.cluster.local when using the svc strategy would cause any regressions.

Describe alternatives you've considered
There may be an option to specify the full domain?

Strategy "mnt2" always fails

Describe the bug
Strategy "mnt2" always fails, fallback "svc" works like a charm. Is there a way to get the reason why "mnt2" fails?

To Reproduce

  1. Create namespace "migrate"
  2. Create pvcs "data" and "destination"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: premium-rwo

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: destination
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: premium-rwo
  3. Run migration
-> % pv-migrate migrate data destination
🚀  Starting migration
💭  Will attempt 3 strategies: mnt2,svc,lbsvc
🚁  Attempting strategy: mnt2
🧹  Cleaning up
✨  Cleanup successful
⚠️  Migration failed with this strategy, will try with the remaining strategies
🚁  Attempting strategy: svc
🔑  Generating SSH key pair
🔑  Creating secret for the public key
🚀  Creating sshd pod
⏳  Waiting for the sshd pod to start running
🚀  Sshd pod started
🔑  Creating secret for the private key
🔗  Connecting to the rsync server
📂  Copying data... 100% |████████████████| ()
🧹  Cleaning up
✨  Cleanup successful
✅  Migration succeeded

Expected behavior
I'd expect the strategy "mnt2" to work. No matter what I do, it always fails and I can't see why.

Console output
see above

Version

  • Source and destination Kubernetes version: v1.21.5-gke.1802
  • Source and destination container runtimes: n/a
  • pv-migrate version 0.6.3 (commit: 20684eb26a83c54c88ea2f6327d82c43ba7e3389)
  • Darwin MacBook-Pro.local 20.6.0 Darwin Kernel Version 20.6.0: Mon Aug 30 06:12:21 PDT 2021; root:xnu-7195.141.6~3/RELEASE_X86_64 x86_64
  • Installation method: homebrew
  • Source and destination PVC type, size and accessModes: pd.csi.storage.gke.io, ReadWriteOnce, 10GB

Falling back between strategies failing randomly

This happens because the resources with the same task id from the previous strategy are still being cleaned up.

We can enumerate the strategies and append the index to the generated resource names, like pv-migrate-sshd-v6gxq-1, pv-migrate-sshd-v6gxq-2.

Alternatively, generate a different id per strategy.

All strategies failed

Describe the bug
Migrating a PVC from DigitalOcean to AWS (EKS) fails.

To Reproduce

DigitalOcean pvc

Name:          pvc-migrate-1
Namespace:     pvc
StorageClass:  do-block-storage
Status:        Bound
Volume:        pvc-b48b77a2-c41f-4097-9d84-f30e641b62df
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: dobs.csi.digitalocean.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem

AWS pvc

Name:          pvc-migrate
Namespace:     pvc
StorageClass:  gp2
Status:        Bound
Volume:        pvc-77e5847c-7c2e-4dce-8500-a4166f2881fa
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
               volume.kubernetes.io/selected-node: ip-10-0-27-30.eu-central-1.compute.internal
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem

Executed command for migration

pv-migrate migrate -i \
  --source-kubeconfig ~/test \
  --source-namespace pvc \
  --dest-kubeconfig ~/.kube/config \
  --dest-namespace pvc \
  pvc-migrate-1 pvc-migrate

Expected behavior
Migrate pvc from digitalocean to aws

Console output

🚀  Starting migration
💡  PVC pvc-migrate is mounted to node ip-10-0-27-30.eu-central-1.compute.internal, ignoring...
💭  Will attempt 3 strategies: mnt2, svc, lbsvc
🚁  Attempting strategy: mnt2
🦊  Strategy 'mnt2' cannot handle this migration, will try the next one
🚁  Attempting strategy: svc
🦊  Strategy 'svc' cannot handle this migration, will try the next one
🚁  Attempting strategy: lbsvc
🔑  Generating SSH key pair
🧹  Cleaning up
✨  Cleanup done
🔶  Migration failed with this strategy, will try with the remaining strategies
Error: all strategies failed

Version

  • Source and destination Kubernetes versions [ 1.21.5-do.0, 1.22]
  • Source and destination container runtimes [ containerd://1.4.11, docker://20.10.13]
  • pv-migrate version and architecture [v1.0.0 - linux_x86_64]
  • Installation method [binary download]

SSHD source resources gets uninstalled whilst the loadbalancer service is still being created

Hi ,

We are trying to migrate PVs using the lbsvc strategy, and when we run the command, the SSHD source resources get uninstalled while the LB service is still being created. I guess there is a timeout of 1 minute for the resources to become ready before proceeding with installing the destination rsync resources. For some reason, our LB service takes more than 1 minute to get the ingress IP, and pv-migrate suddenly starts uninstalling the resources. I would like to know if this is expected behaviour, or whether there is a way to increase the timeout.

To Reproduce
Steps to reproduce the behavior:

  1. Run command 'pv-migrate migrate
    --source-kubeconfig ~/.kube/config
    --source-context gke1
    --source-namespace upgradetest
    --source-kubeconfig ~/.kube/config
    --dest-context gke2
    --dest-namespace upgradetest
    --dest-delete-extraneous-files
    --helm-set sshd.service.loadBalancerIP=10.99.0.171
    --helm-set sshd.service.annotations."networking.gke.io/load-balancer-type"=Internal '
  2. See error
    creating 4 resource(s)
    beginning wait for 4 resources with timeout of 1m0s
    Service does not have load balancer ingress IP address: upgradetest/pv-migrate-abaac-src-sshd
    Service does not have load balancer ingress IP address: upgradetest/pv-migrate-abaac-src-sshd
    Service does not have load balancer ingress IP address: upgradetest/pv-migrate-abaac-src-sshd
    Service does not have load balancer ingress IP address: upgradetest/pv-migrate-abaac-src-sshd
    Service does not have load balancer ingress IP address: upgradetest/pv-migrate-abaac-src-sshd
    Service does not have load balancer ingress IP address: upgradetest/pv-migrate-abaac-src-sshd
    Service does not have load balancer ingress IP address: upgradetest/pv-migrate-abaac-src-sshd
    Service does not have load balancer ingress IP address: upgradetest/pv-migrate-abaac-src-sshd
    Service does not have load balancer ingress IP address: upgradetest/pv-migrate-abaac-src-sshd
    Service does not have load balancer ingress IP address: upgradetest/pv-migrate-abaac-src-sshd
    Service does not have load balancer ingress IP address: upgradetest/pv-migrate-abaac-src-sshd
    Service does not have load balancer ingress IP address: upgradetest/pv-migrate-abaac-src-sshd
    Service does not have load balancer ingress IP address: upgradetest/pv-migrate-abaac-src-sshd
    Service does not have load balancer ingress IP address: upgradetest/pv-migrate-abaac-src-sshd
    🧹 Cleaning up
    uninstall: Deleting pv-migrate-abaac-src

Expected behavior
successful migration

Version

  • Source and destination Kubernetes versions 1.22.8-gke.200
  • pv-migrate version 0.12.1 (commit: 8460071) (build date: 2022-04-11T21:42:07Z)
  • Installation method homebrew

Additional context
Please let me know if you need additional info.

Thanks,
Srujan.

changing the default image used

Is your feature request related to a problem? Please describe.
When trying to use pv-migrate in an air-gapped environment, it cannot work because it has no access to docker.io and thus cannot pull the images it uses.

Describe the solution you'd like
a flag like pv-migrate --image

Describe alternatives you've considered
Currently I'm using an ImageContentSourcePolicy to redirect docker.io to my internal registry, but this is not a good fix.
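
A possible workaround with the existing image flags (the same --sshd-image/--rsync-image flags discussed in another issue on this page), assuming the images have been mirrored to an internal registry; the registry host is a placeholder, and note that one issue above reports these flags being ignored in a development build:

pv-migrate migrate \
  --sshd-image registry.internal.example.com/pv-migrate-sshd:1.1.0 \
  --rsync-image registry.internal.example.com/pv-migrate-rsync:1.0.0 \
  old-pvc new-pvc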

Error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

Describe the bug
pv-migrate does not work with Kubernetes version 1.24+

To Reproduce
Steps to reproduce the behavior:

  1. I was trying to run pv-migrate from EKS cluster v1.16 to GKE cluster v1.24
  2. Error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

Expected behavior
Migrate pvc from EKS v1.16 to GKE v1.24

Console output
Error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

Version

  • Source and destination Kubernetes versions v1.16, v1.24
  • pv-migrate version and architecture v1.0.0 - darwin_x86_64
  • Installation method binary download

Additional context
Kubernetes ChangeLog states the following
The client.authentication.k8s.io/v1alpha1 ExecCredential has been removed. If you are using a client-go credential plugin that relies on the v1alpha1 API please contact the distributor of your plugin for instructions on how to migrate to the v1 API.
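
Not from this issue, but a commonly suggested workaround is to regenerate the kubeconfig with a recent CLI so that its exec section uses a supported API version; for EKS, for example (region and cluster name are placeholders):

aws eks update-kubeconfig --region eu-central-1 --name my-cluster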

Rethink logging

Structured logging wasn't the best idea for a CLI application. It needs to be user-friendly and focused on the UX. Maybe get rid of logrus.

Allow for longer transfers without a constant connection to the pod/k8s api

Is your feature request related to a problem? Please describe.
I triggered a migration from my laptop using pv-migrate to migrate a large amount of data (1.5TB) into my cluster-managed storage. Due to the length of time the migration took, my laptop went in-and-out of connectivity and ended up cycling through all the migration strategies.

Describe the solution you'd like
I can see two not mutually-exclusive solutions here:

  • once a migration has been confirmed to start with a given strategy, a more robust reconnection mechanism to prevent unnecessary transfer halts & strategy cycling. After all, it creates a Job, which Kubernetes will manage Pods for until success is achieved.
  • a fire-and-forget mode with no progress output, leaving cleanup of the Job, Pods and Services up to the user
    • stretch goal here could be to have a --cleanup mode that looks for successfully-completed pv-migrate jobs and cleans them up

Ability to set LoadBalancer IP and connection IP manually

Is your feature request related to a problem? Please describe.
My provider gives nodes only internal IPs, which are reachable from external IPs via NAT redirection.

For example, my node has only an internal IP like 10.0.0.1 on the network interface, but I can access it using a provided external IP like 213.177.1.2.

So the MetalLB load balancer assigns those internal IPs to services, but to connect to a service externally we must use another IP like 213.177.1.2.

For that reason pv-migrate always fails to make a connection, because it only sees the internal IP and tries to connect to it.

Describe the solution you'd like
A solution for this problem could be to manually declare the desired LoadBalancer IP to use, and the external IP to which the connection must be made.
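
A hedged sketch of what such a run might look like using flags that appear elsewhere on this page, assuming sshd.service.loadBalancerIP pins the address MetalLB assigns and --dest-host-override (see the svcmesh issue below) makes the rsync side connect to the external NAT address instead; the IPs are the examples from above:

pv-migrate migrate \
  --strategies lbsvc \
  --helm-set sshd.service.loadBalancerIP=10.0.0.1 \
  --dest-host-override 213.177.1.2 \
  old-pvc new-pvc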

Does this project need a new maintainer?

As the title says, I was wondering if this project needed a new maintainer;
Mostly because @cgroschupp did open a PR a while ago, and it's still open.
I also forked the project to try to avoid using a network copy if possible (my code is pretty ugly, to be honest, but still).
There are multiple situations where this could be a super useful tool, so that's the reason behind the question.
I would be interested in contributing too.
Regards

Add needed permissions list to use this command

Is your feature request related to a problem? Please describe.
I had to migrate a PVC from an old namespace to a new namespace. I wanted to do it with pv-migrate, but I could not due to a lack of permissions. The problem was that I did not know which permissions were missing, because pv-migrate did not log it.

Describe the solution you'd like

  1. Create file like: https://github.com/up9inc/mizu/blob/main/docs/PERMISSIONS.md

Describe alternatives you've considered

  1. pv-migrate should log the lack of permissions.

pv-migrate is not working if the pvc is used by a running pod

Describe the bug
The PVC is not getting copied if it is being used by a running pod in Kubernetes. I tried the --ignore-mounted option, but the tool throws a Multi-Attach volume error in the sshd pod.

pv-migrate version: v0.13.0

To Reproduce
Steps to reproduce the behavior:
Below is the PVC template we used:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
  namespace: namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard

The PVC is created from the above template and is being used by a running pod. Now if I try to copy this PVC to another namespace, I get a Multi-Attach volume error in the sshd pod description.

Below is the command used:
pv-migrate migrate \
  --source-kubeconfig /path/to/source/kubeconfig \
  --source-context some-context \
  --source-namespace source-ns \
  --dest-kubeconfig /path/to/dest/kubeconfig \
  --dest-context some-other-context \
  --dest-namespace dest-ns \
  --ignore-mounted \
  --dest-delete-extraneous-files \
  old-pvc new-pvc

With the above command, although it shows the message below, the copy is not successful due to the Multi-Attach volume error:

💡 PVC is mounted to node , ignoring...

Expected behavior
With the --ignore-mounted option, the source PVC should be copied to the destination PVC without failure.
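
For context (not part of this issue report): a ReadWriteOnce volume can normally be attached to only one node at a time, so the sshd pod cannot attach it while the workload still holds it on another node; --ignore-mounted only skips pv-migrate's own safety check. A common workaround is to scale the workload down for the duration of the copy (names are placeholders):

kubectl -n source-ns scale deployment my-app --replicas=0
# ... run pv-migrate ...
kubectl -n source-ns scale deployment my-app --replicas=1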

Regression: lbsvc not working

Describe the bug
Hi,
There is a regression on the lbsvc strategy since commit 9212755 (the first non-working commit).

To Reproduce
Cluster with IPv6 svcs.

On commit 0fef214

./pv-migrate --log-level debug migrate --ignore-mounted --source-kubeconfig src --source-namespace ff9a6d29fd5041998ddebc6edc8c0fc8 --dest-kubeconfig dst --dest-namespace ff9a6d29fd5041998ddebc6edc8c0fc8 --strategies lbsvc ff9a6d29fd5041998ddebc6edc8c0fc8-www ff9a6d29fd5041998ddebc6edc8c0fc8-www
🚀  Starting migration
💡  PVC ff9a6d29fd5041998ddebc6edc8c0fc8-www is mounted to node hello-30, ignoring...
💡  PVC ff9a6d29fd5041998ddebc6edc8c0fc8-www is mounted to node foobar-2, ignoring...
💭  Will attempt 1 strategies: lbsvc
🚁  Attempting strategy: lbsvc
🔑  Generating SSH key pair
creating 4 resource(s)
beginning wait for 4 resources with timeout of 1m0s
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd. 0 out of 1 expected pods are ready
🔶  Migration failed with this strategy, will try with the remaining strategies
❌  Error: all strategies have failed

last events of the deployment are:

status:                                                                         
  conditions:                                                                   
  - lastTransitionTime: "2021-12-06T17:19:17Z"                                  
    lastUpdateTime: "2021-12-06T17:19:17Z"                                      
    message: Created new replica set "pv-migrate-urnc2-sshd-77cbc986b9"         
    reason: NewReplicaSetCreated                                                
    status: "True"                                                              
    type: Progressing                                                           
  - lastTransitionTime: "2021-12-06T17:19:17Z"                                  
    lastUpdateTime: "2021-12-06T17:19:17Z"                                      
    message: Deployment does not have minimum availability.                     
    reason: MinimumReplicasUnavailable                                          
    status: "False"                                                             
    type: Available                                                             
  - lastTransitionTime: "2021-12-06T17:19:17Z"                                  
    lastUpdateTime: "2021-12-06T17:19:17Z"                                      
    message: 'pods "pv-migrate-urnc2-sshd-77cbc986b9-" is forbidden: error looking
      up service account ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-urnc2-sshd: serviceaccount
      "pv-migrate-urnc2-sshd" not found'                                        
    reason: FailedCreate                                                        
    status: "True"                                                              
    type: ReplicaFailure                                                        
  observedGeneration: 1                                                         
  unavailableReplicas: 1

On commit v0.7.2

./pv-migrate --log-level debug migrate --ignore-mounted --source-kubeconfig pikprod-1.ymlW --source-namespace ff9a6d29fd5041998ddebc6edc8c0fc8 --dest-kubeconfig dst --dest-namespace ff9a6d29fd5041998ddebc6edc8c0fc8 --strategies lbsvc ff9a6d29fd5041998ddebc6edc8c0fc8-www ff9a6d29fd5041998ddebc6edc8c0fc8-www
🚀  Starting migration
💡  PVC ff9a6d29fd5041998ddebc6edc8c0fc8-www is mounted to node pikprod-1-worker-30, ignoring...
💡  PVC ff9a6d29fd5041998ddebc6edc8c0fc8-www is mounted to node piktest-27-a-worker-2, ignoring...
💭  Will attempt 1 strategies: lbsvc
🚁  Attempting strategy: lbsvc
🔑  Generating SSH key pair
creating 4 resource(s)
beginning wait for 4 resources with timeout of 1m0s
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-bxqdd-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-bxqdd-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-bxqdd-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-bxqdd-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-bxqdd-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-bxqdd-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-bxqdd-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-bxqdd-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-bxqdd-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-bxqdd-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ff9a6d29fd5041998ddebc6edc8c0fc8/pv-migrate-bxqdd-sshd. 0 out of 1 expected pods are ready
creating 3 resource(s)
beginning wait for 3 resources with timeout of 1m0s
🧹  Cleaning up
uninstall: Deleting pv-migrate-bxqdd
Starting delete for "pv-migrate-bxqdd-sshd" Service
Starting delete for "pv-migrate-bxqdd-sshd" Deployment
Starting delete for "pv-migrate-bxqdd-sshd" Secret
Starting delete for "pv-migrate-bxqdd-sshd" ServiceAccount
beginning wait for 4 resources to be deleted with timeout of 1m0s
purge requested for pv-migrate-bxqdd
uninstall: Deleting pv-migrate-bxqdd
Starting delete for "pv-migrate-bxqdd-rsync" Job
Starting delete for "pv-migrate-bxqdd-rsync" Secret
Starting delete for "pv-migrate-bxqdd-rsync" ServiceAccount
beginning wait for 3 resources to be deleted with timeout of 1m0s
purge requested for pv-migrate-bxqdd
✨  Cleanup done
🔶  Migration failed with this strategy, will try with the remaining strategies
❌  Error: all strategies have failed

Version

  • Source and destination Kubernetes versions: src 1.18 - dst 1.21
  • Source and destination container runtimes: src docker://19.3.13 - dst cri-o://1.21.3

Do you need any more info ?

Thanks and cheers !

New strategy: svcmesh

Is your feature request related to a problem? Please describe.
We have 2 clusters already connected using submariner.io. But the "svc" strategy requires that both clusters have the same host:

sameCluster := s.ClusterClient.RestConfig.Host == d.ClusterClient.RestConfig.Host

Describe the solution you'd like
When we run pv-migrate on a multi-cluster "mesh", where pods can already cross-communicate between clusters, the "svc" strategy does not cover our scenario; as we understand it, the "svc" strategy only runs when source and destination are in the same cluster.

Our requirement is similar to "svc", but it must not check "host1 == host2".

Both source and destination clusters are connected via submariner.io. We fix the service name with "--helm-set fullnameOverride=fixedname12345".

We then run "subctl export service --namespace default servicename" to expose the DNS service name so it is available in the other cluster too.

In this case : both clusters can communicate with each other using clusterips. Cross cluster Pod Ips can be called using "curl" and services using special notation. Example: from nodes in cluster 1 we can connect to cluster 2 using "cluster2name.servicename.namespace.clusterset.local" like dns records.

Describe alternatives you've considered
--helm-set fullnameOverride=pvmig12345
--dest-host-override lju1.pvmig234.default.service.cluster.local
--helm-set sshd.service.type=ClusterIp
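
A sketch of how these alternatives might be combined into a single invocation, assuming the fixed service name has been exported with subctl and resolves from the other cluster (the context names, PVC names and the override hostname below are placeholders, not values from this setup):

pv-migrate migrate \
  --source-context cluster-1 \
  --source-namespace default \
  --dest-context cluster-2 \
  --dest-namespace default \
  --helm-set fullnameOverride=fixedname12345 \
  --helm-set sshd.service.type=ClusterIP \
  --dest-host-override fixedname12345-sshd.default.svc.clusterset.local \
  source-pvc dest-pvc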

Additional context
k3s 1.23

a problem with lbsvc migration strategy | ("Migration failed with this strategy, will try with the remaining strategies" error="timed out waiting for the condition" strategy=lbsvc)

Describe the bug
Hi there, thanks for creating this nice tool. But it looks like there's something wrong when I'm using it. I plan to migrate our persistent volume to a new cluster, and there seems to be a problem when the new cluster connects over SSH to the pod created in the old cluster. I'll break down the cluster specs below. Is there a problem with my parameters, or is it something else?
Looking forward to your responses, thanks! 😊

To Reproduce
Steps to reproduce the behavior:

./pv-migrate.exe migrate \
  --source-context old-public-cluster \
  --source-namespace default \
  --dest-context new-private-cluster \
  --dest-namespace artifact \
  archiva archiva-pvc

Expected behavior
Success

Console output

time="2021-05-26T01:18:58+07:00" level=info msg="Will attempt 3 strategies" dest=artifact/archiva-pvc source=default/archiva strategies="mnt2,svc,lbsvc"
time="2021-05-26T01:18:58+07:00" level=info msg="Attempting strategy" strategy=lbsvc
time="2021-05-26T01:18:58+07:00" level=info msg="Generating RSA SSH key pair"
time="2021-05-26T01:18:58+07:00" level=info msg="Creating secret for the public key"
time="2021-05-26T01:18:58+07:00" level=info msg="Creating sshd pod" podName=pv-migrate-sshd-hry4t
time="2021-05-26T01:18:58+07:00" level=info msg="Waiting for the sshd pod to start running" podName=pv-migrate-sshd-hry4t
time="2021-05-26T01:19:16+07:00" level=info msg="Sshd pod running" podName=pv-migrate-sshd-hry4t
time="2021-05-26T01:19:22+07:00" level=info msg="Waiting for LoadBalancer IP" elapsedSecs=5 intervalSecs=5 service=pv-migrate-sshd-hry4t timeoutSecs=120
time="2021-05-26T01:19:27+07:00" level=info msg="Waiting for LoadBalancer IP" elapsedSecs=10 intervalSecs=5 service=pv-migrate-sshd-hry4t timeoutSecs=120
time="2021-05-26T01:19:31+07:00" level=info msg="Waiting for LoadBalancer IP" elapsedSecs=15 intervalSecs=5 service=pv-migrate-sshd-hry4t timeoutSecs=120
time="2021-05-26T01:19:37+07:00" level=info msg="Waiting for LoadBalancer IP" elapsedSecs=20 intervalSecs=5 service=pv-migrate-sshd-hry4t timeoutSecs=120
time="2021-05-26T01:19:42+07:00" level=info msg="Waiting for LoadBalancer IP" elapsedSecs=25 intervalSecs=5 service=pv-migrate-sshd-hry4t timeoutSecs=120
time="2021-05-26T01:19:47+07:00" level=info msg="Waiting for LoadBalancer IP" elapsedSecs=30 intervalSecs=5 service=pv-migrate-sshd-hry4t timeoutSecs=120
time="2021-05-26T01:19:52+07:00" level=info msg="Waiting for LoadBalancer IP" elapsedSecs=35 intervalSecs=5 service=pv-migrate-sshd-hry4t timeoutSecs=120
time="2021-05-26T01:19:57+07:00" level=info msg="Waiting for LoadBalancer IP" elapsedSecs=40 intervalSecs=5 service=pv-migrate-sshd-hry4t timeoutSecs=120
time="2021-05-26T01:20:02+07:00" level=info msg="Waiting for LoadBalancer IP" elapsedSecs=45 intervalSecs=5 service=pv-migrate-sshd-hry4t timeoutSecs=120
time="2021-05-26T01:20:07+07:00" level=info msg="Creating secret for the private key"
time="2021-05-26T01:20:07+07:00" level=info msg="Connecting to the rsync server" targetHost=xx.xxx.xxx.xxx
time="2021-05-26T01:22:10+07:00" level=info msg="Cleaning up"
time="2021-05-26T01:22:11+07:00" level=warning msg="Migration failed with this strategy, will try with the remaining strategies" error="timed out waiting for the condition" strategy=lbsvc
time="2021-05-26T01:22:11+07:00" level=info msg="Attempting strategy" strategy=mnt2
time="2021-05-26T01:22:11+07:00" level=info msg="Strategy cannot handle this migration, will try the next one" strategy=mnt2
time="2021-05-26T01:22:11+07:00" level=info msg="Attempting strategy" strategy=svc
time="2021-05-26T01:22:11+07:00" level=info msg="Strategy cannot handle this migration, will try the next one" strategy=svc
time="2021-05-26T01:22:11+07:00" level=fatal msg="all strategies have failed"

Version

  • Source Kubernetes version: 1.18.17-gke.100
  • Destination Kubernetes version: 1.18.17-gke.100 (when this command was executed; current: 1.18.17-gke.700)
  • Source and destination container runtimes: docker://19.3.14
  • pv-migrate version and architecture: pv-migrate_0.5.5_windows_x86_64
  • Installation method: binary download
  • Source PVC spec:

    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 50Gi

  • Destination PVC spec:

    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi

Additional context

  • The source cluster uses public networking
  • The destination cluster uses private networking
  • Using the GKE platform

Update README

  • Add instructions for building with goreleaser
  • Any other updates that are needed

pv-migrate aborts after ~2m

Describe the bug
I am trying to migrate several GB of data, and pv-migrate aborts every time after ~2 minutes. This is fairly consistent across different PVCs and different strategies. Re-running the command seems to complete the rsync, though.

To Reproduce
Steps to reproduce the behavior:
pv-migrate --log-level debug migrate \
  --source-context <CTX> \
  --source-namespace <Namespace> \
  --source-path "/" \
  --dest-context <CTX> \
  --dest-namespace <Namespace> \
  --dest-path "/" \
  --strategies "svc" \
  --ignore-mounted \
  --dest-delete-extraneous-files \
  <Source_PVC> <Destination_PVC>

Expected behavior
The Job runs longer, potentially up to several hours.

Console output

🚀  Starting migration
❕  Extraneous files will be deleted from the destination
💭  Will attempt 1 strategies: svc
🚁  Attempting strategy: svc
🔑  Generating SSH key pair
creating 7 resource(s)
beginning wait for 7 resources with timeout of 1m0s
Deployment is not ready: ame/pv-migrate-i36aw-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ame/pv-migrate-i36aw-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ame/pv-migrate-i36aw-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ame/pv-migrate-i36aw-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ame/pv-migrate-i36aw-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ame/pv-migrate-i36aw-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ame/pv-migrate-i36aw-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ame/pv-migrate-i36aw-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: ame/pv-migrate-i36aw-sshd. 0 out of 1 expected pods are ready
📂  Copying data...  84% | (72.779 MB/s) [1m53s:25s]
🧹  Cleaning up
uninstall: Deleting pv-migrate-i36aw
📂  Copying data...  84% | (70.286 MB/s) [2m0s:25s]
Starting delete for "pv-migrate-i36aw-sshd" Service
Starting delete for "pv-migrate-i36aw-rsync" Job
Starting delete for "pv-migrate-i36aw-sshd" Deployment
Starting delete for "pv-migrate-i36aw-sshd" Secret
Starting delete for "pv-migrate-i36aw-rsync" Secret
Starting delete for "pv-migrate-i36aw-sshd" ServiceAccount
Starting delete for "pv-migrate-i36aw-rsync" ServiceAccount
beginning wait for 7 resources to be deleted with timeout of 1m0s
purge requested for pv-migrate-i36aw
✨  Cleanup done
🔶  Migration failed with this strategy, will try with the remaining strategies
❌  Error: all strategies have failed

Version

  • Source and destination Kubernetes versions: v1.21.4
  • Source and destination container runtimes: containerd
  • pv-migrate version and architecture: pv-migrate_0.7.2_linux_x86_64.tar.gz
  • Installation method: binary download
  • Source and destination PVC type, size and accessModes [e.g. ReadWriteOnce, 50G, kubernetes.io/gce-pd -> pd.csi.storage.gke.io ]

Additional context
Source and Destination are in the same cluster and in the same namespace. All accessing pods were stopped at the time of migration.
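
One way to gather more detail would be tailing the rsync job's logs in a second terminal while the copy is running (the job name below is taken from the console output above; the job is removed during cleanup, so this only works before cleanup starts):

kubectl logs -f job/pv-migrate-i36aw-rsync --namespace <Namespace>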

Ability to pass annotations to service

Is your feature request related to a problem? Please describe.
I use MetalLB as the load balancer, and it has a feature to share a single IP between several services via annotations:

apiVersion: v1
kind: Service
metadata:
  annotations:
    metallb.universe.tf/allow-shared-ip: "key-to-share-1.2.3.4"

With MetalLB it's a common situation that the Kubernetes cluster has no free IP addresses left in the pool to assign to new temporary services, so pv-migrate fails because all IPs are busy.

Describe the solution you'd like
To solve this problem, we could add the ability to pass annotations to the Service created by pv-migrate by extending the Helm chart and adding values like this:

sshd:
  service:
    annotations: 
      metallb.universe.tf/allow-shared-ip: "my-key"
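
If the chart gained the proposed sshd.service.annotations value, the CLI form would presumably look roughly like this (hypothetical, since the value does not exist at the time of this request; the dots in the annotation key are escaped as Helm's --set syntax requires, and the PVC names are placeholders):

pv-migrate migrate \
  --helm-set "sshd.service.annotations.metallb\.universe\.tf/allow-shared-ip=my-key" \
  <source-pvc> <dest-pvc>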

What do you think about this way of implementing this feature?

Cannot get resource "persistentvolumeclaims" in API group

Describe the bug
Problem when migrating a PV between two Google Cloud projects.

To Reproduce
Steps to reproduce the behavior:

I log in to the first project and save the kubeconfig file into kube_src:

gcloud auth login
gcloud container clusters get-credentials cluster-1 --zone europe-west4-a --project project-1
cat ~/.kube/config > ./kube_src

I log in to the second project and save the kubeconfig file into kube_dst:

gcloud auth login
gcloud container clusters get-credentials cluster-1 --zone europe-west4-a --project project-2
cat ~/.kube/config > ./kube_dst

I run pv-migrate:

pv-migrate --log-level trace migrate \
  --source-kubeconfig ./kube_src \
  --source-namespace ef-backend \
  --dest-kubeconfig ./kube_dst \
  --dest-namespace ef-backend \
  --strategies lbsvc \
  --ignore-mounted \
  ef-backend ef-backend

Expected behavior
I expect that the PVC ef-backend is migrated from project-1 to project-2. The user I'm using has an Owner role in GCP, so all permissions should be assigned.

Console output
❌ Error: persistentvolumeclaims "ef-backend" is forbidden: User "<user name >" cannot get resource "persistentvolumeclaims" in API group "" in the namespace "ef-backend": requires one of ["container.persistentVolumeClaims.get"] permission(s).
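
A quick way to check which of the two kubeconfigs lacks the permission is kubectl's built-in access review (the file and namespace names are taken from the steps above):

kubectl auth can-i get persistentvolumeclaims --namespace ef-backend --kubeconfig ./kube_src
kubectl auth can-i get persistentvolumeclaims --namespace ef-backend --kubeconfig ./kube_dst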

Version

  • Kubectl client version: v1.24.3
  • Kubectl server version: v1.22.8-gke.202
  • pv-migrate version 1.0.0

PV Migration across cluster without LB svc

Is your feature request related to a problem? Please describe.
Use case: on-premises cluster migration.
An on-premises cluster doesn't always have the ability to create a Service of type LoadBalancer. Currently, pv-migrate only supports lbsvc as the strategy between multiple clusters.

Describe the solution you'd like
Run rsync through the Kubernetes API

With kubectl exec it's possible to pipe data through the Kubernetes API.

The built-in kubectl cp command uses this, but instead of rsync it supports only tar.

rsync -avurP --blocking-io --rsync-path= --rsh="kubectl exec $POD_NAME -i -- " /dir rsync:/dir

Describe alternatives you've considered

kubectl cp
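
For comparison, a minimal kubectl cp round trip through a local directory might look like this (pod names, namespaces and paths are placeholders); it re-copies everything instead of syncing deltas, which is exactly the limitation rsync would avoid:

kubectl cp ns-a/source-pod:/data ./data
kubectl cp ./data ns-b/dest-pod:/data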

Additional context
As far as I know, the OpenShift CLI supports rsync as a copy mechanism, see: https://github.com/openshift/origin/blob/release-3.11/pkg/oc/cli/rsync/rsync.go

`chown` fails

Hi, I tried to use pv-migrate (great software...) with aws-efs-csi-driver; this CSI driver fails when you try to do a chown, you can see at:

I got this error:

WARNING[2021-04-28T12:15:55+02:00] Migration failed, will try remaining strategies  error="job failed with pod logs: rsync: [generator] chown \"/dest/png\" failed: Operation not permitted (1)\n\r         32.77K   6%    0.00kB/s    0:00:00  \r        520.19K 100%  464.84MB/s    0:00:00 (xfr#1, to-chk=2/4)\nplugins/\npng/\nrsync: [receiver] chown \"/dest/.grafana.db.CpcBci\" failed: Operation not permitted (1)\n\nrsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1330) [sender=v3.13.0_rc2-264-g725ac7fb]\nsent 16.89K bytes  received 50 bytes  11.29K bytes/sec\ntotal size is 520.19K  speedup is 30.70\nRsync job failed after 5 retries\n" priority=1000 strategy=mount-both

Would it be possible to avoid using chown/chmod?
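
At the rsync level, ownership and permission changes can be skipped by turning off the options implied by -a; a minimal sketch (whether and how pv-migrate could expose such an option is exactly what is being asked here):

rsync -a --no-owner --no-group --no-perms /source/ /dest/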

Thanks

CAP_CAP_SYS_CHROOT causes svc strategy to fail

@Frankkkkk, @utkuozdemir
Commit 610e82c is failing with:

Error: failed to create containerd task: OCI runtime create failed: container_linux.go:367: starting container process caused: unknown capability "CAP_CAP_SYS_CHROOT": unknown

Apparently, the CAP_ prefix is added automatically, resulting in a duplicated prefix. It appears the value should just be SYS_CHROOT.
