
korb's Introduction

'ello

I build software around Identity and SSO, and also other things sometimes.

I mainly work on authentik, an IDP focused on being easy to use and flexible, and also make a couple tools to test Identity protocols:

Also for some reason I decided to make my own DHCP and DNS Server, Gravity.

I also like to use a lot of IaC workflows for my lab, like infrastructure with Ansible/Puppet/Terraform and k8s with Flux.

korb's People

Contributors

beryju, bsherman, cubic3d, dependabot[bot], fe-ax, gabe565, leon-gorissen, qjcg


korb's Issues

PVC shouldn't be removed after a failed data move

WARN[0594] Failed to move data component=strategy error="timed out waiting for the condition" strategy=copy-twice-name
INFO[0594] Cleaning up... component=strategy strategy=copy-twice-name

My source PVC gets deleted even though the data copy was not successful. This should be prevented to avoid data loss.

Found pod which mounts source PVC

Problem description

When looking for the Pods using the PVC in the specified namespace, the target PVC should have a different name.

On line 10, the reference for the target PVC is named pvc.
On line 22, each PVC retrieved from the existing Pods is also named pvc.

Suggestion to speed up PVC transfer

A small suggestion on how Korb could be a lot faster: create a tar.gz file on the source PVC, move it to the destination, and then extract it. This method would significantly improve the speed.

Error opening log stream

Description

When using more than one container per Pod (such as containers for mesh), the logs cannot be retrieved from the Pod.

Error message:

WARN[0009] error opening log stream                      component=mover-job error="a container name must be specified for pod korb-job-c9ca96bd-daa3-4a9a-92df-beca480ae83e-nf7wp, choose one of: [mover linkerd-proxy] or one of the init containers: [linkerd-init]"

This can be fixed by specifying the Container name in the options here.

Another option would be to also add the annotation "linkerd.io/inject": "disabled" here.
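For the second option, the annotation on the mover Pod template would look roughly like this (a sketch; where exactly it lands in Korb's generated pod spec is an assumption):

```yaml
metadata:
  annotations:
    linkerd.io/inject: "disabled"
```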

Unable to move between longhorn and ceph

I really like the idea of the tool, but it doesn't seem to quite work.
If I manually specify the import strategy (since I have already exported the data), it still doesn't create a new PVC.

Version: 2.2.0, running on Linux x86_64.

$ korb paperless --new-pvc-name paperless-v1 --new-pvc-storage-class fast-ceph-filesystem
DEBU[0000] Created client from kubeconfig                component=migrator kubeconfig=/home/sky/.kube/config
DEBU[0000] Got current namespace                         component=migrator namespace=default
DEBU[0000] Got Source PVC                                component=migrator name=paperless uid=ed281628-436f-4a41-a1f3-00f0aa32a538
DEBU[0000] Compatible Strategies:                        component=migrator
DEBU[0000] Copy the PVC to the new Storage class and with new size and a new name, delete the old PVC, and copy it back to the old name.  component=migrator identifier=copy-twice-name
DEBU[0000] Export PVC content into a tar archive.        component=migrator identifier=export
DEBU[0000] Import data into a PVC from a tar archive.    component=migrator identifier=import
ERRO[0000] No (compatible) strategy selected.            component=migrator

Copying a KubeVirt VM's DataVolume/PVC to a new StorageClass fails

My use case is to move a VM from its old StorageClass to a new one, but I'm encountering an issue.

This is the command line I used:

korb vm-image --new-pvc-storage-class netapp-faster --strategy copy-twice-name

What happens is that Korb copies the data to a new PVC, which works great, but when we get to the delete-PVC step, there is a DataVolume that makes sure the PVC exists, so the PVC gets re-created before the rename happens.

I would love another strategy that just does the clone: old PVC to new PVC.

Flag --new-pvc-namespace does not work

korb version is 1.1.4

# korb  -v
korb version 1.1.4

I want to migrate the PVC korb/cephfs-test to korb-dst/cephfs-test:

# korb  --new-pvc-namespace korb-dst cephfs-test --source-namespace korb
DEBU[0000] Created client from kubeconfig                component=migrator kubeconfig=/root/.kube/config
DEBU[0000] Got current namespace                         component=migrator namespace=istio-system
DEBU[0000] Got Source PVC                                component=migrator name=cephfs-test uid=738b79b1-017a-4469-9225-cc10b9a7ec9f
DEBU[0000] No new Name given, using old name             component=migrator
DEBU[0000] Found pod which mounts source PVC             component=migrator pod=cephfs-test-5b5cfd7646-2nxsv
DEBU[0000] Walking owners                                component=migrator meta=cephfs-test-5b5cfd7646-2nxsv
DEBU[0000] Walking owners                                component=migrator meta=cephfs-test-5b5cfd7646
DEBU[0000] Walking owners                                component=migrator meta=cephfs-test
DEBU[0000] Found deployment                              component=migrator
DEBU[0000] Compatible Strategies:                        component=migrator
DEBU[0000] Copy the PVC to the new Storage class and with new size and a new name, delete the old PVC, and copy it back to the old name.  component=migrator
DEBU[0000] Only one compatible strategy, running         component=migrator
DEBU[0000] Set timeout from PVC size                     component=strategy strategy=copy-twice-name timeout=1m0s
WARN[0000] This strategy assumes you've stopped all pods accessing this data.  component=strategy strategy=copy-twice-name
DEBU[0000] creating temporary PVC                        component=strategy stage=1 strategy=copy-twice-name
WARN[0002] PVC not bound yet, retrying                   component=strategy pvc-name=cephfs-test-copy-1655433610 strategy=copy-twice-name
WARN[0004] PVC not bound yet, retrying                   component=strategy pvc-name=cephfs-test-copy-1655433610 strategy=copy-twice-name
WARN[0006] PVC not bound yet, retrying                   component=strategy pvc-name=cephfs-test-copy-1655433610 strategy=copy-twice-name
WARN[0008] PVC not bound yet, retrying                   component=strategy pvc-name=cephfs-test-copy-1655433610 strategy=copy-twice-name
........

But no PVC appears in the korb-dst namespace:

# k get pvc -n korb-dst
No resources found in korb-dst namespace.

It appears in the source namespace korb instead:

# k get pvc -n korb
NAME                          STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
cephfs-test                   Bound     pvc-738b79b1-017a-4469-9225-cc10b9a7ec9f   1Gi        RWX            rook-cephfs       35m
cephfs-test-copy-1655433610   Pending                                                                        rook-ceph-block   2m25s

Korb job pod is pending due to taints

91s         Warning   FailedScheduling             pod/korb-job-ed10d5a1-af1c-4f86-aef5-a70d5c712824-k6788                                                                              0/4 nodes are available: 1 node(s) had volume node affinity conflict, 3 node(s) had untolerated taint {nodes.kube.xxx/type: xx-xxx}. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling..
31s         Warning   FailedScheduling             pod/korb-job-ed10d5a1-af1c-4f86-aef5-a70d5c712824-k6788                                                                              0/4 nodes are available: persistentvolumeclaim "prometheus-kube-prometheus-stack-prometheus-db-prometheus-kube-prometheus-stack-prometheus-0-copy-1698998578" is being deleted. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling..

The Korb job stayed in the Pending state:

DEBU[0059] Pod not in correct state yet                  component=mover-job phase=Pending
DEBU[0061] Pod not in correct state yet                  component=mover-job phase=Pending
DEBU[0061] Pod not in correct state yet                  component=mover-job phase=Pending
WARN[0061] failed to wait for pod to be running          component=mover-job error="timed out waiting for the condition"
WARN[0061] Failed to move data                           component=strategy error="pod not in correct state" strategy=copy-twice-name
INFO[0061] Cleaning up...                                component=strategy strategy=copy-twice-name

Could we allow the Korb job to tolerate all taints?

spec:
  tolerations:
  - operator: "Exists"

Korb can't see my PVC, getting the wrong namespace

I have a PVC "home-home-assistant" in the namespace home and I'm trying to copy it to a different StorageClass (same namespace).

I used this command:

$ korb --kubeConfig=./kubeconfig --new-pvc-name home-assistant-config-v1 --new-pvc-storage-class rook-ceph-block home-home-assistant
DEBU[0000] Created client from kubeconfig                component=migrator kubeconfig=./kubeconfig
DEBU[0000] Got current namespace                         component=migrator namespace=default
PANI[0000] Failed to get Source PVC                      component=migrator error="persistentvolumeclaims \"home-home-assistant\" not found"
panic: (*logrus.Entry) 0xc0003ff8f0

goroutine 1 [running]:
github.com/sirupsen/logrus.(*Entry).log(0xc0003ff880, 0x0, 0xc00049e048, 0x18)
	github.com/sirupsen/[email protected]/entry.go:259 +0x2e5
github.com/sirupsen/logrus.(*Entry).Log(0xc0003ff880, 0xc000000000, 0xc0001af080, 0x1, 0x1)
	github.com/sirupsen/[email protected]/entry.go:293 +0x86
github.com/sirupsen/logrus.(*Entry).Panic(...)
	github.com/sirupsen/[email protected]/entry.go:331
github.com/BeryJu/korb/pkg/migrator.(*Migrator).validateSourcePVC(0xc0003e7f10, 0x8)
	github.com/BeryJu/korb/pkg/migrator/validate.go:31 +0x26e
github.com/BeryJu/korb/pkg/migrator.(*Migrator).Validate(0xc0003e7f10, 0x0, 0x0, 0x0, 0x0)
	github.com/BeryJu/korb/pkg/migrator/validate.go:12 +0x45
github.com/BeryJu/korb/pkg/migrator.(*Migrator).Run(0xc0003e7f10)
	github.com/BeryJu/korb/pkg/migrator/migrator.go:65 +0x45
github.com/BeryJu/korb/cmd.glob..func1(0x2f2b580, 0xc0003c59e0, 0x1, 0x6)
	github.com/BeryJu/korb/cmd/root.go:41 +0x125
github.com/spf13/cobra.(*Command).execute(0x2f2b580, 0xc00003a080, 0x6, 0x6, 0x2f2b580, 0xc00003a080)
	github.com/spf13/[email protected]/command.go:860 +0x2c2
github.com/spf13/cobra.(*Command).ExecuteC(0x2f2b580, 0x2f3d140, 0x0, 0xc00005e778)
	github.com/spf13/[email protected]/command.go:974 +0x375
github.com/spf13/cobra.(*Command).Execute(...)
	github.com/spf13/[email protected]/command.go:902
github.com/BeryJu/korb/cmd.Execute()
	github.com/BeryJu/korb/cmd/root.go:52 +0x31
main.main()
	github.com/BeryJu/korb/main.go:6 +0x25

But as you can see it does exist:

$ k get pvc -n home
NAME                  STATUS   VOLUME                CAPACITY   ACCESS MODES   STORAGECLASS      AGE
home-home-assistant   Bound    home-home-assistant   1Gi        RWO            longhorn-static   22m
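Since the debug output shows namespace=default, Korb is looking in the kubeconfig's current namespace rather than home. Passing --source-namespace (the flag used elsewhere in these issues) should let it find the PVC; a sketch of the adjusted command:

```shell
korb --kubeConfig=./kubeconfig --source-namespace home \
  --new-pvc-name home-assistant-config-v1 \
  --new-pvc-storage-class rook-ceph-block home-home-assistant
```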

Ability to specify wait time

Sometimes the copy-twice-name strategy needs more time to complete because of dynamically provisioned volumes; 60 seconds is often right at the limit of what it needs. It would be nice to be able to specify a longer wait time, for example:

korb data-postgresql-0 --source-namespace default --new-pvc-storage-class new-csi-ratain --strategy copy-twice-name --timeout 360

Here's a timeout on my setup:

...
DEBU[0060] Pod not in correct state yet                  component=mover-job phase=Pending
DEBU[0060] Pod not in correct state yet                  component=mover-job phase=Pending
WARN[0060] failed to wait for pod to be running          component=mover-job error="timed out waiting for the condition"
WARN[0060] Failed to move data                           component=strategy error="pod not in correct state" strategy=copy-twice-name
INFO[0060] Cleaning up...                                component=strategy strategy=copy-twice-name

Needs single stage volume copy

I really like the korb utility, but I have run into scenarios it does not really handle. One is when stage 1 of a copy-twice-name run completes but stage 2 fails, either because something else binds to the volume or because the source volume fails to delete in general. That leaves the data intact, but it exists under the wrong name for existing deployments. I did not see a way to do a single PVC copy that would be the equivalent of the stage-2 copy; I only saw the copy-twice-name, export, and import strategies, unless I missed something in the code.

The ideal fix would be the ability to perform a single-stage copy by specifying source and destination PVC names.

WARN with EOF at the end of the copy

I'm getting the following at the end of the copy. The "failed to copy" warning happens 10k+ times, but the copied VirtualMachine/PVC works fine.

The "failed to delete source pvc" warning is a KubeVirt/DataVolume issue.

korb vm-image --new-pvc-storage-class netapp-nas --strategy copy-twice-name

WARN[0430] failed to copy                                component=mover-job error=EOF
(repeated 10,000+ times)
DEBU[0430] Waiting for PVC Deletion, retrying            component=strategy pvc-name=vm-image strategy=copy-twice-name
WARN[0430] failed to delete source pvc                   component=strategy error="context deadline exceeded" strategy=copy-twice-name
INFO[0430] Cleaning up...                                component=strategy strategy=copy-twice-name

Error moving longhorn pvc to rook-ceph

Command I used:

korb gitea-db --new-pvc-name gitea-db-v1 --new-pvc-storage-class rook-ceph-block
DEBU[0000] Created client from kubeconfig                component=migrator kubeconfig=/Users/will/.kube/config
DEBU[0000] Got current namespace                         component=migrator namespace=default
DEBU[0000] Got Source PVC                                component=migrator name=gitea-db uid=f64818b5-87a3-4443-820a-cf478753aabb
DEBU[0000] Found pod which mounts source PVC             component=migrator pod=seafile-db-85747d569d-qcqtv
DEBU[0000] Found pod which mounts source PVC             component=migrator pod=plex-6b6bf6b5f4-46llk
DEBU[0000] Found pod which mounts source PVC             component=migrator pod=home-assistant-5bd7888f6-lrljs
DEBU[0000] Walking owners                                component=migrator meta=seafile-db-85747d569d-qcqtv
DEBU[0000] Walking owners                                component=migrator meta=seafile-db-85747d569d
DEBU[0000] Walking owners                                component=migrator meta=seafile-db
DEBU[0000] Found deployment                              component=migrator
DEBU[0000] Walking owners                                component=migrator meta=plex-6b6bf6b5f4-46llk
DEBU[0000] Walking owners                                component=migrator meta=plex-6b6bf6b5f4
DEBU[0000] Walking owners                                component=migrator meta=plex
DEBU[0000] Found deployment                              component=migrator
DEBU[0000] Walking owners                                component=migrator meta=home-assistant-5bd7888f6-lrljs
DEBU[0000] Walking owners                                component=migrator meta=home-assistant-5bd7888f6
DEBU[0000] Walking owners                                component=migrator meta=home-assistant
DEBU[0000] Found deployment                              component=migrator
DEBU[0000] Compatible Strategies:                        component=migrator
DEBU[0000] Copy the PVC to the new Storage class and with new size and a new name, delete the old PVC, and copy it back to the old name.  component=migrator
DEBU[0000] Only one compatible strategy, running         component=migrator
DEBU[0000] Set timeout from PVC size                     component=strategy strategy=copy-twice-name timeout=0s
WARN[0000] This strategy assumes you've stopped all pods accessing this data.  component=strategy strategy=copy-twice-name
DEBU[0000] creating temporary PVC                        component=strategy stage=1 strategy=copy-twice-name
DEBU[0002] starting mover job                            component=strategy stage=2 strategy=copy-twice-name
DEBU[0004] Pod not in correct state yet                  component=mover-job phase=Pending
...lots of repeats
WARN[0062] error opening log stream                      component=mover-job error="resource name may not be empty"

The error from the mover job complained that it's not able to mount the source PVC in read-only mode; I don't think Longhorn supports that.

Unrelated: not sure why it shows all these other pods and deployments under "Found pod which mounts source PVC".
