fekide / volumerize
This project is forked from blacklabelops/volumerize
Docker Volume Backups Multiple Backends
Home Page: https://hub.docker.com/r/fekide/volumerize/
License: MIT License
Bash scripting is cool, but its control flow is very hard to follow. Python is much easier to comprehend, and it will also make it much easier to integrate new features later.
Python is the natural choice, since it is also the language duplicity is written in. It won't require any new dependencies and allows even tighter integration with duplicity.
Hello,
Thanks for your work on volumerize!
I have an issue with S3 during backup (the boto module is missing). If I attach to the container and install boto manually, the next run works.
> docker exec -it volumerize backup
BackendException: Could not initialize backend: No module named 'boto'
running /postexecute/backup/0-removeoldbackup.sh
Checking if old backups should be removed
running /postexecute/backup/1-replicate.sh
> docker exec -it volumerize pip3 install boto
Collecting boto
Downloading boto-2.49.0-py2.py3-none-any.whl (1.4 MB)
     |████████████████████████████████| 1.4 MB 8.1 MB/s
Installing collected packages: boto
Successfully installed boto-2.49.0
> docker exec -it volumerize backup
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Wed Mar 10 23:30:21 2021
--------------[ Backup Statistics ]--------------
StartTime 1615415569.40 (Wed Mar 10 23:32:49 2021)
EndTime 1615415569.56 (Wed Mar 10 23:32:49 2021)
ElapsedTime 0.17 (0.17 seconds)
SourceFiles 298
SourceFileSize 13114843 (12.5 MB)
NewFiles 1
NewFileSize 4096 (4.00 KB)
DeletedFiles 2
ChangedFiles 1
ChangedFileSize 573440 (560 KB)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 4
RawDeltaSize 2153 (2.10 KB)
TotalDestinationSizeChange 959 (959 bytes)
Errors 0
-------------------------------------------------
running /postexecute/backup/0-removeoldbackup.sh
Checking if old backups should be removed
running /postexecute/backup/1-replicate.sh
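As a workaround until boto ships in the image, the dependency could be baked into a derived image instead of installing it into a running container. This is only a sketch; the base tag is an assumption:

```dockerfile
# Hypothetical derived image; the :latest tag is an assumption.
FROM fekide/volumerize:latest

# boto is needed by duplicity's s3:// backend but missing from the base image
RUN pip3 install boto
```

Building once from this Dockerfile avoids losing the manual fix whenever the container is recreated.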
My volumerize is set up with docker-compose:
version: "3"
services:
  # other services ...
  volumerize:
    container_name: volumerize
    volumes:
      - volumerize-cache:/volumerize-cache
      - ./tobackup:/source
    environment:
      - VOLUMERIZE_SOURCE=/source
      - VOLUMERIZE_TARGET=s3://s3.fr-par.scw.cloud/xxxx
      - AWS_ACCESS_KEY_ID=xxxxxx
      - AWS_SECRET_ACCESS_KEY=xxxxx
      - PASSPHRASE=xxxxxxxxx
      - 'VOLUMERIZE_JOBBER_TIME=0 0 3 * * *'
      - VOLUMERIZE_FULL_IF_OLDER_THAN=14D
      - TZ=Europe/London
    image: fekide/volumerize:latest
Is this a misconfiguration on my side or a problem in volumerize?
Thanks in advance for your help!
Builds on tags are still not working.
Maybe tag discovery should rather be done in buildscripts/release.sh.
I've been testing this Docker image with some test volumes, and it is working fine so far with a local backup folder.
However, I would like to set up another job with FTPS, but I can't get it working.
Volumerize won't connect to my FritzBox. Looking through the Duplicity docs, it seems I need to set an environment variable. See:
http://duplicity.nongnu.org/docs.html
However, I don't know where to put this password for duplicity. Duplicati works fine with my FritzBox, so that's not the problem.
I've tried the following for the VOLUMERIZE_TARGET parameter:
ftps://user:[email protected]:port/duplicity
What is the right configuration to use volumerize with FTPS?
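For the FTP/FTPS backends, duplicity takes the password from the FTP_PASSWORD environment variable rather than from the target URL. A sketch of the relevant compose environment, with placeholder host, user, path, and password:

```yaml
environment:
  - VOLUMERIZE_SOURCE=/source
  # duplicity's ftps backend reads the password from FTP_PASSWORD,
  # not from the URL (all values below are placeholders)
  - VOLUMERIZE_TARGET=ftps://user@fritz.box/duplicity
  - FTP_PASSWORD=secret
```

Keeping the password out of the URL also avoids issues with special characters that would otherwise need URL-encoding.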
Last full backup is too old, forcing full backup
Attempt 1 failed. RedirectMissingLocation: Redirected but the response is missing a Location: header.
Attempt 2 failed. RedirectMissingLocation: Redirected but the response is missing a Location: header.
Attempt 3 failed. RedirectMissingLocation: Redirected but the response is missing a Location: header.
Attempt 4 failed. RedirectMissingLocation: Redirected but the response is missing a Location: header.
Giving up after 5 attempts. RedirectMissingLocation: Redirected but the response is missing a Location: header.
This seems to be a well-known error. It should be fixable relatively easily by switching versions of the underlying drive library. I confirmed that the image tagged blacklabelops/volumerize:1.5.1 as of today (d9ddc01511af) works as expected.
This issue was also posted in:
blacklabelops#86 (comment)
Docker Swarm is an alternative, more scalable container orchestrator than docker-compose. It should also be supported with corresponding utilities.
It is currently only possible to stop containers. Possible additions for Docker Swarm, or in general, are:
Passcodes are currently stored in environment variables. It should also be possible to supply them via _FILE environment variables.
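The usual _FILE convention could be sketched like this in the entrypoint; `file_env` is a hypothetical helper, not part of volumerize today:

```shell
#!/usr/bin/env bash
# Hypothetical helper sketching the common *_FILE pattern: if VAR_FILE is set
# and VAR is not, load VAR's value from the referenced file.
file_env() {
  local var="$1"
  local fileVar="${var}_FILE"
  if [ -n "${!fileVar:-}" ] && [ -z "${!var:-}" ]; then
    export "$var"="$(cat "${!fileVar}")"
  fi
}

# Demo: PASSPHRASE_FILE points at a file holding the secret.
secret_file=$(mktemp)
printf 'hunter2' > "$secret_file"
export PASSPHRASE_FILE="$secret_file"
file_env PASSPHRASE
echo "$PASSPHRASE"
```

With a helper like this, a Docker secret mounted at /run/secrets/passphrase could be passed as PASSPHRASE_FILE=/run/secrets/passphrase.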
From CI:
ERROR: Command errored out with exit status 1:
command: /usr/bin/python2 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-46hJ9b/azure-storage/setup.py'"'"'; __file__='"'"'/tmp/pip-install-46hJ9b/azure-storage/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-bvfc7i
cwd: /tmp/pip-install-46hJ9b/azure-storage/
Complete output (19 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-46hJ9b/azure-storage/setup.py", line 55, in <module>
raise RuntimeError(message)
RuntimeError:
Starting with v0.37.0, the 'azure-storage' meta-package is deprecated and cannot be installed anymore.
Please install the service specific packages prefixed by `azure` needed for your application.
The complete list of available packages can be found at:
https://aka.ms/azsdk/python/all
Here's a non-exhaustive list of common packages:
- [azure-storage-blob](https://pypi.org/project/azure-storage-blob) : Blob storage client
- [azure-storage-file-share](https://pypi.org/project/azure-storage-file-share) : Storage file share client
- [azure-storage-file-datalake](https://pypi.org/project/azure-storage-file-datalake) : ADLS Gen2 client
- [azure-storage-queue](https://pypi.org/project/azure-storage-queue): Queue storage client
I would like to restore only a database from my backup. Which command is required to restore my database only?
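Duplicity itself supports restoring a single path from a backup via --file-to-restore, so a manual restore might look roughly like the following; the subdirectory name and target URL are placeholders:

```shell
# Restore only the database subdirectory from the backup into /tmp/restore
# (path and URL are placeholders; run inside the volumerize container).
duplicity restore --file-to-restore data/mydb s3://host/bucket /tmp/restore
```

The path given to --file-to-restore is relative to the backup source directory.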
When running the container with the --user flag, I've got the following errors in the logs:
touch: /root/.jobber: Permission denied
From a security perspective it's not a good idea to run containers as root. It would be nice to be able to run this rootless.
Thanks in advance!
It would be nice to receive notifications when a backup fails or completes.
Maybe you could use Apprise for this? There are Docker containers for it.
Thanks in advance!
Does this image support a remote docker host?
A lot of containers allow setting a variable such as:
DOCKER_HOST
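A sketch of what that could look like in a compose file; the hostname and port are placeholders, and this is a proposal rather than documented volumerize behavior:

```yaml
volumerize:
  image: fekide/volumerize
  environment:
    # point the container at a remote Docker daemon
    # instead of mounting the local /var/run/docker.sock
    - DOCKER_HOST=tcp://remote-docker.example.com:2376
```

DOCKER_HOST is the standard variable the Docker CLI and SDKs honor, so support would mostly depend on how volumerize talks to the daemon internally.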
When running the encryption command from the documentation, I got:
Extra arguments given.
rand: Use -help for summary.
The correct command should be:
docker run --rm fekide/volumerize openssl rand -base64 128
This is because of an update to OpenSSL.
Scripts have already been created to use the duplicity methods remove-older-than, remove-all-but-n-full, and remove-all-inc-of-but-n-full, but they need to be run manually and cannot be run as a cron job via an environment variable.
Also, the --force option needs to be added to actually delete the backups:
Note that --force will be needed to delete the files instead of just listing them
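The corresponding duplicity invocations could look roughly like this; the target URL and retention values are placeholders:

```shell
# All values are placeholders; --force deletes instead of just listing.
duplicity remove-older-than 30D --force s3://host/bucket
duplicity remove-all-but-n-full 3 --force s3://host/bucket
duplicity remove-all-inc-of-but-n-full 1 --force s3://host/bucket
```

Wiring these into a cron schedule via an environment variable would mirror how VOLUMERIZE_JOBBER_TIME drives the backup job.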
While searching for a backup tool made for Docker, I found another interesting tool:
https://github.com/camptocamp/bivac
This tool reads the Docker volumes directly from the Docker socket. At a set interval it scans for new volumes and can then back them up. This makes it a lot easier to add volumes to my backup than adding volumes via mount points.
Is it possible to add this function to Volumerize?
Since some data might change more quickly, it could be useful to use different timers for different sources, for example with VOLUMERIZE_JOBBER_TIME1.
The jobber job is not created when multiple sources are specified, because only VOLUMERIZE_SOURCE is checked for.
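Collecting numbered sources could be sketched like this; the VOLUMERIZE_SOURCE&lt;N&gt; and VOLUMERIZE_JOBBER_TIME&lt;N&gt; names follow the proposal above and are not current volumerize behavior:

```python
def numbered_jobs(env):
    """Collect (source, schedule) pairs from numbered environment variables.

    Hypothetical sketch: scans VOLUMERIZE_SOURCE1, VOLUMERIZE_SOURCE2, ...
    and pairs each with its VOLUMERIZE_JOBBER_TIME<N>, falling back to the
    global VOLUMERIZE_JOBBER_TIME when no numbered schedule is given.
    """
    jobs = []
    i = 1
    while f"VOLUMERIZE_SOURCE{i}" in env:
        jobs.append({
            "source": env[f"VOLUMERIZE_SOURCE{i}"],
            "time": env.get(f"VOLUMERIZE_JOBBER_TIME{i}",
                            env.get("VOLUMERIZE_JOBBER_TIME")),
        })
        i += 1
    return jobs
```

Each collected pair would then get its own jobber entry instead of the single job created from VOLUMERIZE_SOURCE today.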
As database backups are quite important, the respective images should also be built and pushed.
Hi all, I'm having trouble connecting to a self-hosted Minio server, and I've been digging through the documentation and examples with no luck.
Volumerize does not connect or back up to the Minio server.
Instead, I get this console log:
volumerize_1 | root: /etc/volumerize/periodicBackup
volumerize_1 | ExecAndWait: /bin/sh: exit status 23
volumerize_1 | {"fate":1,"job":{"command":"/etc/volumerize/periodicBackup","name":"VolumerizeBackupJob","status":"Good","time":"46 * * * * *"},"startTime":1627573546,"stderr":"running /postexecute/backup/0-removeoldbackup.sh\nChecking if old backups should be removed\n\nrunning /postexecute/backup/1-replicate.sh\n\n","stdout":"running /postexecute/backup/0-removeoldbackup.sh\nChecking if old backups should be removed\n\nrunning /postexecute/backup/1-replicate.sh\n\n","succeeded":false,"user":"root","version":"1.4"}
Expected behavior: Volumerize connects to Minio and is able to back up volumes.
Here is part of my docker-compose:
volumerize:
  image: fekide/volumerize
  volumes:
    - db:/source:ro
    - /var/run/docker.sock:/var/run/docker.sock
    - volumerize_cache:/volumerize-cache
  environment:
    - VOLUMERIZE_SOURCE=/source:ro
    - "VOLUMERIZE_TARGET=s3:https://minio.domain.com:443/bucket-name"
    - VOLUMERIZE_CONTAINERS=db_node
    - "AWS_ACCESS_KEY_ID=AAAAAAAAAAAAAAA"
    - "AWS_SECRET_ACCESS_KEY=BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB"
    - VOLUMERIZE_JOBBER_TIME=46 * * * *
    - DEBUG=true
    - TZ=America/Los_Angeles
  depends_on:
    - db_node
  restart: always
For reference, using the Minio python package I'm able to connect like this:
from minio import Minio

client = Minio('minio.domain.com',
               access_key='AAAAAAAAAAAAAAA',
               secret_key='BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB',
               secure=True)
Other info -
I would appreciate some feedback or suggestions. I've tried changing VOLUMERIZE_TARGET from "s3" to "minio" and removing "https" and ":443".
Thanks
I have multiple containers running Postgres and MariaDB databases, and I want a single job to dump both my MariaDB and Postgres databases.
Right now, there are separate tags for mariadb and postgres, and I don't know exactly how to integrate this with my Volumerize.
Is Volumerize's idea to have separate containers for this?
Maybe we can make a single container with the added databases and environment variables?
Thanks in advance!