
fekide / volumerize


This project is a fork of blacklabelops/volumerize.


Docker Volume Backups Multiple Backends

Home Page: https://hub.docker.com/r/fekide/volumerize/

License: MIT License

Shell 92.36% Dockerfile 4.65% JavaScript 0.23% Python 2.76%
Topics: docker, backup, volumes, docker-compose

volumerize's Issues

Refactor scripts to python

Bash scripting works, but the control flow is very hard to follow. Python is much easier to comprehend, and it will also make it much easier to integrate new features later.

Python is the natural choice, since it is also the language duplicity is written in. A rewrite therefore would not require any new dependencies and could integrate with duplicity even better.

Error with S3 backend: No module named 'boto'

Hello,

Thanks for your work with volumerize. 🎉

I have an issue with S3 during backup (the boto module is missing). If I attach to the container and install boto manually, the next run works.

> docker exec -it volumerize backup             
BackendException: Could not initialize backend: No module named 'boto'
running /postexecute/backup/0-removeoldbackup.sh
Checking if old backups should be removed

running /postexecute/backup/1-replicate.sh


> docker exec -it volumerize pip3 install boto  
Collecting boto
  Downloading boto-2.49.0-py2.py3-none-any.whl (1.4 MB)
     |████████████████████████████████| 1.4 MB 8.1 MB/s
Installing collected packages: boto
Successfully installed boto-2.49.0

> docker exec -it volumerize backup           
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Wed Mar 10 23:30:21 2021
--------------[ Backup Statistics ]--------------
StartTime 1615415569.40 (Wed Mar 10 23:32:49 2021)
EndTime 1615415569.56 (Wed Mar 10 23:32:49 2021)
ElapsedTime 0.17 (0.17 seconds)
SourceFiles 298
SourceFileSize 13114843 (12.5 MB)
NewFiles 1
NewFileSize 4096 (4.00 KB)
DeletedFiles 2
ChangedFiles 1
ChangedFileSize 573440 (560 KB)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 4
RawDeltaSize 2153 (2.10 KB)
TotalDestinationSizeChange 959 (959 bytes)
Errors 0
-------------------------------------------------

running /postexecute/backup/0-removeoldbackup.sh
Checking if old backups should be removed

running /postexecute/backup/1-replicate.sh

My volumerize is set up with docker-compose:

version: "3"
services:
  # other services ...

  volumerize:
    container_name: volumerize
    volumes:
    - volumerize-cache:/volumerize-cache
    - ./tobackup:/source
    environment:
    - VOLUMERIZE_SOURCE=/source
    - VOLUMERIZE_TARGET=s3://s3.fr-par.scw.cloud/xxxx
    - AWS_ACCESS_KEY_ID=xxxxxx
    - AWS_SECRET_ACCESS_KEY=xxxxx
    - PASSPHRASE=xxxxxxxxx
    - 'VOLUMERIZE_JOBBER_TIME=0 0 3 * * *'
    - VOLUMERIZE_FULL_IF_OLDER_THAN=14D
    - TZ=Europe/London
    image: fekide/volumerize:latest

Is it a misconfiguration on my side or a problem in volumerize?

Thanks in advance for your help 😄

How to use FTPS?

I've been testing this Docker image with some test volumes and it works fine so far with a local backup folder.
However, I would like to set up another job with FTPS, but I can't get it working.
Volumerize won't connect to my FritzBox. Searching the Duplicity docs, it looks like I need to set an environment variable. See:

http://duplicity.nongnu.org/docs.html

However, I don't know where to put this password for duplicity. Duplicati works fine with my FritzBox, so that's not the problem.
I've tried the following for the VOLUMERIZE_TARGET parameter:

ftps://user:[email protected]:port/duplicity

What is the right configuration to use volumerize with FTPS?
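
A sketch of one possible configuration, untested against a FritzBox: duplicity reads FTP/FTPS passwords from the FTP_PASSWORD environment variable rather than from the target URL, so something like the following may work. Host, port, and paths below are placeholders.

# Sketch, untested: the password comes from FTP_PASSWORD, not the URL.
# Host, port, and paths are placeholders.
docker run -d --name volumerize \
  -v "$(pwd)/tobackup:/source" \
  -e VOLUMERIZE_SOURCE=/source \
  -e "VOLUMERIZE_TARGET=ftps://user@fritz.box:21/duplicity" \
  -e FTP_PASSWORD=secret \
  fekide/volumerize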

Google Drive: RedirectMissingLocation

Last full backup is too old, forcing full backup
Attempt 1 failed. RedirectMissingLocation: Redirected but the response is missing a Location: header.
Attempt 2 failed. RedirectMissingLocation: Redirected but the response is missing a Location: header.
Attempt 3 failed. RedirectMissingLocation: Redirected but the response is missing a Location: header.
Attempt 4 failed. RedirectMissingLocation: Redirected but the response is missing a Location: header.
Giving up after 5 attempts. RedirectMissingLocation: Redirected but the response is missing a Location: header.

This seems to be a well-known error and should be fixable relatively easily by switching versions of the underlying drive library. I confirmed that the image tagged blacklabelops/volumerize:1.5.1 as of today (d9ddc01511af) works as expected.

This issue was also posted in:
blacklabelops#86 (comment)
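
Until the library switch lands, the workaround confirmed above can be applied by pinning the older upstream image:

# Workaround confirmed in this issue: pin the older upstream image.
docker pull blacklabelops/volumerize:1.5.1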

Support docker swarm

Docker swarm is an alternative, more scalable container orchestrator than docker-compose. It should also be supported with corresponding utilities.

It is currently only possible to stop containers. Possible additions, for Docker swarm or in general, are:

  • execute a job in a container (for example, set maintenance mode or run a backup utility inside the container)
  • scale a service

Passcodes are currently stored in environment variables. It should also be possible to use _FILE environment variables (e.g. for Docker secrets).
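
A minimal sketch of the usual _FILE convention (hypothetical; this is not current volumerize behavior, and the helper name is made up):

#!/bin/bash
# Hypothetical startup helper: if FOO_FILE is set, load FOO's value from
# that file (e.g. a Docker secret mounted under /run/secrets/).
file_env() {
  local var="$1"
  local file_var="${var}_FILE"
  if [ -n "${!file_var:-}" ]; then
    export "$var"="$(cat "${!file_var}")"
  fi
}
file_env AWS_SECRET_ACCESS_KEY
file_env PASSPHRASE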

'azure-storage' meta-package is deprecated

From CI:

ERROR: Command errored out with exit status 1:
     command: /usr/bin/python2 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-46hJ9b/azure-storage/setup.py'"'"'; __file__='"'"'/tmp/pip-install-46hJ9b/azure-storage/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-bvfc7i
         cwd: /tmp/pip-install-46hJ9b/azure-storage/
    Complete output (19 lines):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-install-46hJ9b/azure-storage/setup.py", line 55, in <module>
        raise RuntimeError(message)
    RuntimeError:
    Starting with v0.37.0, the 'azure-storage' meta-package is deprecated and cannot be installed anymore.
    Please install the service specific packages prefixed by `azure` needed for your application.
    The complete list of available packages can be found at:
    https://aka.ms/azsdk/python/all
    Here's a non-exhaustive list of common packages:
    - [azure-storage-blob](https://pypi.org/project/azure-storage-blob) : Blob storage client
    - [azure-storage-file-share](https://pypi.org/project/azure-storage-file-share) : Storage file share client
    - [azure-storage-file-datalake](https://pypi.org/project/azure-storage-file-datalake) : ADLS Gen2 client
    - [azure-storage-queue](https://pypi.org/project/azure-storage-queue): Queue storage client
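
Per the error message, a plausible fix is to depend on a service-specific package instead of the meta-package (assuming the duplicity version in the image works with the split SDK):

# Sketch: install the specific client instead of the deprecated meta-package.
pip3 install azure-storage-blob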

make it possible to run the container without root

When running the container with the --user flag, I've got the following errors in the logs:

touch: /root/.jobber: Permission denied
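
For reference, the failing invocation looks roughly like this (UID/GID are arbitrary non-root values):

# Hypothetical reproduction; any non-root UID should trigger the same error.
docker run --rm --user 1000:1000 fekide/volumerize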

From a security perspective it's not a good idea to run containers as root. It would be nice to be able to run this rootless.

Thanks in advance!

provide notifications when a backup failed

It would be nice to receive notifications when a backup fails or completes.

Maybe you can use Apprise for this? There are Docker containers for it.
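
One place this could be wired in is a post-execute hook, since numbered post-execute scripts are already visible in the logs above. A sketch; the script name and WEBHOOK_URL are assumptions, and curl may need to be added to the image:

#!/bin/sh
# Hypothetical hook, e.g. /postexecute/backup/2-notify.sh:
# post the backup result to a webhook.
curl -fsS -X POST -H 'Content-Type: application/json' \
  -d '{"text":"volumerize backup finished"}' \
  "$WEBHOOK_URL"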

Thanks in advance!

Update documentation to fix encryption

When running the encryption command from the documentation, I got:

Extra arguments given.
rand: Use -help for summary.

The correct command is:

docker run --rm fekide/volumerize openssl rand -base64 128

This is because of an update to OpenSSL.

Add support for automatically deleting old backups

Scripts already exist that use the duplicity methods remove-older-than, remove-all-but-n-full, and remove-all-inc-of-but-n-full, but they have to be run manually and cannot be scheduled as a cron job via an environment variable.

Also, the --force option needs to be added to actually delete the backups:

Note that --force will be needed to delete the files instead of just listing them
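
For reference, the manual invocation looks roughly like this (the target URL is a placeholder):

# Cleanup per the duplicity man page; --force actually deletes the old
# backup chains instead of only listing them.
duplicity remove-older-than 14D --force s3://s3.example.com/bucket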

Problem connecting to Minio

Hi all, I'm having trouble connecting to a self-hosted Minio server, and I've been digging through the documentation and examples with no luck.

Actual behavior:

Volumerize does not connect or back up to the Minio server. Instead, I get this console log:

volumerize_1      | root: /etc/volumerize/periodicBackup
volumerize_1      | ExecAndWait: /bin/sh: exit status 23
volumerize_1      | {"fate":1,"job":{"command":"/etc/volumerize/periodicBackup","name":"VolumerizeBackupJob","status":"Good","time":"46 * * * * *"},"startTime":1627573546,"stderr":"running /postexecute/backup/0-removeoldbackup.sh\nChecking if old backups should be removed\n\nrunning /postexecute/backup/1-replicate.sh\n\n","stdout":"running /postexecute/backup/0-removeoldbackup.sh\nChecking if old backups should be removed\n\nrunning /postexecute/backup/1-replicate.sh\n\n","succeeded":false,"user":"root","version":"1.4"}

Expected behavior:

Volumerize connects to minio and is able to backup volumes.

Here is part of my docker-compose:

  volumerize:
    image: fekide/volumerize
    volumes: 
      - db:/source:ro
      - /var/run/docker.sock:/var/run/docker.sock
      - volumerize_cache:/volumerize-cache
    environment: 
      - VOLUMERIZE_SOURCE=/source:ro
      - "VOLUMERIZE_TARGET=s3:https://minio.domain.com:443/bucket-name"
      - VOLUMERIZE_CONTAINERS=db_node
      - "AWS_ACCESS_KEY_ID=AAAAAAAAAAAAAAA"
      - "AWS_SECRET_ACCESS_KEY=BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB"
      - VOLUMERIZE_JOBBER_TIME=46 * * * *
      - DEBUG=true
      - TZ=America/Los_Angeles
    depends_on: 
      - db_node
    restart: always

For reference, using the Minio Python package I'm able to connect like this:

client = Minio('minio.domain.com',
               access_key='AAAAAAAAAAAAAAA',
               secret_key='BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB',
               secure=True)

Other info:

  • The Minio server is running behind an nginx proxy and only ports 80 & 443 are open.

I would appreciate some feedback or suggestions. I've tried changing VOLUMERIZE_TARGET from "s3" to "minio" and removing "https" & ":443".
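
For comparison, a target in the same URL style as the working Scaleway example earlier on this page would be (a sketch; whether it resolves this issue is unverified):

# Sketch, unverified: same style as the working Scaleway target above.
VOLUMERIZE_TARGET=s3://minio.domain.com/bucket-name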

Thanks

How to backup Postgres and MariaDB at the same time with prepost?

I have multiple containers running Postgres and MariaDB databases, and I want a single job that dumps both my MariaDB and Postgres databases.

Currently there are separate image tags for mariadb and postgres, and I don't know exactly how to integrate this with my Volumerize.
Is Volumerize's idea to have separate containers for this?

Maybe a single image could be built with both databases' clients and the corresponding environment variables?
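
A rough sketch of that idea, assuming the image is Alpine-based (the package names are assumptions):

# Hypothetical one-off test: add both database clients to one container.
docker exec volumerize apk add --no-cache postgresql-client mariadb-client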

Thanks in advance!
