
someone1 / zfsbackup-go


Backup ZFS snapshots to cloud storage such as Google, Amazon, Azure, etc. Built with the enterprise in mind.

License: MIT License


zfsbackup-go's Introduction

ZFSBackup

DISCLAIMER: This is a work in progress and is still considered beta, though I personally use this in a production environment and have tested it for my own use cases (I'm looking for feedback on other people's experience before considering this "production ready").

Overview

This backup software was designed for the secure, long-term storage of ZFS snapshots on remote storage. Backup jobs are resilient to network failures and can be stopped/resumed. It works by splitting the ZFS send stream (the format for which is committed and can be received on future versions of ZFS, as per the man page) into chunks and then optionally compressing, encrypting, and signing each chunk before uploading it to your remote storage location(s) of choice. Backup chunks are validated using SHA256 and CRC32C checksums (along with the many integrity checks built into compression algorithms, SSL/TLS transport protocols, and the ZFS stream format itself). The software is completely self-contained and has no external dependencies.
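Conceptually (ignoring the exact ordering of chunking vs. compression, which zfsbackup handles internally), the result is roughly equivalent in spirit to this shell pipeline sketch, where the tool names and the recipient address are stand-ins for the built-in implementations:

# Illustrative only - zfsbackup does all of this in-process, not by shelling out
zfs send Tank/Dataset@snapshot-20170101 \
  | gzip -6 \
  | gpg --encrypt --sign -r user@example.com \
  | split -b 200m - Tank_Dataset.zstream.gz.pgp.vol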

This project was inspired by the duplicity project.

Highlights

  • Written in Go
  • No external dependencies - Just drop in the binary on your system and you're all set!
  • Backup jobs are resumable and resilient to network failures
  • Backup files can be compressed and optionally encrypted and/or signed.
  • Concurrent by design, enable multiple cores for parallel processing
  • Configurable Operation - Limit bandwidth usage, space usage, CPU usage, etc.
  • Backup to multiple destinations at once - just comma-separate the destination URIs
  • Uses familiar ZFS send/receive options

Supported Backends

  • Google Cloud Storage (gs://)
  • Amazon AWS S3 (s3://) (Glacier supported indirectly via lifecycle rules)
  • Any S3 Compatible Storage Provider (e.g. Minio, StorageMadeEasy, Ceph, etc.)
    • Set the AWS_S3_CUSTOM_ENDPOINT environmental variable to the compatible target API URI (see the combined example after this list)
  • Azure Blob Storage (azure://)
    • Auth: Set the AZURE_ACCOUNT_NAME and AZURE_ACCOUNT_KEY environmental variables to the appropriate values, or if using SAS, set AZURE_SAS_URI to a container-authorized SAS URI
    • Point to a custom endpoint by setting the AZURE_CUSTOM_ENDPOINT environmental variable
    • Although no durability target is published, there is an in-depth explanation of their architecture here - using Reed-Solomon erasure encoding and user-configurable redundancy settings
  • BackBlaze B2 (b2://)
    • Auth: Set the B2_ACCOUNT_ID and B2_ACCOUNT_KEY environmental variables to the appropriate values
    • 99.999999999% durability - Using the Reed-Solomon erasure encoding
  • Local file path (file://[relative|/absolute]/local/path)
  • SSH/SFTP (ssh://)
    • Auth: username & password, public key or ssh-agent.
    • For username & password, set the SSH_USERNAME and SSH_PASSWORD environment variables or use the URL format: ssh://username:password@server/remote/path.
    • For public key auth, set the SSH_KEY_FILE environment variable. By default zfsbackup tries to use common key names from the user's home directory.
    • ssh-agent auth is activated when SSH_AUTH_SOCK exists.
    • By default zfsbackup also uses the known hosts file from the user's home directory. To disable host key checking, set SSH_KNOWN_HOSTS to ignore. You can also specify the path to your own known hosts file.
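For example, backend credentials and endpoints can be wired up through the environment before a run. All values below are placeholders; each variable is described in the matching bullet above:

# S3-compatible provider (e.g. Minio)
export AWS_S3_CUSTOM_ENDPOINT="https://minio.example.com"

# Azure Blob Storage (or set AZURE_SAS_URI instead, for SAS auth)
export AZURE_ACCOUNT_NAME="mystorageaccount"
export AZURE_ACCOUNT_KEY="base64accountkey=="

# BackBlaze B2
export B2_ACCOUNT_ID="myaccountid"
export B2_ACCOUNT_KEY="myapplicationkey"

# SSH/SFTP
export SSH_USERNAME="backup"
export SSH_PASSWORD="secret"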

Compression

The compression algorithm built into the software is a parallel gzip (pgzip) compressor. There is support for 3rd-party compressors so long as the binary is available on the host system and is compatible with the standard gzip binary's command line options (e.g. xz, bzip2, lzma, etc.).
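For example, to use an external compressor binary instead of the internal one (a sketch; the dataset and bucket names are placeholders, and the flags are documented under "Send Options" below):

./zfsbackup send --compressor xz --compressionLevel 9 --full Tank/Dataset gs://backup-bucket-target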

Encryption/Signing

PGP is used for encryption/signing. The cipher used is AES-256.

Installation

Download the latest binaries from the releases section or compile your own by:

go get github.com/someone1/zfsbackup-go

The compiled binary should be in your $GOPATH/bin directory.
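If the binary isn't found afterwards, make sure Go's bin directory is on your PATH (a typical setup; note the installed binary is named after the repository directory):

export PATH="$PATH:$(go env GOPATH)/bin"
zfsbackup-go version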

Usage

"Smart" Backup Options

Use the --full option to auto-select the most recent snapshot on the target volume and do a full backup of it:

./zfsbackup send --encryptTo [email protected] --signFrom [email protected] --publicKeyRingPath pubring.gpg.asc --secretKeyRingPath secring.gpg.asc --full Tank/Dataset gs://backup-bucket-target,s3://another-backup-target

Use the --increment option to auto-select the most recent snapshot on the target volume and do an incremental backup from the most recent snapshot found in the target destination:

./zfsbackup send --encryptTo [email protected] --signFrom [email protected] --publicKeyRingPath pubring.gpg.asc --secretKeyRingPath secring.gpg.asc --increment Tank/Dataset gs://backup-bucket-target,s3://another-backup-target

Use the --fullIfOlderThan option to auto-select the most recent snapshot on the target volume and do an incremental backup from the most recent snapshot found in the target destination, unless the last full backup is older than the provided duration, in which case a full backup is done:

./zfsbackup send --encryptTo [email protected] --signFrom [email protected] --publicKeyRingPath pubring.gpg.asc --secretKeyRingPath secring.gpg.asc --fullIfOlderThan 720h Tank/Dataset gs://backup-bucket-target,s3://another-backup-target

"Smart" Restore Options

Add the --auto option to automatically restore to the snapshot if one is given, or to detect the latest snapshot for the given filesystem/volume and restore to that. It will figure out which snapshots are missing from the local_volume and select them all to restore to get to the desired snapshot. Note: snapshot comparisons work using the name of the snapshot; if you restored a snapshot to a different name, this application won't think it is available, and the restore process will break.

Auto-detect latest snapshot:

./zfsbackup receive --encryptTo [email protected] --signFrom [email protected] --publicKeyRingPath pubring.gpg.asc --secretKeyRingPath secring.gpg.asc --auto -d Tank/Dataset gs://backup-bucket-target Tank

Auto restore to snapshot provided:

./zfsbackup receive --encryptTo [email protected] --signFrom [email protected] --publicKeyRingPath pubring.gpg.asc --secretKeyRingPath secring.gpg.asc --auto -d Tank/Dataset@snapshot-20170201 gs://backup-bucket-target Tank

Manual Options

Full backup example:

./zfsbackup send --encryptTo [email protected] --signFrom [email protected] --publicKeyRingPath pubring.gpg.asc --secretKeyRingPath secring.gpg.asc Tank/Dataset@snapshot-20170101 gs://backup-bucket-target

Incremental backup example:

./zfsbackup send --encryptTo [email protected] --signFrom [email protected] --publicKeyRingPath pubring.gpg.asc --secretKeyRingPath secring.gpg.asc -i Tank/Dataset@snapshot-20170101 Tank/Dataset@snapshot-20170201 gs://backup-bucket-target,s3://another-backup-target

Full restore example:

./zfsbackup receive --encryptTo [email protected] --signFrom [email protected] --publicKeyRingPath pubring.gpg.asc --secretKeyRingPath secring.gpg.asc -d Tank/Dataset@snapshot-20170201 gs://backup-bucket-target Tank

Incremental restore example:

./zfsbackup receive --encryptTo [email protected] --signFrom [email protected] --publicKeyRingPath pubring.gpg.asc --secretKeyRingPath secring.gpg.asc -d -F -i Tank/Dataset@snapshot-20170101 Tank/Dataset@snapshot-20170201 gs://backup-bucket-target Tank

Notes:

  • Create keyring files:
gpg2 --gen-key
gpg2 --output public.pgp --armor --export [email protected]
gpg2 --output private.pgp --armor --export-secret-key [email protected]
  • PGP Passphrase will be prompted during execution if it is not found in the PGP_PASSPHRASE environmental variable.
  • --maxFileBuffer=0 will disable parallel uploading for some backends, multiple destinations, and upload hash verification but will use virtually no disk space.
  • For S3: Specify Standard/Bulk/Expedited in the AWS_S3_GLACIER_RESTORE_TIER environmental variable to change Glacier restore option (default: Bulk)
  • A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". See the example below.
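For example, the duration-valued flags accept such strings directly (a sketch; dataset and bucket names are placeholders):

./zfsbackup send --fullIfOlderThan 720h --maxRetryTime 1h30m --maxBackoffTime 10m Tank/Dataset gs://backup-bucket-target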

Help Output:

$ ./zfsbackup
zfsbackup is a tool used to do off-site backups of ZFS volumes.
It leverages the built-in snapshot capabilities of ZFS in order to export ZFS
volumes for long-term storage.

zfsbackup uses the "zfs send" command to export, and optionally compress, sign,
encrypt, and split the send stream to files that are then transferred to a
destination of your choosing.

Usage:
  zfsbackup [command]

Available Commands:
  clean       Clean will delete any objects in the target that are not found in the manifest files found in the target.
  help        Help about any command
  list        List all backup sets found at the provided target.
  receive     receive will restore a snapshot of a ZFS volume similar to how the "zfs recv" command works.
  send        send will backup of a ZFS volume similar to how the "zfs send" command works.
  version     Print the version of zfsbackup in use and relevant compile information

Flags:
      --encryptTo string           the email of the user to encrypt the data to from the provided public keyring.
  -h, --help                       help for zfsbackup
      --jsonOutput                 dump results as a JSON string - on success only
      --logLevel string            this controls the verbosity level of logging. Possible values are critical, error, warning, notice, info, debug. (default "notice")
      --manifestPrefix string      the prefix to use for all manifest files. (default "manifests")
      --numCores int               number of CPU cores to utilize. Do not exceed the number of CPU cores on the system. (default 2)
      --publicKeyRingPath string   the path to the PGP public key ring
      --secretKeyRingPath string   the path to the PGP secret key ring
      --signFrom string            the email of the user to sign on behalf of from the provided private keyring.
      --workingDirectory string    the working directory path for zfsbackup. (default "~/.zfsbackup")
      --zfsPath string             the path to the zfs executable. (default "zfs")

Use "zfsbackup [command] --help" for more information about a command.

Send Options:

$ ./zfsbackup send
Usage:
  zfsbackup send [flags] filesystem|volume|snapshot uri(s)

Flags:
      --compressionLevel int       the compression level to use with the compressor. Valid values are between 1-9. (default 6)
      --compressor string          specify to use the internal (parallel) gzip implementation or an external binary (e.g. gzip, bzip2, pigz, lzma, xz, etc.) Syntax must be similar to the gzip compression tool) to compress the stream for storage. Please take into consideration time, memory, and CPU usage for any of the compressors used. All manifests utilize the internal compressor. If value is zfs, the zfs stream will be created compressed. See the -c flag on zfs send for more information. (default "internal")
  -D, --deduplication              See the -D flag for zfs send for more information.
      --full                       set this flag to take a full backup of the specified volume using the most recent snapshot.
      --fullIfOlderThan duration   set this flag to do an incremental backup of the most recent snapshot from the most recent snapshot found in the target unless the it's been greater than the time specified in this flag, then do a full backup. (default -1m0s)
  -h, --help                       help for send
      --increment                  set this flag to do an incremental backup of the most recent snapshot from the most recent snapshot found in the target.
  -i, --incremental string         See the -i flag on zfs send for more information
  -I, --intermediary string        See the -I flag on zfs send for more information
      --maxBackoffTime duration    the maximum delay you'd want a worker to sleep before retrying an upload. (default 30m0s)
      --maxFileBuffer int          the maximum number of files to have active during the upload process. Should be set to at least the number of max parallel uploads. Set to 0 to bypass local storage and upload straight to your destination - this will limit you to a single destination and disable any hash checks for the upload where available. (default 5)
      --maxParallelUploads int     the maximum number of uploads to run in parallel. (default 4)
      --maxRetryTime duration      the maximum time that can elapse when retrying a failed upload. Use 0 for no limit. (default 12h0m0s)
      --maxUploadSpeed uint        the maximum upload speed (in KB/s) the program should use between all upload workers. Use 0 for no limit
  -p, --properties                 See the -p flag on zfs send for more information.
  -w, --raw                        See the -w flag on zfs send for more information.
  -R, --replication                See the -R flag on zfs send for more information
      --resume                     set this flag to true when you want to try and resume a previously cancled or failed backup. It is up to the caller to ensure the same command line arguments are provided between the original backup and the resumed one.
      --separator string           the separator to use between object component names. (default "|")
  -s, --skip-missing               See the -s flag on zfs send for more information
      --snapshotPrefix string      Only consider snapshots starting with the given snapshot prefix
      --uploadChunkSize int        the chunk size, in MiB, to use when uploading. A minimum of 5MiB and maximum of 100MiB is enforced. (default 10)
      --volsize uint               the maximum size (in MiB) a volume should be before splitting to a new volume. Note: zfsbackup will try its best to stay close/under this limit but it is not guaranteed. (default 200)

Global Flags:
      --encryptTo string           the email of the user to encrypt the data to from the provided public keyring.
      --jsonOutput                 dump results as a JSON string - on success only
      --logLevel string            this controls the verbosity level of logging. Possible values are critical, error, warning, notice, info, debug. (default "notice")
      --manifestPrefix string      the prefix to use for all manifest files. (default "manifests")
      --numCores int               number of CPU cores to utilize. Do not exceed the number of CPU cores on the system. (default 2)
      --publicKeyRingPath string   the path to the PGP public key ring
      --secretKeyRingPath string   the path to the PGP secret key ring
      --signFrom string            the email of the user to sign on behalf of from the provided private keyring.
      --workingDirectory string    the working directory path for zfsbackup. (default "~/.zfsbackup")
      --zfsPath string             the path to the zfs executable. (default "zfs")

TODOs

  • Make PGP cipher configurable.
  • Refactor
  • Test Coverage
  • Add more backends
  • Add delete feature
  • Appease linters
  • Track intermediary snaps as part of backup jobs
  • Parity archives?

zfsbackup-go's People

Contributors

bobobo1618, dependabot-preview[bot], dismantl, kurin, mschout, pabloyoyoista, recunius, someone1, sww1235, ydylla


zfsbackup-go's Issues

cooperate with autosnap generated snapshots?

The "smart" send is likely to send a zfs-auto-snap_frequent one. Later once the zfs-auto-snap_frequent get deleted from local host, zfsbackup will get confused and refuse to backup any further.

[questions] Restore to a different fs

Hi.

Let's say I've backed up the mimir/test filesystem. I'd like to restore it to a mimir/test-restore filesystem. I couldn't figure out how to do that.

Also, is there any way to support some kind of progress indication on send/receive?

Any way to clean up local manifests? "There are N manifests found locally that are not on the target destination."

I'm using a bucket policy to delete older backups from S3 automatically. When I use zfsbackup-go list I see the expected list of backups, but also a message "There are N manifests found locally that are not on the target destination." where N is going to keep growing.

Is there any way to clean up local manifests?

The clean option ("Clean will delete any objects in the target that are not found in the manifest files found in the target.") is not applicable here.

Usage is always printed on error

I don't know if it's a bug or a feature, but the command usage (aka --help) is always printed on error; it's annoying for readability and for logging cron tasks.

No file to download when restore via Google

Dear friend,

I'm using this tool to send snapshots to Google storage, and everything is OK. But when I restore, the problem below appears.
Commands:

export GOOGLE_APPLICATION_CREDENTIALS=/root/SCRIPTS/google-auth.json

/root/SCRIPTS/zfsbackup-go --zfsPath /sbin/zfs --numCores 1 receive -F -d mongo-pool@2020-04-20_03.00.04--3d gs://tui-backup/mongo-fpt-snapshot/2020-04-20/ mongo-pool

Errors:

2020/04/20 04:25:31 Could not download file manifests|mongo-pool|2020-04-20_03.00.04--3d.manifest.gz to the local cache dir due to error - storage: object doesn't exist.

2020/04/20 04:25:31 Error trying to retrieve manifest volume - open /root/.zfsbackup/cache/487a5833bd28e285ba5ed54c272b5507/1d11c81829394d4a9ab17793bb43a618: no such file or directory

However, when I use the command below, I can list the files (including manifests|mongo-pool|2020-04-20_03.00.04--3d.manifest.gz):
gsutil ls gs://tui-backup/mongo-fpt-snapshot/2020-04-20/

Could you please let me know what I'm doing wrong? Thank you very much.

Restore process failed with io: read/write on closed pipe

Hi here.
I am trying to restore a previously created backup using the gs backend.
The backup process is easy and successful, but the restore part fails with strange errors.
Here is the debug-level output for zfsbackup receive:

zfsbackup receive --auto -d -F my_pool gs://my_pool_snapshots my_pool --logLevel debug

2023/01/17 14:27:42 Setting number of cores to: 2
2023/01/17 14:27:42 Loaded private key ring
2023/01/17 14:27:42 Loaded public key ring
2023/01/17 14:27:42 Setting working directory to /root/.zfsbackup
2023/01/17 14:27:42 PGP Debug Info:
Loaded Private Keys:

Loaded Public Keys:
2023/01/17 14:27:42 Limiting the number of active files to 5
2023/01/17 14:27:42 Initializing Backend gs://my_pool_snapshots
2023/01/17 14:27:43 Calculating how to restore to zfs-auto-snap_daily-2023-01-09-0625.
2023/01/17 14:27:43 Getting ZFS Snapshots with command "zfs list -H -d 1 -p -t snapshot,bookmark -r -o name,creation,type -S creation my_pool"
2023/01/17 14:27:43 Adding backup job for zfs-auto-snap_daily-2023-01-09-0625 to the restore list.
2023/01/17 14:27:43 Need to restore 1 snapshots.
2023/01/17 14:27:43 Restoring snapshot zfs-auto-snap_daily-2023-01-09-0625 (1/1)
2023/01/17 14:27:43 Initializing Backend gs://my_pool_snapshots
2023/01/17 14:27:43 Enabling the full path (-d) flag on the receive.
2023/01/17 14:27:43 Enabling the forced rollback (-F) flag on the receive.
2023/01/17 14:27:43 Downloading volume my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol1.
2023/01/17 14:27:43 Starting zfs receive command: zfs receive -d -F my_pool
2023/01/17 14:27:43 Downloading volume my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol2.
2023/01/17 14:27:43 Downloading volume my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol3.
2023/01/17 14:27:43 Downloading volume my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol4.
2023/01/17 14:27:43 Downloading volume my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol5.
2023/01/17 14:27:55 Downloaded my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol2.
2023/01/17 14:27:55 Downloaded my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol5.
2023/01/17 14:27:55 Downloaded my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol4.
2023/01/17 14:27:55 Downloaded my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol1.
2023/01/17 14:27:55 Processing my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol1.
2023/01/17 14:27:56 Error while trying to read from volume my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol1 - io: read/write on closed pipe
2023/01/17 14:27:56 Error waiting for zfs command to finish - signal: aborted (core dumped): internal error: Unknown error 1037
2023/01/17 14:27:56 Could not kill zfs send command due to error - os: process already finished
2023/01/17 14:27:56 Could not download file my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol3 to the local cache dir due to error - context canceled.
2023/01/17 14:27:56 There was an error during the restore process, aborting: signal: aborted (core dumped)
2023/01/17 14:27:56 Failed to restore snapshot.

zfsbackup version output:

        Program Name:   zfsbackup
        Version:        v0.4
        OS Target:      linux
        Arch Target:    amd64
        Compiled With:  gc
        Go Version:     go1.14.2

zfs version output:

zfs-0.8.3-1ubuntu12.14
zfs-kmod-2.0.2-1ubuntu5

uname -a output:
Linux carbonite-node 5.11.0-1017-gcp #19~20.04.1-Ubuntu SMP Thu Aug 12 05:25:25 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Any suggestions on how to bypass/debug/fix this?

Thanks in advance!

Parallel processing not working

I pulled the latest version from GitHub and after fiddling around I got it working with Backblaze. The only problem I have is that it doesn't seem to be working in parallel. Only one volsize volume is created at a time, and the upload starts only after the volume is created. When the upload finishes, the next volume is created.

Is there any way to make things go parallel? Also, if I change --maxFileBuffer to 0 to avoid creating the volumes on local storage and upload them straight to Backblaze, I get an error because --maxParallelUploads must match maxFileBuffer, but if I set it to 0 nothing works 🗡️

Maybe you can help with that issue - otherwise I am glad I found your cool piece of software ;)

Restore gets stuck when it cannot pull a file from Azure

I used a ~9GB sample to test. It fails on different files, but the symptoms are the same: no retries and being stuck forever. How can I help with this issue?

root@asgard:~# /home/aensidhe/go/bin/zfsbackup-go receive \
     --encryptTo [email protected] \
     --signFrom [email protected] \
     --publicKeyRingPath /home/aensidhe/public.pgp \
     --secretKeyRingPath /home/aensidhe/private.pgp \
     --auto \
     -d mimir/test \
     azure://test \
     mimir \
     --logLevel debug \
     --maxBackoffTime 5m \
     --maxRetryTime 10m
2020/06/01 06:13:13 Setting number of cores to: 2
2020/06/01 06:13:13 Loaded private key ring /home/aensidhe/private.pgp
2020/06/01 06:13:13 Loaded public key ring /home/aensidhe/public.pgp
2020/06/01 06:13:13 Setting working directory to /root/.zfsbackup
2020/06/01 06:13:13 PGP Debug Info:
Loaded Private Keys:
        EEA5B2C07D008AFD
        map[Anatoly Popov <[email protected]>:0xc000304680]

Loaded Public Keys:
        EEA5B2C07D008AFD
        map[Anatoly Popov <[email protected]>:0xc000304780]
2020/06/01 06:13:13 Limiting the number of active files to 5
2020/06/01 06:13:13 Initializing Backend azure://test
2020/06/01 06:13:14 Trying to determine latest snapshot for volume mimir/test.
2020/06/01 06:13:14 Restoring to snapshot test2.
2020/06/01 06:13:14 Calculating how to restore to test2.
2020/06/01 06:13:14 Getting ZFS Snapshots with command "zfs list -H -d 1 -p -t snapshot,bookmark -r -o name,creation,type -S creation mimir/test"
2020/06/01 06:13:14 Adding backup job for test2 to the restore list.
2020/06/01 06:13:14 Need to restore 1 snapshots.
2020/06/01 06:13:14 Restoring snapshot test2 (1/1)
2020/06/01 06:13:14 Initializing Backend azure://test
2020/06/01 06:13:14 Enabling the full path (-d) flag on the receive.
2020/06/01 06:13:14 Starting zfs receive command: zfs receive -d mimir
2020/06/01 06:13:14 Downloading volume mimir/test|test2.zstream.gz.pgp.vol1.
2020/06/01 06:13:14 Downloading volume mimir/test|test2.zstream.gz.pgp.vol2.
2020/06/01 06:13:14 Downloading volume mimir/test|test2.zstream.gz.pgp.vol3.
2020/06/01 06:13:14 Downloading volume mimir/test|test2.zstream.gz.pgp.vol4.
2020/06/01 06:13:14 Downloading volume mimir/test|test2.zstream.gz.pgp.vol5.
2020/06/01 06:13:52 Downloaded mimir/test|test2.zstream.gz.pgp.vol5.
2020/06/01 06:13:56 Downloaded mimir/test|test2.zstream.gz.pgp.vol1.
2020/06/01 06:13:56 Processing mimir/test|test2.zstream.gz.pgp.vol1.
2020/06/01 06:14:00 Downloaded mimir/test|test2.zstream.gz.pgp.vol2.
2020/06/01 06:14:02 Processed mimir/test|test2.zstream.gz.pgp.vol1.
2020/06/01 06:14:02 Processing mimir/test|test2.zstream.gz.pgp.vol2.
2020/06/01 06:14:02 Downloading volume mimir/test|test2.zstream.gz.pgp.vol6.
2020/06/01 06:14:02 Downloaded mimir/test|test2.zstream.gz.pgp.vol4.
2020/06/01 06:14:02 Downloaded mimir/test|test2.zstream.gz.pgp.vol3.
2020/06/01 06:14:07 Processed mimir/test|test2.zstream.gz.pgp.vol2.
2020/06/01 06:14:07 Processing mimir/test|test2.zstream.gz.pgp.vol3.
2020/06/01 06:14:07 Downloading volume mimir/test|test2.zstream.gz.pgp.vol7.
2020/06/01 06:14:12 Processed mimir/test|test2.zstream.gz.pgp.vol3.
2020/06/01 06:14:12 Processing mimir/test|test2.zstream.gz.pgp.vol4.
2020/06/01 06:14:12 Downloading volume mimir/test|test2.zstream.gz.pgp.vol8.
2020/06/01 06:14:17 Processed mimir/test|test2.zstream.gz.pgp.vol4.
2020/06/01 06:14:17 Processing mimir/test|test2.zstream.gz.pgp.vol5.
2020/06/01 06:14:17 Downloading volume mimir/test|test2.zstream.gz.pgp.vol9.
2020/06/01 06:14:23 Processed mimir/test|test2.zstream.gz.pgp.vol5.
2020/06/01 06:14:23 Downloading volume mimir/test|test2.zstream.gz.pgp.vol10.
2020/06/01 06:14:41 Downloaded mimir/test|test2.zstream.gz.pgp.vol7.
2020/06/01 06:14:44 Downloaded mimir/test|test2.zstream.gz.pgp.vol6.
2020/06/01 06:14:44 Processing mimir/test|test2.zstream.gz.pgp.vol6.
2020/06/01 06:14:49 Processed mimir/test|test2.zstream.gz.pgp.vol6.
2020/06/01 06:14:49 Processing mimir/test|test2.zstream.gz.pgp.vol7.
2020/06/01 06:14:49 Downloading volume mimir/test|test2.zstream.gz.pgp.vol11.
2020/06/01 06:14:58 Downloaded mimir/test|test2.zstream.gz.pgp.vol10.
2020/06/01 06:15:00 Downloaded mimir/test|test2.zstream.gz.pgp.vol9.
2020/06/01 06:15:00 Downloaded mimir/test|test2.zstream.gz.pgp.vol8.
2020/06/01 06:15:24 Processed mimir/test|test2.zstream.gz.pgp.vol7.
2020/06/01 06:15:24 Processing mimir/test|test2.zstream.gz.pgp.vol8.
2020/06/01 06:15:24 Downloading volume mimir/test|test2.zstream.gz.pgp.vol12.
2020/06/01 06:15:27 Downloaded mimir/test|test2.zstream.gz.pgp.vol11.
2020/06/01 06:15:29 Processed mimir/test|test2.zstream.gz.pgp.vol8.
2020/06/01 06:15:29 Processing mimir/test|test2.zstream.gz.pgp.vol9.
2020/06/01 06:15:29 Downloading volume mimir/test|test2.zstream.gz.pgp.vol13.
2020/06/01 06:15:34 Processed mimir/test|test2.zstream.gz.pgp.vol9.
2020/06/01 06:15:34 Processing mimir/test|test2.zstream.gz.pgp.vol10.
2020/06/01 06:15:34 Downloading volume mimir/test|test2.zstream.gz.pgp.vol14.
2020/06/01 06:15:39 Processed mimir/test|test2.zstream.gz.pgp.vol10.
2020/06/01 06:15:39 Processing mimir/test|test2.zstream.gz.pgp.vol11.
2020/06/01 06:15:39 Downloading volume mimir/test|test2.zstream.gz.pgp.vol15.
2020/06/01 06:15:45 Processed mimir/test|test2.zstream.gz.pgp.vol11.
2020/06/01 06:15:45 Downloading volume mimir/test|test2.zstream.gz.pgp.vol16.
2020/06/01 06:16:12 Downloaded mimir/test|test2.zstream.gz.pgp.vol14.
2020/06/01 06:16:13 Downloaded mimir/test|test2.zstream.gz.pgp.vol13.
2020/06/01 06:16:16 Downloaded mimir/test|test2.zstream.gz.pgp.vol12.
2020/06/01 06:16:16 Processing mimir/test|test2.zstream.gz.pgp.vol12.
2020/06/01 06:16:17 Downloaded mimir/test|test2.zstream.gz.pgp.vol15.
2020/06/01 06:16:19 Downloaded mimir/test|test2.zstream.gz.pgp.vol16.
2020/06/01 06:16:21 Processed mimir/test|test2.zstream.gz.pgp.vol12.
2020/06/01 06:16:21 Processing mimir/test|test2.zstream.gz.pgp.vol13.
2020/06/01 06:16:21 Downloading volume mimir/test|test2.zstream.gz.pgp.vol17.
2020/06/01 06:16:26 Processed mimir/test|test2.zstream.gz.pgp.vol13.
2020/06/01 06:16:26 Processing mimir/test|test2.zstream.gz.pgp.vol14.
2020/06/01 06:16:26 Downloading volume mimir/test|test2.zstream.gz.pgp.vol18.
2020/06/01 06:16:48 Downloaded mimir/test|test2.zstream.gz.pgp.vol17.
2020/06/01 06:16:51 Processed mimir/test|test2.zstream.gz.pgp.vol14.
2020/06/01 06:16:51 Processing mimir/test|test2.zstream.gz.pgp.vol15.
2020/06/01 06:16:51 Downloading volume mimir/test|test2.zstream.gz.pgp.vol19.
2020/06/01 06:16:56 Processed mimir/test|test2.zstream.gz.pgp.vol15.
2020/06/01 06:16:56 Processing mimir/test|test2.zstream.gz.pgp.vol16.
2020/06/01 06:16:56 Downloading volume mimir/test|test2.zstream.gz.pgp.vol20.
2020/06/01 06:17:02 Processed mimir/test|test2.zstream.gz.pgp.vol16.
2020/06/01 06:17:02 Processing mimir/test|test2.zstream.gz.pgp.vol17.
2020/06/01 06:17:02 Downloading volume mimir/test|test2.zstream.gz.pgp.vol21.
2020/06/01 06:17:09 Processed mimir/test|test2.zstream.gz.pgp.vol17.
2020/06/01 06:17:09 Downloading volume mimir/test|test2.zstream.gz.pgp.vol22.
2020/06/01 06:17:15 Downloaded mimir/test|test2.zstream.gz.pgp.vol18.
2020/06/01 06:17:15 Processing mimir/test|test2.zstream.gz.pgp.vol18.
2020/06/01 06:17:20 Processed mimir/test|test2.zstream.gz.pgp.vol18.
2020/06/01 06:17:20 Downloading volume mimir/test|test2.zstream.gz.pgp.vol23.
2020/06/01 06:17:20 Downloaded mimir/test|test2.zstream.gz.pgp.vol19.
2020/06/01 06:17:20 Processing mimir/test|test2.zstream.gz.pgp.vol19.
2020/06/01 06:17:25 Processed mimir/test|test2.zstream.gz.pgp.vol19.
2020/06/01 06:17:25 Downloading volume mimir/test|test2.zstream.gz.pgp.vol24.
2020/06/01 06:17:53 Downloaded mimir/test|test2.zstream.gz.pgp.vol20.
2020/06/01 06:17:53 Processing mimir/test|test2.zstream.gz.pgp.vol20.
2020/06/01 06:17:54 Downloaded mimir/test|test2.zstream.gz.pgp.vol21.
2020/06/01 06:17:56 Downloaded mimir/test|test2.zstream.gz.pgp.vol23.
2020/06/01 06:18:06 Downloaded mimir/test|test2.zstream.gz.pgp.vol22.
2020/06/01 06:18:14 Downloaded mimir/test|test2.zstream.gz.pgp.vol24.
2020/06/01 06:18:18 Processed mimir/test|test2.zstream.gz.pgp.vol20.
2020/06/01 06:18:18 Processing mimir/test|test2.zstream.gz.pgp.vol21.
2020/06/01 06:18:18 Downloading volume mimir/test|test2.zstream.gz.pgp.vol25.
2020/06/01 06:18:23 Processed mimir/test|test2.zstream.gz.pgp.vol21.
2020/06/01 06:18:23 Processing mimir/test|test2.zstream.gz.pgp.vol22.
2020/06/01 06:18:23 Downloading volume mimir/test|test2.zstream.gz.pgp.vol26.
2020/06/01 06:18:28 Processed mimir/test|test2.zstream.gz.pgp.vol22.
2020/06/01 06:18:28 Processing mimir/test|test2.zstream.gz.pgp.vol23.
2020/06/01 06:18:28 Downloading volume mimir/test|test2.zstream.gz.pgp.vol27.
2020/06/01 06:18:34 Processed mimir/test|test2.zstream.gz.pgp.vol23.
2020/06/01 06:18:34 Processing mimir/test|test2.zstream.gz.pgp.vol24.
2020/06/01 06:18:34 Downloading volume mimir/test|test2.zstream.gz.pgp.vol28.
2020/06/01 06:18:39 Processed mimir/test|test2.zstream.gz.pgp.vol24.
2020/06/01 06:18:39 Downloading volume mimir/test|test2.zstream.gz.pgp.vol29.
2020/06/01 06:18:49 Downloaded mimir/test|test2.zstream.gz.pgp.vol25.
2020/06/01 06:18:49 Processing mimir/test|test2.zstream.gz.pgp.vol25.
2020/06/01 06:19:02 Downloaded mimir/test|test2.zstream.gz.pgp.vol27.
2020/06/01 06:19:15 Downloaded mimir/test|test2.zstream.gz.pgp.vol28.
2020/06/01 06:19:22 Downloaded mimir/test|test2.zstream.gz.pgp.vol29.
2020/06/01 06:19:23 Could not download file mimir/test|test2.zstream.gz.pgp.vol26 to the local cache dir due to error - context deadline exceeded.
2020/06/01 06:19:28 Processed mimir/test|test2.zstream.gz.pgp.vol25.
2020/06/01 06:19:28 Downloading volume mimir/test|test2.zstream.gz.pgp.vol30.
2020/06/01 06:20:04 Downloaded mimir/test|test2.zstream.gz.pgp.vol30.

Remote snapshot content

Hi,

I was wondering if there is some way to figure out what is in a remote snapshot before you start downloading it again?

Cannot get encryption to work

Encryption does not seem to work at all for me.

I created a new key and exported the pair as follows:

gpg --quick-generate-key [email protected]
# Note: --quick-generate-key requires GnuPG 2.1 or later
gpg --armor --export [email protected] > /tmp/pubring.asc
gpg --armor --export-secret-keys [email protected] > /tmp/secring.asc

pubring.asc:

-----BEGIN PGP PUBLIC KEY BLOCK-----

mQGNBFySeqQBDADo1Q9Tl1VtH57G1TjniEa4g6sA/tjj1viNWUI4iR/sE+r8duRL
7pkXMVUNQ1FWQH55cTFcauTEzxcKcTaNe9F3eEVzGDGvLeoa9Bl83oFLL6U7srC4
G4bCYWCKOisv68qmWmESt6BshxC6Tw/tv5CqWCf1FUiyXnDD61wTvyjnpXzCALXI
iR9EF/ATPqQ8No1BO3bvr8JHZ/LuPZxogOU9CmqgTatDSwImHWB9psgi2+bDbloE
5/C1vsX071d5/kNGL5WVy1rtzio3j8VebeN/pZAgcXpOnjWdvfeUC51UUFSuzktE
u3XZX2nTXa3hCIkXq1Ufh0fgCX69U9bqSpp4KZCWlzn7nDojtcXKbKwf7euX0EzQ
iUjrYUJZvHht8BG+XIAo+PjSldjToN9ZcZcyjaKZXVJAu90p0aIFHPYX8HcL3snt
2hG1m5B+Kf1gffHZhN6MHT8hc3xNEfN9Ql7kfO08Z/trEdH98nrREhhG4hxS4dph
kglW368L2aXdogEAEQEAAbQiemZzYmFja3VwQGxvdGhsb3JpZW4uc25vd2NyYXNo
LmxhbokB1AQTAQgAPhYhBE5QVbNwL1wge3UsIOSmI5IRYi/TBQJcknqkAhsDBQkD
wmcABQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJEOSmI5IRYi/TjsQL/0dj7QHA
4IDH+sD8x/3weKXmhcAaRstBbn8HMgc0ZM3cf2Dz4kJlp2nWgkZ1vrj57COqU2Bm
QnvKg+ta7pI/9yFMoj/zTKta7Xm5Gs0sUwb6JUeU+G2CYMDB9jkNzLOuXFQ9DIsA
TsZExgzvUqiIFLcZRiCQGZ2bjV+GBI5XZD/AvmMIyWkJZnSmL/mMLl6thQ0bz1ml
Fc8Z+kWACXwZ/O1EkspxCNi3yBqyTW0ZbrwHfARaFUh5RaoajwzE2YTR2VVMqxkE
G6ZqqKYmgbhduyZyVZJPsctjY7ggeNV0FvfoaaJN7Xvgfr0WsRLE8h0jqWAgZZ4v
xUnarOEQUXk9zlmk5yLwOST7kR3Uzb2gadcBdpOZe5e3CqTIX2epXUK/gQ+Dm8t3
h5G3YDM69KJfLJOE+KHSwyAAe8o6MR1ojsn0L2HyCYBRaAAmMbrjK6jtbjAZOAfJ
DVoIcbbF50nZFBQkOoceSg4WfuxkXPlKl/Qs3YedbznVE83QtgqIJaO8O7kBjQRc
knqkAQwApsACzuGoe+zsL9Y56/hun7/3IoeJjz4zN7M9U2ve5kmX20kHSAU6IS3p
PcJLr6oTPSrARSycPzAr9H0ZNcwBWd+S80HAGK620HoDFQ1ObrIPSgeJiuuqL37W
XAmxxZKd6SJGV36QR2xNUK6S5ZfHmnwKiw2ZpW7LJe0WnJkgangDEhRoTpOIL6x9
kS1wUYTBDU3UUDX5eG4d6m7yMafkbcweTnYWpcjctONhNSPfoGtkJR8tOZBvGJrF
9zO3osgXJsMUtbaVqFZUkdHB6hQm4VAC9MbKbb0zkAKUCOYSDvKWWWyFXXD1DdV8
iOrYM52Cup0Yv4odrgVnVc+c/SOsF2Ug5WxLQS6rWm/AdfuNxNZTSOPV0GPY293O
oOMNn/Uj8LQlIB6pdfTOmunhnQr2AOYkXyJD22gbepbPz/YN8sM3Vd4iT0htyBT7
y9hN+iDc39w1TzurF2BBRzaKeB5zfr9P016ahwwdjIJUtuzmo/J3iszEa/7qTVB/
uHLbv0WjABEBAAGJAbYEGAEIACAWIQROUFWzcC9cIHt1LCDkpiOSEWIv0wUCXJJ6
pAIbDAAKCRDkpiOSEWIv0y8RC/0ftvXTiYGPP3TsS04h1OvwHZ/UDQPIOCmheQWk
bw7lNPzIHBUIVavW3cGrTnqHMmJ4pIU9jTZby26Bfz6oyVhOzvLeAGehFAqmsGh3
wvxdN/9oQ6/RPGevOOtQ+PnMneUNgTPxdGbA9xBiTP1BwahLciNfNMoYRGM3l6kp
npw1ENVpvZo3uCRW6woxCMGUk0slNxhBj6isygyXcKJIUQzUkaRt0arPUeqZoUFH
rlnCspa1PZhIQciR1yQeemP4RiUXowgzLuEff7s4tt9fEGTS6R91rceS0rdHynmo
nnwHuEVo7/laek6gbxHpxwx1UjO8ZZILj9pB3d0RFfmNKjXNjKCkXU2CqhFhqRQt
P4MT9fHYwgzCczDiQHd3cAb40gfEIr2ebz0B6CSyr1VlQfHTBzA2IW5SDX8b9Kc+
UZBCnW4NEQJ3fFPTZ7bYyvjjQ7rrkdAYtbpFE0FwVYe/9QAUPilqceU9AybCp5aB
OfR5TYHmtNo02yFVFXfwetFlUeo=
=aVYG
-----END PGP PUBLIC KEY BLOCK-----

Start of secring (these are throwaway keys so not really secret)

-----BEGIN PGP PRIVATE KEY BLOCK-----

lQWGBFySeqQBDADo1Q9Tl1VtH57G1TjniEa4g6sA/tjj1viNWUI4iR/sE+r8duRL
7pkXMVUNQ1FWQH55cTFcauTEzxcKcTaNe9F3eEVzGDGvLeoa9Bl83oFLL6U7srC4
G4bCYWCKOisv68qmWmESt6BshxC6Tw/tv5CqWCf1FUiyXnDD61wTvyjnpXzCALXI
iR9EF/ATPqQ8No1BO3bvr8JHZ/LuPZxogOU9CmqgTatDSwImHWB9psgi2+bDbloE
5/C1vsX071d5/kNGL5WVy1rtzio3j8VebeN/pZAgcXpOnjWdvfeUC51UUFSuzktE
u3XZX2nTXa3hCIkXq1Ufh0fgCX69U9bqSpp4KZCWlzn7nDojtcXKbKwf7euX0EzQ
...

Then tried to run a backup:

zfsbackup send --full \
  --encryptTo [email protected] \
  --publicKeyRingPath /tmp/pubring.asc \
  --secretKeyRingPath /tmp/secring.asc \
  zroot/home s3://zfsbackup/

And the error it produces is:

2019/03/20 14:21:47 Could not find public key for [email protected]

OS Version:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.2 LTS"

GPG Version:

gpg (GnuPG) 2.2.4

Support rclone as a backend

It would be very nice to see rclone support. It supports many, many (sub)backends among other features.

For examples / guidance on implementation, see Restic's process. The best idea would be to express interest on rclone forums.

[Question] Incremental snapshot backups and usage

Hello: I am running into an issue with incremental snapshots and am wondering if I understand their usage correctly.

I am running FreeBSD 12.2 Release P1 with the 0.4RC FreeBSD binary

/usr/bin/zfsbackup version
       Program Name:   zfsbackup
       Version:        v0.4
       OS Target:      freebsd
       Arch Target:    amd64
       Compiled With:  gc
       Go Version:     go1.14.2

I am using periodic-snapshot to create hourly snapshots of 4 ZFS pools, and then running the following commands in a script in /etc/periodic/daily/ to run incremental backups. (These lines are duplicated once for each of the 4 pools.)

/usr/bin/zfsbackup list --encryptTo user@domain --signFrom user@domain --publicKeyRingPath /home/user/.gnupg/public.pgp --secretKeyRingPath /home/user/.gnupg/private.pgp b2://the-vault-remote/backups

/usr/local/bin/flock -xn /root/zfsbackup-backups.lock -c "/usr/bin/zfsbackup send --encryptTo user@domain --signFrom user@domain --publicKeyRingPath /home/user/.gnupg/public.pgp --secretKeyRingPath /home/user/.gnupg/private.pgp --increment the-vault/backups b2://the-vault-remote/backups"

I used the same commands, but with --full instead of --increment to perform an initial backup of the pools, but when the incremental script runs, I get the error message:

Found 0 backup sets:

2021/03/13 03:03:39 Selected incremental snapshot does not exist!

The initial full backup commands were run as root, and /usr/sbin/periodic is being run as root in /etc/crontab.

Any help would be appreciated.

Incremental backup requires GPG decryption key?

Doing a full backup appears to work well with only the encryption key, but doing an incremental backup appears to need the ability to decrypt. I would like to keep the decryption key off the machines being backed up so that, if they're compromised, the attacker can't read old backups. This could potentially be achieved by keeping a copy of the manifest locally and using the remote copy to verify the local version by re-encrypting the offline copy and checking whether it matches.

[enhancement] Pad number in name with 5-6 zeros.

Let's take that name: mimir/private|autosnap_2020-07-09_22:45:01_frequently.zstream.gz.pgp.vol1000.

Format mimir/private|autosnap_2020-07-09_22:45:01_frequently.zstream.gz.pgp.vol_000_001_000 is more readable and sortable.


Pipes in filenames cause problems for NTFS/SMB

Hi, I wanted to test this on an NTFS volume (a scratch hard disk lying around) and I found that upon running this it kind of locked up when trying to write to that drive specifically. Running strace showed writes like this:
[pid 1187340] openat(AT_FDCWD, "/media/arian/103E493F3E491F5A/bup-target-test2/zfs-tst-2g/crypt|initial.zstream.gz.vol1", O_RDWR|O_CREAT|O_TRUNC|O_CLOEXEC, 0666) = -1 EINVAL (Invalid argument)
If I remove windows_names from the NTFS mount options, this doesn't happen and the backup works, so clearly the error is from the pipe in the filename being an invalid Windows character and the driver blocking it.

As well as NTFS, I tried on an external SMB mount and the exact same thing happened, it appeared to do nothing while strace showed invalid argument errors. I know that it's probably not common to be using NTFS on Linux as is, but SMB, while unsupported natively, would be pretty nice if it worked. Plus, as an SMB server on a Windows host, I probably don't have any options here.

Are there any workarounds for this at the moment? I can provide more details if needed. Thanks!

[Feature Request] Use backends in round-robin to add up their capacity.

In practice, pools can grow TBs large, which may be beyond the largest capacity a single store on some backends can offer. If we could use backends in round-robin, we could add up their capacities, enabling full backups of extremely large pools.

zfsbackup seems to hang on Solaris with B2

I'm running zfsbackup on Solaris 11. It seems to work fine for a small pool of a few MB, but appears to hang when I try on a real pool (still small, around 1GB).

I'm using the B2 backend. The file backend works fine.

I'm wondering if it may have something to do with the way the tasks are parallelized (although setting --maxFileBuffer=0 doesn't make any difference).

Here's where it hangs (pool names etc changed):

% zfsbackup send --maxFileBuffer=0 --encryptTo [email protected] --signFrom [email protected] --publicKeyRingPath /var/backups/.gnupg/pubring.gpg.asc --secretKeyRingPath /var/backups/.gnupg/secring.gpg.asc --logLevel debug --full tank/Foobar b2://example-com-tank-git
2019/01/23 10:48:20 Setting number of cores to: 2
2019/01/23 10:48:20 Loaded private key ring /var/backups/.gnupg/secring.gpg.asc
2019/01/23 10:48:20 Loaded public key ring /var/backups/.gnupg/pubring.gpg.asc
2019/01/23 10:48:20 Setting working directory to /var/backups/.zfsbackup
2019/01/23 10:48:20 PGP Debug Info:
...
2019/01/23 10:48:20 The number of parallel uploads (4) is greater than the number of active files allowed (0), this may result in an unachievable max parallel upload target.
2019/01/23 10:48:20 Getting ZFS Snapshots with command "zfs list -H -d 1 -t snapshot -r -o name,creation -S creation tank/Foobar"
2019/01/23 10:48:20 Found ZFS snapshot "tank/Foobar@20181010" from 2018-10-10 18:20:00 +0000 UTC
2019/01/23 10:48:20 Found ZFS snapshot "tank/Foobar@20151003" from 2015-10-03 02:09:00 +0000 UTC
2019/01/23 10:48:20 Utilizing smart option.
2019/01/23 10:48:20 Limiting the number of active files to 0
2019/01/23 10:48:20 Limiting the number of parallel uploads to 4
2019/01/23 10:48:20 Max Backoff Time will be 30m0s
2019/01/23 10:48:20 Max Upload Retry Time will be 12h0m0s
2019/01/23 10:48:20 Upload Chunk Size will be 10MiB
2019/01/23 10:48:20 Will be using encryption key for [email protected]
2019/01/23 10:48:20 Will be signed from [email protected]
2019/01/23 10:48:20 Getting ZFS Snapshots with command "zfs list -H -d 1 -t snapshot -r -o name,creation -S creation tank/Foobar"
2019/01/23 10:48:21 Found ZFS snapshot "tank/Foobar@20181010" from 2018-10-10 18:20:00 +0000 UTC
2019/01/23 10:48:21 Found ZFS snapshot "tank/Foobar@20151003" from 2015-10-03 02:09:00 +0000 UTC
2019/01/23 10:48:21 Initializing Backend b2://example-com-tank-git
2019/01/23 10:48:21 Starting zfs send command: zfs send tank/Foobar@20181010
2019/01/23 10:48:21 Will be using internal gzip compressor with compression level 6.
2019/01/23 10:48:21 Starting volume tank/Foobar|20181010.zstream.gz.pgp.vol1
2019/01/23 10:48:23 b2 backend: Processing volume tank/Foobar|20181010.zstream.gz.pgp.vol1

Here are a few pertinent stack traces from gdb:

(gdb) thr 4
[Switching to thread 4 (LWP    3        )]
#0  0xffff80ffbf59046a in _portfs () from /lib/64/libc.so.1
(gdb) bt
#0  0xffff80ffbf59046a in _portfs () from /lib/64/libc.so.1
#1  0xffff80ffbf5107a4 in port_getn () from /lib/64/libc.so.1
#2  0x000000000045855a in runtime.asmsysvicall6 () at /opt/go/src/runtime/sys_solaris_amd64.s:104
#3  0x00000000004561ff in runtime.asmcgocall () at /opt/go/src/runtime/asm_amd64.s:718
#4  0x0000000000000000 in ?? ()
(gdb) thr 14
[Switching to thread 14 (Thread 3 (LWP 3))]
#0  0xffff80ffbf59046a in _portfs () from /lib/64/libc.so.1
(gdb) bt
#0  0xffff80ffbf59046a in _portfs () from /lib/64/libc.so.1
#1  0xffff80ffbf5107a4 in port_getn () from /lib/64/libc.so.1
#2  0x000000000045855a in runtime.asmsysvicall6 () at /opt/go/src/runtime/sys_solaris_amd64.s:104
#3  0x00000000004561ff in runtime.asmcgocall () at /opt/go/src/runtime/asm_amd64.s:718
#4  0x0000000000000000 in ?? ()
(gdb) thr 1
[Switching to thread 1 (LWP    1        )]
#0  0xffff80ffbf587e97 in __lwp_park () from /lib/64/libc.so.1
(gdb) bt
#0  0xffff80ffbf587e97 in __lwp_park () from /lib/64/libc.so.1
#1  0xffff80ffbf57a6fe in sema_wait_impl () from /lib/64/libc.so.1
#2  0xffff80ffbf57a8b3 in sema_reltimedwait () from /lib/64/libc.so.1
#3  0xffff80ffbf56dda7 in sem_reltimedwait_np () from /lib/64/libc.so.1
#4  0x000000000045855a in runtime.asmsysvicall6 () at /opt/go/src/runtime/sys_solaris_amd64.s:104
#5  0x00000000004561b9 in runtime.asmcgocall () at /opt/go/src/runtime/asm_amd64.s:688
#6  0x000000c42003e000 in ?? ()
#7  0x00000000012d2600 in github.com/someone1/zfsbackup-go/cmd.jobInfo ()
#8  0xffff80ffbffff868 in ?? ()
#9  0x0000000000434432 in runtime.entersyscallblock_handoff () at /opt/go/src/runtime/proc.go:2920
#10 0x000000c42008a180 in ?? ()
#11 0x00000000004300b0 in ?? () at /opt/go/src/runtime/proc.go:1092
#12 0xffff80ff00000000 in ?? ()
#13 0x0000000000454827 in runtime.rt0_go () at /opt/go/src/runtime/asm_amd64.s:258
#14 0x000000000000000f in ?? ()
#15 0xffff80ffbffff8d8 in ?? ()
#16 0x000000000000000f in ?? ()
#17 0xffff80ffbffff8d8 in ?? ()
#18 0x0000000000000000 in ?? ()
(gdb) 

Multiple Concurrent Requests on Receive?

I am looking to speed up the recovery time from S3 to a co-located (at least the same region) EC2 instance. After testing different instance levels, it is my conclusion that the only way to reach the speeds requested is to use multiple concurrent requests to my S3 bucket.

I am aware that concurrent uploads are possible using zfsbackup-go during the "send" phase. I am also aware that aws cli has the ability to make concurrent requests (https://docs.aws.amazon.com/cli/latest/topic/s3-config.html)

max_concurrent_requests - The maximum number of concurrent requests.
max_queue_size - The maximum number of tasks in the task queue.
multipart_threshold - The size threshold the CLI uses for multipart transfers of individual files.
multipart_chunksize - When using multipart transfers, this is the chunk size that the CLI uses for multipart transfers of individual files.

Is it currently possible to do what I am requesting in zfsbackup-go, and if not, am I missing a key issue with implementing this?

not building on ubuntu 20.04

Trying to build on an Ubuntu 20.04 box (vagrant "generic/ubuntu2004") results in the following errors:

vagrant@ubuntu2004:/vagrant$ go get github.com/someone1/zfsbackup-go
# golang.org/x/oauth2/google/internal/externalaccount
/home/vagrant/go/src/golang.org/x/oauth2/google/internal/externalaccount/executablecredsource.go:256:15: undefined: io.ReadAll
# golang.org/x/net/http2
/home/vagrant/go/src/golang.org/x/net/http2/transport.go:426:45: undefined: os.ErrDeadlineExceeded

GPG encryption problem

I'm using the newest v0.4-rc1 version (binary from GitHub) and it works fine. Problems occur when I try to encrypt the backup with GPG:

# zfsbackup send --encryptTo [email protected] --signFrom [email protected] --publicKeyRingPath public.pgp.asc    --secretKeyRingPath private.pgp.asc --logLevel debug --fullIfOlderThan 5m  tank/backup/tmp s3://BUCKET
...
2021/03/31 17:34:03 PGP Debug Info:
Loaded Private Keys:
        C9BB0591E339F45C
        map[Test1 <[email protected]>:0xc00007c580]

Loaded Public Keys:
        C9BB0591E339F45C
        map[Test1 <[email protected]>:0xc00007c680]
2021/03/31 17:34:04 Getting ZFS Snapshots with command "zfs list -H -d 1 -p -t snapshot,bookmark -r -o name,creation,type -S creation tank/backup/tmp"
2021/03/31 17:34:04 Initializing Backend s3://BUCKET
2021/03/31 17:34:04 s3 backend: will use the Bulk restore tier when trying to restore from Glacier.
2021/03/31 17:34:04 Syncing 1 manifests to local cache.
2021/03/31 17:34:04 Downloaded manifests|tank/hq/test|1.manifest.gz to local cache.
2021/03/31 17:34:04 Error while trying to process smart option - openpgp: invalid data: tag byte does not have MSB set

I've also tried this on old Debian Jessie and still get the Error while trying to process smart option - openpgp: invalid data: tag byte does not have MSB set error.

Here some system info

# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.5 LTS
Release:        18.04
Codename:       bionic

# gpg --version
gpg (GnuPG) 2.2.4
libgcrypt 1.8.1
Copyright (C) 2017 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Here is how I've created gpg keys

# gpg --generate-key
....
Real name: Test1
Email address: [email protected]
You selected this USER-ID:
    "Test1 <[email protected]>"

# gpg --output public.pgp.asc --armor --export [email protected]
# gpg --output private.pgp.asc --armor --export-secret-key [email protected]

# ls -al *.asc
-rw------- 1 root root 5200 Mar 31 17:32 private.pgp.asc
-rw-r--r-- 1 root root 2444 Mar 31 17:32 public.pgp.asc

# export PGP_PASSPHRASE=test1234

Not sure if it's key-creation misuse or some issue in the Go lib (like golang/go#29082).

can't get B2 to work

./zfsbackup-go_linux_amd64 send --zfsPath /sbin/zfs --signFrom backup@nas --encryptTo backup@nas --publicKeyRingPath backup.pub --secretKeyRingPath backup.key --fullIfOlderThan 720h -R rpool/backup/mainbox b2://zfsbackup-sean

panic: runtime error: index out of range

goroutine 1 [running]:
github.com/someone1/zfsbackup-go/backup.ProcessSmartOptions(0x10c8d80, 0xc4200180c8, 0x11166e0, 0xc42005af40, 0x0)
        /build/src/github.com/someone1/zfsbackup-go/backup/backup.go:59 +0xcfe
github.com/someone1/zfsbackup-go/cmd.updateJobInfo(0xc42017e1c0, 0x2, 0xe, 0xc4202200e0, 0x4)
        /build/src/github.com/someone1/zfsbackup-go/cmd/send.go:198 +0x540
github.com/someone1/zfsbackup-go/cmd.validateSendFlags(0x110c380, 0xc42017e1c0, 0x2, 0xe, 0x0, 0x0)
        /build/src/github.com/someone1/zfsbackup-go/cmd/send.go:224 +0x152
github.com/spf13/cobra.(*Command).execute(0x110c380, 0xc42017e0e0, 0xe, 0xe, 0x110c380, 0xc42017e0e0)
        /build/src/github.com/spf13/cobra/command.go:642 +0x4e4
github.com/spf13/cobra.(*Command).ExecuteC(0x110c140, 0xc420076058, 0x0, 0xc42019ff60)
        /build/src/github.com/spf13/cobra/command.go:729 +0x2fe
github.com/spf13/cobra.(*Command).Execute(0x110c140, 0x0, 0x0)
        /build/src/github.com/spf13/cobra/command.go:688 +0x2b
github.com/someone1/zfsbackup-go/cmd.Execute()
        /build/src/github.com/someone1/zfsbackup-go/cmd/root.go:70 +0x2d
main.main()
        /build/src/github.com/someone1/zfsbackup-go/main.go:28 +0x20

[Feature Request] New AWS backend - EBS direct API block storage snapshots

This is new in AWS as of July-2020.

This may also allow the use of:

  • AWS EBS Data Lifecycle Management policies for snapshot retention
  • KMS for encryption
  • Cross account resource permissions


Allow for sending a local volume name that differs from the remote volume name

I have the need to store a different remote volume name from the one which exists locally. In my case this is because I have multiple source machines with slightly different zfs pool+filesystem layouts even though the filesystems are logically the same.

Does it make sense to allow a --localVolume VOLUME option so a user can specify a different local volume?

If so, I have a PR for review.

Show progress

Is there a feature to show progress? If not, can it be added? Is showing progress on the roadmap? If not, can I work on it?

unexpected ZFS error causes zfsbackup-go process to hang

I encountered a hang of the zfsbackup-go process when the underlying zfs command throws an error. Specifically, I encountered this error from zfs:

Error waiting for zfs command to finish - exit status 1: WARNING: could not send X incremental source Y is not earlier than it

It would be better (especially when run from a script) if the zfsbackup-go process exited.

Chunk Size

Greetings, I'm looking to increase the chunk size. Was there a reason it defaults to a small value?
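For reference, the --uploadChunkSize flag documented under "Send Options" above controls this; a sketch raising it to the enforced 100MiB maximum (dataset and bucket names are placeholders):

./zfsbackup send --uploadChunkSize 100 --full Tank/Dataset gs://backup-bucket-target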

Backblaze B2 backend

So you don't forget :D -- I'll be more than happy to test this out!

Also consider re-using rclone which has backends for many more cloud storage providers.

S3: Upload with storage class

I saw in the README that Glacier is said to be supported via lifecycle rules. I was wondering if there's a reason why the upload's storage class is not configurable.

From a quick look at s3manager's API reference it should be quite easy: UploadInput has a StorageClass field, so AWSS3Backend.Upload could easily forward a configurable value to that.

Is there a reason this hasn't been done? Would you be open to a PR to add this?

Issue With Receiving Encrypted Snapshot From S3 - "tag byte does not have MSB set"

Goals:
I have setup zfsbackup-go on my zfs backup server. My intention is to use zfsbackup-go to encrypt snapshots and upload to S3. If my system fails, I plan on restoring from S3. Obvious goals, but figured I should include this.

What Works:
I have successfully uploaded a snapshot to S3. I have also downloaded this snapshot from S3 and restored it locally. This snapshot was NOT encrypted.

Using the same command which worked for sending unencrypted snapshots, but adding in the required encryption information, successfully uploads encrypted snapshots to S3.

What Does NOT Work:
Attempting to restore the encrypted snapshot from S3.

The command used to receive the S3 snapshot follows.

sudo AWS_ACCESS_KEY=XX AWS_REGION=XX AWS_SECRET_ACCESS_KEY=XX GOROOT=/usr/local/go /usr/local/go/zfsbackup-go receive --encryptTo [email protected] --signFrom [email protected] --publicKeyRingPath public.pgp.asc --secretKeyRingPath private.pgp.asc --auto -d tank/encryptionTest@testUpload s3://XX.backups tank

The debug output is as follows.

2020/08/17 20:20:08 Setting number of cores to: 2
2020/08/17 20:20:08 Loaded private key ring secring.asc
2020/08/17 20:20:08 Loaded public key ring pubring.asc
2020/08/17 20:20:08 Setting working directory to /root/.zfsbackup
2020/08/17 20:20:08 PGP Debug Info:
Loaded Private Keys:
        XXX
        map[Aleksander Dapkus <a.dapkus@XXX>:0xc000XXX]

Loaded Public Keys:
        XXX
        map[Aleksander Dapkus <a.dapkus@XXX>:0xc000XXX]
Enter passphrase to decrypt encryption key: 
2020/08/17 20:20:10 Limiting the number of active files to 5
2020/08/17 20:20:10 Initializing Backend s3://XXX.backups
2020/08/17 20:20:10 s3 backend: will use the Bulk restore tier when trying to restore from Glacier.
2020/08/17 20:20:10 Could not read manifest /root/.zfsbackup/cache/XXX/XXX due to error - openpgp: invalid data: tag byte does not have MSB set

It seems the issue revolves around this line: Could not read manifest /root/.zfsbackup/cache/XXX/XXX due to error - openpgp: invalid data: tag byte does not have MSB set

I have redacted some info, more than required. Let me know if I have redacted a key piece of information.

Azure Shared Access Signature support?

For my use case (off-site backup for a NAS), I have three threat models in mind:

  • Catastrophic physical failure (house burns down, lightning fries my NAS).
  • NAS stolen in break-in.
  • NAS infected with malware.

In the first two cases, I have to assume that everything on the NAS may be compromised. For encryption I can store only the gpg public key on the machine and store the private key somewhere else, so zfsbackup can write to the backups but can't read them, but that doesn't prevent the Azure account details being compromised and my data being erased (or modified in-situ). I'd like to be able to generate a SAS with minimum required privileges, so zfsbackup can create new blobs, but can't delete or modify old ones.
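For context, a hedged sketch of what generating such a SAS could look like with the Azure CLI (account/container names are placeholders; the "cw" permissions grant create and write while deliberately omitting delete), feeding the result into the AZURE_SAS_URI variable the README already supports:

az storage container generate-sas \
    --account-name mystorageaccount \
    --name zfsbackups \
    --permissions cw \
    --expiry 2025-01-01T00:00Z \
    --output tsv
export AZURE_SAS_URI='https://mystorageaccount.blob.core.windows.net/zfsbackups?<sas-token>'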

Issues with --encryptTo

There are several issues with --encryptTo. From the documentation, I expect this to encrypt the backups using the recipient's public key, so that they can be decrypted only by the owner of the private key (and so the machine doing the backups can't read the results if it is compromised).

The first issue is superficial: the error message says:

You must specify a private keyring path if you provide an encryptFrom option

There is no --encryptFrom command line option.

Looking at the code:

if jobInfo.EncryptKey = helpers.GetPrivateKeyByEmail(jobInfo.EncryptTo); jobInfo.EncryptKey == nil {

it appears that this is just unconventional nomenclature (the secret and public keyrings are referred to the wrong way around in the documentation). It's somewhat concerning that a function called GetPrivateKeyByEmail can return a public key, though this does appear to be the case: it simply wraps getKeyByEmail, which will fetch either a public or a private key.

Project / Quality Status?

Is this project still considered "work in progress"? Who currently uses this to successfully perform remote backups of their ZFS pools in any serious form? Thanks!

Feature request: Move Azure blobs to archive storage after creation

Azure Blob Storage's archive tier is very cheap ($2/TB/month, vs. $10/TB/month for cool and $22/TB/month for hot storage), but blobs can't be created directly in the archive tier: they can only be created as hot or cool and then moved to archive. I can script moving everything to archive after creation, but it would be nice if zfsbackup-go could do this automatically, as sketched below.
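
For reference, a rough sketch of the tier change using the azure-storage-blob-go SDK the project already depends on. Every name here is a placeholder, and SetTier's signature varies across SDK releases (older versions omit the rehydrate-priority argument):

```go
package main

import (
	"context"
	"log"
	"net/url"

	"github.com/Azure/azure-storage-blob-go/azblob"
)

func main() {
	cred, err := azblob.NewSharedKeyCredential("myaccount", "<base64-account-key>")
	if err != nil {
		log.Fatal(err)
	}
	p := azblob.NewPipeline(cred, azblob.PipelineOptions{})
	u, _ := url.Parse("https://myaccount.blob.core.windows.net/backups/volume1.zstream.gz")
	blobURL := azblob.NewBlockBlobURL(*u, p)

	// Move the finished blob to the archive tier; it must be rehydrated
	// back to hot/cool before it can be downloaded again.
	if _, err := blobURL.SetTier(context.Background(), azblob.AccessTierArchive,
		azblob.LeaseAccessConditions{}, azblob.RehydratePriorityNone); err != nil {
		log.Fatal(err)
	}
}
```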

Error while uploading

This is my log from zfsbackup-go:

2018/06/11 20:16:00 Setting number of cores to: 2
2018/06/11 20:16:00 Loaded private key ring
2018/06/11 20:16:00 Loaded public key ring
2018/06/11 20:16:00 Setting working directory to /home/simon/.zfsbackup
2018/06/11 20:16:06 Limiting the number of active files to 5
2018/06/11 20:16:06 Limiting the number of parallel uploads to 4
2018/06/11 20:16:06 Max Backoff Time will be 30m0s
2018/06/11 20:16:06 Max Upload Retry Time will be 12h0m0s
2018/06/11 20:16:06 Upload Chunk Size will be 10MiB
2018/06/11 20:16:06 Enabling an incremental stream (-i) on the send to snapshot autosnap_2018-06-03_00:00:12_daily
2018/06/11 20:16:06 Starting zfs send command: zfs send -i autosnap_2018-06-03_00:00:12_daily tanker/ccc@autosnap_2018-06-11_20:00:10_hourly
2018/06/11 20:16:06 Will be using internal gzip compressor with compression level 6.
2018/06/12 02:26:57 b2: Error while uploading volume tanker/ccc|autosnap_2018-06-03_00:00:12_daily|to|autosnap_2018-06-11_20:00:10_hourly.zstream.gz.vol50 - b2_finish_large_file: 400: No active upload for: 4_zecf7e61dbf4f6bca53f50519_f205ab2b96d43a8d7_d20180612_m001903_c001_v0001106_t0013
2018/06/12 05:10:08 zfs send completed without error
2018/06/12 05:40:20 All volumes dispatched in pipeline, finalizing manifest file.
Done.
	Total ZFS Stream Bytes: 15632340952 (15 GiB)
	Total Bytes Written: 15630516981 (15 GiB)
	Elapsed Time: 9h24m22.237192592s
	Total Files Uploaded: 76

Do I have a faulty backup, or did everything go just fine? I'm concerned about that "No active upload" error.

INSTALL: azure_backend.go:217:34: not enough arguments in call to blobURL.CommitBlockList

Hey team, first-time user here; I went through the basic steps and failed on the installation of zfsbackup-go, on Ubuntu 18.04.5 LTS.

SETUP:

wget https://golang.org/dl/go1.15.5.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.15.5.linux-amd64.tar.gz
.. etc .. (as per the Go installation instructions)
$ go version
go version go1.15.5 linux/amd64

So then time to install zfsbackup-go:

$ go get -v github.com/someone1/zfsbackup-go
get "google.golang.org/api/googleapi": found meta tag get.metaImport{Prefix:"google.golang.org/api", VCS:"git", RepoRoot:"https://github.com/googleapis/google-api-go-client"} at //google.golang.org/api/googleapi?go-get=1

... SNIP ...

# github.com/someone1/zfsbackup-go/backends
../go/src/github.com/someone1/zfsbackup-go/backends/azure_backend.go:217:34: not enough arguments in call to blobURL.CommitBlockList
	have (context.Context, []string, azblob.BlobHTTPHeaders, azblob.Metadata, azblob.BlobAccessConditions)
	want (context.Context, []string, azblob.BlobHTTPHeaders, azblob.Metadata, azblob.BlobAccessConditions, azblob.AccessTierType, azblob.BlobTagsMap)

full output:

$ go get -v github.com/someone1/zfsbackup-go
get "google.golang.org/api/googleapi": found meta tag get.metaImport{Prefix:"google.golang.org/api", VCS:"git", RepoRoot:"https://github.com/googleapis/google-api-go-client"} at //google.golang.org/api/googleapi?go-get=1
get "google.golang.org/api/googleapi": verifying non-authoritative meta tag
google.golang.org/api (download)
get "go.opencensus.io/trace": found meta tag get.metaImport{Prefix:"go.opencensus.io", VCS:"git", RepoRoot:"https://github.com/census-instrumentation/opencensus-go"} at //go.opencensus.io/trace?go-get=1
get "go.opencensus.io/trace": verifying non-authoritative meta tag
go.opencensus.io (download)
github.com/golang/groupcache (download)
get "google.golang.org/api/iterator": found meta tag get.metaImport{Prefix:"google.golang.org/api", VCS:"git", RepoRoot:"https://github.com/googleapis/google-api-go-client"} at //google.golang.org/api/iterator?go-get=1
get "google.golang.org/api/iterator": verifying non-authoritative meta tag
get "google.golang.org/api/option": found meta tag get.metaImport{Prefix:"google.golang.org/api", VCS:"git", RepoRoot:"https://github.com/googleapis/google-api-go-client"} at //google.golang.org/api/option?go-get=1
get "google.golang.org/api/option": verifying non-authoritative meta tag
get "golang.org/x/oauth2": found meta tag get.metaImport{Prefix:"golang.org/x/oauth2", VCS:"git", RepoRoot:"https://go.googlesource.com/oauth2"} at //golang.org/x/oauth2?go-get=1
golang.org/x/oauth2 (download)
get "golang.org/x/oauth2/google": found meta tag get.metaImport{Prefix:"golang.org/x/oauth2", VCS:"git", RepoRoot:"https://go.googlesource.com/oauth2"} at //golang.org/x/oauth2/google?go-get=1
get "golang.org/x/oauth2/google": verifying non-authoritative meta tag
get "google.golang.org/api/option/internaloption": found meta tag get.metaImport{Prefix:"google.golang.org/api", VCS:"git", RepoRoot:"https://github.com/googleapis/google-api-go-client"} at //google.golang.org/api/option/internaloption?go-get=1
get "google.golang.org/api/option/internaloption": verifying non-authoritative meta tag
get "google.golang.org/api/storage/v1": found meta tag get.metaImport{Prefix:"google.golang.org/api", VCS:"git", RepoRoot:"https://github.com/googleapis/google-api-go-client"} at //google.golang.org/api/storage/v1?go-get=1
get "google.golang.org/api/storage/v1": verifying non-authoritative meta tag
get "google.golang.org/api/transport/http": found meta tag get.metaImport{Prefix:"google.golang.org/api", VCS:"git", RepoRoot:"https://github.com/googleapis/google-api-go-client"} at //google.golang.org/api/transport/http?go-get=1
get "google.golang.org/api/transport/http": verifying non-authoritative meta tag
github.com/Azure/azure-storage-blob-go (download)
github.com/Azure/azure-pipeline-go (download)
github.com/mattn/go-ieproxy (download)
github.com/google/uuid (download)
github.com/aws/aws-sdk-go (download)
github.com/jmespath/go-jmespath (download)
github.com/kurin/blazer (download)
github.com/pkg/errors (download)
github.com/klauspost/pgzip (download)
github.com/klauspost/compress (download)
github.com/miolini/datacounter (download)
get "golang.org/x/crypto/openpgp": found meta tag get.metaImport{Prefix:"golang.org/x/crypto", VCS:"git", RepoRoot:"https://go.googlesource.com/crypto"} at //golang.org/x/crypto/openpgp?go-get=1
get "golang.org/x/crypto/openpgp": verifying non-authoritative meta tag
golang.org/x/crypto (download)
get "golang.org/x/crypto/openpgp/packet": found meta tag get.metaImport{Prefix:"golang.org/x/crypto", VCS:"git", RepoRoot:"https://go.googlesource.com/crypto"} at //golang.org/x/crypto/openpgp/packet?go-get=1
get "golang.org/x/crypto/openpgp/packet": verifying non-authoritative meta tag
get "golang.org/x/sync/errgroup": found meta tag get.metaImport{Prefix:"golang.org/x/sync", VCS:"git", RepoRoot:"https://go.googlesource.com/sync"} at //golang.org/x/sync/errgroup?go-get=1
get "golang.org/x/sync/errgroup": verifying non-authoritative meta tag
golang.org/x/sync (download)
github.com/cenkalti/backoff (download)
github.com/nightlyone/lockfile (download)
github.com/spf13/cobra (download)
github.com/spf13/pflag (download)
get "golang.org/x/crypto/ssh/terminal": found meta tag get.metaImport{Prefix:"golang.org/x/crypto", VCS:"git", RepoRoot:"https://go.googlesource.com/crypto"} at //golang.org/x/crypto/ssh/terminal?go-get=1
get "golang.org/x/crypto/ssh/terminal": verifying non-authoritative meta tag
get "golang.org/x/term": found meta tag get.metaImport{Prefix:"golang.org/x/term", VCS:"git", RepoRoot:"https://go.googlesource.com/term"} at //golang.org/x/term?go-get=1
golang.org/x/term (download)
google.golang.org/protobuf/internal/flags
github.com/juju/ratelimit
github.com/dustin/go-humanize
github.com/op/go-logging
golang.org/x/net/internal/timeseries
google.golang.org/grpc/backoff
golang.org/x/net/trace
google.golang.org/grpc/internal/grpclog
google.golang.org/grpc/grpclog
google.golang.org/grpc/connectivity
google.golang.org/protobuf/internal/detrand
google.golang.org/protobuf/internal/errors
google.golang.org/protobuf/internal/pragma
google.golang.org/protobuf/internal/set
google.golang.org/protobuf/encoding/protowire
google.golang.org/protobuf/internal/version
google.golang.org/grpc/attributes
google.golang.org/protobuf/reflect/protoreflect
google.golang.org/grpc/serviceconfig
google.golang.org/grpc/internal
google.golang.org/grpc/internal/credentials
google.golang.org/grpc/metadata
google.golang.org/grpc/internal/grpcrand
google.golang.org/grpc/codes
google.golang.org/grpc/encoding
google.golang.org/grpc/internal/backoff
google.golang.org/grpc/internal/balancerload
google.golang.org/grpc/internal/buffer
golang.org/x/sys/internal/unsafeheader
golang.org/x/sys/unix
google.golang.org/protobuf/internal/encoding/messageset
google.golang.org/protobuf/internal/strs
google.golang.org/protobuf/internal/encoding/text
google.golang.org/protobuf/internal/genid
google.golang.org/protobuf/internal/order
google.golang.org/protobuf/reflect/protoregistry
google.golang.org/protobuf/runtime/protoiface
google.golang.org/protobuf/proto
google.golang.org/protobuf/internal/descfmt
google.golang.org/protobuf/encoding/prototext
google.golang.org/protobuf/internal/descopts
google.golang.org/protobuf/internal/encoding/defval
google.golang.org/protobuf/internal/filedesc
google.golang.org/grpc/internal/envconfig
google.golang.org/grpc/internal/grpcsync
golang.org/x/text/transform
golang.org/x/text/unicode/bidi
golang.org/x/text/secure/bidirule
golang.org/x/text/unicode/norm
google.golang.org/protobuf/internal/encoding/tag
google.golang.org/protobuf/internal/impl
golang.org/x/net/idna
golang.org/x/net/http/httpguts
golang.org/x/net/http2/hpack
golang.org/x/net/http2
google.golang.org/protobuf/internal/filetype
google.golang.org/grpc/internal/syscall
google.golang.org/protobuf/runtime/protoimpl
google.golang.org/grpc/keepalive
github.com/golang/protobuf/proto
google.golang.org/protobuf/types/known/anypb
github.com/golang/protobuf/ptypes/any
google.golang.org/protobuf/types/known/durationpb
github.com/golang/protobuf/ptypes/duration
google.golang.org/protobuf/types/known/timestamppb
github.com/golang/protobuf/ptypes/timestamp
google.golang.org/grpc/stats
google.golang.org/grpc/tap
google.golang.org/protobuf/types/descriptorpb
google.golang.org/grpc/credentials
google.golang.org/grpc/resolver
google.golang.org/grpc/balancer
google.golang.org/grpc/balancer/base
google.golang.org/grpc/balancer/roundrobin
google.golang.org/grpc/encoding/proto
github.com/golang/protobuf/ptypes
google.golang.org/grpc/binarylog/grpc_binarylog_v1
google.golang.org/grpc/internal/grpcutil
google.golang.org/genproto/googleapis/rpc/status
google.golang.org/grpc/internal/status
google.golang.org/grpc/internal/channelz
google.golang.org/grpc/status
google.golang.org/grpc/internal/binarylog
google.golang.org/grpc/internal/serviceconfig
google.golang.org/grpc/internal/resolver
google.golang.org/grpc/balancer/grpclb/state
google.golang.org/grpc/internal/resolver/passthrough
google.golang.org/grpc/internal/resolver/dns
google.golang.org/grpc/internal/transport/networktype
google.golang.org/grpc/internal/resolver/unix
google.golang.org/grpc/internal/metadata
google.golang.org/grpc/peer
google.golang.org/genproto/googleapis/api/annotations
google.golang.org/grpc/internal/transport
google.golang.org/genproto/googleapis/type/expr
google.golang.org/api/internal/third_party/uritemplates
google.golang.org/api/googleapi
cloud.google.com/go/internal/optional
github.com/golang/groupcache/lru
go.opencensus.io
go.opencensus.io/internal
go.opencensus.io/trace/internal
go.opencensus.io/trace/tracestate
go.opencensus.io/trace
google.golang.org/genproto/googleapis/rpc/code
cloud.google.com/go/internal/trace
google.golang.org/grpc
cloud.google.com/go/internal/version
google.golang.org/api/iterator
golang.org/x/net/context/ctxhttp
golang.org/x/oauth2/internal
golang.org/x/oauth2
cloud.google.com/go/compute/metadata
golang.org/x/oauth2/jws
golang.org/x/oauth2/jwt
golang.org/x/oauth2/google
google.golang.org/api/internal/impersonate
go.opencensus.io/trace/propagation
go.opencensus.io/plugin/ochttp/propagation/b3
github.com/googleapis/gax-go/v2
google.golang.org/genproto/googleapis/iam/v1
cloud.google.com/go/internal
google.golang.org/api/internal
google.golang.org/api/option
cloud.google.com/go/iam
google.golang.org/api/option/internaloption
google.golang.org/api/internal/gensupport
go.opencensus.io/resource
go.opencensus.io/metric/metricdata
go.opencensus.io/tag
go.opencensus.io/internal/tagencoding
go.opencensus.io/stats/internal
go.opencensus.io/metric/metricproducer
go.opencensus.io/stats
google.golang.org/api/googleapi/transport
go.opencensus.io/stats/view
google.golang.org/api/transport/cert
google.golang.org/api/transport/http/internal/propagation
google.golang.org/api/transport/internal/dca
go.opencensus.io/plugin/ochttp
github.com/mattn/go-ieproxy
github.com/Azure/azure-pipeline-go/pipeline
github.com/google/uuid
github.com/Azure/azure-storage-blob-go/azblob
google.golang.org/api/transport/http
google.golang.org/api/storage/v1
cloud.google.com/go/storage
github.com/aws/aws-sdk-go/aws/awserr
github.com/aws/aws-sdk-go/internal/ini
github.com/aws/aws-sdk-go/internal/shareddefaults
github.com/aws/aws-sdk-go/internal/sync/singleflight
github.com/aws/aws-sdk-go/aws/credentials
github.com/aws/aws-sdk-go/aws/endpoints
github.com/aws/aws-sdk-go/internal/sdkio
github.com/jmespath/go-jmespath
github.com/aws/aws-sdk-go/aws/awsutil
github.com/aws/aws-sdk-go/aws/client/metadata
github.com/aws/aws-sdk-go/internal/sdkrand
github.com/aws/aws-sdk-go/aws/credentials/processcreds
github.com/aws/aws-sdk-go/internal/strings
github.com/aws/aws-sdk-go/internal/sdkmath
github.com/aws/aws-sdk-go/internal/sdkuri
github.com/aws/aws-sdk-go/aws/arn
github.com/aws/aws-sdk-go/internal/s3shared/arn
github.com/kurin/blazer/internal/b2types
github.com/kurin/blazer/internal/blog
github.com/kurin/blazer/base
github.com/kurin/blazer/internal/b2assets
github.com/kurin/blazer/x/window
github.com/kurin/blazer/b2
github.com/aws/aws-sdk-go/aws
github.com/aws/aws-sdk-go/aws/request
github.com/aws/aws-sdk-go/private/protocol/eventstream
github.com/pkg/errors
github.com/aws/aws-sdk-go/aws/client
github.com/aws/aws-sdk-go/aws/corehandlers
github.com/aws/aws-sdk-go/private/protocol
github.com/aws/aws-sdk-go/aws/csm
github.com/aws/aws-sdk-go/aws/ec2metadata
github.com/aws/aws-sdk-go/private/protocol/rest
github.com/aws/aws-sdk-go/private/protocol/query/queryutil
github.com/aws/aws-sdk-go/aws/signer/v4
github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil
github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds
github.com/aws/aws-sdk-go/private/protocol/json/jsonutil
github.com/aws/aws-sdk-go/private/protocol/query
github.com/aws/aws-sdk-go/service/sts
github.com/aws/aws-sdk-go/aws/credentials/endpointcreds
github.com/aws/aws-sdk-go/aws/defaults
github.com/aws/aws-sdk-go/internal/s3shared
github.com/aws/aws-sdk-go/service/sts/stsiface
github.com/aws/aws-sdk-go/internal/s3shared/s3err
github.com/aws/aws-sdk-go/aws/credentials/stscreds
github.com/aws/aws-sdk-go/private/checksum
github.com/aws/aws-sdk-go/private/protocol/eventstream/eventstreamapi
github.com/aws/aws-sdk-go/aws/session
github.com/aws/aws-sdk-go/private/protocol/restxml
github.com/aws/aws-sdk-go/service/s3
github.com/klauspost/compress/flate
github.com/klauspost/pgzip
github.com/miolini/datacounter
github.com/someone1/zfsbackup-go/config
github.com/someone1/zfsbackup-go/log
golang.org/x/crypto/openpgp/errors
golang.org/x/crypto/openpgp/armor
golang.org/x/crypto/cast5
golang.org/x/crypto/openpgp/elgamal
golang.org/x/crypto/openpgp/s2k
golang.org/x/crypto/openpgp/packet
golang.org/x/crypto/openpgp
github.com/someone1/zfsbackup-go/pgp
github.com/someone1/zfsbackup-go/files
golang.org/x/sync/errgroup
github.com/cenkalti/backoff
github.com/nightlyone/lockfile
github.com/someone1/zfsbackup-go/zfs
github.com/spf13/pflag
github.com/spf13/cobra
golang.org/x/term
golang.org/x/crypto/ssh/terminal
github.com/aws/aws-sdk-go/service/s3/s3iface
github.com/aws/aws-sdk-go/service/s3/s3manager
github.com/aws/aws-sdk-go/service/s3/s3manager/s3manageriface
github.com/someone1/zfsbackup-go/backends
# github.com/someone1/zfsbackup-go/backends
../go/src/github.com/someone1/zfsbackup-go/backends/azure_backend.go:217:34: not enough arguments in call to blobURL.CommitBlockList
	have (context.Context, []string, azblob.BlobHTTPHeaders, azblob.Metadata, azblob.BlobAccessConditions)
	want (context.Context, []string, azblob.BlobHTTPHeaders, azblob.Metadata, azblob.BlobAccessConditions, azblob.AccessTierType, azblob.BlobTagsMap)

Retrying produced the same error:

$ go get -v github.com/someone1/zfsbackup-go
github.com/someone1/zfsbackup-go/backends
# github.com/someone1/zfsbackup-go/backends
../go/src/github.com/someone1/zfsbackup-go/backends/azure_backend.go:217:34: not enough arguments in call to blobURL.CommitBlockList
	have (context.Context, []string, azblob.BlobHTTPHeaders, azblob.Metadata, azblob.BlobAccessConditions)
	want (context.Context, []string, azblob.BlobHTTPHeaders, azblob.Metadata, azblob.BlobAccessConditions, azblob.AccessTierType, azblob.BlobTagsMap)

Support matching snapshots with a regex?

What do you think of supporting snapshot matching by regex? The prefix option isn't flexible enough for my case, where I have several different prefixes to match; something like the sketch below would cover it.
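
For illustration, a minimal sketch of what regex-based selection could look like; the matchSnapshots helper is hypothetical, not an existing zfsbackup-go function:

```go
package main

import (
	"fmt"
	"regexp"
)

// matchSnapshots returns the snapshot names matching the given pattern,
// a drop-in generalization of the current prefix filter.
func matchSnapshots(pattern string, snapshots []string) ([]string, error) {
	re, err := regexp.Compile(pattern)
	if err != nil {
		return nil, err
	}
	var out []string
	for _, s := range snapshots {
		if re.MatchString(s) {
			out = append(out, s)
		}
	}
	return out, nil
}

func main() {
	snaps := []string{
		"autosnap_2018-06-03_00:00:12_daily",
		"autosnap_2018-06-11_20:00:10_hourly",
		"manual-before-upgrade",
	}
	matched, _ := matchSnapshots(`^autosnap_.*_(daily|hourly)$`, snaps)
	fmt.Println(matched) // [autosnap_2018-06-03_00:00:12_daily autosnap_2018-06-11_20:00:10_hourly]
}
```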

FreeBSD Package?

This looks like exactly what I want for off-site backups, but I really don't want my restore procedure to depend on compiling something from source or my backup procedure to rely on remembering to manually update something to make sure that it has the latest security fixes. What are the blockers for having this in the ports tree?

Typo

Not sure if it's really worth reporting, but here's a small typo...

$ pwd
/tmp/zfsbackup-go-master/cmd
$ grep garaun *
send.go:	sendCmd.Flags().Uint64Var(&jobInfo.VolumeSize, "volsize", 200, "the maximum size (in MiB) a volume should be before splitting to a new volume. Note: zfsbackup will try its best to stay close/under this limit but it is not garaunteed.")

Use bookmarks for incremental backup

Recent versions of ZFS have a feature, bookmarks, that stores the metadata changes, but not the data, for a snapshot. This is useful for creating backup checkpoints, where you want to know what has changed since the last backup but don't want to keep the old versions around. The typical workflow is:

  • Initial backup:
    1. Create a snapshot.
    2. Back up the snapshot.
    3. Transform the snapshot into a bookmark.
  • Incremental backup:
    1. Create a snapshot.
    2. Back up the stream between the last bookmark and the current snapshot.
    3. Transform the snapshot into a bookmark.
    4. [Optional] Delete the old bookmark.

When you restore, you can either discard or record the intermediate snapshots; on the backup machine itself, the bookmarks take up only a negligible amount of space.
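
For concreteness, here is a rough sketch of that cycle in Go, shelling out to the zfs CLI the same way zfsbackup-go already does for zfs send. The dataset, snapshot, and bookmark names, and the local output file, are placeholders:

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command and aborts on failure, echoing its output.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
}

func main() {
	// Incremental cycle: snapshot, send the delta since the previous
	// bookmark, then replace the snapshot with a new bookmark.
	run("zfs", "snapshot", "tank/data@backup-new")

	// `zfs send -i` accepts a bookmark (#name) as the incremental source.
	// Here the stream just goes to a local file instead of a backend.
	send := exec.Command("zfs", "send", "-i", "tank/data#backup-prev", "tank/data@backup-new")
	out, err := os.Create("/tmp/delta.zstream")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	send.Stdout = out
	if err := send.Run(); err != nil {
		log.Fatal(err)
	}

	// The bookmark keeps only the metadata needed as the next incremental
	// source, so the snapshot's data blocks can be freed.
	run("zfs", "bookmark", "tank/data@backup-new", "tank/data#backup-new")
	run("zfs", "destroy", "tank/data@backup-new")
}
```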

Typo in README.md examples

I love this excellent project; I noticed a small issue in the README.

Incremental backup example:

```bash./zfsbackup send --encryptTo [email protected] --signFrom [email protected] --publicKeyRingPath pubring.gpg.asc --secretKeyRingPath secring.gpg.asc -i Tank/Dataset@snapshot-20170101 Tank/Dataset@snapshot-20170201 gs://backup-bucket-target,s3://another-backup-target
```

Should be:

Incremental backup example:

```bash
  ./zfsbackup send --encryptTo [email protected] --signFrom [email protected] --publicKeyRingPath pubring.gpg.asc --secretKeyRingPath secring.gpg.asc -i Tank/Dataset@snapshot-20170101 Tank/Dataset@snapshot-20170201 gs://backup-bucket-target,s3://another-backup-target
```
