pyznap's Issues

Email if issue?

Hi,
I was wondering if it's possible to get an email alert if there is an issue with a snapshot?

Thank you
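
One workaround until there is built-in notification (a sketch, assuming a working mail command and that failures show up as ERROR lines in pyznap's output, as they do in the logs quoted elsewhere on this page; the script path and recipient address are placeholders):

#!/bin/sh
# Hypothetical cron wrapper: mail the output if any ERROR line appears.
OUT=$(/usr/local/bin/pyznap snap 2>&1)
echo "$OUT"
if echo "$OUT" | grep -q "ERROR"; then
    echo "$OUT" | mail -s "pyznap reported errors" admin@example.com
fi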

Send exclusion

When sending snapshots to a destination, it seems that pyznap loops over each dataset and sends them one at a time. Would it be possible to add a check that skips certain filesystems (even if they are snapshotted), either via the configuration or via a user property?
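
For illustration, a user property of the kind suggested here could be set and queried like this (the property name pyznap:send and the dataset are hypothetical; pyznap does not currently read such a property as far as this thread establishes):

zfs set pyznap:send=no tank/scratch
zfs get pyznap:send tank/scratch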

[QUESTION] local replication destination dataset properties

How does pyznap deal with local replication to another storage pool?
Does it change any properties of the sent datasets?
If I replicate my root dataset to e.g. storage/rootdataset, then it's possible to get a double mount on /.
Does pyznap adjust the canmount property or the mountpoint on the destination dataset?

Thanks
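
A manual safeguard that can be applied on the destination, independent of whatever pyznap does (a sketch using the dataset name from the example above):

# keep the replica from auto-mounting over / on boot or import
zfs set canmount=noauto storage/rootdataset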

send/recv encrypted snaps

ZFS 0.8 has been released and has hit the PPA

#3 (comment):

'raw' send/recv will only be necessary once encryption is available, but I will be very interested in that once it makes its way into Ubuntu LTS. Implementing it would be quite easy I think, just one more flag '-w', which would be trivial.
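
For reference, a raw send of an encrypted dataset on ZFS 0.8+ looks like this (a sketch with placeholder pool and host names; -w sends the blocks as stored, so the destination never needs the encryption key):

zfs send -w tank/encrypted@snap | ssh user@backuphost zfs receive backup/encrypted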

A couple of issues related to initial zvol replication

I believe there are a couple of issues related to the initial pyznap replication of zvols. These issues may be related to each other.

Issue no. 1: Zvol sparse-setting is not preserved during initial Pyznap send
The ZFS property related to sparse/non-sparse zvols, ‘refreservation’, is somehow not transferred as part of the initial pyznap send.

Using ‘zfs send’ with ‘-R’, the source ‘refreservation’ is preserved. Without ‘-R’, the destination's local default value seems to be used rather than the value of the source zvol.

When replicating zvols on FreeNAS the ‘refreservation’ is always preserved, so I assume that is the right way to do it…

Simple non-sparse and sparse zvols (1 GB) can be created using:

zfs create -V 1G tank/nonsparse 
zfs create -s -V 1G tank/sparse

Issue no. 2: Initial Pyznap send of zvol only works recursively
When performing the initial replication to a new destination, pyznap doesn't create a remote zvol unless the zvol is in a dataset that is recursively replicated.

Using Pyznap snap/send to replicate the zvol 'nonsparse':

[small/nonsparse]
frequent = 10
snap = yes
clean = yes
dest = ssh:34:[email protected]:BACKUP/test-pyznap/nonsparse

generates the error

root@pve:~# pyznap send
Sep 27 20:20:35 INFO: Starting pyznap...
Sep 27 20:20:35 INFO: Sending snapshots...
Sep 27 20:20:36 WARNING: lzop does not exist on [email protected], continuing without compression...
Sep 27 20:20:36 ERROR: Destination [email protected]:BACKUP/test-pyznap/nonsparse does not exist...
Sep 27 20:20:36 INFO: Finished successfully...

A destination zvol is created automatically on initial replication when using zfs send -R ...:

root@pve:~# zfs send -R small/nonsparse@first | ssh -p 34 [email protected] zfs receive BACKUP/test-pyznap/nonsparse

A workaround for both of these issues is to run zfs send -R .... for the initial zvol replication, and then use pyznap.

How to install it?

Just a quick question: I know something about Linux, and I also know that Python has a package manager called pip, but how do I install this?

I tried to install it via pip:

pip3 install git+https://github.com/cythoning/pyznap
Collecting git+https://github.com/cythoning/pyznap
  Cloning https://github.com/cythoning/pyznap to /tmp/pip-n051elol-build
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/usr/lib/python3.6/tokenize.py", line 452, in open
        buffer = _builtin_open(filename, 'rb')
    FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-n051elol-build/setup.py'

    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-n051elol-build/

And why does it need pytest? It should only be required for development.
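
For what it's worth, installing the released package from PyPI (the approach used in a later report on this page) avoids the git URL entirely; a minimal sketch:

pip3 install pyznap
pyznap setup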

can't run setup

getting the following error:

root@SERVER:~# pyznap setup

Traceback (most recent call last):
  File "/usr/local/bin/pyznap", line 7, in <module>
    from pyznap.main import main
  File "/usr/local/lib/python2.7/dist-packages/pyznap/main.py", line 18, in <module>
    from .utils import read_config, create_config
  File "/usr/local/lib/python2.7/dist-packages/pyznap/utils.py", line 312
    host, *_ = ssh.get_transport().getpeername()
          ^
SyntaxError: invalid syntax
root@SERVER:~#
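
The traceback shows pyznap installed under /usr/local/lib/python2.7, and the starred assignment on that line is Python 3-only syntax, so one thing to try (an assumption, not a confirmed fix) is reinstalling under Python 3:

pip uninstall pyznap    # remove the copy installed for Python 2
pip3 install pyznap     # reinstall for Python 3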

Get property error

I'm getting the following error on a newly installed system.

(venv) root@wiply:/home/neic/pyznap# pyznap --config ./pyznap.conf snap
Aug 14 15:21:27 INFO: Starting pyznap...
Aug 14 15:21:27 INFO: Taking snapshots...
Aug 14 15:21:27 INFO: Taking snapshot tank/neic@pyznap_2018-08-14_15:21:27_hourly...
Aug 14 15:21:27 INFO: Taking snapshot tank/neic@pyznap_2018-08-14_15:21:27_frequent...
Aug 14 15:21:27 INFO: Cleaning snapshots...
Aug 14 15:21:27 INFO: Finished successfully...

(venv) root@wiply:/home/neic/pyznap# pyznap --config ./pyznap.conf send
Aug 14 15:21:37 INFO: Starting pyznap...
Aug 14 15:21:37 INFO: Sending snapshots...
Aug 14 15:21:37 ERROR: Command '['zfs', 'get', '-H', '-p', '-d', '0', 'type', '/tank/neic']' returned non-zero exit status 1.
Aug 14 15:21:37 INFO: Finished successfully...

zfs get -H -p -d 0 type /tank/neic works fine:

(venv) root@wiply:/home/neic/pyznap# zfs get -H -p -d 0 type /tank/neic
tank/neic       type    filesystem      -

My ~/pyznap.conf:

[tank]
frequent = 4
hourly = 24
daily = 7
weekly = 4
monthly = 6
yearly = 1
snap = no


[tank/neic]
snap = yes
dest = ssh:22:[email protected]:/tank/neic
dest_keys = /root/.ssh/id_rsa

My system info:

(venv) root@wiply:/home/neic/pyznap# pip list
Package       Version
------------- -------
asn1crypto    0.24.0 
bcrypt        3.1.4  
cffi          1.11.5 
configparser  3.5.0  
cryptography  2.3    
idna          2.7    
paramiko      2.4.1  
pip           18.0   
pkg-resources 0.0.0  
pyasn1        0.4.4  
pycparser     2.18   
PyNaCl        1.2.1  
pyznap        1.0.1  
setuptools    39.0.1 
six           1.11.0 
(venv) root@wiply:/home/neic/pyznap# apt show zfsutils-linux 
Package: zfsutils-linux
Version: 0.7.5-1ubuntu16.3
# […]
(venv) root@wiply:/home/neic/pyznap# lsb_release --all
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.1 LTS
Release:        18.04
Codename:       bionic
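
One thing worth checking (an assumption, based on the edit in a similar report further down this page where the mountpoint path had been used instead of the dataset name): the dest entry uses the mount path /tank/neic, while the dataset itself is tank/neic. The section might need to read:

[tank/neic]
snap = yes
dest = ssh:22:[email protected]:tank/neic
dest_keys = /root/.ssh/id_rsa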

Noops when executed from cron

I feel like this is likely to be more of a support request than a true bug report, but I'm out of ideas for how to investigate further.

Summary

When executed from a normal root shell, pyznap behaves as expected. When executed from cron, log output confirms that pyznap does in fact execute... but it always decides incorrectly that no snapshots should be taken.

Steps to reproduce

Stand up an Ubuntu 16.04.6 LTS instance, install python3 and python3-pip. Use pip3 to install pyznap.

Create /etc/cron.d/pyznap_cron with the following contents:
0 * * * * root /usr/local/bin/pyznap --config /etc/pyznap/pyznap.conf snap | /usr/bin/logger -t pyznap-snap

After running pyznap setup as root, edit /etc/pyznap/pyznap.conf as follows:

# Primary pools containing live data, snapshot them but don't retain very long
[zfs-pool/files-lan]
hourly = 24
daily = 1
weekly = 1
monthly = 1
snap = yes
clean = yes
dest = backup-pool/zfs-pool-files-lan-backups

[zfs-pool/container-volumes]
hourly = 24                          
daily = 1
weekly = 1
monthly = 1
snap = yes                          
clean = yes                        
dest = backup-pool/zfs-pool-container-volumes-backups

## Destination Pools, don't take snapshots, only receive them. Longer retention
[backup-pool/zfs-pool-files-lan-backups]
hourly = 48
daily = 7
weekly = 4
monthly = 12
snap = no
clean = yes

[backup-pool/zfs-pool-container-volumes-backups]
hourly = 48
daily = 7
weekly = 4
monthly = 12
snap = no
clean = yes

## A couple of directories in the backup-pool that I rsync stuff to from non-zfs filesystems.
# No hourlies, but otherwise snapshot and retain as if they were a destination.
[backup-pool/manual-backups]
daily = 7
weekly = 4
monthly = 12
snap = yes
clean = yes

[backup-pool/srvr-backup]
daily = 7
weekly = 4
monthly = 12
snap = yes
clean = yes

Now wait for cron to kick in and observe the behavior of pyznap.

Expected Behavior

  1. Pyznap runs every hour on the 0th minute.
  2. The output of Pyznap is logged in /var/log/syslog, tagged with pyznap-snap
  3. Pyznap generates at least one snapshot in every run. Generally more due to multiple filesystems and periodicities.

Actual Behavior

Pyznap DOES run every hour on the 0th minute.

I can verify this because it DOES log output in /var/log/syslog:

Mar  7 19:00:01 srvr CRON[1614]: (root) CMD (/usr/local/bin/pyznap --config /etc/pyznap/pyznap.conf snap | /usr/bin/logger -t pyznap-snap)
Mar  7 19:00:01 srvr pyznap-snap: Mar 07 19:00:01 INFO: Starting pyznap...
Mar  7 19:00:01 srvr pyznap-snap: Mar 07 19:00:01 INFO: Taking snapshots...

The above is the complete output. Note that pyznap takes no action and generates no snapshots. This is the part that violates my expectation. I believe that pyznap should determine that a snapshot is necessary on every run.

As a test, if I now run the exact same command from a root shell (manually, outside of cron, but still piping output to logger), having made no config edits since the cron run, I see that pyznap does now decide that snapshots SHOULD be made... as verified by this output in /var/log/syslog:

Mar  7 19:36:25 srvr pyznap-snap: Mar 07 19:36:25 INFO: Starting pyznap...
Mar  7 19:36:25 srvr pyznap-snap: Mar 07 19:36:25 INFO: Taking snapshots...
Mar  7 19:36:25 srvr pyznap-snap: Mar 07 19:36:25 INFO: Taking snapshot zfs-pool/container-volumes@pyznap_2019-03-07_19:36:25_hourly...
Mar  7 19:36:25 srvr pyznap-snap: Mar 07 19:36:25 INFO: Taking snapshot zfs-pool/files-lan@pyznap_2019-03-07_19:36:25_hourly...
Mar  7 19:36:25 srvr pyznap-snap: Mar 07 19:36:25 INFO: Cleaning snapshots...
Mar  7 19:36:25 srvr pyznap-snap: Mar 07 19:36:25 INFO: Finished successfully...
Mar  7 19:36:25 srvr pyznap-snap: 

This is not just a case of a single missed run. Pyznap has been running every hour for 10 hours today prior to this test, with no config edits at all during this time. Pyznap has generated no snapshots during that period at all, when it should have been creating hourlies for two of the filesystems.

Additional Analysis

Note the timestamps: these runs were taken within the same hour. The previous snapshots for these two filesystems are about 9h 28m old, and were created by a pyznap run from the shell as I was manually testing. Pyznap has run every hour during this period, has issued the same log output each time, and has declined to produce snapshots each time. In fact, neither pyznap snap nor pyznap send have ever taken any action from cron on this system (though logs confirm they each run when expected), yet they work fine from the shell.

Here's the output of zfs list after the most recent successful run from a shell at 19:36 (except for the missing snapshots from 10:00-19:00 today it's pretty much as I expect: some random snapshots from manual runs as I was testing/setting up, and plenty of free space)...

root@srvr:~# zfs list
NAME                                                                               USED  AVAIL  REFER  MOUNTPOINT
backup-pool                                                                       5.08T  1.95T   128K  /backup-pool
backup-pool/manual-backups                                                        66.5G  1.95T  66.5G  /backup-pool/manual-backups
backup-pool/manual-backups@pyznap_2019-03-07_10:08:27_monthly                         0      -  66.5G  -
backup-pool/manual-backups@pyznap_2019-03-07_10:08:28_weekly                          0      -  66.5G  -
backup-pool/manual-backups@pyznap_2019-03-07_10:08:29_daily                           0      -  66.5G  -
backup-pool/srvr-backup                                                           2.58T  1.95T  2.58T  /backup-pool/srvr-backup
backup-pool/srvr-backup@pyznap_2019-03-07_10:08:31_monthly                            0      -  2.58T  -
backup-pool/srvr-backup@pyznap_2019-03-07_10:08:32_weekly                             0      -  2.58T  -
backup-pool/srvr-backup@pyznap_2019-03-07_10:08:33_daily                              0      -  2.58T  -
backup-pool/zfs-pool-container-volumes-backups                                    21.6G  1.95T  21.6G  /backup-pool/zfs-pool-container-volumes-backups
backup-pool/zfs-pool-container-volumes-backups@pyznap_2019-03-06_22:53:11_hourly      0      -  21.6G  -
backup-pool/zfs-pool-files-lan-backups                                            2.41T  1.95T  2.41T  /backup-pool/zfs-pool-files-lan-backups
backup-pool/zfs-pool-files-lan-backups@pyznap_2019-03-06_22:53:11_hourly              0      -  2.41T  -
zfs-pool                                                                          2.47T  2.80T  38.1G  /zfs-pool
zfs-pool/container-volumes                                                        21.7G  2.80T  21.6G  /zfs-pool/container-volumes
zfs-pool/container-volumes@pyznap_2019-03-06_22:53:11_hourly                      55.2M      -  21.6G  -
zfs-pool/container-volumes@pyznap_2019-03-07_09:19:20_hourly                        92K      -  21.6G  -
zfs-pool/container-volumes@pyznap_2019-03-07_10:08:34_monthly                         0      -  21.6G  -
zfs-pool/container-volumes@pyznap_2019-03-07_10:08:36_weekly                          0      -  21.6G  -
zfs-pool/container-volumes@pyznap_2019-03-07_10:08:38_daily                           0      -  21.6G  -
zfs-pool/container-volumes@pyznap_2019-03-07_10:08:39_hourly                          0      -  21.6G  -
zfs-pool/container-volumes@pyznap_2019-03-07_19:36:25_hourly                          0      -  21.6G  -
zfs-pool/files-lan                                                                2.41T  2.80T  2.41T  /zfs-pool/files-lan
zfs-pool/files-lan@pyznap_2019-03-06_22:53:11_hourly                              96.5M      -  2.41T  -
zfs-pool/files-lan@pyznap_2019-03-07_09:19:21_hourly                                  0      -  2.41T  -
zfs-pool/files-lan@pyznap_2019-03-07_10:08:39_monthly                                 0      -  2.41T  -
zfs-pool/files-lan@pyznap_2019-03-07_10:08:39_weekly                                  0      -  2.41T  -
zfs-pool/files-lan@pyznap_2019-03-07_10:08:39_daily                                   0      -  2.41T  -
zfs-pool/files-lan@pyznap_2019-03-07_10:08:39_hourly                                  0      -  2.41T  -
zfs-pool/files-lan@pyznap_2019-03-07_19:36:25_hourly                                  0      -  2.41T  -

Apologies for the giant wall of text in this report, but I'm a bit flummoxed and wanted to give you the best chance of being able to debug it in a single viewing. My first inclination was that I had failed to correctly configure the cron job... but pyznap is so clearly executing on the hour every hour as expected... it's just not taking the expected actions during each run. I don't see any way to get debug output, and am not sure where to investigate next.
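
One debugging step that suggests itself (an assumption, not a confirmed cause): the pipe to logger only captures stdout, so any ERROR lines written to stderr under cron would never reach syslog. Redirecting stderr into the pipe makes them visible:

0 * * * * root /usr/local/bin/pyznap --config /etc/pyznap/pyznap.conf snap 2>&1 | /usr/bin/logger -t pyznap-snap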

From the shell, it's been super neato so far. I love the ease of the pip-install, and the sanoid configuration approach is blissfully simple.

"pyznap send" fails with "Host key verification failed" errors

Ever since 16th July (i.e. immediately after the new versions were released and installed on my machine), my pyznap send job fails every night with the error:

ERROR: Error while connecting to root@backup-nas: Host key verification failed....

My config file has these lines for every dataset:

dest = ssh:22:root@backup-nas:backups/<dataset>
dest_keys = /home/user/.ssh/id_rsa

Nothing has changed in terms of configuration, and I can still SSH successfully to the backup-nas machine using the following command from the terminal (which is what I would expect pyznap to do):

ssh root@backup-nas -i /home/user/.ssh/id_rsa

Sounds like a bug with the new SSH implementation?
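
A possible cause to rule out (an assumption): "Host key verification failed" usually means the host key for backup-nas is missing from the known_hosts file of whichever user actually runs the send job (often root under cron, rather than the interactive user). Accepting the key once as that user should clear it:

# run as the user that executes pyznap send
ssh -i /home/user/.ssh/id_rsa root@backup-nas exit
# or add the key non-interactively
ssh-keyscan backup-nas >> ~/.ssh/known_hosts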

Allow not taking snapshots recursively

It takes snapshots of all children, but I'm using Docker on my server, and taking those snapshots means taking snapshots of all the Docker driver datasets as well.

zroot/e8155809157d4b464768dffea41e070a786dff531fba6647d7c788d98860d038@631710867
zroot/e8155809157d4b464768dffea41e070a786dff531fba6647d7c788d98860d038@pyznap_2018-02-19_18:43:57_monthly
zroot/e8155809157d4b464768dffea41e070a786dff531fba6647d7c788d98860d038@pyznap_2018-02-19_18:44:00_weekly
zroot/e8155809157d4b464768dffea41e070a786dff531fba6647d7c788d98860d038@pyznap_2018-02-19_18:43:54_yearly
zroot/e8155809157d4b464768dffea41e070a786dff531fba6647d7c788d98860d038@pyznap_2018-02-19_18:44:03_daily
zroot/e8155809157d4b464768dffea41e070a786dff531fba6647d7c788d98860d038

This doesn't make sense for me when I only want to back up my zroot volume.

It could be fixed with a new config option like recursively = no.

Thanks for your software!

large sync too slow

Hey,

I've been struggling with this "large" transfer. I have 17TB on my server and want to keep it in sync with another server. The problem is that the sync takes so long that pyznap generates snapshots faster than it can send them, which eventually results in the sync stopping. Most of this data is stale, but it got imported in one go.

I thought I'd disable pyznap and run a snap and a send manually; this worked. After that I reran snap & send, but this fails because there is no "common" snapshot anymore: the weekly got removed and the remote was lagging behind.

Now I have the data on both servers (but no common snapshots). Can I fix this without sending another 17TB over the line?

no common snapshots: what next?

I've been using pyznap for a while, but one of my offsite backups got stale and now pyznap is giving me the "no common snapshots" error. I'd rather not blow away the existing snapshots on the backup drive. What is my next step to tell pyznap to do a full backup anyway?
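
One avenue that avoids a full resend, though only in a narrow case (a sketch with placeholder names; it assumes the source still holds a bookmark of a snapshot that also still exists on the destination):

# look for bookmarks on the source dataset
zfs list -t bookmark tank/data
# incremental send from the bookmark to the newest snapshot, bypassing the missing common snapshot
zfs send -i '#old_weekly' tank/data@pyznap_latest | ssh user@backup zfs receive backup/data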

python3.6 required?

Is python3.6 absolutely required? I'm running a Proxmox host, which has a Debian 9 (stretch) base. Debian does not have a python3.6 package; 3.5 is the latest. Thanks!

issue on running?

Hi,
Currently installed on Proxmox, but I'm getting this error:

root@prometheus4:/media# ./pyznap.conf 
./pyznap.conf: line 14: [rpool/data]: No such file or directory
./pyznap.conf: line 15: frequent: command not found
./pyznap.conf: line 16: snap: command not found
./pyznap.conf: line 17: clean: command not found
root@prometheus4:/media# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                      269G  3.25T    96K  /rpool
rpool/ROOT                1.06G  3.25T    96K  /rpool/ROOT
rpool/ROOT/pve-1          1.06G  3.25T  1.06G  /
rpool/data                 268G  3.25T    96K  /rpool/data
rpool/data/vm-103-disk-0  16.5G  3.25T  16.5G  -
rpool/data/vm-103-disk-1   207G  3.25T   207G  -
rpool/data/vm-104-disk-0  25.8G  3.25T  25.8G  -
rpool/data/vm-105-disk-0  9.22G  3.25T  9.21G  -
rpool/data/vm-116-disk-0  9.61G  3.25T  8.99G  -

These are the steps I did:

apt-get install python3.5
apt-get install python3-pip
pip3 install pyznap
 /usr/local/bin/pyznap setup -p /media/
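
For what it's worth, the transcript shows the config file itself being executed (./pyznap.conf), which is why bash complains about [rpool/data] and frequent. Running the pyznap binary against that config is the intended invocation (a sketch; it assumes the setup -p /media/ step above wrote /media/pyznap.conf):

/usr/local/bin/pyznap --config /media/pyznap.conf snap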

Remote dataset become unmounted after pyznap send

Not sure if it is intentional or not, but the destination datasets on the remote machine become unmounted after every pyznap send. Are there any settings I can use to avoid that?

Source is on Proxmox and destination is on FreeNAS. Source datasets are always mounted, and the destination datasets can be remounted if I reboot FreeNAS (or mount them manually, I guess). My config for testing purposes looks like this:

[testpool]
frequent = 30
snap = yes
clean = yes
dest = ssh:22:[email protected]:WD10P/test
dest_keys = /home/user/.ssh/id_rsa
compress = gzip

Thanks for making such an awesome piece of software!
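
A manual workaround on the receiving side (a sketch, not a pyznap setting; run on the FreeNAS box, using the dataset from the config above):

zfs mount -a           # remount everything that is currently unmounted
zfs mount WD10P/test   # or target the received dataset directly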

Shadow Copy Localtime error?

Hi,
I was looking at #41,
as it helped me configure shadow copies on Samba, which works great. The only issue is that the local time is incorrect. On the snapshot I get this:

  Taking snapshot data@pyznap_2020-01-05_11:46:16_frequent...

and on the Windows side the previous version shows the correct date, but the time is 6:46am; there is a 5-hour difference. Maybe I missed something?

This is my samba conf

[test]
    path = /data
    browseable = yes
    force create mode = 0660
    force directory mode = 0660
    valid users = @"Domain Users"
    read list =
    write list = @"Domain Users"
    admin users =
    vfs objects = acl_xattr full_audit recycle shadow_copy2
#    full_audit:failure = connect opendir disconnect unlink mkdir rmdir open rename
full_audit:prefix = %u|%I|%S
full_audit:failure = connect
full_audit:success = mkdir rename unlink rmdir pwrite pread connect disconnect
full_audit:facility = local5
full_audit:priority = notice
    recycle: inherit_nt_acl = Yes
    recycle: versions = Yes
    recycle: excludedir = /tmp|/var/tmp
    recycle: keeptree = Yes
    recycle: repository = RecycleBin
    recycle: directory_mode = 0700
shadow: snapdir = .zfs/snapshot
shadow: sort = desc
# Specify snapshot name: frequent, hourly, daily... as desired
shadow: format = _%Y-%m-%d_%H:%M:%S
shadow: snapprefix = ^pyznap
shadow: delimiter = _
shadow:localtime = no

Thank you
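
A setting worth double-checking (an assumption about the cause, not a confirmed fix): pyznap's snapshot names appear to use local time, while shadow:localtime = no tells shadow_copy2 to interpret the names as UTC, which would produce exactly this kind of fixed offset. Flipping the option may line the times up:

shadow:localtime = yes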

A mechanism to only clean source snapshots if send was successful.

Firstly, thank you for writing and maintaining pyznap, it's a great tool.

pyznap appears to return 0 even if there was some sort of error sending snapshots, meaning:

pyznap send && pyznap snap --clean

won't do anything useful.

I'd like to avoid getting into the situation where some failure of the transfer process inadvertently leads to all the common source snapshots being cleaned (something I experienced previously with znapzend).

Might it be possible to make pyznap continue to attempt all operations, even if one fails, but then to return an error code if any operations failed? The error code could even be a count of how many operations failed.

This would obviously make manually sending an email on failure easier too, and any other sort of automation that might wrap calling pyznap in a script.

Cheers.
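
Until pyznap itself reports a non-zero exit status, a shell wrapper can approximate this (a sketch; it assumes failures appear as ERROR lines in pyznap's output, as they do in the logs quoted elsewhere on this page, and it reuses the clean command from the example above):

#!/bin/sh
# Hypothetical wrapper: only clean if the send produced no ERROR lines.
out=$(pyznap send 2>&1)
echo "$out"
if echo "$out" | grep -q "ERROR"; then
    exit 1
fi
pyznap snap --clean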

compression during send ?

Perhaps I missed it, but can I set compression "on" during the zfs send to the backup location? (From my tests it looked like this speeds up the transfer significantly here.) Is there an option to do this? (I tried checking in the code, and realized my Python is bad 😄)
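
The config quoted in the "Remote dataset become unmounted after pyznap send" report on this page sets a per-dataset compress option, which may be what is being asked for here; a trimmed sketch (host and dataset names are placeholders):

[tank/data]
snap = yes
clean = yes
dest = ssh:22:user@backuphost:backup/data
compress = gzip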

Cron snap or send does not remount some pools after finishing

Hi there,
I just installed and configured pyznap (great tool, btw!), and a question came to mind.
My cronjob looks like this:
*/15 * * * * root { /path/to/pyznap snap ; /path/to/pyznap send ; } >> /var/log/pyznap.log
Is it really necessary to do both snap and send when "dest" is another local HDD?

And then there is also something strange happening.
I made a new ZFS subvolume, zpool-backup/rpool, which is the "dest=" in pyznap.conf.
Before pyznap runs, I still have the .zfs folder with "share" and "snapshot" folders in it.
After pyznap's first run, the entire .zfs folder is missing.

What am I missing here?! On the source pool the .zfs dir is present.
Also, on another pool, which I back up to zpool-backup/nvme, this folder persists and does not get deleted.

Please help 😄

edit: It seems like it cannot mount the dest rpool again after finishing the snapshot. OK, I can live with that,
but it does not do that to the other backup destinations.
Do you have any idea why that is?

How do you control when snapshots are taken

Hi
Right now, with my home-brew scripts, I schedule snapshots of VMs at night, while the filesystem datasets are snapshotted hourly during business hours.

I replicate file system datasets during the day, but the VMs are all synced at night. Can I configure that with pyznap?

Also, can I configure the snapshot name?

Thanks,
Geoff
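
One way to approximate this with pyznap as it stands (a sketch; the --config flag mirrors other reports on this page, but the file paths, schedules, and split into two config files are assumptions): keep one config per schedule and drive each from its own cron entry.

# /etc/cron.d/pyznap (hypothetical): filesystems hourly during business hours, VMs at night
0 8-18 * * 1-5  root /usr/local/bin/pyznap --config /etc/pyznap/filesystems.conf snap
30 8-18 * * 1-5 root /usr/local/bin/pyznap --config /etc/pyznap/filesystems.conf send
0 1 * * *       root /usr/local/bin/pyznap --config /etc/pyznap/vms.conf snap
30 1 * * *      root /usr/local/bin/pyznap --config /etc/pyznap/vms.conf send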

Invalid id_rsa file

I have been at it all day, and I finally installed pyznap in a virtualenv. When I run it, I get this error message:

ERROR: /root/.ssh/id_rsa is not a valid ssh key file...

I am trying to replicate a pool.

I can SSH manually between the two machines, even using keys. The weird thing is that no matter what setting I change, the error message doesn't change; for example, even if I misspell the path on purpose, say "/rwaoot/.ssh/id_rsa".

I have even regenerated the keys.

Any insights? Would be greatly appreciated.

Thank you!
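
One cause worth ruling out (an assumption): pyznap loads keys through paramiko, and some paramiko versions cannot parse private keys stored in the newer OpenSSH format (header "BEGIN OPENSSH PRIVATE KEY"). Converting the key to PEM format in place is a quick test:

head -n 1 /root/.ssh/id_rsa                  # check which format the key is in
ssh-keygen -p -m PEM -f /root/.ssh/id_rsa    # rewrite the key in PEM format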

Pull snapshots from remote source to local destination

According to the example configuration, you can create snapshots on a remote host, but not send them to another destination.

Would it be possible to add the option to 'pull down' snapshots from the remote host to the local one after doing the snapshots?

pyznap send generates warnings

Just testing out this tool, nice work. I received some warnings on CentOS 7, Python 3.6.

pyznap send
Feb 21 10:26:55 INFO: Starting pyznap...
Feb 21 10:26:55 INFO: Sending snapshots...
/usr/lib/python3.6/site-packages/paramiko/ecdsakey.py:164: CryptographyDeprecationWarning: Support for unsafe construction of public numbers from encoded data will be removed in a future version. Please use EllipticCurvePublicKey.from_encoded_point
  self.ecdsa_curve.curve_class(), pointinfo
/usr/lib/python3.6/site-packages/paramiko/kex_ecdh_nist.py:39: CryptographyDeprecationWarning: encode_point has been deprecated on EllipticCurvePublicNumbers and will be removed in a future version. Please use EllipticCurvePublicKey.public_bytes to obtain both compressed and uncompressed point encoding.
  m.add_string(self.Q_C.public_numbers().encode_point())
/usr/lib/python3.6/site-packages/paramiko/kex_ecdh_nist.py:96: CryptographyDeprecationWarning: Support for unsafe construction of public numbers from encoded data will be removed in a future version. Please use EllipticCurvePublicKey.from_encoded_point
  self.curve, Q_S_bytes
/usr/lib/python3.6/site-packages/paramiko/kex_ecdh_nist.py:111: CryptographyDeprecationWarning: encode_point has been deprecated on EllipticCurvePublicNumbers and will be removed in a future version. Please use EllipticCurvePublicKey.public_bytes to obtain both compressed and uncompressed point encoding.
  hm.add_string(self.Q_C.public_numbers().encode_point())
Traceback (most recent call last):
  File "/usr/bin/pyznap", line 11, in <module>
    sys.exit(main())
  File "/usr/lib/python3.6/site-packages/pyznap/main.py", line 120, in main
    return _main()
  File "/usr/lib/python3.6/site-packages/pyznap/main.py", line 103, in _main
    send_config(config)
  File "/usr/lib/python3.6/site-packages/pyznap/send.py", line 215, in send_config
    zfs.open(dest_name, ssh=ssh)
  File "/usr/lib/python3.6/site-packages/pyznap/pyzfs.py", line 85, in open
    type = findprops(name, ssh=ssh, max_depth=0, props=['type'])[name]['type'][0]
KeyError: '/winky/brick1'

Current config for reference :

# default settings parent
[tinky]
        # every $cron runtime (~15 minutes)
        frequent = 2
        hourly = 6
        daily = 3
        weekly = 1
        monthly = 0
        yearly = 0
        # take snapshots ?
        snap = no
        # clean snapshots ?
        clean = no

[tinky/brick1]
        snap = yes
        clean = yes
        dest = ssh:22:root@winky:winky/brick1
        dest_keys = /root/.ssh/id_rsa_winky

(edit) It now works; I was using the path instead of the ZFS fs.

Pre and Post sync/send commands

I'd like to run things just before and after snapshots (stop a database, pause a VM, or rsync something), and just before and after sending (ssh into the destination, lock/unlock luks encrypted partitions, and import/export a zfs pool).

The scripts that do the jobs are easy enough. Can there be a hook to call those scripts?
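
Until hooks exist, a thin wrapper script invoked from cron can approximate them (a sketch; the hook script paths are placeholders for whatever the existing scripts are called):

#!/bin/sh
# Hypothetical wrapper: run site-specific pre/post hooks around pyznap.
/usr/local/sbin/pre-snap.sh    # e.g. quiesce the database or pause the VM
pyznap snap
/usr/local/sbin/post-snap.sh   # resume

/usr/local/sbin/pre-send.sh    # e.g. unlock the LUKS device and import the pool on the destination
pyznap send
/usr/local/sbin/post-send.sh   # export the pool and re-lock it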

Graceful way to handle configparser requirement on Python 3.6+?

I package pyznap for Arch Linux in the AUR, but I don't really use it. While updating to 1.2.0, I realized I couldn't actually run it... the output below is the result. But our Python 3.7.3 has configparser built in; I can run python and then import configparser with no problem.

Eventually, I settled on just using sed to remove the line from your requires.txt after the python setup.py install, but it seems like there has to be a smarter way.

0 ✓ fryfrog@apollo ~ $ pyznap
Traceback (most recent call last):
  File "/usr/bin/pyznap", line 6, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3241, in <module>
    @_call_aside
  File "/usr/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3225, in _call_aside
    f(*args, **kwargs)
  File "/usr/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3254, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/usr/lib/python3.7/site-packages/pkg_resources/__init__.py", line 583, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python3.7/site-packages/pkg_resources/__init__.py", line 900, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python3.7/site-packages/pkg_resources/__init__.py", line 786, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'configparser>=3.5.0' distribution was not found and is required by pyznap
1 ✗ fryfrog@apollo ~ $ cat $( which pyznap )
#!/usr/bin/python
# EASY-INSTALL-ENTRY-SCRIPT: 'pyznap==1.2.0','console_scripts','pyznap'
__requires__ = 'pyznap==1.2.0'
import re
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
    sys.exit(
        load_entry_point('pyznap==1.2.0', 'console_scripts', 'pyznap')()
    )
0 ✓ fryfrog@apollo ~ $ python
Python 3.7.3 (default, Jun 24 2019, 04:54:02)
[GCC 9.1.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import configparser
>>>
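
One candidate for a smarter way (a sketch, not the project's actual fix): guard the dependency with a PEP 508 environment marker so the PyPI configparser backport is only required on interpreters that lack the stdlib module, which would make the generated requires.txt line look like:

configparser>=3.5.0; python_version < "3.0"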

deleting from remote fails : has dependent clones

I logged the output of pyznap send and pyznap snap:

Mar 04 13:15:02 INFO: Starting pyznap...
Mar 04 13:15:02 INFO: Sending snapshots...
Mar 04 13:15:03 INFO: Updating root@winky:winky/brick1 with recent snapshot tinky/brick1@pyznap_2019-03-04_13:15:02_frequent (~1.8K)...
Mar 04 13:15:04 INFO: root@winky:winky/brick1 is up to date...
Mar 04 13:15:04 INFO: Finished successfully...

04 INFO: Deleting snapshot root@winky:winky/brick1@pyznap_2019-03-04_12:45:01_frequent...
Mar 04 13:15:04 ERROR: Error while deleting snapshot root@winky:winky/brick1@pyznap_2019-03-04_12:45:01_frequent: 'cannot destroy 'winky/brick1@pyznap_2019-03-04_12:45:01_frequent': snapshot has dependent clones
use '-R' to destroy the following datasets:
winky/brick1/%recv'...
Mar 04 13:15:04 INFO: Deleting snapshot tinky/brick1@pyznap_2019-03-04_12:45:01_frequent...
Mar 04 13:15:04 INFO: Finished successfully...

/etc/pyznap/pyznap.conf

# default settings parent
[tinky]
        # every $cron runtime (~15 minutes)
        frequent = 2
        hourly = 6
        daily = 3
        weekly = 1
        monthly = 0
        yearly = 0
        # take snapshots ?
        snap = no
        # clean snapshots ?
        clean = no

[tinky/brick1]
        snap = yes
        clean = yes
        dest = ssh:22:root@winky:winky/brick1
        dest_keys = /root/.ssh/id_rsa_winky

# cleanup on backup
[ssh:22:root@winky:winky/brick1]
        frequent = 2
        hourly = 6
        daily = 3
        weekly = 1
        key = /root/.ssh/id_rsa_winky
        clean = yes

the cron :

*/15 * * * * root echo "pyznap send\n" > /var/log/pyznap && /usr/bin/pyznap send > /var/log/pyznap
*/15 * * * * root echo "pyznap snap\n" > /var/log/pyznap && /usr/bin/pyznap snap > /var/log/pyznap

The idea is that we take snapshots locally and make sure the data is backed up to the backup location. I must be missing something, but I'm unsure what.
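
A possible explanation for the dependent clone (an assumption): a winky/brick1/%recv child usually indicates an interrupted receive, and it keeps a hold on the snapshot it was cloned from. Aborting the partial receive state on the backup host may let the old snapshot be destroyed again:

# on the destination (winky)
zfs receive -A winky/brick1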

send snapshots cleaning

What would be the easiest way to clean up "old snapshots" on a backup location? Can I use pyznap on the remote location to clean up?
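
The config quoted in the previous report already shows one way to do this from the local side: give the backup destination its own ssh: section with a key and clean = yes, so the local pyznap snap run can prune it over SSH. A trimmed sketch of that pattern:

[ssh:22:root@winky:winky/brick1]
        frequent = 2
        hourly = 6
        daily = 3
        weekly = 1
        key = /root/.ssh/id_rsa_winky
        clean = yes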

Integrate tests with setuptools

Hi,

I noticed that some Python programs have the ability to select which tests to run.
Could you elaborate on whether this is possible with your program?

The test command I'm running is:
python3.6 setup.py test

and something that is of use as example:
python3.6 setup.py test --pytest-args "-k 'not test_integration'"
python3.6 setup.py test -a "--ignore=tests/test_reader.py"

This has the ability to disable tests. Do you know if this is possible?

I noticed that it fails with
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help

Could you take a look at https://docs.pytest.org/en/latest/goodpractices.html for integration with setuptools? Thanks!
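
A workaround that sidesteps the setup.py integration (a sketch; it assumes pytest is installed for the same interpreter): invoke pytest directly and select or skip tests with -k or --ignore.

python3.6 -m pytest -k "not test_integration"
python3.6 -m pytest --ignore=tests/test_functions_ssh.py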

Tests assume the existence of id_rsa

I'm helping to package pyznap for FreeBSD at the moment and decided to run the test suite. Here are the errors I get (see the details below for the whole log).


running pytest
running egg_info
writing pyznap.egg-info/PKG-INFO
writing dependency_links to pyznap.egg-info/dependency_links.txt
writing entry points to pyznap.egg-info/entry_points.txt
writing requirements to pyznap.egg-info/requires.txt
writing top-level names to pyznap.egg-info/top_level.txt
reading manifest file 'pyznap.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'pyznap.egg-info/SOURCES.txt'
running build_ext
========================================================= test session starts ==========================================================
platform freebsd11 -- Python 3.6.6, pytest-3.6.4, py-1.6.0, pluggy-0.6.0 -- /usr/local/bin/python3.6
cachedir: .pytest_cache
rootdir: /wrkdirs/usr/ports/sysutils/py-pyznap/work-py36/pyznap-1.0.2, inifile: setup.cfg
collected 38 items / 18 deselected

tests/test_functions.py::TestUtils::test_read_config PASSED [ 5%]
tests/test_functions.py::TestUtils::test_parse_name PASSED [ 10%]
tests/test_functions.py::TestSnapshot::test_take_snapshot ERROR [ 15%]
tests/test_functions.py::TestSnapshot::test_clean_snapshot ERROR [ 20%]
tests/test_functions.py::TestSnapshot::test_take_snapshot_recursive ERROR [ 25%]
tests/test_functions.py::TestSnapshot::test_clean_recursive ERROR [ 30%]
tests/test_functions.py::TestSending::test_send_full ERROR [ 35%]
tests/test_functions.py::TestSending::test_send_incremental ERROR [ 40%]
tests/test_functions.py::TestSending::test_send_delete_snapshot ERROR [ 45%]
tests/test_functions.py::TestSending::test_send_delete_sub ERROR [ 50%]
tests/test_functions.py::TestSending::test_send_delete_old ERROR [ 55%]
tests/test_functions_ssh.py::TestSnapshot::test_take_snapshot ERROR [ 60%]
tests/test_functions_ssh.py::TestSnapshot::test_clean_snapshot ERROR [ 65%]
tests/test_functions_ssh.py::TestSnapshot::test_take_snapshot_recursive ERROR [ 70%]
tests/test_functions_ssh.py::TestSnapshot::test_clean_recursive ERROR [ 75%]
tests/test_functions_ssh.py::TestSending::test_send_full ERROR [ 80%]
tests/test_functions_ssh.py::TestSending::test_send_incremental ERROR [ 85%]
tests/test_functions_ssh.py::TestSending::test_send_delete_snapshot ERROR [ 90%]
tests/test_functions_ssh.py::TestSending::test_send_delete_sub ERROR [ 95%]
tests/test_functions_ssh.py::TestSending::test_send_delete_old ERROR [100%]

================================================================ ERRORS ================================================================
__________________________________________ ERROR at setup of TestSnapshot.test_take_snapshot ___________________________________________

self = , func = <function call_runtest_hook.. at 0x808d86c80>, when = 'setup'
treat_keyboard_interrupt_as_exception = False

def __init__(self, func, when, treat_keyboard_interrupt_as_exception=False):
    #: context of invocation: one of "setup", "call",
    #: "teardown", "memocollect"
    self.when = when
    self.start = time()
    try:
      self.result = func()

/usr/local/lib/python3.6/site-packages/_pytest/runner.py:199:


/usr/local/lib/python3.6/site-packages/_pytest/runner.py:181: in
lambda: ihook(item=item, **kwds),
/usr/local/lib/python3.6/site-packages/pluggy/init.py:617: in call
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/init.py:222: in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/init.py:216: in
firstresult=hook.spec_opts.get('firstresult'),
/usr/local/lib/python3.6/site-packages/_pytest/runner.py:102: in pytest_runtest_setup
item.session._setupstate.prepare(item)
/usr/local/lib/python3.6/site-packages/_pytest/runner.py:562: in prepare
col.setup()
/usr/local/lib/python3.6/site-packages/_pytest/python.py:1355: in setup
fixtures.fillfixtures(self)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:270: in fillfixtures
request._fillfixtures()
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:409: in _fillfixtures
item.funcargs[argname] = self.getfixturevalue(argname)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:453: in getfixturevalue
return self._get_active_fixturedef(argname).cached_result[0]
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:476: in _get_active_fixturedef
self._compute_fixture_value(fixturedef)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:547: in _compute_fixture_value
fixturedef.execute(request=subrequest)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:834: in execute
return hook.pytest_fixture_setup(fixturedef=self, request=request)
/usr/local/lib/python3.6/site-packages/pluggy/init.py:617: in call
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/init.py:222: in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/init.py:216: in
firstresult=hook.spec_opts.get('firstresult'),
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:868: in pytest_fixture_setup
result = call_fixture_func(fixturefunc, request, kwargs)


fixturefunc = <function zpools at 0x8079189d8>, request = <SubRequest 'zpools' for <Function 'test_take_snapshot'>>, kwargs = {}

def call_fixture_func(fixturefunc, request, kwargs):
    yieldctx = is_generator(fixturefunc)
    if yieldctx:
        it = fixturefunc(**kwargs)
      res = next(it)

E StopIteration

/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:736: StopIteration
-------------------------------------------------------- Captured stderr setup ---------------------------------------------------------
internal error: failed to initialize ZFS library
Oct 11 15:51:00 ERROR: Command '['/sbin/zpool', 'create', 'pyznap_test_source', '/tmp/tmpqqlbpn_d']' returned non-zero exit status 1.
---------------------------------------------------------- Captured log setup ----------------------------------------------------------
test_functions.py 59 ERROR Command '['/sbin/zpool', 'create', 'pyznap_test_source', '/tmp/tmpqqlbpn_d']' returned non-zero exit status 1.
__________________________________________ ERROR at setup of TestSnapshot.test_clean_snapshot __________________________________________

self = , func = <function call_runtest_hook.. at 0x808ea08c8>, when = 'setup'
treat_keyboard_interrupt_as_exception = False

def __init__(self, func, when, treat_keyboard_interrupt_as_exception=False):
    #: context of invocation: one of "setup", "call",
    #: "teardown", "memocollect"
    self.when = when
    self.start = time()
    try:
      self.result = func()

/usr/local/lib/python3.6/site-packages/_pytest/runner.py:199:


/usr/local/lib/python3.6/site-packages/_pytest/runner.py:181: in
lambda: ihook(item=item, **kwds),
/usr/local/lib/python3.6/site-packages/pluggy/init.py:617: in call
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/init.py:222: in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/init.py:216: in
firstresult=hook.spec_opts.get('firstresult'),
/usr/local/lib/python3.6/site-packages/_pytest/runner.py:102: in pytest_runtest_setup
item.session._setupstate.prepare(item)
/usr/local/lib/python3.6/site-packages/_pytest/runner.py:562: in prepare
col.setup()
/usr/local/lib/python3.6/site-packages/_pytest/python.py:1355: in setup
fixtures.fillfixtures(self)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:270: in fillfixtures
request._fillfixtures()
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:409: in _fillfixtures
item.funcargs[argname] = self.getfixturevalue(argname)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:453: in getfixturevalue
return self._get_active_fixturedef(argname).cached_result[0]
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:476: in _get_active_fixturedef
self._compute_fixture_value(fixturedef)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:547: in _compute_fixture_value
fixturedef.execute(request=subrequest)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:825: in execute
py.builtin._reraise(*err)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:868: in pytest_fixture_setup
result = call_fixture_func(fixturefunc, request, kwargs)


fixturefunc = <function zpools at 0x8079189d8>, request = <SubRequest 'zpools' for <Function 'test_take_snapshot'>>, kwargs = {}

def call_fixture_func(fixturefunc, request, kwargs):
    yieldctx = is_generator(fixturefunc)
    if yieldctx:
        it = fixturefunc(**kwargs)
      res = next(it)

E StopIteration

/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:736: StopIteration
_____________________________________ ERROR at setup of TestSnapshot.test_take_snapshot_recursive ______________________________________

self = , func = <function call_runtest_hook.. at 0x808ea0950>, when = 'setup'
treat_keyboard_interrupt_as_exception = False

def __init__(self, func, when, treat_keyboard_interrupt_as_exception=False):
    #: context of invocation: one of "setup", "call",
    #: "teardown", "memocollect"
    self.when = when
    self.start = time()
    try:
      self.result = func()

/usr/local/lib/python3.6/site-packages/_pytest/runner.py:199:


/usr/local/lib/python3.6/site-packages/_pytest/runner.py:181: in
lambda: ihook(item=item, **kwds),
/usr/local/lib/python3.6/site-packages/pluggy/init.py:617: in call
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/init.py:222: in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/init.py:216: in
firstresult=hook.spec_opts.get('firstresult'),
/usr/local/lib/python3.6/site-packages/_pytest/runner.py:102: in pytest_runtest_setup
item.session._setupstate.prepare(item)
/usr/local/lib/python3.6/site-packages/_pytest/runner.py:562: in prepare
col.setup()
/usr/local/lib/python3.6/site-packages/_pytest/python.py:1355: in setup
fixtures.fillfixtures(self)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:270: in fillfixtures
request._fillfixtures()
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:409: in _fillfixtures
item.funcargs[argname] = self.getfixturevalue(argname)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:453: in getfixturevalue
return self._get_active_fixturedef(argname).cached_result[0]
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:476: in _get_active_fixturedef
self._compute_fixture_value(fixturedef)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:547: in _compute_fixture_value
fixturedef.execute(request=subrequest)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:825: in execute
py.builtin._reraise(*err)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:868: in pytest_fixture_setup
result = call_fixture_func(fixturefunc, request, kwargs)


fixturefunc = <function zpools at 0x8079189d8>, request = <SubRequest 'zpools' for <Function 'test_take_snapshot'>>, kwargs = {}

def call_fixture_func(fixturefunc, request, kwargs):
    yieldctx = is_generator(fixturefunc)
    if yieldctx:
        it = fixturefunc(**kwargs)
      res = next(it)

E StopIteration

/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:736: StopIteration
_________________________________________ ERROR at setup of TestSnapshot.test_clean_recursive __________________________________________

self = , func = <function call_runtest_hook.. at 0x808e79d08>, when = 'setup'
treat_keyboard_interrupt_as_exception = False

def __init__(self, func, when, treat_keyboard_interrupt_as_exception=False):
    #: context of invocation: one of "setup", "call",
    #: "teardown", "memocollect"
    self.when = when
    self.start = time()
    try:
      self.result = func()

/usr/local/lib/python3.6/site-packages/_pytest/runner.py:199:


/usr/local/lib/python3.6/site-packages/_pytest/runner.py:181: in
lambda: ihook(item=item, **kwds),
/usr/local/lib/python3.6/site-packages/pluggy/init.py:617: in call
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/init.py:222: in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/init.py:216: in
firstresult=hook.spec_opts.get('firstresult'),
/usr/local/lib/python3.6/site-packages/_pytest/runner.py:102: in pytest_runtest_setup
item.session._setupstate.prepare(item)
/usr/local/lib/python3.6/site-packages/_pytest/runner.py:562: in prepare
col.setup()
/usr/local/lib/python3.6/site-packages/_pytest/python.py:1355: in setup
fixtures.fillfixtures(self)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:270: in fillfixtures
request._fillfixtures()
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:409: in _fillfixtures
item.funcargs[argname] = self.getfixturevalue(argname)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:453: in getfixturevalue
return self._get_active_fixturedef(argname).cached_result[0]
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:476: in _get_active_fixturedef
self._compute_fixture_value(fixturedef)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:547: in _compute_fixture_value
fixturedef.execute(request=subrequest)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:825: in execute
py.builtin._reraise(*err)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:868: in pytest_fixture_setup
result = call_fixture_func(fixturefunc, request, kwargs)


fixturefunc = <function zpools at 0x8079189d8>, request = <SubRequest 'zpools' for <Function 'test_take_snapshot'>>, kwargs = {}

def call_fixture_func(fixturefunc, request, kwargs):
    yieldctx = is_generator(fixturefunc)
    if yieldctx:
        it = fixturefunc(**kwargs)
      res = next(it)

E StopIteration

/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:736: StopIteration
_____________________________________________ ERROR at setup of TestSending.test_send_full _____________________________________________

self = , func = <function call_runtest_hook.. at 0x807904b70>, when = 'setup'
treat_keyboard_interrupt_as_exception = False

def __init__(self, func, when, treat_keyboard_interrupt_as_exception=False):
    #: context of invocation: one of "setup", "call",
    #: "teardown", "memocollect"
    self.when = when
    self.start = time()
    try:
      self.result = func()

/usr/local/lib/python3.6/site-packages/_pytest/runner.py:199:


/usr/local/lib/python3.6/site-packages/_pytest/runner.py:181: in
lambda: ihook(item=item, **kwds),
/usr/local/lib/python3.6/site-packages/pluggy/init.py:617: in call
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/init.py:222: in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/init.py:216: in
firstresult=hook.spec_opts.get('firstresult'),
/usr/local/lib/python3.6/site-packages/_pytest/runner.py:102: in pytest_runtest_setup
item.session._setupstate.prepare(item)
/usr/local/lib/python3.6/site-packages/_pytest/runner.py:562: in prepare
col.setup()
/usr/local/lib/python3.6/site-packages/_pytest/python.py:1355: in setup
fixtures.fillfixtures(self)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:270: in fillfixtures
request._fillfixtures()
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:409: in _fillfixtures
item.funcargs[argname] = self.getfixturevalue(argname)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:453: in getfixturevalue
return self._get_active_fixturedef(argname).cached_result[0]
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:476: in _get_active_fixturedef
self._compute_fixture_value(fixturedef)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:547: in _compute_fixture_value
fixturedef.execute(request=subrequest)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:825: in execute
py.builtin._reraise(*err)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:868: in pytest_fixture_setup
result = call_fixture_func(fixturefunc, request, kwargs)


fixturefunc = <function zpools at 0x8079189d8>, request = <SubRequest 'zpools' for <Function 'test_take_snapshot'>>, kwargs = {}

def call_fixture_func(fixturefunc, request, kwargs):
    yieldctx = is_generator(fixturefunc)
    if yieldctx:
        it = fixturefunc(**kwargs)
      res = next(it)

E StopIteration

/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:736: StopIteration
_________________________________________ ERROR at setup of TestSending.test_send_incremental __________________________________________

self = , func = <function call_runtest_hook.. at 0x808e798c8>, when = 'setup'
treat_keyboard_interrupt_as_exception = False

def __init__(self, func, when, treat_keyboard_interrupt_as_exception=False):
    #: context of invocation: one of "setup", "call",
    #: "teardown", "memocollect"
    self.when = when
    self.start = time()
    try:
      self.result = func()

/usr/local/lib/python3.6/site-packages/_pytest/runner.py:199:


/usr/local/lib/python3.6/site-packages/_pytest/runner.py:181: in
lambda: ihook(item=item, **kwds),
/usr/local/lib/python3.6/site-packages/pluggy/init.py:617: in call
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/init.py:222: in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/init.py:216: in
firstresult=hook.spec_opts.get('firstresult'),
/usr/local/lib/python3.6/site-packages/_pytest/runner.py:102: in pytest_runtest_setup
item.session._setupstate.prepare(item)
/usr/local/lib/python3.6/site-packages/_pytest/runner.py:562: in prepare
col.setup()
/usr/local/lib/python3.6/site-packages/_pytest/python.py:1355: in setup
fixtures.fillfixtures(self)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:270: in fillfixtures
request._fillfixtures()
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:409: in _fillfixtures
item.funcargs[argname] = self.getfixturevalue(argname)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:453: in getfixturevalue
return self._get_active_fixturedef(argname).cached_result[0]
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:476: in _get_active_fixturedef
self._compute_fixture_value(fixturedef)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:547: in _compute_fixture_value
fixturedef.execute(request=subrequest)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:825: in execute
py.builtin._reraise(*err)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:868: in pytest_fixture_setup
result = call_fixture_func(fixturefunc, request, kwargs)


fixturefunc = <function zpools at 0x8079189d8>, request = <SubRequest 'zpools' for <Function 'test_take_snapshot'>>, kwargs = {}

def call_fixture_func(fixturefunc, request, kwargs):
    yieldctx = is_generator(fixturefunc)
    if yieldctx:
        it = fixturefunc(**kwargs)
      res = next(it)

E StopIteration

/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:736: StopIteration
_______________________________________ ERROR at setup of TestSending.test_send_delete_snapshot ________________________________________

self = , func = <function call_runtest_hook.. at 0x808de9bf8>, when = 'setup'
treat_keyboard_interrupt_as_exception = False

def __init__(self, func, when, treat_keyboard_interrupt_as_exception=False):
    #: context of invocation: one of "setup", "call",
    #: "teardown", "memocollect"
    self.when = when
    self.start = time()
    try:
      self.result = func()

/usr/local/lib/python3.6/site-packages/_pytest/runner.py:199:


/usr/local/lib/python3.6/site-packages/_pytest/runner.py:181: in
lambda: ihook(item=item, **kwds),
/usr/local/lib/python3.6/site-packages/pluggy/init.py:617: in call
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/init.py:222: in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/init.py:216: in
firstresult=hook.spec_opts.get('firstresult'),
/usr/local/lib/python3.6/site-packages/_pytest/runner.py:102: in pytest_runtest_setup
item.session._setupstate.prepare(item)
/usr/local/lib/python3.6/site-packages/_pytest/runner.py:562: in prepare
col.setup()
/usr/local/lib/python3.6/site-packages/_pytest/python.py:1355: in setup
fixtures.fillfixtures(self)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:270: in fillfixtures
request._fillfixtures()
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:409: in _fillfixtures
item.funcargs[argname] = self.getfixturevalue(argname)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:453: in getfixturevalue
return self._get_active_fixturedef(argname).cached_result[0]
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:476: in _get_active_fixturedef
self._compute_fixture_value(fixturedef)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:547: in _compute_fixture_value
fixturedef.execute(request=subrequest)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:825: in execute
py.builtin._reraise(*err)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:868: in pytest_fixture_setup
result = call_fixture_func(fixturefunc, request, kwargs)


fixturefunc = <function zpools at 0x8079189d8>, request = <SubRequest 'zpools' for <Function 'test_take_snapshot'>>, kwargs = {}

def call_fixture_func(fixturefunc, request, kwargs):
    yieldctx = is_generator(fixturefunc)
    if yieldctx:
        it = fixturefunc(**kwargs)
      res = next(it)

E StopIteration

/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:736: StopIteration
__________________________________________ ERROR at setup of TestSending.test_send_delete_sub __________________________________________

self = , func = <function call_runtest_hook.. at 0x808de9840>, when = 'setup'
treat_keyboard_interrupt_as_exception = False

def __init__(self, func, when, treat_keyboard_interrupt_as_exception=False):
    #: context of invocation: one of "setup", "call",
    #: "teardown", "memocollect"
    self.when = when
    self.start = time()
    try:
      self.result = func()

/usr/local/lib/python3.6/site-packages/_pytest/runner.py:199:


/usr/local/lib/python3.6/site-packages/_pytest/runner.py:181: in
lambda: ihook(item=item, **kwds),
/usr/local/lib/python3.6/site-packages/pluggy/__init__.py:617: in __call__
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/__init__.py:222: in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/__init__.py:216: in
firstresult=hook.spec_opts.get('firstresult'),
/usr/local/lib/python3.6/site-packages/_pytest/runner.py:102: in pytest_runtest_setup
item.session._setupstate.prepare(item)
/usr/local/lib/python3.6/site-packages/_pytest/runner.py:562: in prepare
col.setup()
/usr/local/lib/python3.6/site-packages/_pytest/python.py:1355: in setup
fixtures.fillfixtures(self)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:270: in fillfixtures
request._fillfixtures()
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:409: in _fillfixtures
item.funcargs[argname] = self.getfixturevalue(argname)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:453: in getfixturevalue
return self._get_active_fixturedef(argname).cached_result[0]
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:476: in _get_active_fixturedef
self._compute_fixture_value(fixturedef)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:547: in _compute_fixture_value
fixturedef.execute(request=subrequest)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:825: in execute
py.builtin._reraise(*err)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:868: in pytest_fixture_setup
result = call_fixture_func(fixturefunc, request, kwargs)


fixturefunc = <function zpools at 0x8079189d8>, request = <SubRequest 'zpools' for <Function 'test_take_snapshot'>>, kwargs = {}

def call_fixture_func(fixturefunc, request, kwargs):
    yieldctx = is_generator(fixturefunc)
    if yieldctx:
        it = fixturefunc(**kwargs)
      res = next(it)

E StopIteration

/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:736: StopIteration
__________________________________________ ERROR at setup of TestSending.test_send_delete_old __________________________________________

self = , func = <function call_runtest_hook.. at 0x808de9d08>, when = 'setup'
treat_keyboard_interrupt_as_exception = False

def __init__(self, func, when, treat_keyboard_interrupt_as_exception=False):
    #: context of invocation: one of "setup", "call",
    #: "teardown", "memocollect"
    self.when = when
    self.start = time()
    try:
      self.result = func()

/usr/local/lib/python3.6/site-packages/_pytest/runner.py:199:


/usr/local/lib/python3.6/site-packages/_pytest/runner.py:181: in
lambda: ihook(item=item, **kwds),
/usr/local/lib/python3.6/site-packages/pluggy/__init__.py:617: in __call__
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/__init__.py:222: in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/__init__.py:216: in
firstresult=hook.spec_opts.get('firstresult'),
/usr/local/lib/python3.6/site-packages/_pytest/runner.py:102: in pytest_runtest_setup
item.session._setupstate.prepare(item)
/usr/local/lib/python3.6/site-packages/_pytest/runner.py:562: in prepare
col.setup()
/usr/local/lib/python3.6/site-packages/_pytest/python.py:1355: in setup
fixtures.fillfixtures(self)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:270: in fillfixtures
request._fillfixtures()
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:409: in _fillfixtures
item.funcargs[argname] = self.getfixturevalue(argname)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:453: in getfixturevalue
return self._get_active_fixturedef(argname).cached_result[0]
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:476: in _get_active_fixturedef
self._compute_fixture_value(fixturedef)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:547: in _compute_fixture_value
fixturedef.execute(request=subrequest)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:825: in execute
py.builtin._reraise(*err)
/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:868: in pytest_fixture_setup
result = call_fixture_func(fixturefunc, request, kwargs)


fixturefunc = <function zpools at 0x8079189d8>, request = <SubRequest 'zpools' for <Function 'test_take_snapshot'>>, kwargs = {}

def call_fixture_func(fixturefunc, request, kwargs):
    yieldctx = is_generator(fixturefunc)
    if yieldctx:
        it = fixturefunc(**kwargs)
      res = next(it)

E StopIteration

/usr/local/lib/python3.6/site-packages/_pytest/fixtures.py:736: StopIteration
__________________________________________ ERROR at setup of TestSnapshot.test_take_snapshot ___________________________________________

@pytest.fixture(scope='module')
def zpools():
    """Creates two temporary zpools to be called from test functions, source is local and dest on
    remote ssh location. Yields the two pool names and destroys them after testing."""

    zpool = '/sbin/zpool'
    pool0 = 'pyznap_test_source'
    pool1 = 'pyznap_test_dest'

    sftp_filename = '/tmp/' + randomword(10)

    # ssh arguments for zfs functions
  ssh = open_ssh(USER, HOST, port=PORT, key=KEY)

tests/test_functions_ssh.py:56:


user = 'root', host = '127.0.0.1', key = '/root/.ssh/id_rsa', port = 22

def open_ssh(user, host, key=None, port=22):
    """Opens an ssh connection to host.

    Parameters:
    ----------
    user : {str}
        Username to use
    host : {str}
        Host to connect to
    key : {str}, optional
        Path to ssh keyfile (the default is None, meaning the standard location
        '~/.ssh/id_rsa' will be checked)
    port : {int}, optional
        Port number to connect to (the default is 22)

    Raises
    ------
    FileNotFoundError
        If keyfile does not exist
    SSHException
        General exception raised if anything goes wrong during ssh connection

    Returns
    -------
    paramiko.SSHClient
        Open ssh connection.
    """

    logger = logging.getLogger(__name__)

    if not key:
        key = os.path.expanduser('~/.ssh/id_rsa')
    if not os.path.isfile(key):
        logger.error('{} is not a valid ssh key file...'.format(key))
      raise FileNotFoundError(key)

E FileNotFoundError: /root/.ssh/id_rsa

pyznap/utils.py:85: FileNotFoundError
-------------------------------------------------------- Captured stderr setup ---------------------------------------------------------
Oct 11 15:51:02 ERROR: /root/.ssh/id_rsa is not a valid ssh key file...
---------------------------------------------------------- Captured log setup ----------------------------------------------------------
utils.py 84 ERROR /root/.ssh/id_rsa is not a valid ssh key file...
__________________________________________ ERROR at setup of TestSnapshot.test_clean_snapshot __________________________________________

@pytest.fixture(scope='module')
def zpools():
    """Creates two temporary zpools to be called from test functions, source is local and dest on
    remote ssh location. Yields the two pool names and destroys them after testing."""

    zpool = '/sbin/zpool'
    pool0 = 'pyznap_test_source'
    pool1 = 'pyznap_test_dest'

    sftp_filename = '/tmp/' + randomword(10)

    # ssh arguments for zfs functions
  ssh = open_ssh(USER, HOST, port=PORT, key=KEY)

tests/test_functions_ssh.py:56:


user = 'root', host = '127.0.0.1', key = '/root/.ssh/id_rsa', port = 22

def open_ssh(user, host, key=None, port=22):
    """Opens an ssh connection to host.

    Parameters:
    ----------
    user : {str}
        Username to use
    host : {str}
        Host to connect to
    key : {str}, optional
        Path to ssh keyfile (the default is None, meaning the standard location
        '~/.ssh/id_rsa' will be checked)
    port : {int}, optional
        Port number to connect to (the default is 22)

    Raises
    ------
    FileNotFoundError
        If keyfile does not exist
    SSHException
        General exception raised if anything goes wrong during ssh connection

    Returns
    -------
    paramiko.SSHClient
        Open ssh connection.
    """

    logger = logging.getLogger(__name__)

    if not key:
        key = os.path.expanduser('~/.ssh/id_rsa')
    if not os.path.isfile(key):
        logger.error('{} is not a valid ssh key file...'.format(key))
      raise FileNotFoundError(key)

E FileNotFoundError: /root/.ssh/id_rsa

pyznap/utils.py:85: FileNotFoundError
_____________________________________ ERROR at setup of TestSnapshot.test_take_snapshot_recursive ______________________________________

@pytest.fixture(scope='module')
def zpools():
    """Creates two temporary zpools to be called from test functions, source is local and dest on
    remote ssh location. Yields the two pool names and destroys them after testing."""

    zpool = '/sbin/zpool'
    pool0 = 'pyznap_test_source'
    pool1 = 'pyznap_test_dest'

    sftp_filename = '/tmp/' + randomword(10)

    # ssh arguments for zfs functions
  ssh = open_ssh(USER, HOST, port=PORT, key=KEY)

tests/test_functions_ssh.py:56:


user = 'root', host = '127.0.0.1', key = '/root/.ssh/id_rsa', port = 22

def open_ssh(user, host, key=None, port=22):
    """Opens an ssh connection to host.

    Parameters:
    ----------
    user : {str}
        Username to use
    host : {str}
        Host to connect to
    key : {str}, optional
        Path to ssh keyfile (the default is None, meaning the standard location
        '~/.ssh/id_rsa' will be checked)
    port : {int}, optional
        Port number to connect to (the default is 22)

    Raises
    ------
    FileNotFoundError
        If keyfile does not exist
    SSHException
        General exception raised if anything goes wrong during ssh connection

    Returns
    -------
    paramiko.SSHClient
        Open ssh connection.
    """

    logger = logging.getLogger(__name__)

    if not key:
        key = os.path.expanduser('~/.ssh/id_rsa')
    if not os.path.isfile(key):
        logger.error('{} is not a valid ssh key file...'.format(key))
      raise FileNotFoundError(key)

E FileNotFoundError: /root/.ssh/id_rsa

pyznap/utils.py:85: FileNotFoundError
_________________________________________ ERROR at setup of TestSnapshot.test_clean_recursive __________________________________________

@pytest.fixture(scope='module')
def zpools():
    """Creates two temporary zpools to be called from test functions, source is local and dest on
    remote ssh location. Yields the two pool names and destroys them after testing."""

    zpool = '/sbin/zpool'
    pool0 = 'pyznap_test_source'
    pool1 = 'pyznap_test_dest'

    sftp_filename = '/tmp/' + randomword(10)

    # ssh arguments for zfs functions
  ssh = open_ssh(USER, HOST, port=PORT, key=KEY)

tests/test_functions_ssh.py:56:


user = 'root', host = '127.0.0.1', key = '/root/.ssh/id_rsa', port = 22

def open_ssh(user, host, key=None, port=22):
    """Opens an ssh connection to host.

    Parameters:
    ----------
    user : {str}
        Username to use
    host : {str}
        Host to connect to
    key : {str}, optional
        Path to ssh keyfile (the default is None, meaning the standard location
        '~/.ssh/id_rsa' will be checked)
    port : {int}, optional
        Port number to connect to (the default is 22)

    Raises
    ------
    FileNotFoundError
        If keyfile does not exist
    SSHException
        General exception raised if anything goes wrong during ssh connection

    Returns
    -------
    paramiko.SSHClient
        Open ssh connection.
    """

    logger = logging.getLogger(__name__)

    if not key:
        key = os.path.expanduser('~/.ssh/id_rsa')
    if not os.path.isfile(key):
        logger.error('{} is not a valid ssh key file...'.format(key))
      raise FileNotFoundError(key)

E FileNotFoundError: /root/.ssh/id_rsa

pyznap/utils.py:85: FileNotFoundError
_____________________________________________ ERROR at setup of TestSending.test_send_full _____________________________________________

@pytest.fixture(scope='module')
def zpools():
    """Creates two temporary zpools to be called from test functions, source is local and dest on
    remote ssh location. Yields the two pool names and destroys them after testing."""

    zpool = '/sbin/zpool'
    pool0 = 'pyznap_test_source'
    pool1 = 'pyznap_test_dest'

    sftp_filename = '/tmp/' + randomword(10)

    # ssh arguments for zfs functions
  ssh = open_ssh(USER, HOST, port=PORT, key=KEY)

tests/test_functions_ssh.py:56:


user = 'root', host = '127.0.0.1', key = '/root/.ssh/id_rsa', port = 22

def open_ssh(user, host, key=None, port=22):
    """Opens an ssh connection to host.

    Parameters:
    ----------
    user : {str}
        Username to use
    host : {str}
        Host to connect to
    key : {str}, optional
        Path to ssh keyfile (the default is None, meaning the standard location
        '~/.ssh/id_rsa' will be checked)
    port : {int}, optional
        Port number to connect to (the default is 22)

    Raises
    ------
    FileNotFoundError
        If keyfile does not exist
    SSHException
        General exception raised if anything goes wrong during ssh connection

    Returns
    -------
    paramiko.SSHClient
        Open ssh connection.
    """

    logger = logging.getLogger(__name__)

    if not key:
        key = os.path.expanduser('~/.ssh/id_rsa')
    if not os.path.isfile(key):
        logger.error('{} is not a valid ssh key file...'.format(key))
      raise FileNotFoundError(key)

E FileNotFoundError: /root/.ssh/id_rsa

pyznap/utils.py:85: FileNotFoundError
_________________________________________ ERROR at setup of TestSending.test_send_incremental __________________________________________

@pytest.fixture(scope='module')
def zpools():
    """Creates two temporary zpools to be called from test functions, source is local and dest on
    remote ssh location. Yields the two pool names and destroys them after testing."""

    zpool = '/sbin/zpool'
    pool0 = 'pyznap_test_source'
    pool1 = 'pyznap_test_dest'

    sftp_filename = '/tmp/' + randomword(10)

    # ssh arguments for zfs functions
  ssh = open_ssh(USER, HOST, port=PORT, key=KEY)

tests/test_functions_ssh.py:56:


user = 'root', host = '127.0.0.1', key = '/root/.ssh/id_rsa', port = 22

def open_ssh(user, host, key=None, port=22):
    """Opens an ssh connection to host.

    Parameters:
    ----------
    user : {str}
        Username to use
    host : {str}
        Host to connect to
    key : {str}, optional
        Path to ssh keyfile (the default is None, meaning the standard location
        '~/.ssh/id_rsa' will be checked)
    port : {int}, optional
        Port number to connect to (the default is 22)

    Raises
    ------
    FileNotFoundError
        If keyfile does not exist
    SSHException
        General exception raised if anything goes wrong during ssh connection

    Returns
    -------
    paramiko.SSHClient
        Open ssh connection.
    """

    logger = logging.getLogger(__name__)

    if not key:
        key = os.path.expanduser('~/.ssh/id_rsa')
    if not os.path.isfile(key):
        logger.error('{} is not a valid ssh key file...'.format(key))
      raise FileNotFoundError(key)

E FileNotFoundError: /root/.ssh/id_rsa

pyznap/utils.py:85: FileNotFoundError
_______________________________________ ERROR at setup of TestSending.test_send_delete_snapshot ________________________________________

@pytest.fixture(scope='module')
def zpools():
    """Creates two temporary zpools to be called from test functions, source is local and dest on
    remote ssh location. Yields the two pool names and destroys them after testing."""

    zpool = '/sbin/zpool'
    pool0 = 'pyznap_test_source'
    pool1 = 'pyznap_test_dest'

    sftp_filename = '/tmp/' + randomword(10)

    # ssh arguments for zfs functions
  ssh = open_ssh(USER, HOST, port=PORT, key=KEY)

tests/test_functions_ssh.py:56:


user = 'root', host = '127.0.0.1', key = '/root/.ssh/id_rsa', port = 22

def open_ssh(user, host, key=None, port=22):
    """Opens an ssh connection to host.

    Parameters:
    ----------
    user : {str}
        Username to use
    host : {str}
        Host to connect to
    key : {str}, optional
        Path to ssh keyfile (the default is None, meaning the standard location
        '~/.ssh/id_rsa' will be checked)
    port : {int}, optional
        Port number to connect to (the default is 22)

    Raises
    ------
    FileNotFoundError
        If keyfile does not exist
    SSHException
        General exception raised if anything goes wrong during ssh connection

    Returns
    -------
    paramiko.SSHClient
        Open ssh connection.
    """

    logger = logging.getLogger(__name__)

    if not key:
        key = os.path.expanduser('~/.ssh/id_rsa')
    if not os.path.isfile(key):
        logger.error('{} is not a valid ssh key file...'.format(key))
      raise FileNotFoundError(key)

E FileNotFoundError: /root/.ssh/id_rsa

pyznap/utils.py:85: FileNotFoundError
__________________________________________ ERROR at setup of TestSending.test_send_delete_sub __________________________________________

@pytest.fixture(scope='module')
def zpools():
    """Creates two temporary zpools to be called from test functions, source is local and dest on
    remote ssh location. Yields the two pool names and destroys them after testing."""

    zpool = '/sbin/zpool'
    pool0 = 'pyznap_test_source'
    pool1 = 'pyznap_test_dest'

    sftp_filename = '/tmp/' + randomword(10)

    # ssh arguments for zfs functions
  ssh = open_ssh(USER, HOST, port=PORT, key=KEY)

tests/test_functions_ssh.py:56:


user = 'root', host = '127.0.0.1', key = '/root/.ssh/id_rsa', port = 22

def open_ssh(user, host, key=None, port=22):
    """Opens an ssh connection to host.

    Parameters:
    ----------
    user : {str}
        Username to use
    host : {str}
        Host to connect to
    key : {str}, optional
        Path to ssh keyfile (the default is None, meaning the standard location
        '~/.ssh/id_rsa' will be checked)
    port : {int}, optional
        Port number to connect to (the default is 22)

    Raises
    ------
    FileNotFoundError
        If keyfile does not exist
    SSHException
        General exception raised if anything goes wrong during ssh connection

    Returns
    -------
    paramiko.SSHClient
        Open ssh connection.
    """

    logger = logging.getLogger(__name__)

    if not key:
        key = os.path.expanduser('~/.ssh/id_rsa')
    if not os.path.isfile(key):
        logger.error('{} is not a valid ssh key file...'.format(key))
      raise FileNotFoundError(key)

E FileNotFoundError: /root/.ssh/id_rsa

pyznap/utils.py:85: FileNotFoundError
__________________________________________ ERROR at setup of TestSending.test_send_delete_old __________________________________________

@pytest.fixture(scope='module')
def zpools():
    """Creates two temporary zpools to be called from test functions, source is local and dest on
    remote ssh location. Yields the two pool names and destroys them after testing."""

    zpool = '/sbin/zpool'
    pool0 = 'pyznap_test_source'
    pool1 = 'pyznap_test_dest'

    sftp_filename = '/tmp/' + randomword(10)

    # ssh arguments for zfs functions
  ssh = open_ssh(USER, HOST, port=PORT, key=KEY)

tests/test_functions_ssh.py:56:


user = 'root', host = '127.0.0.1', key = '/root/.ssh/id_rsa', port = 22

def open_ssh(user, host, key=None, port=22):
    """Opens an ssh connection to host.

    Parameters:
    ----------
    user : {str}
        Username to use
    host : {str}
        Host to connect to
    key : {str}, optional
        Path to ssh keyfile (the default is None, meaning the standard location
        '~/.ssh/id_rsa' will be checked)
    port : {int}, optional
        Port number to connect to (the default is 22)

    Raises
    ------
    FileNotFoundError
        If keyfile does not exist
    SSHException
        General exception raised if anything goes wrong during ssh connection

    Returns
    -------
    paramiko.SSHClient
        Open ssh connection.
    """

    logger = logging.getLogger(__name__)

    if not key:
        key = os.path.expanduser('~/.ssh/id_rsa')
    if not os.path.isfile(key):
        logger.error('{} is not a valid ssh key file...'.format(key))
      raise FileNotFoundError(key)

E FileNotFoundError: /root/.ssh/id_rsa

pyznap/utils.py:85: FileNotFoundError
========================================== 2 passed, 18 deselected, 18 error in 2.62 seconds ===========================================

Windows Samba shadow copy & pyznap - can it be done?

Hi, I'm looking at pyznap and it looks solid. One thing that I would love to have as a Windows user, though, is shadow copy support on top of ZFS, which basically lets any file or directory on a Samba share be restored to a previous version through the super simple "Previous Versions" GUI. This is all in Windows - and would be backed by the ZFS snapshot mechanism (and thus, pyznap).

See: https://blog.chaospixel.com/linux/2017/09/zfs-auto-snapshots-and-windows-shadow-copies-with-samba.html

Also this thread on the same issue with zfs-auto-snapshot: zfsonlinux/zfs-auto-snapshot#84

Is this possible with pyznap as well? If so, how would you do it? And if not, would you be able to add code in order to make this work? Would be really great.

Configuration for mbuffer buffer size

Can you add an option to configure the mbuffer size? At the moment it's hard-coded to 512M, which is fine in most cases, but I was using zfs on a VPS with a gig of ram (for the snapshotting and easy backup) and 512M might not always be available. On the other side of the spectrum, you might want to increase the buffer size.

Similarly, it might be a good idea to add a configuration option for the block size since both are effectively used in exactly the same spot.
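
For reference, mbuffer takes the total buffer memory via its -m flag and the block size via -s, so both could be plumbed through from the config to the single place where the pipe command is assembled. A rough sketch of what that might look like; the helper and the option names here are made up, and only the 512M default reflects the current hard-coded value:

def mbuffer_cmd(buffer_size='512M', block_size='128K'):
    # Hypothetical helper: build the mbuffer stage of the send/recv pipe with
    # configurable sizes instead of a hard-coded 512M buffer.
    #   -q : quiet (no progress output)
    #   -s : block size used for reads/writes
    #   -m : total buffer memory
    return ['mbuffer', '-q', '-s', block_size, '-m', buffer_size]

# e.g. values read from pyznap.conf under hypothetical keys such as
# mbuffer_size = 128M and mbuffer_block = 128K, falling back to the defaults.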

snapshots not being cleaned on dest

Thanks for the great script! I'm probably doing something wrong, but snapshots are not being cleaned on my destination; they are being cleaned just fine on the source datasets. Below is my super simple pyznap.conf and a snippet from zfs list -t snapshot. This is on a Proxmox host (Debian Stretch) with Python 3.5.3.

[p3600]
hourly = 24
daily = 7
weekly = 1
snap = yes
clean = yes
dest = tank/p3600backup

[tank/Media]
daily = 7
weekly = 1
snap = yes
clean = yes

[tank/backup]
daily = 7
weekly = 1
snap = yes
clean = yes

tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-16_22:13:18_weekly 0B - 32.0G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-16_22:13:18_daily 0B - 32.0G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-16_22:13:18_hourly 0B - 32.0G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-17_12:15:56_daily 0B - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-17_12:15:56_hourly 0B - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-17_14:00:01_hourly 17.2M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-17_15:00:01_hourly 15.2M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-17_16:00:01_hourly 3.64M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-17_17:00:01_hourly 3.83M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-17_18:00:01_hourly 13.4M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-17_19:00:01_hourly 14.9M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-17_20:00:02_hourly 15.9M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-17_21:00:01_hourly 16.2M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-17_22:00:01_hourly 19.8M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-17_23:00:02_hourly 31.0M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_00:00:01_daily 31.4M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_01:00:02_hourly 25.7M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_02:00:01_hourly 7.55M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_03:00:02_hourly 7.18M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_04:00:01_hourly 14.5M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_05:00:01_hourly 14.1M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_06:00:01_hourly 12.2M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_07:00:01_hourly 11.9M - 32.3G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_08:00:01_hourly 11.9M - 32.3G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_09:00:01_hourly 11.8M - 32.3G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_10:00:02_hourly 11.8M - 32.3G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_11:00:01_hourly 11.9M - 32.3G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_12:00:01_hourly 11.9M - 32.3G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_13:00:02_hourly 11.8M - 32.3G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_14:00:01_hourly 11.9M - 32.3G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_15:00:01_hourly 11.9M - 32.3G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_16:00:02_hourly 12.0M - 32.3G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_17:00:01_hourly 25.8M - 32.3G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_18:00:01_hourly 37.8M - 32.3G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_19:00:01_hourly 44.3M - 32.3G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_20:00:01_hourly 38.5M - 32.3G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_21:00:01_hourly 32.6M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_22:00:01_hourly 22.3M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-18_23:00:02_hourly 16.9M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_00:00:01_weekly 0B - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_00:00:01_daily 0B - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_01:00:01_hourly 14.6M - 32.4G - 
tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_02:00:02_hourly 15.4M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_03:00:01_hourly 16.1M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_04:00:02_hourly 12.8M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_05:00:02_hourly 11.9M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_06:00:01_hourly 12.0M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_07:00:01_hourly 11.9M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_08:00:01_hourly 12.0M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_09:00:01_hourly 11.8M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_10:00:01_hourly 11.8M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_11:00:02_hourly 11.8M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_12:00:01_hourly 11.8M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_13:00:02_hourly 11.9M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_14:00:01_hourly 12.0M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_15:00:01_hourly 11.9M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_16:00:02_hourly 11.8M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_17:00:01_hourly 12.3M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_18:00:02_hourly 17.6M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_19:00:01_hourly 22.9M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_20:00:02_hourly 27.9M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_21:00:01_hourly 32.6M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_22:00:01_hourly 27.1M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-19_23:00:01_hourly 19.1M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_00:00:01_daily 16.9M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_01:00:01_hourly 19.6M - 32.5G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_02:00:01_hourly 18.4M - 32.6G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_03:00:01_hourly 42.1M - 32.6G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_04:00:01_hourly 17.4M - 32.6G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_05:00:01_hourly 12.1M - 32.6G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_06:00:01_hourly 11.9M - 32.6G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_07:00:01_hourly 17.9M - 32.6G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_08:00:01_hourly 17.8M - 32.6G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_09:00:01_hourly 3.31M - 32.6G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_10:00:01_hourly 2.60M - 32.6G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_11:00:01_hourly 21.8M - 32.6G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_12:00:01_hourly 12.1M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_13:00:01_hourly 12.0M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_14:00:01_hourly 21.6M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_15:00:01_hourly 17.5M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_16:00:01_hourly 12.0M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_17:00:02_hourly 12.0M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_18:00:01_hourly 15.3M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_19:00:02_hourly 
25.5M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_20:00:01_hourly 27.0M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_21:00:02_hourly 21.1M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_22:00:01_hourly 8.13M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-20_23:00:01_hourly 3.25M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_00:00:01_daily 12.1M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_01:00:01_hourly 12.1M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_02:00:01_hourly 17.6M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_03:00:01_hourly 26.9M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_04:00:02_hourly 13.1M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_05:00:01_hourly 12.1M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_06:00:01_hourly 11.5M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_07:00:01_hourly 12.0M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_08:00:01_hourly 31.0M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_09:00:01_hourly 12.4M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_10:00:01_hourly 11.9M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_11:00:01_hourly 11.8M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_12:00:01_hourly 12.1M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_13:00:01_hourly 12.0M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_14:00:01_hourly 12.1M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_15:00:02_hourly 12.2M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_16:00:01_hourly 12.2M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_17:00:01_hourly 12.9M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_18:00:01_hourly 27.6M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_19:00:01_hourly 34.1M - 32.7G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_20:00:01_hourly 29.4M - 32.8G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_21:00:01_hourly 26.0M - 32.8G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_22:00:01_hourly 26.6M - 32.8G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-21_23:00:01_hourly 31.5M - 32.8G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_00:00:02_daily 21.7M - 32.8G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_01:00:01_hourly 18.1M - 32.8G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_02:00:01_hourly 24.4M - 32.8G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_03:00:01_hourly 32.9M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_04:00:01_hourly 17.2M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_05:00:01_hourly 12.9M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_06:00:01_hourly 12.2M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_07:00:01_hourly 12.0M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_08:00:02_hourly 2.38M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_09:00:01_hourly 2.39M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_10:00:01_hourly 11.9M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_11:00:01_hourly 12.0M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_12:00:01_hourly 12.1M - 32.1G - 
tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_13:00:01_hourly 12.0M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_14:00:01_hourly 12.0M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_15:00:02_hourly 13.0M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_16:00:01_hourly 14.4M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_17:00:01_hourly 15.2M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_18:00:01_hourly 14.3M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_19:00:01_hourly 13.1M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_20:00:02_hourly 31.1M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_21:00:01_hourly 29.5M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_22:00:01_hourly 23.0M - 32.1G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-22_23:00:02_hourly 15.2M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_00:00:01_daily 12.5M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_01:00:01_hourly 12.5M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_02:00:01_hourly 12.9M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_03:00:01_hourly 37.3M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_04:00:01_hourly 12.8M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_05:00:01_hourly 12.1M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_06:00:01_hourly 12.1M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_07:00:01_hourly 12.6M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_08:00:02_hourly 16.4M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_09:00:01_hourly 12.0M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_10:00:02_hourly 12.0M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_11:00:01_hourly 26.0M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_12:00:01_hourly 13.3M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_13:00:02_hourly 12.4M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_14:00:01_hourly 13.8M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_15:00:01_hourly 12.1M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_16:00:01_hourly 12.1M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_17:00:01_hourly 17.2M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_18:00:01_hourly 19.9M - 32.2G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_19:00:01_hourly 41.3M - 32.3G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_20:00:01_hourly 25.6M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_21:00:01_hourly 22.9M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_22:00:01_hourly 20.1M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-23_23:00:02_hourly 19.6M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_00:00:02_daily 17.0M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_01:00:01_hourly 15.6M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_02:00:01_hourly 15.8M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_03:00:01_hourly 20.3M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_04:00:01_hourly 21.1M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_05:00:02_hourly 12.1M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_06:00:01_hourly 
12.2M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_07:00:01_hourly 18.7M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_08:00:01_hourly 12.5M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_09:00:02_hourly 12.3M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_10:00:01_hourly 24.1M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_11:00:01_hourly 13.4M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_12:00:01_hourly 12.3M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_13:00:01_hourly 12.5M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_14:00:01_hourly 14.3M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_15:00:01_hourly 12.3M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_16:00:02_hourly 12.3M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_17:00:01_hourly 12.3M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_18:00:01_hourly 12.3M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_19:00:01_hourly 14.3M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_20:00:02_hourly 15.1M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_21:00:01_hourly 15.2M - 32.4G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_22:00:01_hourly 21.7M - 32.5G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-24_23:00:02_hourly 17.1M - 32.5G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-25_00:00:01_daily 0B - 32.5G - tank/p3600backup/subvol-125-disk-1@pyznap_2018-03-25_00:00:01_hourly 0B - 32.5G -

Sending dataset through SSH

Hi,
So currently the snapshots are working great with no issues, but now I'm trying to replace pve-zsync, as it does not have compression and it's a hassle when I try to send datasets to another site over the WAN.
So in the .conf, this is what I have:

[rpool/data/vm-107-disk-1]
frequent = 30
snap = yes
clean = yes
dest = ssh:22:[email protected]:rpool/data
dest_keys = /root/.ssh/id_rsa
compress = gzip

So my question is: when I run

pyznap --config /media/pyznap.conf send

should it not send the disk as well as the snapshot? I only see that it sends the snapshot.

Thank you

How best to configure send/receive?

First up, thanks for pyznap :-) ... I'm just figuring out how best to use it, so this is more a question than an issue, or maybe the issue is that I am not sure how to proceed based on the documentation. I am new to ZFS, and therefore new to pyznap and zfs send/receive.

Say I have a dataset I want to replicate to a different server. I want a number of backups (snapshots) on the source server and a number of backups on the destination server. For simplicity, say on the source server I have daily = 5 and hourly = 24 in the pyznap.conf file.

I have noticed that you don't actually need pyznap on the destination to receive snapshots, so I assume it uses zfs directly.

On the source I call "pyznap snap" regularly and my datasets get snapshotted as expected.

Calling "pyznap send" sends all the snapshots to the backup server, it does not delete old snapshots (i.e. older than 5 days/24 hours) off of the backup server.

I also read somewhere that you could set up the backup server to store, say, 20 days of snapshots and no hourlies.

So how should I set up the source and destination in this case?

Should the source only send the latest snapshot and then let the destination manage its own snapshots? That is, is it more resource-intensive to send lots of snapshots (e.g. all the hourly ones each day) that might not even get used on the destination (e.g. the destination might not keep hourlies at all, only dailies)?

Or

Should I continue to just use "pyznap send" to send all the daily snapshots over to the destination, configure pyznap.conf to keep the number of snapshots I want on the destination, and perhaps set snap = no on the destination? Is that the main purpose of snap = no, i.e. to tidy up snapshots on a destination based on settings in the config file?

Does my question even make sense?
Thanks for any feedback. Also, if this is not the right place to ask such questions, happy to be redirected.

Remote snapshots (on FreeNAS) are not deleted according to rule...

Hi again, I'm sorry for making yet another issue.

Source is Proxmox/Debian and remote is FreeNAS/FreeBSD. Dataset "small" on source to dataset "test" on remote. My pyznap.conf:

[small]
frequent = 5
snap = yes
clean = yes
dest = ssh:22:[email protected]:System/test
dest_keys = /home/user/.ssh/id_rsa
compress = lz4

When running pyznap snap and pyznap send (run one after the other), only the snapshots on the source get deleted per the rule. I tried different combinations of zvols and datasets, and it seems to be unrelated to that this time. I do, however, suspect that it could have something to do with the remote being FreeBSD. What are the commands you use for snapshot cleanup on the remote? (I don't know Python, otherwise I would have looked myself.)

Thanks.

previous work issues

Skimming through the issues of zfs-auto-snapshot, sanoid and znapzend, I came across some points that made me wonder whether pyznap has them covered. For example, temporarily disconnected operation, and there seems to be a limit on the number of snapshots that zfs can delete at once.

Maybe their issue trackers hold some insights, if you get the chance to skim through them.

Strange send behavior - perhaps documentation

I set up pyznap on a SmartOS (Solaris-like) host. Everything looks correct except for the send functionality. I want to back up a local SAMBA/CIFS share mounted as a ZFS volume to a remote SAMBA/CIFS share, doing a full backup first and then incremental backups managed by pyznap. I configured a simple pyznap.conf. When forcing a snap, there seem to be no errors; when forcing a send, there are numerous errors. I likely do not understand how doing a volume backup actually works with pyznap. Any help would be appreciated:

##backup test1
[zones/0f..trunkated..ec/data/petashare1/admin]
hourly = 1
snap = yes
send = yes
clean = yes
dest = ssh:22:[email protected]:zones/90..trunkated..76/data/home/backupblob
dest_keys = .ssh/id_rsa
#compress = gzip

snap output:
[root@SMB1 /opt/local/bin]# ./pyznap snap
Sep 04 20:58:41 INFO: Starting pyznap...
Sep 04 20:58:41 INFO: Taking snapshots...
Sep 04 20:58:41 INFO: Taking snapshot zones/0f..trunkated..ec/data@pyznap_2019-09-04_20:58:41_hourly...
Sep 04 20:58:41 INFO: Cleaning snapshots...
Sep 04 20:58:41 INFO: Deleting snapshot zones/0f..trunkated..ec/data@pyznap_2019-09-04_19:59:46_hourly...
Sep 04 20:58:41 INFO: Deleting snapshot zones/0f..trunkated..ec/data/petashare1@pyznap_2019-09-04_19:59:46_hourly...
Sep 04 20:58:42 INFO: Deleting snapshot zones/0f..trunkated..ec/data/petashare1/admin@pyznap_2019-09-04_19:59:46_hourly...
Sep 04 20:58:42 INFO: Deleting snapshot zones/0f..trunkated..ec/data/petashare1/backupblob@pyznap_2019-09-04_19:59:46_hourly...
Sep 04 20:58:42 INFO: Finished successfully...

No errors, even though I can't find the snapshots.

send output:

[root@SMB1 /opt/local/bin]# ./pyznap send
Sep 04 21:05:54 INFO: Starting pyznap...
Sep 04 21:05:54 INFO: Sending snapshots...
Sep 04 21:05:57 INFO: No common snapshots on [email protected]:zones/90..trunkated..76/data/home/backupblob, sending oldest snapshot zones/0f..trunkated..ec/data@pyznap_2019-09-04_20:58:41_hourly (~12.6K)...
sh[1]: lzop: not found [No such file or directory]
Sep 04 21:05:59 ERROR: Error while sending to [email protected]:zones/90..trunkated..76/data/home/backupblob: bash: lzop: command not found - cannot receive: failed to read from stream...
Sep 04 21:06:00 INFO: No common snapshots on [email protected]:zones/90..trunkated..76/data/home/backupblob/petashare1, sending oldest snapshot zones/0f..trunkated..ec/data/petashare1@pyznap_2019-09-04_20:58:41_hourly (~13.6K)...
sh[1]: lzop: not found [No such file or directory]
Sep 04 21:06:00 ERROR: Error while sending to [email protected]:zones/90..trunkated..76/data/home/backupblob/petashare1: bash: lzop: command not found - cannot receive: failed to read from stream...
Sep 04 21:06:01 INFO: No common snapshots on [email protected]:zones/90..trunkated..76/data/home/backupblob/petashare1/admin, sending oldest snapshot zones/0f..trunkated..ec/data/petashare1/admin@pyznap_2019-09-04_20:58:41_hourly (~8.5M)...
sh[1]: lzop: not found [No such file or directory]
sh: line 1: mbuffer: not found
sh: line 1: pv: not found
Sep 04 21:06:04 ERROR: Error while sending to [email protected]:zones/90..trunkated..76/data/home/backupblob/petashare1/admin: bash: lzop: command not found - cannot receive: failed to read from stream...
Sep 04 21:06:04 INFO: No common snapshots on [email protected]:zones/90..trunkated..76/data/home/backupblob/petashare1/backupblob, sending oldest snapshot zones/0f..trunkated..ec/data/petashare1/backupblob@pyznap_2019-09-04_20:58:41_hourly (~12.6K)...
sh[1]: lzop: not found [No such file or directory]
Sep 04 21:06:05 ERROR: Error while sending to [email protected]:zones/90..trunkated..76/data/home/backupblob/petashare1/backupblob: bash: lzop: command not found - cannot receive: failed to read from stream...
Sep 04 21:06:05 INFO: Finished successfully...

Although I was not using the cron entries in crontabs, for your reference, here they are:
cron/crontab/pyznap:
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

!# */15 * * * * root /opt/local/bin/pyznap snap >> /var/log/pyznap.log 2>&1
!# */20 * * * * root /opt/local/bin/pyznap send >> /var/log/pyznap.log 2>&1

Could this be a pyznap.conf error? Thanks in advance.
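
Judging by the send output, the pipeline is looking for the optional helper tools lzop, mbuffer and pv and not finding them on the hosts involved. A quick diagnostic you could run on both the SmartOS box and the receiving machine (just a sketch, not part of pyznap):

import shutil

# Report which of the optional pipe helpers are present on this host.
for tool in ('lzop', 'mbuffer', 'pv'):
    print(tool, shutil.which(tool) or 'not found')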

dest config error not considered a top-level failure

Cool project, it's nice to find a Python-based zfs snapshotter.

I just installed the latest pyznap from GitHub. After successfully snapshotting with pyznap, I ran "pyznap send" to transfer snapshots to the "dest" in my config file (a local backup pool). My config file was wrong and the "dest" did not exist. pyznap correctly logged that it couldn't send anything, but then finished up with a "successful" message. Also, the return code of the pyznap process was 0, indicating success.

I would expect that a problem like that would result in a non-zero return code so higher-level scripts can take action, like sending notifications and things.
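
Until the exit code reflects send failures, one workaround is to wrap the pyznap call and also scan its output for the "ERROR:" lines it already logs. A minimal sketch (the wrapper is not part of pyznap, and the notification part is left as a placeholder):

import subprocess
import sys

def run_pyznap(*args):
    # Run pyznap and treat a non-zero exit code or any logged "ERROR:" line
    # in its output as a failure, so higher-level scripts can act on it.
    proc = subprocess.run(['pyznap', *args], stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT, universal_newlines=True)
    failed = proc.returncode != 0 or ' ERROR: ' in proc.stdout
    if failed:
        # placeholder: send a notification here (mail, webhook, ...)
        print(proc.stdout, file=sys.stderr)
    return 1 if failed else 0

if __name__ == '__main__':
    sys.exit(run_pyznap('send'))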

pyznap error when running setup on Ubuntu 18.04 server

Dear,

After installation via pip as root
$ apt install python-pip
$ pip --proxy http://172.30.3.39:3128 install pyznap

$ pyznap setup -p /etc/pyznap
Traceback (most recent call last):
  File "/usr/local/bin/pyznap", line 7, in <module>
    from pyznap.main import main
  File "/usr/local/lib/python2.7/dist-packages/pyznap/main.py", line 18, in <module>
    from .utils import read_config, create_config
  File "/usr/local/lib/python2.7/dist-packages/pyznap/utils.py", line 14, in <module>
    from subprocess import Popen, PIPE, TimeoutExpired, CalledProcessError
ImportError: cannot import name TimeoutExpired

uname -a: Linux lxc850 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

lsb_release -a:
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic

Thx

resumable send/receive

ZFS 0.8 has been released and has hit the PPA

#3 (comment):

resumable receive is only supported in zfsonlinux 0.7.x, while all major distributions only ship with 0.6.x. So I cannot really implement it until Ubuntu's zfsonlinux supports it.

How is a simple pyznap send supposed to work?

So I have two datasets: ssd/files and hdd/backup

I want to back up the dataset ssd/files to hdd/backup, preferably as a 1-to-1 copy; a full backup. So I type:

pyznap send -s ssd/files -d hdd/backup

Once it's finished, I browse to the hdd/backup folder and I see nothing. I do see the snapshots listed when I do zfs list -t snapshot hdd/backup. Is this expected behavior? How am I supposed to turn this snapshot into an actual backup, with files and all, using pyznap?

Wouldn't it be better if pyznap could do a full incremental backup where the files appear in the dataset? Just wondering what the recommended procedure is for something like this.

Can't run Pyznap

Hello,

After a fresh install using the provided instructions (in the GitHub README), I'm seeing the following error:

(env) root@pgh2:~# pyznap
Traceback (most recent call last):
  File "/root/env/bin/pyznap", line 7, in <module>
    from pyznap.main import main
  File "/root/env/lib/python3.4/site-packages/pyznap/main.py", line 18, in <module>
    from .utils import read_config, create_config
  File "/root/env/lib/python3.4/site-packages/pyznap/utils.py", line 15, in <module>
    from .process import run
  File "/root/env/lib/python3.4/site-packages/pyznap/process.py", line 44, in <module>
    class CompletedProcess(sp.CompletedProcess):
AttributeError: 'module' object has no attribute 'CompletedProcess'

Am I missing a package? I tried installing this on two separate machines with the same result.
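
For what it's worth, subprocess.CompletedProcess only exists since Python 3.5, and the paths in the traceback show the virtualenv is running Python 3.4, which would explain the AttributeError. A quick check (diagnostic sketch only, not part of pyznap):

import subprocess
import sys

# CompletedProcess was added to the standard library in Python 3.5,
# so this prints False on 3.4 and True on 3.5 or newer.
print(sys.version_info)
print(hasattr(subprocess, 'CompletedProcess'))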

send on FreeBSD 11.3: ValueError: invalid literal for int() with base 10: 'fast@pyznap_<date>_yearly'

Hello! I'm trying pyznap for the first time, and while pyznap snap works great, pyznap send is crashing with what appears to be a parsing issue.

I'm on FreeBSD 11.3-STABLE, and tried installing from pip and the ports collection with no apparent difference.

% sudo pyznap send
Jul 13 16:22:24 INFO: Sending snapshots...
Traceback (most recent call last):
  File "/usr/local/bin/pyznap", line 11, in <module>
    load_entry_point('pyznap==1.1.2', 'console_scripts', 'pyznap')()
  File "/usr/local/lib/python3.6/site-packages/pyznap/main.py", line 120, in main
    return _main()
  File "/usr/local/lib/python3.6/site-packages/pyznap/main.py", line 103, in _main
    send_config(config)
  File "/usr/local/lib/python3.6/site-packages/pyznap/send.py", line 232, in send_config
    send_filesystem(source, dest, ssh=ssh)
  File "/usr/local/lib/python3.6/site-packages/pyznap/send.py", line 141, in send_filesystem
    .format(dest_name_log, base, bytes_fmt(base.stream_size())))
  File "/usr/local/lib/python3.6/site-packages/pyznap/pyzfs.py", line 345, in stream_size
    return int(out.split(' ')[-1])
ValueError: invalid literal for int() with base 10: 'fast@pyznap_2019-07-13_16:16:15_yearly'

I assume that it's an issue with parsing the output of something, but I'm not quite sure what it might be.

% sudo zfs list -t snapshot | grep yearly
fast@pyznap_2019-07-13_16:16:15_yearly            0      -   424G  -
slow@pyznap_2019-07-13_15:46:33_yearly            0      -  4.07T  -

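If it helps, the crash comes from stream_size() calling int() on the last space-separated token of the whole zfs send dry-run output, and on this system that last token is apparently the snapshot name rather than the size estimate. A more tolerant parse might look for the line that actually carries the estimate; this is just a sketch, assuming the dry-run output contains a line ending in a plain byte count (e.g. "size 434241231"), and I have not tested it against pyznap itself:

import re

def stream_size(out):
    # Sketch: instead of int() on the last token of the whole output, scan the
    # dry-run output for the last line that ends in a byte count after "size".
    for line in reversed(out.splitlines()):
        match = re.search(r'\bsize\b.*?(\d+)\s*$', line)
        if match:
            return int(match.group(1))
    return 0
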
Here's the config in use:

[slow]
frequent = 3
hourly = 24
daily = 7
weekly = 4
monthly = 12
yearly = 5
snap = yes
clean = yes
dest = backup/slow

[fast]
frequent = 3
hourly = 24
daily = 7
weekly = 4
monthly = 12
yearly = 5
snap = yes
clean = yes
dest = backup/fast

[backup/slow]
hourly = 24
daily = 7
weekly = 4
monthly = 12
yearly = 5
snap = no 
clean = yes

[backup/fast]
hourly = 24
daily = 7
weekly = 4
monthly = 12
yearly = 5
snap = no 
clean = yes

Python 3.8 Breaks Pyznap?

Pyznap no longer works, seemingly due to an issue with newer versions of Python.

pastebin of error: https://pastebin.com/m2SggbGR

[noah@X58NAS ~]$ pyznap --help
Traceback (most recent call last):
  File "/usr/bin/pyznap", line 6, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 3252, in <module>
    def _initialize_master_working_set():
  File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 3235, in _call_aside
    f(*args, **kwargs)
  File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 3264, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 583, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 900, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 786, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'pyznap==1.4.3' distribution was not found and is required by the application
Pyznap throws similar errors when issuing other commands (e.g. "sudo pyznap -v snap"). I tried upgrading Python from 3.8.1 to 3.8.1-1 and reinstalling Pyznap, but no joy. I guess I should try downgrading Python?

My system:
Arch Linux 4.19.12.arch1-1
zfs-dkms 0.7.12-1
Python 3.8.1-1
pyznap 1.4.3 (listed as 1.4.3-1 in the AUR https://aur.archlinux.org/packages/pyznap/)
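
In case it helps with debugging: pkg_resources raises DistributionNotFound when the interpreter running the entry-point script cannot find the installed pyznap dist-info metadata, for example when the package was built or installed for a different Python, or when the metadata went missing during an upgrade, until pyznap is reinstalled for the current interpreter. A quick way to check what the running Python actually sees (diagnostic sketch only):

import pkg_resources

# Diagnostic: check whether this interpreter can see the installed pyznap
# distribution and, if so, report its version and location.
try:
    dist = pkg_resources.get_distribution('pyznap')
    print(dist.project_name, dist.version, dist.location)
except pkg_resources.DistributionNotFound:
    print('pyznap metadata is not visible to this interpreter; '
          'reinstall the package for the current Python')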
