ewxrjk / rsbackup

rsync-based backup tool

Home Page: https://www.greenend.org.uk/rjk/rsbackup/

License: GNU General Public License v3.0


rsbackup's Introduction

rsbackup


rsbackup backs up your computer(s) to removable hard disks. The backup is an ordinary filesystem tree, and hard links between repeated backups are used to save space. Old backups are automatically pruned after a set period of time.

Installation

Dependencies

Platforms

On Debian/Ubuntu systems, get rsbackup.deb and install that.

Please see Platform Support for platform-specific notes.

Building

To build from source:

autoreconf -si # only if you got it from git
./configure
make check
sudo make install

Documentation

Read the tutorial manual first.

For reference information, see the man page:

man rsbackup

Bugs

Report bugs via GitHub.

Licence

Copyright © 2010-2020 Richard Kettlewell

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.

rsbackup's People

Contributors

cjwatson, ewxrjk, jtn20, optnfast


rsbackup's Issues

snapshot-hook should be more flexible about device names

After upgrading to Debian jessie, df reports my root LV as /dev/dm-0 rather than /dev/mapper/something, causing the snapshot hook to fail. Ideally it should recognize any name for a logical volume, regardless of links, bind mounts or any other kind of aliasing.
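A minimal sketch of one way the hook could recognise such aliases, assuming GNU readlink: /dev/mapper/vg-lv is normally a symlink to the /dev/dm-N node that df reports, so following symlinks on both names and comparing works for that case (bind mounts would need separate handling via /proc/self/mountinfo):

```shell
#!/bin/sh
# Treat two device names as aliases if they resolve to the same path once
# symlinks are followed (e.g. /dev/mapper/amfast-home vs /dev/dm-0).
same_device() {
  [ "$(readlink -f "$1")" = "$(readlink -f "$2")" ]
}
```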

Feature request: option to back up volumes without compression

I'd like to use rsbackup to back up a volume containing already compressed data (photos and videos).
In that use case it does not make sense to use the "--compress" rsync option. Unfortunately this option seems to be hardcoded and cannot easily be overridden.

It would be really useful to have a volume-level option to suppress the "--compress" passed to rsync.

Thanks,
Denis
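One possible shape for such a knob, sketched as a per-volume directive in rsbackup's config syntax (the directive name and semantics here are hypothetical, not an existing feature):

```
host mediaserver
  volume photos /srv/photos
    # hypothetical: drop --compress from the rsync command line
    rsync-compress false
```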

We end up running pre-access-hook and not doing any backups a lot

I have a few hosts that are on an 'hourly' backup rotation because they're intermittently turned on/online. I also always have disks that are offsite and use pre-access-hook to opportunistically mount backup devices.

This appears to be a triangle of badness which results in rsbackup being invoked every hour; mounting all its devices; and then deciding it's not going to back anything up.

I think this can only be solved by adding an additional hook to determine which devices are available and then having an execution flow that looks something like:

  1. Determine which devices are available
  2. Iterate through volumes until we find one that needs backing up and is available
  3. Run pre-access-hook for any devices we can back that volume up to
  4. Back up that volume
  5. Continue iterating. If we find any volumes that need backing up onto devices we've not mounted yet we might need to run pre-access-hook for those devices too (should pre-access-hook be idempotent?)
  6. Run post-access-hook on any devices we've mounted
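On the idempotency question in step 5, a pre-access-hook can be made safe to run repeatedly by mounting only when needed; a minimal sketch (mount point illustrative):

```shell
#!/bin/sh
# Idempotent mount helper: if the directory is already a mount point the
# mount step is skipped, so re-running the hook is harmless.
ensure_mounted() {
  mountpoint -q "$1" || mount "$1"
}

# A hook body might be just, e.g.:
#   ensure_mounted /media/backup5
```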

Only one backup per day?

Unsure whether this is a bug or a feature: when running rsbackup multiple times in the same day, it seems to skip further backups (i.e. it makes only the first backup for a given date).
In our case, it would be desirable to check at every run whether there are new or modified files. Failing to do so may result in data loss.
Is there an option for this?
Thanks.

Concurrent backup within single hosts

#18 enabled concurrent backup of distinct hosts to distinct devices.

This issue is for concurrent backup of distinct volumes within a single host to distinct devices. This still makes a reasonable amount of sense if the volumes are on separate storage, or on shared storage that is much faster than the backup device (e.g. volumes on SSD, backups on USB spinning rust).

With that in mind it should be under operator control whether two volumes can be backed up concurrently.

It is probably blocked by #17.

Snapshot hook should parse fsck exit status

> rsbackup-snapshot-hook
WARNING: backup of araminta:home to backup5: preBackup: exited with status 1
HOOK: EXEC: umount /snap/home
umount: /snap/home: not mounted
HOOK: EXEC: lvremove --force /dev/mapper/amfast-home.snap
  Logical volume "home.snap" successfully removed
HOOK: EXEC: lvcreate --extents 4710 --name home.snap --snapshot /dev/amfast/home
  Logical volume "home.snap" created
HOOK: EXEC: fsck -a /dev/mapper/amfast-home.snap
fsck from util-linux 2.20.1
/dev/mapper/amfast-home.snap: Clearing orphaned inode 2754454 (uid=1000, gid=1000, mode=0100644, size=22523)
/dev/mapper/amfast-home.snap: Clearing orphaned inode 2754442 (uid=1000, gid=1000, mode=0100644, size=22523)
/dev/mapper/amfast-home.snap: Clearing orphaned inode 2754438 (uid=1000, gid=1000, mode=0100644, size=22523)
/dev/mapper/amfast-home.snap has been mounted 24 times without being checked, check forced.
/dev/mapper/amfast-home.snap: 609974/6029312 files (0.6% non-contiguous), 7960333/24117248 blocks
HOOK: EXEC: umount /snap/home
umount: /snap/home: not mounted

The reason is that fsck exits nonzero if errors are corrected (see man page). An exit status of 1 shouldn't be considered fatal (at least with Linux's fsck, but this script uses Linux's LVM...).

Workaround that doesn't involve unmounting: disable snapshots for affected filesystems.
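A sketch of the proposed interpretation, based on fsck(8)'s documented exit status bit mask (0 = no errors, 1 = errors corrected, 2 = reboot suggested, 4 = errors left uncorrected):

```shell
#!/bin/sh
# Treat "clean" (0) and "errors corrected" (1) as success; anything with
# bit 2 or higher set means the snapshot may not be safe to back up.
fsck_ok() {
  [ "$1" -le 1 ]
}

# In the hook (device path illustrative):
#   fsck -a /dev/mapper/amfast-home.snap
#   fsck_ok $? || exit 1
```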

Multiple devices leads to snapshotting the same filesystem more than once

pre-backup-hook and post-backup-hook are executed for each individual backup; where a backup is a backup of an individual volume onto an individual device. This means that if you're backing up to multiple devices they get run once per (volume,device) tuple.

This would only be ideologically annoying except that the (udev,lvm) system has a number of races/leaks; which seem to be exacerbated if you do a lot of snapshotting in a short space of time.

I can see a number of ways of addressing this issue:

  • The backup-hooks could be run only once per volume, before the first backup and after the last.
  • backup-hooks could be made to opportunistically reuse snapshots and then a post-host-hook added to clean up snapshots.
  • A pre- and post-host-hook could be added that can be used to set up and tear down snapshots host-wide (this could also be used to set locks / warnings about running backups and other good things)

The latter two options have the problem that you might end up filling up your LVM with snapshots; whereas if everything is working OK, currently you only have one snapshot at a time.

Back up xattrs?

After restoring my laptop's root filesystem from backup, I noticed that (e.g.) mtr didn't work because it didn't have the required capabilities. I had to grep for setcap in /var/lib/dpkg/info/*.postinst and reapply all the capabilities there, and of course that leaves me unsure whether I missed anything.

Could rsbackup perhaps start backing up xattrs, using rsync --xattrs, and maybe also --acls? That would imply also adjusting the restore instructions in rsbackup(1).
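A sketch of what the extended invocation might look like; the flag set here is an assumption for illustration, not rsbackup's actual rsync command line:

```shell
#!/bin/sh
# Candidate flag set: --xattrs and --acls added to typical archive options.
# RSYNC_FLAGS is deliberately left unquoted at the call site so it splits
# into separate arguments.
RSYNC_FLAGS="--archive --hard-links --numeric-ids --xattrs --acls"

backup_volume() {  # backup_volume SOURCE DESTINATION (both illustrative)
  rsync $RSYNC_FLAGS "$1" "$2"
}
```

Note that --xattrs generally needs root on the receiving side to set all attributes, and the restore instructions in rsbackup(1) would need the same flags.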

report emails should include count of minor/major errors in Subject line

I typically only scan the subjects of my system email, so it would be useful if rsbackup's report mails could have an indication in their subject lines of whether there are issues within that need urgent investigation or not.

Suggested implementation: categorize everything in the report by severity (major/minor/info, error/warning/info or similar), and then have the subject be "Backup report ($DATE) [errors: 1 major, 2 minor]" (or perhaps "[1 error, 2 warnings]").

Example major errors:

  • always-up machine not present
  • device and host both present but rsync failed
  • host has not been backed up in $LIMIT days [configurable]

Example minor errors/warnings:

  • Unknown volume $VOL on host $H

Example info:

  • Log pruning
  • Host H skipped because not present

Host availability check isn't good enough

The current way to test whether a host is available is to see if SSHing to it succeeds. If it's not available then no backup is attempted, and this is silent.

However a host may be up but misconfigured in some way that means that SSH fails. In that case the host would not be backed up and the operator would not know there was a problem.

Other possible approaches:

  • ping it
  • connect to some port

This should be configurable.
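A sketch of a configurable probe along these lines (the probe names and the idea of selecting one per host are assumptions, not existing configuration):

```shell
#!/bin/sh
# host_available HOST [PROBE] [PORT]
# PROBE is "ssh" (the current behaviour), "ping" or "port".
host_available() {
  host=$1 probe=${2:-ssh} port=${3:-22}
  case $probe in
    ssh)  ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true ;;
    ping) ping -c 1 -W 2 "$host" >/dev/null 2>&1 ;;
    port) nc -z -w 2 "$host" "$port" ;;
    *)    echo "unknown probe: $probe" >&2; return 2 ;;
  esac
}
```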

Invalid read

Now up to date with master, and with -g:

INFO: pruning /backup7/wampoon/root/2015-11-20 because: age 3 > 1 and oldest in bucket 1
> rm -rf /backup7/wampoon/root/2015-11-20
INFO: removing /backup8/araminta/av/2015-10-12.incomplete
INFO: removing /backup7/araminta/av/2015-10-16.incomplete
==16542== Invalid read of size 8
==16542==    at 0x53C6268: std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::string const&) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.20)
==16542==    by 0x46DAF7: operator+<char, std::char_traits<char>, std::allocator<char> > (basic_string.h:2424)
==16542==    by 0x46DAF7: Backup::backupPath() const (Backup.cc:27)
==16542==    by 0x452CF8: pruneBackups() (Prune.cc:213)
==16542==    by 0x41D6F7: main (rsbackup.cc:99)
==16542==  Address 0x0 is not stack'd, malloc'd or (recently) free'd
==16542== 
==16542== 
==16542== Process terminating with default action of signal 11 (SIGSEGV)
==16542==  Access not within mapped region at address 0x0
==16542==    at 0x53C6268: std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::string const&) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.20)
==16542==    by 0x46DAF7: operator+<char, std::char_traits<char>, std::allocator<char> > (basic_string.h:2424)
==16542==    by 0x46DAF7: Backup::backupPath() const (Backup.cc:27)
==16542==    by 0x452CF8: pruneBackups() (Prune.cc:213)
==16542==    by 0x41D6F7: main (rsbackup.cc:99)
==16542==  If you believe this happened as a result of a stack
==16542==  overflow in your program's main thread (unlikely but
==16542==  possible), you can try to increase the size of the
==16542==  main thread stack using the --main-stacksize= flag.
==16542==  The main thread stack size used in this run was 8388608.
Segmentation fault

Use clang-format for source code formatting

https://github.com/ewxrjk/rsbackup/commits/clang-format somewhat does this already. Before merging it needs:

  • Document a version range for clang-format. Must support Debian stable and Visual Studio Code. ✓
  • check-source must skip if clang-format not present. ✓
  • CI rules must ensure clang-format present on at least one platform (i.e so that check-source is not skipped). ✓
  • Finalise the formatting policy. ✓
  • Reformat code to match it. ✓

Platform support for clang-format:

  • Debian stretch (oldstable): clang-format is 3.8, but clang-format-4.0 is also available
  • Debian stretch backports: clang-format-6.0 is available.
  • Debian buster (stable): clang-format is 7.0
  • macOS Homebrew has 8.0
  • Ubuntu Bionic (LTS) has 6.0
  • Ubuntu Xenial (LTS) has 3.8, but...
  • ...the Travis Xenial toolchain has 7.0.
  • vscode C/C++ support currently embeds 6.0

Based on this I will target 6.0 (and later). Users of Debian stretch will have to install from backports.

Fails to build with GCC 8

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=897852

[...]
  CXX      parseFloat.o
  CXX      Render.o
In file included from /usr/include/sigc++-2.0/sigc++/sigc++.h:104,
                 from /usr/include/pangomm-1.4/pangomm/layout.h:7,
                 from Render.h:23,
                 from Render.cc:16:
/usr/include/sigc++-2.0/sigc++/signal.h: In static member function ‘static sigc::internal::signal_emit0<void, sigc::nil>::result_type sigc::internal::signal_emit0<void, sigc::nil>::emit(sigc::internal::signal_impl*)’:
/usr/include/sigc++-2.0/sigc++/signal.h:798:56: error: cast between incompatible function types from ‘sigc::internal::hook’ {aka ‘void* (*)(void*)’} to ‘sigc::internal::signal_emit0<void, sigc::nil>::call_type’ {aka ‘void (*)(sigc::internal::slot_rep*)’} [-Werror=cast-function-type]
           (reinterpret_cast<call_type>(slot.rep_->call_))(slot.rep_);
                                                        ^
/usr/include/sigc++-2.0/sigc++/signal.h: In static member function ‘static sigc::internal::signal_emit0<void, sigc::nil>::result_type sigc::internal::signal_emit0<void, sigc::nil>::emit_reverse(sigc::internal::signal_impl*)’:
/usr/include/sigc++-2.0/sigc++/signal.h:825:55: error: cast between incompatible function types from ‘sigc::internal::hook’ {aka ‘void* (*)(void*)’} to ‘sigc::internal::signal_emit0<void, sigc::nil>::call_type’ {aka ‘void (*)(sigc::internal::slot_rep*)’} [-Werror=cast-function-type]
           (reinterpret_cast<call_type>(it->rep_->call_))(it->rep_);
                                                       ^
At global scope:
cc1plus: error: unrecognized command line option ‘-Wno-c++14-extensions’ [-Werror]
cc1plus: all warnings being treated as errors
make[3]: *** [Makefile:1065: Render.o] Error 1
make[3]: Leaving directory '/<<PKGBUILDDIR>>/src'
make[2]: *** [Makefile:419: all-recursive] Error 1
make[2]: Leaving directory '/<<PKGBUILDDIR>>'
make[1]: *** [Makefile:360: all] Error 2
make[1]: Leaving directory '/<<PKGBUILDDIR>>'
make: *** [debian/rules:36: build] Error 2
dpkg-buildpackage: error: debian/rules build-arch subprocess returned exit status 2

Bug is actually in libsigc++: libsigcplusplus/libsigcplusplus#1
But we can work around it for the time being.

Wrong count of backups in email report

Hi,
I've set up backup of five servers of mine and everything works pretty well, except email reports sent right after backup is complete. Count of backups on the device (the only device I have) is greater than actual count of backups. And the difference between actual and reported count grows with every run.

Here is my config:

[root@kronos ~]# cat /etc/rsbackup/config 
store /backup
device backup0
max-age 1
prune-age 31

host newton
  volume all /
    exclude /backup/*

host norris
  pre-backup-hook /etc/rsbackup/norris-dump-db.sh
  volume all /
    exclude /backup/*

host poseidon
  volume all /
    exclude /backup/*

host remus
  volume all /
    exclude /backup/*

host zeus
  pre-backup-hook /etc/rsbackup/zeus-dump-db-and-svn.sh
  volume all /
    exclude /backup/gefest/*

And here are last three reports I've received after rsbackup was run with cron:

Backup report (2013-08-03)
Summary

|   Host   | Volume |   Oldest   | Total |      Devices      |
|          |        |            |       |      backup0      |
|          |        |            |       |   Newest   | Count|
| newton   | all    | 2013-07-31 | 3     | 2013-08-03 | 5    |
| norris   | all    | 2013-08-01 | 3     | 2013-08-03 | 5    |
| poseidon | all    | 2013-07-31 | 3     | 2013-08-03 | 5    |
| remus    | all    | 2013-08-01 | 3     | 2013-08-03 | 5    |
| zeus     | all    | 2013-08-01 | 3     | 2013-08-03 | 5    |

Logfiles
Pruning logs
Generated Sat Aug 3 05:02:35 2013


Backup report (2013-08-04)
Summary

|   Host   | Volume |   Oldest   | Total |      Devices      |
|          |        |            |       |      backup0      |
|          |        |            |       |   Newest   | Count|
| newton   | all    | 2013-07-31 | 4     | 2013-08-04 | 7    |
| norris   | all    | 2013-08-01 | 4     | 2013-08-04 | 7    |
| poseidon | all    | 2013-07-31 | 4     | 2013-08-04 | 7    |
| remus    | all    | 2013-08-01 | 4     | 2013-08-04 | 7    |
| zeus     | all    | 2013-08-01 | 4     | 2013-08-04 | 7    |

Logfiles
Pruning logs
Generated Sun Aug 4 04:53:08 2013


Backup report (2013-08-05)
Summary

|   Host   | Volume |   Oldest   | Total |      Devices      |
|          |        |            |       |      backup0      |
|          |        |            |       |   Newest   | Count|
| newton   | all    | 2013-07-31 | 5     | 2013-08-05 | 9    |
| norris   | all    | 2013-08-01 | 5     | 2013-08-05 | 9    |
| poseidon | all    | 2013-07-31 | 5     | 2013-08-05 | 9    |
| remus    | all    | 2013-08-01 | 5     | 2013-08-05 | 9    |
| zeus     | all    | 2013-08-01 | 5     | 2013-08-05 | 9    |

Logfiles
Pruning logs
Generated Mon Aug 5 07:07:32 2013

Table formatting of reports was partially lost, but it can clearly be seen that the last column, the count of backups on the backup0 device, differs from the total count, and the difference grows with every run.

This inconsistency only happens when rsbackup sends reports right after the backups are made. It generates a correct report when run from the command line.

This is what rsbackup reports on the command line:

[root@kronos ~]# rsbackup --text -
==== Backup report (2013-08-05) ====

=== Summary ===

|   Host   | Volume |   Oldest   | Total |      Devices      |
|          |        |            |       |      backup0      |
|          |        |            |       |   Newest   | Count|
| newton   | all    | 2013-07-31 | 5     | 2013-08-05 | 5    |
| norris   | all    | 2013-08-01 | 5     | 2013-08-05 | 5    |
| poseidon | all    | 2013-07-31 | 5     | 2013-08-05 | 5    |
| remus    | all    | 2013-08-01 | 5     | 2013-08-05 | 5    |
| zeus     | all    | 2013-08-01 | 5     | 2013-08-05 | 5    |

=== Logfiles ===

== Pruning logs ==

Generated Mon Aug 5 10:08:28 2013

And it seems to be correct:

[root@kronos ~]# find /backup -type d -maxdepth 3 -mindepth 3
/backup/remus/all/2013-08-01
/backup/remus/all/2013-08-02
/backup/remus/all/2013-08-03
/backup/remus/all/2013-08-04
/backup/remus/all/2013-08-05
/backup/norris/all/2013-08-03
/backup/norris/all/2013-08-01
/backup/norris/all/2013-08-02
/backup/norris/all/2013-08-04
/backup/norris/all/2013-08-05
/backup/newton/all/2013-07-31
/backup/newton/all/2013-08-02
/backup/newton/all/2013-08-03
/backup/newton/all/2013-08-04
/backup/newton/all/2013-08-05
/backup/poseidon/all/2013-07-31
/backup/poseidon/all/2013-08-02
/backup/poseidon/all/2013-08-03
/backup/poseidon/all/2013-08-04
/backup/poseidon/all/2013-08-05
/backup/zeus/all/2013-08-01
/backup/zeus/all/2013-08-02
/backup/zeus/all/2013-08-03
/backup/zeus/all/2013-08-04
/backup/zeus/all/2013-08-05

Here is the crontab:

[root@kronos ~]# crontab -l
PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin 
0   2   *   *   *   /usr/local/bin/rsbackup --backup --prune --email [email protected]

And here is the log directory, just in case:

[root@kronos ~]# ls -l /var/log/backup 
total 100
-rw-r--r--  1 root  wheel    19 Jul 31 22:41 2013-07-31-backup0-newton-all.log
-rw-r--r--  1 root  wheel    19 Aug  1 06:19 2013-07-31-backup0-poseidon-all.log
-rw-r--r--  1 root  wheel    19 Aug  1 17:14 2013-08-01-backup0-norris-all.log
-rw-r--r--  1 root  wheel    19 Aug  1 06:55 2013-08-01-backup0-remus-all.log
-rw-r--r--  1 root  wheel    19 Aug  1 08:50 2013-08-01-backup0-zeus-all.log
-rw-r--r--  1 root  wheel    19 Aug  2 12:51 2013-08-02-backup0-newton-all.log
-rw-r--r--  1 root  wheel    19 Aug  2 12:59 2013-08-02-backup0-norris-all.log
-rw-r--r--  1 root  wheel    19 Aug  2 13:47 2013-08-02-backup0-poseidon-all.log
-rw-r--r--  1 root  wheel  1139 Aug  2 14:11 2013-08-02-backup0-remus-all.log
-rw-r--r--  1 root  wheel    19 Aug  2 15:47 2013-08-02-backup0-zeus-all.log
-rw-r--r--  1 root  wheel    19 Aug  3 02:30 2013-08-03-backup0-newton-all.log
-rw-r--r--  1 root  wheel    19 Aug  3 02:38 2013-08-03-backup0-norris-all.log
-rw-r--r--  1 root  wheel    19 Aug  3 03:18 2013-08-03-backup0-poseidon-all.log
-rw-r--r--  1 root  wheel    19 Aug  3 03:42 2013-08-03-backup0-remus-all.log
-rw-r--r--  1 root  wheel    19 Aug  3 05:02 2013-08-03-backup0-zeus-all.log
-rw-r--r--  1 root  wheel    19 Aug  4 02:22 2013-08-04-backup0-newton-all.log
-rw-r--r--  1 root  wheel    19 Aug  4 02:30 2013-08-04-backup0-norris-all.log
-rw-r--r--  1 root  wheel    19 Aug  4 03:05 2013-08-04-backup0-poseidon-all.log
-rw-r--r--  1 root  wheel    19 Aug  4 03:32 2013-08-04-backup0-remus-all.log
-rw-r--r--  1 root  wheel    19 Aug  4 04:53 2013-08-04-backup0-zeus-all.log
-rw-r--r--  1 root  wheel    19 Aug  5 04:04 2013-08-05-backup0-newton-all.log
-rw-r--r--  1 root  wheel    19 Aug  5 04:23 2013-08-05-backup0-norris-all.log
-rw-r--r--  1 root  wheel    19 Aug  5 05:12 2013-08-05-backup0-poseidon-all.log
-rw-r--r--  1 root  wheel    19 Aug  5 05:48 2013-08-05-backup0-remus-all.log
-rw-r--r--  1 root  wheel    19 Aug  5 07:07 2013-08-05-backup0-zeus-all.log

Concurrent pruning

When backups on multiple devices are to be pruned, it should be possible (and probably, on by default) to run the removal concurrently.

hooks for mounting / unmounting backup devices

Hi,

It would be nice if rsbackup were configurable to run a command (e.g. rsbackup-mount) to mount and unmount backup devices; the current pre-backup-hook infrastructure can't do this, as identifyDevices() is called before the pre-backup hook is run.

feature request: implement "retire host but don't delete its existing backups"

Currently rsbackup --retire host always deletes the existing backups for that host from the mounted backup volume. The manpage suggests a recipe for doing this by manually messing with the backups.db. It would be nice to implement this as an option to the rsbackup program so it's available to users who aren't comfortable with SQL...

More efficient backup log

$ cat * | wc -c
282855
$ du -ks .
4100    .

i.e. more than 10 times as much space is being used as is really needed. The logs could be combined into one file, or just a few files (e.g. one per host or one per volume).

There is an argument that old-style logs should still be written for a while even when a new-style combined format is written and read, so that after a partial restore of a backup host that reverts an upgrade but does not revert the backup log, backups can continue to work properly without replaying the upgrade. However, pruning cannot work in such a downgrade situation - the new-style data will become inconsistent with the old-style data. Therefore no attempt will be made to support downgrades.

Concurrent backups of remote hosts

When backing up remote hosts it seems that rsbackup is rarely if ever I/O bound on the target device; would it be possible to have a configuration option to run backups on multiple hosts at once?

rsbackup should allow all valid DNS hostnames as host names

I have a machine I'm backing up whose hostname happens to contain hyphens ("cam-vm-266"). I did the obvious thing and used the hostname as the 'HOST' part of the host stanza:
root@e104462:~# cat /etc/rsbackup/hosts.d/cam-vm-266
host cam-vm-266
hostname cam-vm-266.mydomain.com
volume root /
exclude /tmp/*
exclude /var/lib/schroot/mount/*

and rsbackup complains:
root@e104462:~# rsbackup --backup --dry-run
ERROR: /etc/rsbackup/hosts.d/cam-vm-266:1: invalid host name

Removing all the hyphens from the "host" line placates it, but it would be nice if the allowed character sets lined up.
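For reference, a sketch of a check that accepts exactly the RFC 1123 hostname grammar, including interior hyphens (the regex is written from the RFC, not taken from rsbackup's source):

```shell
#!/bin/sh
# Each label is 1-63 characters, alphanumeric, with hyphens allowed only in
# the interior; labels are joined by dots.
valid_host_name() {
  printf '%s\n' "$1" | grep -Eq \
    '^([A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?\.)*[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?$'
}
```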

Backup hooks in dry-run mode

As the man page says:

Backup hooks are currently not executed in --dry-run mode but note that this
will be changed in the future and an RSBACKUP_ACT variable introduced, as
for access hooks.

'The future' will mean release 2.0 (on the grounds that 1.0 shouldn't contain any big surprises for existing users).

Specify order of hosts when backing up.

Would it be possible to have an option to specify the order in which hosts are backed up? Currently they appear to be backed up in alphabetical order of host; so this could be done by strategic selection of host names (but that would reduce the utility of the host name being the hostname). Some of my backed up hosts are outside of my local LAN and I would like to maximize the chance that they get backed up during my cheap "overnight" download period.

Man pages are being installed into wrong place on FreeBSD 9

make install puts man pages into /usr/local/share/man/man1 instead of /usr/share/man/man1 on FreeBSD, making them inaccessible.

[root@kronos ~/rsbackup-master]# make install
Making install in src
 ../config.aux/install-sh -c -d '/usr/local/bin'
  /usr/bin/install -c rsbackup '/usr/local/bin'
Making install in tests
Making install in doc
 ../config.aux/install-sh -c -d '/usr/local/share/man/man1'
 /usr/bin/install -c -m 644 rsbackup.1 rsbackup.cron.1 rsbackup-mount.1 rsbackup-snapshot-hook.1 '/usr/local/share/man/man1'
Making install in tools
 ../config.aux/install-sh -c -d '/usr/local/bin'
 /usr/bin/install -c rsbackup.cron rsbackup-mount rsbackup-snapshot-hook '/usr/local/bin'


[root@kronos ~/rsbackup-master]# man rsbackup
No manual entry for rsbackup


[root@kronos ~/rsbackup-master]# mv /usr/local/share/man/man1/rsbackup* /usr/share/man/man1


[root@kronos ~/rsbackup-master]# man rsbackup | head
rsbackup(1)                            rsbackup(1)



NAME
       rsbackup - rsync-based backup utility

SYNOPSIS
       rsbackup [OPTIONS] [--] [SELECTOR...]
       rsbackup --retire [OPTIONS] [--] [SELECTOR...]

check-mounted option

An easy alternative to check-file would be a check-mounted option that refused to backup a volume unless its root was a mount point.
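The check itself is cheap; a sketch assuming GNU stat (this is essentially the same test mountpoint(1) performs):

```shell
#!/bin/sh
# A directory is a mount point iff its device number differs from its
# parent's; "/" is special-cased since it is its own parent.
is_mount_point() {
  [ "$1" = / ] && return 0
  [ "$(stat -c %d "$1")" != "$(stat -c %d "$1/..")" ]
}
```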

Misleading 'not available' message

# rsbackup --verbose --backup --store /backup8
[..] 
WARNING: cannot backup araminta:root to backup5 - device not available
WARNING: cannot backup araminta:root to backup6 - device not available
WARNING: cannot backup araminta:root to backup7 - device not available
INFO: backup araminta:root to backup8

In fact backup7 is available, it has just been suppressed by the --store option.

Quieter handling of volumes that are sometimes unavailable

Background: http://www.chiark.greenend.org.uk/pipermail/sgo-software-discuss/2018/000513.html discusses this.

It can happen that a volume to be backed up is not mounted when rsbackup runs. check-mounted and check-file can be used to detect this condition, but in some configurations it would be convenient for the volume to be mounted automatically when required, without building policy into rsbackup itself.

pre-backup-hook can be used to do this. But it unavoidably logs on failure, which annoys the user if they already know about the condition (it may be routine) and read their cron mail.

The proposed change is to allow hook scripts to exit with a status indicating a soft failure. By analogy with mail delivery agents the exit status chosen is EX_TEMPFAIL (75).
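Under that proposal a hook could look like this sketch (mount point illustrative; EX_TEMPFAIL is the sysexits.h value 75):

```shell
#!/bin/sh
EX_TEMPFAIL=75

# Soft-fail when the volume is not mounted: rsbackup would skip the backup
# quietly instead of reporting an error.
require_mounted() {
  mountpoint -q "$1" || return $EX_TEMPFAIL
}

# A pre-backup-hook body might be just:
#   require_mounted /media/photos || exit $?
```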

Line length issues in HTML email

HTML email output may include an embedded image:

<p class=history><img class=history src="data:image/png;base64,iVBORw0KG...lFTkSuQmCC"></p>

In Exim this triggers the following diagnostic in mainlog:

mainlog.1:2018-12-25 17:55:02 1gbquz-0001CY-Tz DKIM: validation error: RSA_LONG_LINE
mainlog.1:2018-12-25 17:55:02 1gbquz-0001CY-Tz DKIM: Error while running this message through validation, disabling signature verification.

...and in paniclog:

2018-12-25 17:55:02 1gbquz-0001CY-Tz DKIM: signing failed: RSA_LONG_LINE

The message is delivered (in this case to GMail), it just doesn't have the DKIM signature added.

This seems like a bug in Exim, that it will accept a message that it cannot itself fully process, but maybe some more investigation is worthwhile.

Resolve deprecations from 4.0

In the changelog for 4.0 (March 2017) I advertised that:

  • the old colors directive is now deprecated and will produce a warning. In some future version it will be removed.
  • the old report-prune-logs directive is now deprecated and will produce a warning. In some future version it will be removed.

I plan to implement these changes in a release no earlier than March 2019.

Resolve deprecations from 3.0

In the changelog for 3.0 (December 2015) I advertised that:

  • the min-backups and prune-age directives are now deprecated in their current form and will produce a warning. In some future version they will be removed. Instead, use prune-parameter min-backups and prune-parameter prune-age.
  • the public, always-up, check-mounted and traverse directives now take an explicit boolean argument. Using them without an argument is now deprecated (but has not changed in meaning). In some future version the argument will become mandatory.

I plan to implement these changes in a release no earlier than December 2017.

Note that always-up will be deprecated by release 5.0 (and removed at some point in the distant future).

'include' should skip emacs recovery files

$ really strace -etrace=open rsbackup --dry-run --backup
open("/etc/ld.so.cache", O_RDONLY)      = 3
open("/lib/x86_64-linux-gnu/librt.so.1", O_RDONLY) = 3
open("/usr/lib/x86_64-linux-gnu/libstdc++.so.6", O_RDONLY) = 3
open("/lib/x86_64-linux-gnu/libm.so.6", O_RDONLY) = 3
open("/lib/x86_64-linux-gnu/libgcc_s.so.1", O_RDONLY) = 3
open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY) = 3
open("/lib/x86_64-linux-gnu/libpthread.so.0", O_RDONLY) = 3
open("/usr/lib/locale/locale-archive", O_RDONLY) = 3
open("/etc/rsbackup/config", O_RDONLY)  = 3
open("/etc/rsbackup/local", O_RDONLY)   = 4
open("/etc/rsbackup/hosts.d", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 4
open("/etc/rsbackup/hosts.d/#lyonesse#", O_RDONLY) = 4
open("/etc/rsbackup/hosts.d/araminta", O_RDONLY) = 4
open("/etc/rsbackup/hosts.d/ascolais", O_RDONLY) = 4
open("/etc/rsbackup/hosts.d/deodand", O_RDONLY) = 4
open("/etc/rsbackup/hosts.d/heceptor", O_RDONLY) = 4
open("/etc/rsbackup/hosts.d/iset", O_RDONLY) = 4
open("/etc/rsbackup/hosts.d/kakajou", O_RDONLY) = 4
open("/etc/rsbackup/hosts.d/lyonesse", O_RDONLY) = 4
ERROR: /etc/rsbackup/hosts.d/lyonesse:1: duplicate host

Files with a "#" at the start of the name should be skipped.
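A sketch of the skip rule; alongside auto-save files (#name#), it would arguably make sense to skip emacs lock files (.#name) and backup files (name~) too:

```shell
#!/bin/sh
# Return 0 (skip) for emacs auto-save, lock and backup files; 1 otherwise.
# Note the .* case skips dotfiles generally, a common convention for
# configuration directories.
skip_config_file() {
  case $1 in
    \#*|.*|*~) return 0 ;;
    *)         return 1 ;;
  esac
}
```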

Resolve deprecations from 5.0

In the changelog for 5.0 (February 2018) I advertised that:

  • the old always-up directive is now deprecated and will produce a warning. In some future version it will be removed.

I plan to implement this change in a release no earlier than February 2020.

could rsbackup have a mollyguard check that device-id is actually at the root of a filesystem?

Hi; this is a wishlist suggestion based on a user error I made this week bringing a new backup disk into service. Basically when I created the "device-id" file I forgot to actually mount the backup disk first. The result was that rsbackup happily started to back the machine's root disk up to a subdirectory of itself, until it ran out of disk space.

Would it be possible/sensible for rsbackup to complain if the directory with device-id isn't actually a filesystem root? (I guess you'd want an override for people doing odd things, but the 99% use case is presumably that it should be the fs root.)

/snap clashes with new 'snap' packaging format

rsbackup-snapshot-hook's default location for LVM snapshots, /snap, has been appropriated by Ubuntu 16.04 (at least) for this thing: http://snapcraft.io/

This might affect other distros too: http://snapcraft.io/docs/core/install

(I noticed because Ubuntu 14.04, which doesn't have this stuff, nevertheless got an update to 'sudo' with changelog entry "debian/sudoers: include /snap/bin in the secure_path (LP: #1595558)". I suppose this could possibly have unfortunate consequences.)

--retire removes lots of directories that don't exist

rsbackup --retire --verbose HOST:VOLUME showed that it invokes rm for backups going back years that have long since been deleted. This may reflect a database expiration issue rather than a bug in the retire code as such.
As far as I can tell this is harmless, but it has poor nonfunctionals.

Expose rsbackup locking pragma

Would it be possible to expose/specify rsbackup's locking mechanism -- I would like to have a convenient way to ensure rsbackup doesn't run concurrently with other high disk usage cronjobs.
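Even without rsbackup exposing its internal lock, an external lock shared by the cron jobs gets most of the way there; a sketch using flock(1) (the lock file path is an arbitrary choice):

```shell
#!/bin/sh
# Run a command under an exclusive lock; concurrent invocations block until
# the lock is free. flock creates the lock file if it does not exist.
with_io_lock() {
  flock /tmp/backup-io.lock "$@"
}

# e.g. in the crontab entry for each disk-heavy job:
#   with_io_lock rsbackup --backup
#   with_io_lock updatedb
```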

Automatically find mount points

Currently the operator has to specify where devices may be found using the store command.

Alternative 1:

Have rsbackup automatically check some or all mount points. store would still be required for some possible use cases but it would simplify management in the common configuration where all backup devices correspond directly to filesystems.

Modern desktop systems automatically mount whatever mass storage is attached to them, so with this feature a physically present attacker could 'steal' a backup by attaching a disk with the right device-id file. Some mitigation for this problem must be included if this approach is adopted.

Alternative 2:

Have the store directive take a glob pattern (or introduce a new directive with glob syntax). This allows much more fine-grained operator control of where stores are found, but still avoids the need to explicitly list them all.

Backup pruning should be smarter

Currently pruning involves removing the largest contiguous chunk of backups, starting at the oldest, which is consistent with the prune-age and min-backups constraints.

This doesn't make for very good use of storage space. An alternative policy would be to thin out backups non-contiguously. For instance the last week could keep daily backups, the rest of the last month could keep weekly backups and the rest of the last year could keep monthly backups.
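As a sketch of what such a thinning policy might compute — ages in days on stdin, ages to keep on stdout. This is purely illustrative; it is not an existing rsbackup interface, and the tier boundaries are the example ones from the paragraph above:

```shell
#!/bin/sh
# Illustrative thinning: keep every backup from the last week, one per
# week for the last month, one per month for the last year, none older.
thin() {
    sort -n | awk '
        $1 <= 7              { print; next }                  # daily tier
        $1 <= 30             { bucket = int($1 / 7) }         # weekly tier
        $1 > 30 && $1 <= 365 { bucket = 100 + int($1 / 30) }  # monthly tier
        $1 > 365             { next }                         # too old: prune
        bucket != last       { print; last = bucket }         # first in bucket
    '
}

# prints: 1 2 3 10 17 24 40 70 100 (one per line)
printf '%s\n' 1 2 3 10 12 17 24 40 45 70 100 400 | thin
```

Everything the filter does not print would be a candidate for pruning, still subject to min-backups-style constraints.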

More generally, an interface could be defined for operators to select completely arbitrary pruning policies.

'exec' pruning policy documentation confusing

       PRUNE_ONDEVICE
              The list of backups on the device, by age in days. This list
              excludes any that have already been scheduled for pruning, and
              includes the backup under consideration (i.e. the value of
              BACKUP_AGE will appear in this list).

BACKUP_AGE appears nowhere else in the man page.

Enable travis-ci builds

Currently disabled because Travis apparently only has hopelessly obsolete compilers.

Reform rsbackup cron arrangements

Currently rsbackup has cron scripts to arrange for backups of chosen hosts to happen daily, weekly, etc. It is controlled by a separate configuration from the main config file.

With the introduction of backup policies this configuration can be moved into the main configuration file. The part of rsbackup.cron which performs backups can be reduced to rsbackup --backup, invoked as frequently as the operator desires (hourly by default). --wait would not be used; if a backup is already in progress we just try again an hour later.

That leaves:

  • --prune. There is no reason this couldn't be done at the maximum frequency since there are already prune policies controlling what is pruned.
  • --prune-incomplete. My current thinking is that this would be invoked daily, with --wait.
  • --email. Again my current thinking is that this would be invoked daily, with --wait.

While on the subject of backup policies, there should probably be an option to override them and always back up the selected volumes.
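The arrangement above could be expressed as a crontab fragment roughly like this. It is a sketch of the proposal, not shipped configuration; the times and the report address are placeholders:

```
# m  h  dom mon dow  command
0    *  *   *   *    rsbackup --backup
0    *  *   *   *    rsbackup --prune
15   6  *   *   *    rsbackup --wait --prune-incomplete
30   6  *   *   *    rsbackup --wait --email admin@example.com
```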
