s3ql's Introduction

S3QL

S3QL is a file system that stores all its data online using storage services like Google Storage, Amazon S3, or OpenStack. S3QL effectively provides a virtual drive of dynamic, infinite capacity that can be accessed from any computer with internet access.

S3QL is a full-featured UNIX file system that is conceptually indistinguishable from a local file system like ext4. Furthermore, S3QL has additional features like compression, encryption, data de-duplication, immutable trees and snapshotting, which make it especially suitable for online backup and archival.

S3QL is designed to favor simplicity and elegance over performance and feature-creep. Care has been taken to make the source code as readable and serviceable as possible. Solid error detection and error handling have been included from the very first line, and S3QL comes with extensive automated test cases for all its components.

Features

  • Transparency. Conceptually, S3QL is indistinguishable from a local file system. For example, it supports hardlinks, symlinks, standard unix permissions, extended attributes and file sizes up to 2 TB.

  • Dynamic Size. The size of an S3QL file system grows and shrinks dynamically as required.

  • Compression. Before storage, all data may be compressed with the LZMA, bzip2 or deflate (gzip) algorithm.

  • Encryption. After compression (but before upload), all data can be AES encrypted with a 256-bit key. An additional SHA-256 HMAC checksum is used to protect the data against manipulation.

  • Data De-duplication. If several files have identical contents, the redundant data will be stored only once. This works across all files stored in the file system, and also if only some parts of the files are identical while other parts differ.

  • Immutable Trees. Directory trees can be made immutable, so that their contents can no longer be changed in any way whatsoever. This can be used to ensure that backups cannot be modified after they have been made.

  • Copy-on-write snapshots. S3QL can replicate entire directory trees without using any additional storage space. Only when one of the copies is modified does the modified part of the data take up additional storage space. This can be used to create intelligent snapshots that preserve the state of a directory at different points in time using a minimum amount of space (an example combining this with immutable trees follows this list).

  • Performance independent of network latency. All operations that do not write or read file contents (like creating directories or moving, renaming, and changing permissions of files and directories) are very fast because they are carried out without any network transactions.

    S3QL achieves this by saving the entire file and directory structure in a database. This database is locally cached and the remote copy updated asynchronously.

  • Support for low bandwidth connections. S3QL splits file contents into smaller blocks and caches blocks locally. This minimizes both the number of network transactions required for reading and writing data, and the amount of data that has to be transferred when only parts of a file are read or written.
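
For example, a daily backup workflow can combine the snapshotting and immutability features using the s3qlcp and s3qllock commands that ship with S3QL (the paths are illustrative):

s3qlcp /mnt/s3ql/backup-2019-02-12 /mnt/s3ql/backup-2019-02-13
s3qllock /mnt/s3ql/backup-2019-02-12

Here s3qlcp duplicates the previous backup without consuming additional storage; the new copy can then be updated in place (e.g. with rsync) while the old one is frozen against modification.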

Development Status

S3QL is considered stable and suitable for production use. Starting with version 2.17.1, S3QL uses semantic versioning. This means that backwards-incompatible versions (e.g., versions that require an upgrade of the file system revision) will be reflected in an increase of the major version number.

Supported Platforms

S3QL is developed and tested under Linux. Users have also reported running S3QL successfully on macOS, FreeBSD and NetBSD. We try to maintain compatibility with these systems, but (due to a lack of pre-release testers) we cannot guarantee that every release will run on all non-Linux systems. Please report any bugs you find, and we will try to fix them.

Typical Usage

Before a file system can be mounted, the backend which will hold the data has to be initialized. This is done with the mkfs.s3ql command. Here we are using the Amazon S3 backend, and nikratio-s3ql-bucket is the S3 bucket in which the file system will be stored.

mkfs.s3ql s3://ap-south-1/nikratio-s3ql-bucket

To mount the S3QL file system stored in the S3 bucket nikratio-s3ql-bucket in the directory /mnt/s3ql, enter:

mount.s3ql s3://ap-south-1/nikratio-s3ql-bucket /mnt/s3ql

Now you can instruct your favorite backup program to run a backup into the directory /mnt/s3ql and the data will be stored on Amazon S3. When you are done, the file system has to be unmounted with:

umount.s3ql /mnt/s3ql
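
Credentials and the file system passphrase do not have to be entered interactively; they can be stored in an authentication file (by default ~/.s3ql/authinfo2, or the file given with --authfile). A minimal sketch for the bucket above; consult the S3QL documentation for the exact key names supported by your version:

[s3]
storage-url: s3://
backend-login: <access key id>
backend-password: <secret access key>
fs-passphrase: <passphrase>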

Need Help?

Please report any bugs you encounter in the GitHub Issue Tracker.

Contributing

The S3QL source code is available on GitHub.

s3ql's People

Contributors

amvoegeli, andrewchambers, aureliolo, colakong, cschlick, d--j, erickbrowngoto, greemo, halsbox, hwertz, iphydf, jlippuner, lorentzkim, mkhon, nand2, nikratio, paulharris, powerpaul17, r0ps3c, ravenpride, redneb, rgammans, stjosh, szepeviktor, tkrill, unshare, uplink03, vthriller, xeji, xlotlu

s3ql's Issues

SSLError when writing files to s3c with self-signed cert (2.26)

I'm testing s3ql using minio as the back-end. I have HTTPS enabled, using a custom cert, and have provided the CA with --backend-options ssl-ca-path=.... The OS is Ubuntu with s3ql v2.26. I'm able to mount it and write several files, but eventually I get the error below, usually on larger files (80 MB or so):

2018-10-22 18:39:53.806 3832:MainThread s3ql.metadata.download_metadata: Downloading and decompressing metadata...
2018-10-22 18:39:53.812 3832:MainThread s3ql.metadata.download_metadata: Reading metadata...
2018-10-22 18:39:53.814 3832:MainThread s3ql.metadata.restore_metadata: ..objects..
2018-10-22 18:39:53.816 3832:MainThread s3ql.metadata.restore_metadata: ..blocks..
2018-10-22 18:39:53.817 3832:MainThread s3ql.metadata.restore_metadata: ..inodes..
2018-10-22 18:39:53.818 3832:MainThread s3ql.metadata.restore_metadata: ..inode_blocks..
2018-10-22 18:39:53.819 3832:MainThread s3ql.metadata.restore_metadata: ..symlink_targets..
2018-10-22 18:39:53.819 3832:MainThread s3ql.metadata.restore_metadata: ..names..
2018-10-22 18:39:53.820 3832:MainThread s3ql.metadata.restore_metadata: ..contents..
2018-10-22 18:39:53.821 3832:MainThread s3ql.metadata.restore_metadata: ..ext_attributes..
2018-10-22 18:39:53.825 3832:MainThread s3ql.mount.main: Mounting s3c://192.168.0.5:9000/vbr6/ at /home/vbr6/Documents/s3ql...
2018-10-22 18:39:53.835 3837:MainThread s3ql.daemonize.detach_process_context: Daemonizing, new PID is 3838
2018-10-22 18:41:43.015 3838:Thread-4 root.excepthook: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/lib/s3ql/s3ql/mount.py", line 64, in run_with_except_hook
    run_old(*args, **kw)
  File "/usr/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/s3ql/s3ql/block_cache.py", line 711, in _removal_loop
    backend.delete_multi(['s3ql_data_%d' % i for i in ids])
  File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 251, in delete_multi
    return self.backend.delete_multi(keys, force=force)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 476, in delete_multi
    self.delete(key, force=force)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 219, in delete
    resp = self._do_request('DELETE', '/%s%s' % (self.prefix, key))
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 476, in _do_request
    query_string=query_string, body=body)
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 710, in _send_request
    self.conn.send_request(method, path, body=body, headers=headers)
  File "/usr/lib/python3/dist-packages/dugong/__init__.py", line 569, in send_request
    self.timeout)
  File "/usr/lib/python3/dist-packages/dugong/__init__.py", line 1495, in eval_coroutine
    if not next(crt).poll(timeout=timeout):
  File "/usr/lib/python3/dist-packages/dugong/__init__.py", line 596, in co_send_request
    self.connect()
  File "/usr/lib/python3/dist-packages/dugong/__init__.py", line 502, in connect
    self._sock = self.ssl_context.wrap_socket(self._sock, server_hostname=server_hostname)
  File "/usr/lib/python3.6/ssl.py", line 407, in wrap_socket
    _context=self, _session=session)
  File "/usr/lib/python3.6/ssl.py", line 814, in __init__
    self.do_handshake()
  File "/usr/lib/python3.6/ssl.py", line 1068, in do_handshake
    self._sslobj.do_handshake()
  File "/usr/lib/python3.6/ssl.py", line 689, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:841)

This exact setup was working perfectly fine in an older s3ql version; I was reading/writing hundreds of files on it, including a few multi-GB files. I believe it was 2.18, but I can't be sure.

s3ql mount with encryption ignoring google storage prefix

mount.s3ql --fg --log none \
 --allow-other --authfile /state/s3ql/authinfo \
 --metadata-upload-interval 1800 \
 --cachedir /state/s3ql/cache "gs://$bucket/s3ql" /s3qlmnt/

Where authinfo is:

[gs]
storage-url: gs://
fs-passphrase: $s3qlkey
...
  File "/nix/store/x4d6w08pzxc2mn9pmhcrnilnmmkr5f16-s3ql-2.28/lib/python3.6/site-packages/s3ql/mount.py", line 125, in main
    backend_factory = get_backend_factory(options)
  File "/nix/store/x4d6w08pzxc2mn9pmhcrnilnmmkr5f16-s3ql-2.28/lib/python3.6/site-packages/s3ql/common.py", line 318, in get_backend_factory
    tmp_backend.fetch('s3ql_metadata')
  File "/nix/store/x4d6w08pzxc2mn9pmhcrnilnmmkr5f16-s3ql-2.28/lib/python3.6/site-packages/s3ql/backends/common.py", line 354, in fetch
    return self.perform_read(do_read, key)
  File "/nix/store/x4d6w08pzxc2mn9pmhcrnilnmmkr5f16-s3ql-2.28/lib/python3.6/site-packages/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/nix/store/x4d6w08pzxc2mn9pmhcrnilnmmkr5f16-s3ql-2.28/lib/python3.6/site-packages/s3ql/backends/common.py", line 317, in perform_read
    fh = self.open_read(key)
  File "/nix/store/x4d6w08pzxc2mn9pmhcrnilnmmkr5f16-s3ql-2.28/lib/python3.6/site-packages/s3ql/backends/comprenc.py", line 174, in open_read
    fh = self.backend.open_read(key)
  File "/nix/store/x4d6w08pzxc2mn9pmhcrnilnmmkr5f16-s3ql-2.28/lib/python3.6/site-packages/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/nix/store/x4d6w08pzxc2mn9pmhcrnilnmmkr5f16-s3ql-2.28/lib/python3.6/site-packages/s3ql/backends/s3c.py", line 344, in open_read
    raise NoSuchObject(key)
s3ql.backends.common.NoSuchObject: Backend does not have anything stored under key 's3ql_metadata'

When inspecting the bucket, the objects are clearly stored under the /s3ql prefix.

It seems like the prefix is not being used in this case.

The version is about 50 commits behind master with some of my own patches, but I don't think they are related. The relevant line: https://github.com/andrewchambers/s3ql/blob/d6503c005a44a06246f30b5eed54971a46a8eb4d/src/s3ql/common.py#L318

Race condition in t5_cache.py

There is a race condition in t5_cache.py:

Traceback (most recent call last):
  File "/usr/src/s3ql-2.32/tests/t5_cache.py", line 155, in test_cache_flush_unclean
    args=['--force-remote'])
  File "/usr/src/s3ql-2.32/tests/t4_fuse.py", line 128, in fsck
    assert proc.wait() == expect_retcode
AssertionError: assert 128 == 0
 +  where 128 = <bound method Popen.wait of <subprocess.Popen object at 0x7f4d46164550>>()
 +    where <bound method Popen.wait of <subprocess.Popen object at 0x7f4d46164550>> = <subprocess.Popen object at 0x7f4d46164550>.wait

This patch "fixes" it:

diff --git a/tests/t5_cache.py b/tests/t5_cache.py
--- a/tests/t5_cache.py
+++ b/tests/t5_cache.py
@@ -19,6 +19,7 @@
 import shutil
 import tempfile
 import subprocess
+import time
 from os.path import join as pjoin

 with open(__file__, 'rb') as fh:
@@ -137,6 +138,7 @@
         # Kill mount
         self.flush_cache()
         self.upload_meta()
+        time.sleep(1)
         self.mount_process.kill()
         self.mount_process.wait()
         self.umount_fuse()

Get rid of retry_iterator

Following the example of the new GS backend, we should retire retry_iterator completely. It adds needless code complexity.

s3ql_verify should check hashes

s3ql_verify should check if the hash of an object matches the hash that is stored in the database.

Currently, we detect corruption by checking against the object's own metadata, which protects against accidental or deliberate corruption by anyone who does not have the file system passphrase.

However, there is an additional failure mode if the file system is accidentally mounted simultaneously on multiple systems. In this case, the object may be self consistent, but not match the data in the database.
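
A minimal sketch of such a check in Python (the table/column names, the db helper, and the fetch() return value are assumptions for illustration, not the actual S3QL schema):

import hashlib

def verify_object_hash(backend, db, obj_id):
    # Hypothetical schema: the objects table stores a SHA-256 digest per object.
    stored = db.get_val('SELECT hash FROM objects WHERE id=?', (obj_id,))
    # fetch() is assumed to return the decrypted, decompressed object contents.
    data = backend.fetch('s3ql_data_%d' % obj_id)
    if hashlib.sha256(data).digest() != stored:
        raise RuntimeError('object %d does not match hash stored in database' % obj_id)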

Only wait for cache expiry if cache would grow

If there is no space available in the cache, currently all read() and write() requests block until some space has been freed. However, there is no reason why requests that don't change the amount of data in the cache should block: read requests for objects that are already in the cache should not need to wait, and neither should write requests that overwrite data without extending the file.
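
A sketch of the intended behavior with a toy cache (names and structure are illustrative, not the real block_cache.py): only requests that would grow the cache wait on the expiry thread.

import threading

class ToyCache:
    def __init__(self, max_size):
        self.max_size = max_size
        self.used = 0
        self.entries = {}
        self.cond = threading.Condition()

    def _reserve(self, nbytes):
        # Block until nbytes of additional cache space is available.
        with self.cond:
            while self.used + nbytes > self.max_size:
                self.cond.wait()  # woken by the expiry thread freeing space
            self.used += nbytes

    def read(self, key):
        # Already cached: no growth, so never wait for expiry.
        return self.entries[key]

    def write(self, key, data):
        growth = len(data) - len(self.entries.get(key, b''))
        if growth > 0:
            self._reserve(growth)  # only grow-requests may block
        else:
            with self.cond:
                self.used += growth  # overwrites that shrink the entry free space
                self.cond.notify_all()
        self.entries[key] = data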

TypeError in contrib/benchmark.py

I got the following error:

Test file size: 500.00 MiB
compressing with lzma-6...
ERROR: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/share/doc/s3ql/contrib/benchmark.py", line 225, in <module>
    main(sys.argv[1:])
  File "/usr/share/doc/s3ql/contrib/benchmark.py", line 163, in main
    backend = ComprencBackend(b'pass', (alg, 6), Backend('local://' + backend_dir, None, None))
TypeError: __init__() takes 2 positional arguments but 4 were given

Maybe benchmark.py needs fixing.

The version used is 2.28.

regards,
bitwave

backend-options cannot be specified in authfile

Hi,

is it possible that the tcp-timeout option is ignored with the s3c backend? Regardless of what I set, the error is the same, e.g.:

fsck.s3ql --backend-options tcp-timeout=2000 .....

ERROR: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/bin/fsck.s3ql", line 11, in <module>
    load_entry_point('s3ql==2.33', 'console_scripts', 'fsck.s3ql')()
  File "/usr/lib/python3.6/site-packages/s3ql-2.33-py3.6-linux-x86_64.egg/s3ql/fsck.py", line 1133, in main
    backend = get_backend(options)
  File "/usr/lib/python3.6/site-packages/s3ql-2.33-py3.6-linux-x86_64.egg/s3ql/common.py", line 248, in get_backend
    return get_backend_factory(options)()
  File "/usr/lib/python3.6/site-packages/s3ql-2.33-py3.6-linux-x86_64.egg/s3ql/common.py", line 260, in get_backend_factory
    backend = options.backend_class(options)
  File "/usr/lib/python3.6/site-packages/s3ql-2.33-py3.6-linux-x86_64.egg/s3ql/backends/s3c.py", line 81, in __init__
    self.conn = self._get_conn()
  File "/usr/lib/python3.6/site-packages/s3ql-2.33-py3.6-linux-x86_64.egg/s3ql/backends/s3c.py", line 131, in _get_conn
    conn.timeout = int(self.options.get('tcp-timeout', 20))
AttributeError: 'str' object has no attribute 'get'

S3QL 2.33
compiled under Alpine
Python 3.6.6

Make s3qlrm more resilient

I think there may be issues when we create (or modify) data in a directory while it is being removed by s3qlrm. We need to look into that.

StateError when running s3ql_verify

When running s3ql_verify with --data, sooner or later I always get a crash with a dugong.StateError:

2019-02-13 19:15:27.545 20664:MainThread root.excepthook: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/bin/s3ql_verify", line 11, in <module>
    load_entry_point('s3ql==2.33', 'console_scripts', 's3ql_verify')()
  File "/usr/lib/s3ql/s3ql/verify.py", line 100, in main
    full=options.data, offset=options.start_with)
  File "/usr/lib/s3ql/s3ql/verify.py", line 148, in retrieve_objects
    t.join_and_raise()
  File "/usr/lib/s3ql/s3ql/common.py", line 395, in join_and_raise
    raise EmbeddedException(exc_info, self.name)
s3ql.common.EmbeddedException: caused by an exception in thread Thread-1.
Original/inner traceback (most recent call last): 
Traceback (most recent call last):
  File "/usr/lib/s3ql/s3ql/common.py", line 373, in run
    self.run_protected()
  File "/usr/lib/s3ql/s3ql/common.py", line 425, in run_protected
    self.target(*self.args, **self.kwargs)
  File "/usr/lib/s3ql/s3ql/verify.py", line 206, in _retrieve_loop
    backend.perform_read(do_read, key)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 324, in perform_read
    fh = self.open_read(key)
  File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 179, in open_read
    fh = self.backend.open_read(key)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/gs.py", line 391, in open_read
    gs_meta = self._get_gs_meta(key)
  File "/usr/lib/s3ql/s3ql/backends/gs.py", line 374, in _get_gs_meta
    resp = self._do_request('GET', path)
  File "/usr/lib/s3ql/s3ql/backends/gs.py", line 706, in _do_request
    resp = self.conn.read_response()
  File "/usr/lib/python3/dist-packages/dugong/__init__.py", line 765, in read_response
    return eval_coroutine(self.co_read_response(), self.timeout)
  File "/usr/lib/python3/dist-packages/dugong/__init__.py", line 1496, in eval_coroutine
    if not next(crt).poll(timeout=timeout):
  File "/usr/lib/python3/dist-packages/dugong/__init__.py", line 781, in co_read_response
    raise StateError('Previous response not read completely')
dugong.StateError: Previous response not read completely

Dugong debug logs say:

2019-02-13 19:15:26.589 20664:Thread-1 dugong._co_read_chunked: chunk complete
2019-02-13 19:15:26.589 20664:Thread-1 dugong._co_read_header: start
2019-02-13 19:15:26.589 20664:Thread-1 dugong._co_read_header: done (empty header)
2019-02-13 19:15:26.589 20664:Thread-1 dugong._co_read_chunked: done
2019-02-13 19:15:26.589 20664:Thread-1 dugong.co_readall: got 0 bytes
2019-02-13 19:15:26.589 20664:Thread-1 dugong.co_readall: done (273 bytes)
2019-02-13 19:15:26.590 20664:Thread-1 dugong.disconnect: start
2019-02-13 19:15:26.590 20664:Thread-1 dugong.eval_coroutine: polling
2019-02-13 19:15:26.590 20664:Thread-1 dugong.co_send_request: start
2019-02-13 19:15:26.590 20664:Thread-1 dugong.co_send_request: sending GET /storage/v1/b/nikratio-archive/o/s3ql_data_28955
2019-02-13 19:15:26.590 20664:Thread-1 dugong._co_send: trying to send 316 bytes
2019-02-13 19:15:26.590 20664:Thread-1 dugong._co_send: sent 316 bytes
2019-02-13 19:15:26.603 20664:Thread-1 dugong._co_send: done
2019-02-13 19:15:26.603 20664:Thread-1 dugong.eval_coroutine: polling
2019-02-13 19:15:26.603 20664:Thread-1 dugong.co_read_response: start
2019-02-13 19:15:26.603 20664:Thread-1 dugong.disconnect: start
2019-02-13 19:15:26.603 20664:Thread-1 s3ql.common.run: Thread Thread-1 terminated with exception
Traceback (most recent call last):
  File "/usr/lib/s3ql/s3ql/common.py", line 373, in run
    self.run_protected()
  File "/usr/lib/s3ql/s3ql/common.py", line 425, in run_protected
    self.target(*self.args, **self.kwargs)
  File "/usr/lib/s3ql/s3ql/verify.py", line 206, in _retrieve_loop
    backend.perform_read(do_read, key)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 324, in perform_read
    fh = self.open_read(key)
  File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 179, in open_read
    fh = self.backend.open_read(key)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/gs.py", line 391, in open_read
    gs_meta = self._get_gs_meta(key)
  File "/usr/lib/s3ql/s3ql/backends/gs.py", line 374, in _get_gs_meta
    resp = self._do_request('GET', path)
  File "/usr/lib/s3ql/s3ql/backends/gs.py", line 706, in _do_request
    resp = self.conn.read_response()
  File "/usr/lib/python3/dist-packages/dugong/__init__.py", line 765, in read_response
    return eval_coroutine(self.co_read_response(), self.timeout)
  File "/usr/lib/python3/dist-packages/dugong/__init__.py", line 1496, in eval_coroutine
    if not next(crt).poll(timeout=timeout):
  File "/usr/lib/python3/dist-packages/dugong/__init__.py", line 781, in co_read_response
    raise StateError('Previous response not read completely')
dugong.StateError: Previous response not read completely

Decouple object compression and upload

For small objects, the upload speed is dominated by network latency and can be improved significantly by using more concurrent connections. However, increasing the number of compression threads beyond the number of cores at best increases memory consumption without speeding up compression. Therefore, we should decouple these two things. For uploads, it probably makes more sense to use asynchronous I/O than multiple threads.
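
A sketch of the decoupling (illustrative only): a small, CPU-bound pool of compression workers feeding a larger set of latency-bound upload workers through a queue.

import queue
import threading
import zlib

compress_q = queue.Queue()  # raw blocks waiting for compression
upload_q = queue.Queue()    # compressed blocks waiting for upload

def compression_worker():
    # CPU-bound: run roughly one of these per core.
    while True:
        key, data = compress_q.get()
        upload_q.put((key, zlib.compress(data)))
        compress_q.task_done()

def upload_worker(backend):
    # Latency-bound: run many of these (or replace with asynchronous I/O).
    while True:
        key, buf = upload_q.get()
        backend.store(key, buf)  # hypothetical backend call
        upload_q.task_done()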

Mount error with Google Storage

I'm seeing the following error when trying to mount a GS-based s3ql file system. Debian 9.5 amd64, kernel 4.16.16-2~bpo9+1. The command I used is below. If you need additional info, please let me know.

# mount.s3ql --allow-other gs://xxx-backup-1/backup1/ /mnt/gs/

2018-10-27 18:07:35.718 30466:MainThread s3ql.mount.determine_threads: Using 4 upload threads.
2018-10-27 18:07:35.719 30466:MainThread s3ql.mount.main: Autodetected 1048532 file descriptors available for cache entries
2018-10-27 18:07:35.731 30466:MainThread s3ql.backends.gs._get_access_token: Requesting new access token
2018-10-27 18:07:36.597 30466:MainThread s3ql.mount.get_metadata: Using cached metadata.
2018-10-27 18:07:36.610 30466:MainThread s3ql.mount.main: Setting cache size to 3987 MB
2018-10-27 18:07:36.610 30466:MainThread s3ql.mount.main: Mounting gs://xxx-backup-1/backup1/ at /mnt/gs...
2018-10-27 18:07:36.643 30471:MainThread s3ql.daemonize.detach_process_context: Daemonizing, new PID is 30472
2018-10-27 18:07:36.827 30472:MainThread s3ql.mount.unmount: Unmounting file system...
2018-10-27 18:07:36.827 30472:MainThread root.excepthook: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/local/bin/mount.s3ql", line 11, in <module>
    load_entry_point('s3ql==2.31', 'console_scripts', 'mount.s3ql')()
  File "/usr/local/lib/python3.5/dist-packages/s3ql-2.31-py3.5-linux-x86_64.egg/s3ql/mount.py", line 210, in main
    sd_notify('READY=1')
  File "/usr/local/lib/python3.5/dist-packages/systemd/daemon.py", line 39, in notify
    raise TypeError("state must be an instance of Notification")
TypeError: state must be an instance of Notification

Any SSH server as a remote backend?

It would be very useful to add the ability to use an SSH connection to any server as a remote backend, similar to the way sshfs(1) does this. Unfortunately, sshfs(1) has much weaker caching, which breaks down (in terms of user experience) when there is any significant network latency. The caching and asynchronous nature of this application would make for a much better sshfs-like experience than sshfs(1) itself.

Option to automatically check fs when recommended

WARNING: Last file system check was more than 1 month ago, running fsck.s3ql is recommended.

It would be nice to have an option in fsck.s3ql that would actually check the fs in this case, instead of just printing this warning.

I run fsck.s3ql right before mount.s3ql

Thanks.

Separately track most recent read/write access

At the moment, a block that is written once and then read every few seconds will only be uploaded if the cache gets full. It would be nice to separately track read and write access, so that the block will get uploaded if no write access has happened for n seconds.
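
A sketch of what separate tracking could look like (hypothetical structure; the real cache entry class differs):

import time

class CacheEntry:
    def __init__(self, data=b''):
        self.data = data
        self.dirty = False
        self.last_read = self.last_write = time.monotonic()

    def read(self):
        self.last_read = time.monotonic()
        return self.data

    def write(self, data):
        self.data = data
        self.dirty = True
        self.last_write = time.monotonic()

def should_upload(entry, max_write_age=10.0):
    # Upload once the block has not been written for max_write_age seconds,
    # no matter how recently it was read.
    return entry.dirty and time.monotonic() - entry.last_write > max_write_age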

Installing 2.28 with Python 3.7

Just a comment: I was unable to get 2.28 to work using Python 3.7 without first running

python3 setup.py build_cython

I am using the tar from the Bitbucket site, on Amazon Linux 2.

How does s3ql interact with S3 Glacier?

I went up and down the Internet to find a definitive answer to this question and found this and that. But I would still like more information on the implications of using Glacier.

My plan is to use an S3 Lifecycle rule to transfer infrequently used files to Glacier to significantly reduce costs. I understand that s3ql access to Glacier-archived files is somewhat problematic. But does anybody know exactly how problematic it can be? Will the consistency of data be endangered, or will s3ql simply trigger warnings without crashing?
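
For reference, a lifecycle rule of the kind described might look like the sketch below (untested; the bucket name is a placeholder). It restricts the transition to the s3ql_data_ prefix, on the assumption that s3ql_metadata and the other bookkeeping objects must stay immediately readable for the file system to mount at all:

aws s3api put-bucket-lifecycle-configuration --bucket <bucket_name> \
  --lifecycle-configuration '{"Rules": [{"ID": "archive-s3ql-data",
    "Status": "Enabled", "Filter": {"Prefix": "s3ql_data_"},
    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}]}]}'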

Support concurrent read-only mounts

While an S3QL file system can be mounted only once at a time, it would be great if it could additionally be mounted read-only multiple times while it is mounted in write mode somewhere else.
With only one mount able to change the file system, plenty of others could read the data and see new changes almost immediately; "s3qlctrl" can be used to fetch the latest metadata where the file system is mounted read-only.
For me, a write-once/read-many scenario would be a great enhancement.
Thanks

guidance needed - 10 TB in a self-hosted distributed filesystem

Hi, I need some advice.

I store more than 10 TB of photos in S3QL on a self-hosted distributed file system. Technically it is LizardFS, so S3QL does not talk to it natively, but when mounted locally it's POSIX compliant.

I wonder what the best approach for the fastest backups is.
Option 1: I mount LizardFS locally and set up S3QL with local://.
Option 2: I mount LizardFS on one of my remote servers, expose it via minio, and then use S3QL with s3://.
Option 3: Like Option 2, but I run S3QL with local:// on my remote server and expose the S3QL file system via minio.

What are the main differences between the three approaches, speed-wise? We are talking home broadband of around 10 Mbps, so everything will be slow, but maybe one option will be less slow than the others.

I assume Option 3 will push the most data through, since deduplication happens remotely; something like rclone could help here, but I think it would still be the slowest.

I believe the choice is between local and S3/minio.

What are your opinions?

Reconsider consistency handling

Both Google Storage and Amazon S3 these days offer strong global consistency for writes of new objects. We should check if it's worth making this assumption for every backend in S3QL to simplify the code.

For data objects, we always create new objects and should thus be safe. For metadata, we could include the sequence number in the object name (and upload a dummy object as a dirty flag). It's not yet clear to me how we can do a race-free removal of old metadata objects though.

(I believe consistency guarantees for Swift and other S3-compatible services are typically either not documented at all, or strong and "global" simply by virtue of the deployment being small-scale, so it seems safe to base the decision on what Google and AWS provide.)

Can't do anything (verify, fsck, mount, etc.)

I upgraded the Ubuntu server from 16.04 to 18.04, and then s3ql stopped working. Each time I try to fsck or mount or even verify it, it produces tons of errors and then gets stuck until I send ^C to force quit.
fsck.log:
2018-06-26 16:09:54.654 21674:MainThread s3ql.backends.common.get_ssl_context: Reading default CA certificates.
2018-06-26 16:09:54.654 21674:MainThread s3ql.backends.swift._do_request: started with 'GET', '/', None, {'limit': 1}, None, None
2018-06-26 16:09:54.655 21674:MainThread s3ql.backends.swift._do_request: no active connection, calling _get_conn()
2018-06-26 16:09:54.655 21674:MainThread s3ql.backends.swiftks._get_conn: started
2018-06-26 16:09:55.103 21674:MainThread s3ql.backends.swift._detect_features: GET /info
2018-06-26 16:09:55.113 21674:MainThread s3ql.backends.swift._detect_features: Wrong server response.
500 Internal Error
Content-Length: 17
Content-Type: text/plain
X-Trans-Id: tx2c67a7ab594647c7aa827-005b31e6c3
Date: Tue, 26 Jun 2018 07:09:55 +0000

An error occurred
2018-06-26 16:09:55.114 21674:MainThread s3ql.backends.common.wrapped: Average retry rate: 0.02 Hz
2018-06-26 16:09:55.114 21674:MainThread s3ql.backends.common.wrapped: Encountered HTTPError (500 Internal Error), retrying Backend._get_conn (attempt 1)...
2018-06-26 16:09:55.136 21674:MainThread s3ql.backends.swiftks._get_conn: started
2018-06-26 16:09:55.328 21674:MainThread s3ql.backends.swift._detect_features: GET /info
2018-06-26 16:09:55.338 21674:MainThread s3ql.backends.swift._detect_features: Wrong server response.
500 Internal Error
Content-Length: 17
Content-Type: text/plain
X-Trans-Id: tx1b8b5bca0def4d8d8aab0-005b31e6c3
Date: Tue, 26 Jun 2018 07:09:55 +0000

An error occurred
2018-06-26 16:09:55.339 21674:MainThread s3ql.backends.common.wrapped: Average retry rate: 0.03 Hz
2018-06-26 16:09:55.339 21674:MainThread s3ql.backends.common.wrapped: Encountered HTTPError (500 Internal Error), retrying Backend._get_conn (attempt 2)...
2018-06-26 16:09:55.383 21674:MainThread s3ql.backends.swiftks._get_conn: started
2018-06-26 16:09:55.651 21674:MainThread s3ql.backends.swift._detect_features: GET /info
2018-06-26 16:09:55.660 21674:MainThread s3ql.backends.swift._detect_features: Wrong server response.
500 Internal Error
Content-Length: 17
Content-Type: text/plain
X-Trans-Id: tx2b4059b5df354536896da-005b31e6c3
Date: Tue, 26 Jun 2018 07:09:55 +0000

An error occurred
2018-06-26 16:09:55.660 21674:MainThread s3ql.backends.common.wrapped: Average retry rate: 0.05 Hz
2018-06-26 16:09:55.660 21674:MainThread s3ql.backends.common.wrapped: Encountered HTTPError (500 Internal Error), retrying Backend._get_conn (attempt 3)...
2018-06-26 16:09:55.755 21674:MainThread s3ql.backends.swiftks._get_conn: started
2018-06-26 16:09:55.986 21674:MainThread s3ql.backends.swift._detect_features: GET /info
2018-06-26 16:09:55.996 21674:MainThread s3ql.backends.swift._detect_features: Wrong server response.
500 Internal Error
Content-Length: 17
Content-Type: text/plain
X-Trans-Id: txbb149a9327a243b4babf4-005b31e6c3
Date: Tue, 26 Jun 2018 07:09:56 +0000

An error occurred
2018-06-26 16:09:55.996 21674:MainThread s3ql.backends.common.wrapped: Average retry rate: 0.07 Hz
2018-06-26 16:09:55.997 21674:MainThread s3ql.backends.common.wrapped: Encountered HTTPError (500 Internal Error), retrying Backend._get_conn (attempt 4)...
2018-06-26 16:09:56.177 21674:MainThread s3ql.backends.swiftks._get_conn: started
2018-06-26 16:09:56.360 21674:MainThread s3ql.backends.swift._detect_features: GET /info
2018-06-26 16:09:56.370 21674:MainThread s3ql.backends.swift._detect_features: Wrong server response.
500 Internal Error
Content-Length: 17
Content-Type: text/plain
X-Trans-Id: tx2ba107ee1a3b4107a1f59-005b31e6c4
Date: Tue, 26 Jun 2018 07:09:56 +0000

An error occurred
2018-06-26 16:09:56.371 21674:MainThread s3ql.backends.common.wrapped: Average retry rate: 0.08 Hz
2018-06-26 16:09:56.371 21674:MainThread s3ql.backends.common.wrapped: Encountered HTTPError (500 Internal Error), retrying Backend._get_conn (attempt 5)...
2018-06-26 16:09:56.753 21674:MainThread s3ql.backends.swiftks._get_conn: started
2018-06-26 16:09:56.966 21674:MainThread s3ql.backends.swift._detect_features: GET /info
2018-06-26 16:09:56.977 21674:MainThread s3ql.backends.swift._detect_features: Wrong server response.
500 Internal Error
Content-Length: 17
Content-Type: text/plain
X-Trans-Id: tx2c6c0ccc9e4841f2bdf97-005b31e6c4
Date: Tue, 26 Jun 2018 07:09:56 +0000

An error occurred
2018-06-26 16:09:56.977 21674:MainThread s3ql.backends.common.wrapped: Average retry rate: 0.10 Hz
2018-06-26 16:09:56.977 21674:MainThread s3ql.backends.common.wrapped: Encountered HTTPError (500 Internal Error), retrying Backend._get_conn (attempt 6)...
2018-06-26 16:09:57.666 21674:MainThread s3ql.backends.swiftks._get_conn: started
2018-06-26 16:09:57.907 21674:MainThread s3ql.backends.swift._detect_features: GET /info
2018-06-26 16:09:57.918 21674:MainThread s3ql.backends.swift._detect_features: Wrong server response.
500 Internal Error
Content-Length: 17
Content-Type: text/plain
X-Trans-Id: tx8e45843ed25942369ecc5-005b31e6c5
Date: Tue, 26 Jun 2018 07:09:57 +0000

An error occurred
2018-06-26 16:09:57.918 21674:MainThread s3ql.backends.common.wrapped: Average retry rate: 0.12 Hz
2018-06-26 16:09:57.918 21674:MainThread s3ql.backends.common.wrapped: Encountered HTTPError (500 Internal Error), retrying Backend._get_conn (attempt 7)...
2018-06-26 16:09:59.640 21674:MainThread s3ql.backends.swiftks._get_conn: started
2018-06-26 16:09:59.832 21674:MainThread s3ql.backends.swift._detect_features: GET /info
2018-06-26 16:09:59.841 21674:MainThread s3ql.backends.swift._detect_features: Wrong server response.
500 Internal Error
Content-Length: 17
Content-Type: text/plain
X-Trans-Id: tx43d4f404491a4bcaa6d6d-005b31e6c7
Date: Tue, 26 Jun 2018 07:09:59 +0000

An error occurred
2018-06-26 16:09:59.841 21674:MainThread s3ql.backends.common.wrapped: Average retry rate: 0.13 Hz
2018-06-26 16:09:59.841 21674:MainThread s3ql.backends.common.wrapped: Encountered HTTPError (500 Internal Error), retrying Backend._get_conn (attempt 8)...
2018-06-26 16:10:03.327 21674:MainThread s3ql.backends.swiftks._get_conn: started
2018-06-26 16:10:03.755 21674:MainThread s3ql.backends.swift._detect_features: GET /info
2018-06-26 16:10:03.766 21674:MainThread s3ql.backends.swift._detect_features: Wrong server response.
500 Internal Error
Content-Length: 17
Content-Type: text/plain
X-Trans-Id: tx4001e16652a14ec6a97ae-005b31e6cb
Date: Tue, 26 Jun 2018 07:10:03 +0000

An error occurred
2018-06-26 16:10:03.767 21674:MainThread s3ql.backends.common.wrapped: Average retry rate: 0.15 Hz
2018-06-26 16:10:03.767 21674:MainThread s3ql.backends.common.wrapped: Encountered HTTPError (500 Internal Error), retrying Backend._get_conn (attempt 9)...
2018-06-26 16:10:09.623 21674:MainThread s3ql.backends.swiftks._get_conn: started
2018-06-26 16:10:09.881 21674:MainThread s3ql.backends.swift._detect_features: GET /info
2018-06-26 16:10:09.897 21674:MainThread s3ql.backends.swift._detect_features: Wrong server response.
500 Internal Error
Content-Length: 17
Content-Type: text/plain
X-Trans-Id: txf3dd8d88a875427487431-005b31e6d1
Date: Tue, 26 Jun 2018 07:10:09 +0000

An error occurred
2018-06-26 16:10:09.897 21674:MainThread s3ql.backends.common.wrapped: Average retry rate: 0.17 Hz
2018-06-26 16:10:09.897 21674:MainThread s3ql.backends.common.wrapped: Encountered HTTPError (500 Internal Error), retrying Backend._get_conn (attempt 10)...
2018-06-26 16:10:23.040 21674:MainThread s3ql.backends.swiftks._get_conn: started
2018-06-26 16:10:23.219 21674:MainThread s3ql.backends.swift._detect_features: GET /info
2018-06-26 16:10:23.232 21674:MainThread s3ql.backends.swift._detect_features: Wrong server response.
500 Internal Error
Content-Length: 17
Content-Type: text/plain
X-Trans-Id: tx1f96150e72e442b1875ac-005b31e6df
Date: Tue, 26 Jun 2018 07:10:23 +0000

An error occurred
2018-06-26 16:10:23.232 21674:MainThread s3ql.backends.common.wrapped: Average retry rate: 0.18 Hz
2018-06-26 16:10:23.232 21674:MainThread s3ql.backends.common.wrapped: Encountered HTTPError (500 Internal Error), retrying Backend._get_conn (attempt 11)...
2018-06-26 16:10:46.695 21674:MainThread s3ql.backends.swiftks._get_conn: started
2018-06-26 16:10:46.875 21674:MainThread s3ql.backends.swift._detect_features: GET /info
2018-06-26 16:10:46.889 21674:MainThread s3ql.backends.swift._detect_features: Wrong server response.
500 Internal Error
Content-Length: 17
Content-Type: text/plain
X-Trans-Id: txac7ed591e0c34e628f3bc-005b31e6f6
Date: Tue, 26 Jun 2018 07:10:46 +0000

An error occurred
2018-06-26 16:10:46.889 21674:MainThread s3ql.backends.common.wrapped: Average retry rate: 0.20 Hz
2018-06-26 16:10:46.889 21674:MainThread s3ql.backends.common.wrapped: Encountered HTTPError (500 Internal Error), retrying Backend._get_conn (attempt 12)...
2018-06-26 16:11:46.948 21674:MainThread s3ql.backends.swiftks._get_conn: started
2018-06-26 16:11:47.169 21674:MainThread s3ql.backends.swift._detect_features: GET /info
2018-06-26 16:11:47.181 21674:MainThread s3ql.backends.swift._detect_features: Wrong server response.
500 Internal Error
Content-Length: 17
Content-Type: text/plain
X-Trans-Id: tx35d994097a9e4cb0b0377-005b31e733
Date: Tue, 26 Jun 2018 07:11:47 +0000

An error occurred
2018-06-26 16:11:47.181 21674:MainThread s3ql.backends.common.wrapped: Average retry rate: 0.02 Hz
2018-06-26 16:11:47.181 21674:MainThread s3ql.backends.common.wrapped: Encountered HTTPError (500 Internal Error), retrying Backend._get_conn (attempt 13)...
2018-06-26 16:13:47.924 21674:MainThread s3ql.backends.swiftks._get_conn: started
2018-06-26 16:13:48.175 21674:MainThread s3ql.backends.swift._detect_features: GET /info
2018-06-26 16:13:48.189 21674:MainThread s3ql.backends.swift._detect_features: Wrong server response.
500 Internal Error
Content-Length: 17
Content-Type: text/plain
X-Trans-Id: tx78356866cb4343d0bd76f-005b31e7ac
Date: Tue, 26 Jun 2018 07:13:48 +0000

An error occurred
2018-06-26 16:13:48.190 21674:MainThread s3ql.backends.common.wrapped: Average retry rate: 0.02 Hz
2018-06-26 16:13:48.190 21674:MainThread s3ql.backends.common.wrapped: Encountered HTTPError (500 Internal Error), retrying Backend._get_conn (attempt 14)...
2018-06-26 16:17:22.549 21674:MainThread s3ql.backends.swiftks._get_conn: started
2018-06-26 16:17:22.764 21674:MainThread s3ql.backends.swift._detect_features: GET /info
2018-06-26 16:17:22.774 21674:MainThread s3ql.backends.swift._detect_features: Wrong server response.
500 Internal Error
Content-Length: 17
Content-Type: text/plain
X-Trans-Id: txab0772ca0865444b865a8-005b31e882
Date: Tue, 26 Jun 2018 07:17:22 +0000

An error occurred
2018-06-26 16:17:22.775 21674:MainThread s3ql.backends.common.wrapped: Average retry rate: 0.02 Hz
2018-06-26 16:17:22.775 21674:MainThread s3ql.backends.common.wrapped: Encountered HTTPError (500 Internal Error), retrying Backend._get_conn (attempt 15)...
2018-06-26 16:24:24.024 21674:MainThread s3ql.backends.swiftks._get_conn: started
2018-06-26 16:24:24.246 21674:MainThread s3ql.backends.swift._detect_features: GET /info
2018-06-26 16:24:24.257 21674:MainThread s3ql.backends.swift._detect_features: Wrong server response.
500 Internal Error
Content-Length: 17
Content-Type: text/plain
X-Trans-Id: tx7eb97c39a4c4427f9ef6b-005b31ea28
Date: Tue, 26 Jun 2018 07:24:24 +0000

An error occurred
2018-06-26 16:24:24.258 21674:MainThread s3ql.backends.common.wrapped: Average retry rate: 0.02 Hz
2018-06-26 16:24:24.258 21674:MainThread s3ql.backends.common.wrapped: Encountered HTTPError (500 Internal Error), retrying Backend._get_conn (attempt 16)...
2018-06-26 16:30:55.383 21674:MainThread root.excepthook: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/bin/fsck.s3ql", line 11, in <module>
    load_entry_point('s3ql==2.26', 'console_scripts', 'fsck.s3ql')()
  File "/usr/lib/s3ql/s3ql/fsck.py", line 1188, in main
    backend = get_backend(options)
  File "/usr/lib/s3ql/s3ql/common.py", line 260, in get_backend
    getattr(options, 'compress', ('lzma', 2)), raw)()
  File "/usr/lib/s3ql/s3ql/common.py", line 335, in get_backend_factory
    backend_options)
  File "/usr/lib/s3ql/s3ql/backends/swiftks.py", line 26, in __init__
    super().__init__(storage_url, login, password, options)
  File "/usr/lib/s3ql/s3ql/backends/swift.py", line 72, in __init__
    self._container_exists()
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/swift.py", line 87, in _container_exists
    self._do_request('GET', '/', query_string={'limit': 1 })
  File "/usr/lib/s3ql/s3ql/backends/swift.py", line 231, in _do_request
    self.conn = self._get_conn()
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 147, in wrapped
    time.sleep(interval * random.uniform(1, 1.5))
KeyboardInterrupt

Support storing multiple files in the same backend object ("fragments")

[migrated from BitBucket]

Storing lots of small files is very inefficient, since every file requires its own block.

We should add support for fragments, so that multiple files can be stored in the same block.

With the new bucket interface, we should be able to implement this relatively easily (a sketch of the coalescing step follows this list):

  • Upload workers get a list of cache entries; new blocks may be coalesced into a single object
  • CommitThread() and expire() only call into worker threads once they have a reasonably big chunk of data ready
  • We keep objects until the reference count of all contained blocks is zero
  • Therefore, blocks may continue to exist with refcount=0 and can possibly be reused
  • s3qladm may need a "cleanup" function to get rid of these blocks
  • When downloading an object, the db can be used to determine which blocks in the object belong to files (and should be added to the cache) and which can be discarded
  • The minimum size of cache entries passed to workers could be adjusted dynamically based on upload bandwidth, latency, and compression ratio of previous uploads
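
A rough sketch of the coalescing step mentioned above, as a toy Python function (the data structures are illustrative only, not the actual block cache interface):

def coalesce(dirty_entries, min_object_size=1024 * 1024):
    # Group small dirty cache entries into batches; each batch would be
    # uploaded as one backend object, with the database recording
    # (object_id, offset, length) for every contained block.
    batches, batch, batch_size = [], [], 0
    for entry in dirty_entries:
        batch.append(entry)
        batch_size += len(entry.data)
        if batch_size >= min_object_size:
            batches.append(batch)
            batch, batch_size = [], 0
    if batch:  # flush the final, possibly undersized batch
        batches.append(batch)
    return batches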

Support metadata > 5 GB

I think splitting the metadata into multiple objects is the way to go. This would be the minimally invasive change to fix the problem.
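
A minimal sketch of the splitting idea in Python (the object naming scheme and the store() call are assumptions for illustration, not the actual S3QL API):

def upload_metadata_in_parts(backend, fh, part_size=1024**3):
    # Write the metadata dump from fh as a numbered sequence of objects,
    # so that no single object exceeds the backend's size limit.
    part = 0
    while True:
        buf = fh.read(part_size)
        if not buf:
            break
        backend.store('s3ql_metadata_%05d' % part, buf)  # store() signature assumed
        part += 1
    return part  # number of parts written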

Claim mount on other server as a failover

I'm trying to set up some kind of failover for when the server where s3ql is mounted dies. When I mount on a different server, it complains that the backend doesn't like it, which makes sense.

What would be the correct way to claim the mount (so to speak) on the second server when this backup server needs to take over?

Allow Google Storage access in the presence of bucket 403 errors.

Currently I have service accounts that do not have bucket-listing permissions, but do have access to bucket objects if they know the (randomly generated) bucket name. In this case I get a 403 error, which could safely be ignored so that mounting proceeds.

This bucket request:

path = '/storage/v1/b/' + urllib.parse.quote(self.bucket_name, safe='')

causes mounting to fail in a situation where it does not need to.

I'm happy for this to be a backend option (skip bucket check?), or perhaps I can keep it as a private patch. It might be a rare enough requirement that it doesn't make sense to be in the code, but I would like your opinion.

Close cache entries if corresponding file is closed

It would be a good idea to close cache entries when there are no open file descriptors for the corresponding file in the S3QL file system. That way, we would need fewer file descriptors without affecting performance.

S3QL shouldn't crash when attempting to open an incompatible version of the file system

Okay, maybe this time I have it right :-) When attempting to mount a version of the S3QL file system that is incompatible with the current software, S3QL should report that fact and exit gracefully rather than throwing an exception. I see two possible scenarios for the error message: older software/newer file system (software upgrade required), and newer software/older file system (software downgrade or file system upgrade required).

An example of the problem: using S3QL version 2.21 to mount a file system created with version 2.11.2 gives the following result:

mount.s3ql --cachedir cache local://Test-i686-32bit mnt
Using 10 upload threads.
Autodetected 1048514 file descriptors available for cache entries
ERROR: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/bin/mount.s3ql", line 11, in <module>
    load_entry_point('s3ql==2.21', 'console_scripts', 'mount.s3ql')()
  File "/usr/lib/s3ql/s3ql/mount.py", line 129, in main
    (param, db) = get_metadata(backend, cachepath)
  File "/usr/lib/s3ql/s3ql/mount.py", line 374, in get_metadata
    param = backend.lookup('s3ql_metadata')
  File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 74, in lookup
    meta_raw = self.backend.lookup(key)
  File "/usr/lib/s3ql/s3ql/backends/local.py", line 66, in lookup
    return _read_meta(src)
  File "/usr/lib/s3ql/s3ql/backends/local.py", line 254, in _read_meta
    raise CorruptedObjectError('Invalid object header: %r' % buf)
s3ql.backends.common.CorruptedObjectError: Invalid object header: b'\x80\x02}q\x00(X\x0e\x00'
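
A graceful failure path might look roughly like this (the revision constant, parameter key, and s3qladm hint are illustrative, not the actual S3QL code):

CURRENT_FS_REV = 22  # hypothetical file system revision supported by this build

def check_fs_revision(param):
    rev = param['revision']
    if rev < CURRENT_FS_REV:
        raise SystemExit('File system revision too old; '
                         'run s3qladm upgrade or use an older S3QL release.')
    if rev > CURRENT_FS_REV:
        raise SystemExit('File system revision too new; '
                         'please upgrade your S3QL installation.')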

Crash during unmount

2019-01-21_04:45:31.71026 File "/nix/store/xqn5dn7ak8b3jb59c889d36i1db6lzm3-s3ql/lib/python3.6/site-packages/s3ql/backends/gs.py", line 255, in is_temp_failure
2019-01-21_04:45:31.71026 self.conn.reset()
2019-01-21_04:45:31.71026 AttributeError: 'HTTPConnection' object has no attribute 'reset'
2019-01-21_04:45:31.71154 Unmounting file system...

Concurrent mount detection not always working right

The following seems unlikely to be caused by propagation delays (region us-west-1, accessed from two computers in the same household).

System A:

2018-08-25 20:20:31.441 2760:MainThread s3ql.metadata.cycle_metadata: Backing up old metadata...
2018-08-25 20:20:48.656 2760:MainThread s3ql.mount.main: Cleaning up local metadata...
2018-08-25 20:20:51.031 2760:MainThread s3ql.mount.main: All done.
2018-08-26 09:44:47.192 8750:MainThread s3ql.mount.get_metadata: Using cached metadata.
2018-08-26 09:44:47.200 8750:MainThread s3ql.mount.main: Setting cache size to 3980 MB
2018-08-26 09:44:47.201 8750:MainThread s3ql.mount.main: Mounting s3://us-west-1/nikratio-backup at /tmp/s3ql_backup_8739...
2018-08-26 09:44:47.208 8764:MainThread s3ql.daemonize.detach_process_context: Daemonizing, new PID is 8766
2018-08-26 09:45:16.235 8766:MainThread s3ql.mount.main: FUSE main loop terminated.
2018-08-26 09:45:16.236 8766:MainThread s3ql.mount.unmount: Unmounting file system...
2018-08-26 09:45:17.584 8766:MainThread s3ql.metadata.dump_and_upload_metadata: Dumping metadata...
2018-08-26 09:45:17.585 8766:MainThread s3ql.metadata.dump_metadata: ..objects..
2018-08-26 09:45:17.627 8766:MainThread s3ql.metadata.dump_metadata: ..blocks..
2018-08-26 09:45:17.691 8766:MainThread s3ql.metadata.dump_metadata: ..inodes..
2018-08-26 09:45:20.227 8766:MainThread s3ql.metadata.dump_metadata: ..inode_blocks..
2018-08-26 09:45:21.420 8766:MainThread s3ql.metadata.dump_metadata: ..symlink_targets..
2018-08-26 09:45:21.436 8766:MainThread s3ql.metadata.dump_metadata: ..names..
2018-08-26 09:45:21.474 8766:MainThread s3ql.metadata.dump_metadata: ..contents..
2018-08-26 09:45:22.840 8766:MainThread s3ql.metadata.dump_metadata: ..ext_attributes..
2018-08-26 09:45:22.840 8766:MainThread s3ql.metadata.upload_metadata: Compressing and uploading metadata...
2018-08-28 10:08:38.604 8766:MainThread s3ql.metadata.upload_metadata: Wrote 37.0 MiB of compressed metadata.
2018-08-28 10:08:38.604 8766:MainThread s3ql.metadata.upload_metadata: Cycling metadata backups...
2018-08-28 10:08:38.604 8766:MainThread s3ql.metadata.cycle_metadata: Backing up old metadata...
2018-08-28 10:08:52.829 8766:MainThread s3ql.mount.main: Cleaning up local metadata...
2018-08-28 10:08:55.372 8766:MainThread s3ql.mount.main: All done.

System B:

2018-08-16 17:14:21.098 4066:MainThread s3ql.mount.main: All done.
2018-08-26 09:44:39.178 16534:MainThread s3ql.mount.get_metadata: Ignoring locally cached metadata (outdated).
2018-08-26 09:44:39.357 16534:MainThread s3ql.metadata.download_metadata: Downloading and decompressing metadata...
2018-08-26 09:44:49.208 16534:MainThread s3ql.metadata.download_metadata: Reading metadata...
2018-08-26 09:44:49.214 16534:MainThread s3ql.metadata.restore_metadata: ..objects..
2018-08-26 09:44:49.448 16534:MainThread s3ql.metadata.restore_metadata: ..blocks..
2018-08-26 09:44:50.229 16534:MainThread s3ql.metadata.restore_metadata: ..inodes..
2018-08-26 09:44:57.500 16534:MainThread s3ql.metadata.restore_metadata: ..inode_blocks..
2018-08-26 09:45:01.934 16534:MainThread s3ql.metadata.restore_metadata: ..symlink_targets..
2018-08-26 09:45:02.012 16534:MainThread s3ql.metadata.restore_metadata: ..names..
2018-08-26 09:45:02.419 16534:MainThread s3ql.metadata.restore_metadata: ..contents..
2018-08-26 09:45:10.349 16534:MainThread s3ql.metadata.restore_metadata: ..ext_attributes..
2018-08-26 09:45:11.626 16534:MainThread s3ql.mount.main: Setting cache size to 7073 MB
2018-08-26 09:45:11.626 16534:MainThread s3ql.mount.main: Mounting s3://us-west-1/nikratio-backup at /tmp/s3ql_backup_15111...
2018-08-26 09:45:11.630 32686:MainThread s3ql.daemonize.detach_process_context: Daemonizing, new PID is 32689
2018-08-26 09:46:42.371 32689:MainThread s3ql.mount.main: FUSE main loop terminated.
2018-08-26 09:46:42.374 32689:MainThread s3ql.mount.unmount: Unmounting file system...
2018-08-26 09:46:43.210 32689:MainThread s3ql.metadata.dump_and_upload_metadata: Dumping metadata...
2018-08-26 09:46:43.210 32689:MainThread s3ql.metadata.dump_metadata: ..objects..
2018-08-26 09:46:43.263 32689:MainThread s3ql.metadata.dump_metadata: ..blocks..
2018-08-26 09:46:43.345 32689:MainThread s3ql.metadata.dump_metadata: ..inodes..
2018-08-26 09:46:46.518 32689:MainThread s3ql.metadata.dump_metadata: ..inode_blocks..
2018-08-26 09:46:48.097 32689:MainThread s3ql.metadata.dump_metadata: ..symlink_targets..
2018-08-26 09:46:48.117 32689:MainThread s3ql.metadata.dump_metadata: ..names..
2018-08-26 09:46:48.166 32689:MainThread s3ql.metadata.dump_metadata: ..contents..
2018-08-26 09:46:49.899 32689:MainThread s3ql.metadata.dump_metadata: ..ext_attributes..
2018-08-26 09:46:49.900 32689:MainThread s3ql.metadata.upload_metadata: Compressing and uploading metadata...
2018-08-26 09:47:59.605 32689:MainThread s3ql.metadata.upload_metadata: Wrote 36.2 MiB of compressed metadata.
2018-08-26 09:47:59.605 32689:MainThread s3ql.metadata.upload_metadata: Cycling metadata backups...
2018-08-26 09:47:59.605 32689:MainThread s3ql.metadata.cycle_metadata: Backing up old metadata...
2018-08-26 09:48:20.100 32689:MainThread s3ql.mount.main: Cleaning up local metadata...
2018-08-26 09:48:23.647 32689:MainThread s3ql.mount.main: All done.
2018-08-28 10:12:22.409 4668:MainThread s3ql.fsck.main: Starting fsck of s3://us-west-1/nikratio-backup
2018-08-28 10:12:24.993 4668:MainThread s3ql.fsck.main: Using cached metadata.
2018-08-28 10:12:25.170 4668:MainThread s3ql.fsck.main: Remote metadata is outdated.
2018-08-28 10:12:25.171 4668:MainThread s3ql.fsck.main: Checking DB integrity...
2018-08-28 10:12:54.910 4668:MainThread s3ql.fsck.check: Creating temporary extra indices...
2018-08-28 10:12:57.750 4668:MainThread s3ql.fsck.check_lof: Checking lost+found...
2018-08-28 10:12:57.762 4668:MainThread s3ql.fsck.check_cache: Checking for dirty cache objects...
2018-08-28 10:12:57.762 4668:MainThread s3ql.fsck.check_names_refcount: Checking names (refcounts)...
2018-08-28 10:12:58.128 4668:MainThread s3ql.fsck.check_contents_name: Checking contents (names)...
2018-08-28 10:12:58.814 4668:MainThread s3ql.fsck.check_contents_inode: Checking contents (inodes)...
2018-08-28 10:12:59.553 4668:MainThread s3ql.fsck.check_contents_parent_inode: Checking contents (parent inodes)...
2018-08-28 10:12:59.764 4668:MainThread s3ql.fsck.check_objects_refcount: Checking objects (reference counts)...
2018-08-28 10:12:59.873 4668:MainThread s3ql.fsck.check_objects_id: Checking objects (backend)...
2018-08-28 10:13:51.786 4668:MainThread s3ql.fsck.log_error: object 469134 only exists in table but not in backend, deleting
2018-08-28 10:13:51.787 4668:MainThread s3ql.fsck.log_error: File may lack data, moved to /lost+found: /thinkpad/.expire_backups.bak
2018-08-28 10:13:51.954 4668:MainThread s3ql.fsck.log_error: object 469135 only exists in table but not in backend, deleting
2018-08-28 10:13:51.955 4668:MainThread s3ql.fsck.log_error: File may lack data, moved to /lost+found: /thinkpad/.expire_backups.dat

System A took two days to write the metadata. On the next fsck, system B then found missing objects. I think this is because system B attached the files to new objects, but this metadata then got lost completely.

AttributeError in contrib/clone_fs.py

Hi, it looks like the changes to the argument-parsing code have stopped the very handy contrib utility contrib/clone_fs.py from working.

When I run it, it fails in the assert options.storage_url check, since it is fed source and destination URLs rather than a single storage URL:

contrib/clone_fs.py  local://tmp/a local://tmp/b
Traceback (most recent call last):
  File "contrib/clone_fs.py", line 160, in <module>
    main(sys.argv[1:])
  File "contrib/clone_fs.py", line 98, in main
    options = parse_args(args)
  File "contrib/clone_fs.py", line 57, in parse_args
    return parser.parse_args(args)
  File "/home/user/s3ql-2.28/src/s3ql/parse_args.py", line 215, in parse_args
    assert options.storage_url
AttributeError: 'Namespace' object has no attribute 'storage_url'

Support using Google Cloud service account

It would be nice if the Google Storage backend, when given no credentials, would attempt to use the default service account, i.e. when running on a Google Compute Engine instance, use that machine's credentials.
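
For reference, on a Compute Engine instance an access token for the default service account can be fetched from the metadata server roughly like this (standard GCE metadata endpoint; error handling omitted):

import json
import urllib.request

def default_gce_access_token():
    # Only reachable from within GCE; returns an OAuth2 access token
    # for the VM's default service account.
    req = urllib.request.Request(
        'http://metadata.google.internal/computeMetadata/v1/'
        'instance/service-accounts/default/token',
        headers={'Metadata-Flavor': 'Google'})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)['access_token']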

backend response error (wasabi)

Hi,

I am using version 2.28 with Wasabi (wasabisys) as the S3 backend. A few times a day s3ql reports the error below. Does this mean there is a compatibility problem with Wasabi?


ERROR: Uncaught top-level exception:
Traceback (most recent call last):
  File "/home/kisiel/s3ql-2.28/src/s3ql/mount.py", line 64, in run_with_except_hook
    run_old(*args, **kw)
  File "/usr/lib/python3.5/threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "/home/kisiel/s3ql-2.28/src/s3ql/block_cache.py", line 409, in _upload_loop
    self._do_upload(*tmp)
  File "/home/kisiel/s3ql-2.28/src/s3ql/block_cache.py", line 436, in _do_upload
    % obj_id).get_obj_size()
  File "/home/kisiel/s3ql-2.28/src/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/home/kisiel/s3ql-2.28/src/s3ql/backends/common.py", line 340, in perform_write
    return fn(fh)
  File "/home/kisiel/s3ql-2.28/src/s3ql/backends/comprenc.py", line 371, in __exit__
    self.close()
  File "/home/kisiel/s3ql-2.28/src/s3ql/backends/comprenc.py", line 365, in close
    self.fh.close()
  File "/home/kisiel/s3ql-2.28/src/s3ql/backends/comprenc.py", line 530, in close
    self.fh.close()
  File "/home/kisiel/s3ql-2.28/src/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/home/kisiel/s3ql-2.28/src/s3ql/backends/s3c.py", line 948, in close
    headers=self.headers, body=self.fh)
  File "/home/kisiel/s3ql-2.28/src/s3ql/backends/s3c.py", line 550, in _do_request
    self._parse_error_response(resp)
  File "/home/kisiel/s3ql-2.28/src/s3ql/backends/s3c.py", line 572, in _parse_error_response
    raise HTTPError(resp.status, resp.reason, resp.headers)
s3ql.backends.s3c.HTTPError: 400 Bad Request: Network error.

Multiple clients connecting to same S3 instance? (But not at the same time)

Hi @Nikratio / All,

Firstly, S3QL is incredible, such great work thanks to all contributors!

So it's likely I'm just doing this wrong, but I can't tell, as I can't seem to find my answer in the documentation or on Stack Overflow. I currently have S3QL set up with an instance of Plex, which has been working really well...

I'm now trying to set this up through Docker, as of course, why pay for an average server 24/7 when you can pay the same for a better-spec server during the hours when you'll actually be using it...

So I have unmounted S3QL on my initial server, as I know S3QL can only have one connection at a time. However, using the same authfile details, I don't seem to be able to connect from my Docker server. Here is the command I'm running (as root)...

/usr/bin/mount.s3ql --fg --cachedir=/var/cache/s3ql/harvflix/ --authfile=/etc/s3ql.authinfo --allow-other --compress=none --cachesize=976563 s3c://s3.wasabisys.com/<bucket_name> /mnt/harvflixs3ql

And this is the response I get...

Using 6 upload threads.
Autodetected 1048526 file descriptors available for cache entries
MD5 mismatch in metadata for s3ql_passphrase
MD5 mismatch in metadata for s3ql_passphrase
MD5 mismatch in metadata for s3ql_passphrase
Encountered BadDigestError (BadDigest: Meta MD5 for s3ql_passphrase does not match), retrying Backend.open_read (attempt 3)...
MD5 mismatch in metadata for s3ql_passphrase
Encountered BadDigestError (BadDigest: Meta MD5 for s3ql_passphrase does not match), retrying Backend.open_read (attempt 4)...
MD5 mismatch in metadata for s3ql_passphrase
Encountered BadDigestError (BadDigest: Meta MD5 for s3ql_passphrase does not match), retrying Backend.open_read (attempt 5)...
MD5 mismatch in metadata for s3ql_passphrase
Encountered BadDigestError (BadDigest: Meta MD5 for s3ql_passphrase does not match), retrying Backend.open_read (attempt 6)...
MD5 mismatch in metadata for s3ql_passphrase
Encountered BadDigestError (BadDigest: Meta MD5 for s3ql_passphrase does not match), retrying Backend.open_read (attempt 7)...
MD5 mismatch in metadata for s3ql_passphrase
Encountered BadDigestError (BadDigest: Meta MD5 for s3ql_passphrase does not match), retrying Backend.open_read (attempt 8)...
MD5 mismatch in metadata for s3ql_passphrase
Encountered BadDigestError (BadDigest: Meta MD5 for s3ql_passphrase does not match), retrying Backend.open_read (attempt 9)...
MD5 mismatch in metadata for s3ql_passphrase
Encountered BadDigestError (BadDigest: Meta MD5 for s3ql_passphrase does not match), retrying Backend.open_read (attempt 10)...
^CUncaught top-level exception:
Traceback (most recent call last):
  File "/usr/bin/mount.s3ql", line 9, in <module>
    load_entry_point('s3ql==2.15', 'console_scripts', 'mount.s3ql')()
  File "/usr/lib/s3ql/s3ql/mount.py", line 120, in main
    options.authfile, options.compress)
  File "/usr/lib/s3ql/s3ql/common.py", line 340, in get_backend_factory
    backend.fetch('s3ql_passphrase')
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 351, in fetch
    return self.perform_read(do_read, key)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 107, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 314, in perform_read
    fh = self.open_read(key)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 144, in wrapped
    time.sleep(interval)
KeyboardInterrupt

Which sure, makes absolute sense: I haven't used mkfs.s3ql on this server, which I guess is possibly the issue... but I'm worried that mkfs.s3ql might create a fresh file system and overwrite all the S3QL data in the bucket (as this happened when I first set up the bucket with S3QL).

I'm not sure if there is a way to make the server think it's the same server, i.e. give it the master key maybe? But that doesn't seem right.

Can you please advise whether there is a proper way to do this, or if this is indeed an issue? I guess the process is a little like trying to recover an S3QL file system, but I couldn't find any guides for that either. :-/

Thanks,

Harvey.
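
For reference, a second machine generally needs both the backend credentials and the file system passphrase. A hypothetical authinfo entry for this setup might look as follows (all values are placeholders; the section name is arbitrary):

[wasabi]
storage-url: s3c://s3.wasabisys.com/<bucket_name>
backend-login: <access key id>
backend-password: <secret access key>
fs-passphrase: <passphrase chosen at mkfs.s3ql time>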

systemd readiness notification is not working

[migrated from BitBucket]

I have the same issue.

Job for s3ql-mountpoint-storage.service failed because the service did not take the steps required by its unit configuration.

journalctl -xe shows

Nov 01 20:37:02 peschue1 systemd[1]: s3ql-mountpoint-storage.service: Failed with result 'protocol'.
Nov 01 20:37:02 peschue1 systemd[1]: Failed to start S3QL mount 'storage'.
-- Subject: Unit s3ql-mountpoint-storage.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- 
-- Unit s3ql-mountpoint-storage.service has failed.
-- 
-- The result is RESULT.

"mount" still shows the mount, but trying to use the filesystem shows

$ ls /mnt/s3ql/storage
ls: cannot access '/mnt/s3ql/storage': Transport endpoint is not connected

Systemd knows that the mount was not successful, so I cannot use "systemctl stop" to unmount cleanly.

The mount.log shows

2018-11-01 20:37:02.128 12208:MainThread s3ql.mount.main: Setting cache size to 7886 MB
2018-11-01 20:37:02.129 12208:MainThread s3ql.block_cache.__init__: Initializing
2018-11-01 20:37:02.132 12208:MainThread s3ql.mount.main: Mounting s3://eu-west-1/XXXX/ at /mnt/s3ql/storage...

systemctl status s3ql-mountpoint-storage

shows

โ— s3ql-mountpoint-storage.service - S3QL mount 'storage'
   Loaded: loaded (/etc/systemd/system/s3ql-mountpoint-storage.service; enabled; vendor preset: enabled)
   Active: failed (Result: protocol) since Thu 2018-11-01 20:37:02 CET; 2min 1s ago
  Process: 12208 ExecStart=/usr/bin/mount.s3ql --debug --allow-other --threads=1 --max-cache-entries=5 --compress=zlib s3://eu-west-1/XXXX
 Main PID: 12208 (code=exited, status=0/SUCCESS)

Interestingly, "umount /mnt/s3ql/storage" exits with no error, and trying to fsck the seemingly crashed volume terminates with an exception:

Starting fsck of s3://eu-west-1/XXXX/
Using cached metadata.
ERROR: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/bin/fsck.s3ql", line 11, in <module>
    load_entry_point('s3ql==2.26', 'console_scripts', 'fsck.s3ql')()
  File "/usr/lib/s3ql/s3ql/fsck.py", line 1212, in main
    assert not os.path.exists(cachepath + '-cache') or param['needs_fsck']

Mounting the same thing manually afterwards works nicely, so the file system did not actually crash or get marked as dirty.

EDIT: With --fg and Type=notify it works nicely (unless I use the file system while doing systemctl stop ...; then systemd immediately terminates the mount.s3ql process and does not cleanly unmount, because umount.s3ql did not wait but displayed an error message).
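
For reference, a unit along the lines of the EDIT above might look like this (a sketch only: the unit name, storage URL and mount point are placeholders, and NotifyAccess/TimeoutStopSec are assumptions rather than tested settings):

[Unit]
Description=S3QL mount 'storage'
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
NotifyAccess=all
ExecStart=/usr/bin/mount.s3ql --fg s3://eu-west-1/XXXX/ /mnt/s3ql/storage
ExecStop=/usr/bin/umount.s3ql /mnt/s3ql/storage
TimeoutStopSec=infinity

[Install]
WantedBy=multi-user.target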

Local backend file format should be architecture independent

I'm currently copying a large S3QL file system (6 TB of raw data, 800 GB deduplicated/compressed) from a 32-bit x86 system to a 64-bit ARM system, both using the "local" backend. Initially, I naively copied the S3QL directory tree between the two systems, but the ARM system was unable to open the copied file system. After a bit of investigation, I realized that some aspects of common Python binary file formats are architecture dependent.
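
As a generic illustration of the pitfall (not necessarily the specific incompatibility hit here): Python's struct module uses platform-dependent sizes for "native" format codes, so data written with native formats on 32-bit x86 does not read back correctly on a 64-bit system:

import struct

# Native ('@') sizes follow the platform ABI: on 32-bit x86,
# calcsize('@l') == 4, while on 64-bit platforms it is 8.
# Standard-size ('<' / '>') formats are architecture independent.
print(struct.calcsize('@l'))  # platform dependent
print(struct.calcsize('<l'))  # always 4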

This may not be practical to implement, and presumably depends on whether the format of the compressed data blocks is processor-architecture independent (some compression formats are). It also may not be worth doing, as this is presumably a rare use case; however, if you don't ask ... :-)

I am currently transferring the file system the hard way (fortunately, they are on a local network so CPU speed on the receiving end is the primary limiting factor). I estimate this will take a few weeks to a few months depending on how performance changes as the received data shifts from new data to primarily duplicate data.

Ctrl-C does not cleanly unmount

Version: 2.32 (built from sources in a docker container)

FROM python:3-slim
# Build tools and libraries needed to compile S3QL's C extensions,
# plus runtime helpers (psmisc, procps)
RUN apt-get update -qq && apt-get install -y curl gnupg2 jq bzip2 build-essential pkg-config libfuse-dev libsqlite3-dev libfuse2 psmisc procps
# Python dependencies of S3QL 2.x
RUN pip install --upgrade --no-cache-dir setuptools pycrypto defusedxml requests apsw llfuse dugong
# Look up the latest release tag via the GitHub API, download and
# unpack the tarball, then build and install
RUN TAG=$(curl -s "https://api.github.com/repos/s3ql/s3ql/releases/latest"|jq -r .tag_name -) \
 && FILE=$(echo "$TAG"|sed s/release/s3ql/) \
 && curl -L "https://github.com/s3ql/s3ql/releases/download/$TAG/$FILE.tar.bz2" | tar -xj \
 && cd $FILE \
 && python3 setup.py build_ext --inplace \
 && python3 setup.py install

I mount an S3QL file system (I created an authinfo2 file in the meantime):

mount.s3ql --fg swiftks://auth.cloud.ovh.net/GRA1:test-s3ql /s3ql
Using 2 upload threads.
Autodetected 1048538 file descriptors available for cache entries
Detected Swift features for storage.gra1.cloud.ovh.net:443: copy via COPY, Bulk delete 1000 keys at a time, maximum meta value length is 255 bytes
Using cached metadata.
Setting cache size to 7150 MB
Mounting swiftks://auth.cloud.ovh.net/GRA1:test-s3ql/ at /s3ql...

Then CTRL+C and...

Unmounting file system...
ERROR: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/local/bin/mount.s3ql", line 11, in <module>
    load_entry_point('s3ql==2.32', 'console_scripts', 'mount.s3ql')()
  File "/usr/local/lib/python3.7/site-packages/s3ql-2.32-py3.7-linux-x86_64.egg/s3ql/mount.py", line 221, in main
    raise RuntimeError('Received signal %d, terminating' % (ret,))
RuntimeError: Received signal 2, terminating

Unmounting works cleanly after mounting without --fg:

mount.s3ql swiftks://auth.cloud.ovh.net/GRA1:test-s3ql /s3ql
Using 2 upload threads.
Autodetected 1048538 file descriptors available for cache entries
Detected Swift features for storage.gra1.cloud.ovh.net:443: copy via COPY, Bulk delete 1000 keys at a time, maximum meta value length is 255 bytes
Using cached metadata.
Setting cache size to 7149 MB
Mounting swiftks://auth.cloud.ovh.net/GRA1:test-s3ql/ at /s3ql...
umount.s3ql /s3ql

It works with version 2.21 (Debian stretch package):

mount.s3ql --fg swiftks://auth.cloud.ovh.net/GRA1:test-s3ql /s3ql
Using 2 upload threads.
Autodetected 1048538 file descriptors available for cache entries
Downloading and decompressing metadata...
Reading metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Setting cache size to 7340 MB
Mounting swiftks://auth.cloud.ovh.net/GRA1:test-s3ql/ at /s3ql...

CTRL+C

FUSE main loop terminated.
Unmounting file system...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 262 bytes of compressed metadata.
Cycling metadata backups...
Backing up old metadata...
Cleaning up local metadata...
All done.

Don't depend on `ps` supporting `-p`

Hi @Nikratio

First of all, many thanks for this incredibly useful project! I'm currently doing some experiments running s3ql on Alpine Linux / BusyBox. Basically, this seems to work just fine, but there's one little caveat: umount.s3ql relies on calling ps -p, which the BusyBox ps implementation does not support. Since this does not work, umount.s3ql just assumes that the daemon has stopped, so I have to make sure myself that the main s3ql daemon has really finished after calling umount.s3ql (e.g., by calling wait <pid> after calling umount.s3ql).

My first thought was that the command line could also be obtained by looking at /proc/<PID>/cmdline, but I guess that would not be compatible with OS X? Maybe umount.s3ql could first look for /proc and only fall back to ps -p if no procfs is available?
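
A minimal sketch of that fallback logic (my own illustration, not S3QL's actual code; the helper name is made up):

import subprocess

def get_cmdline(pid):
    # Prefer procfs; fall back to `ps -p` where /proc is unavailable
    # (e.g. on OS X, which supports -p but has no procfs).
    try:
        # /proc/<pid>/cmdline is NUL-separated
        with open('/proc/%d/cmdline' % pid, 'rb') as fh:
            return fh.read().decode().rstrip('\0').split('\0')
    except FileNotFoundError:
        res = subprocess.run(
            ['ps', '-p', str(pid), '-o', 'args='],
            stdout=subprocess.PIPE, stderr=subprocess.DEVNULL,
            universal_newlines=True)
        return res.stdout.strip().split()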

Alternatively, I would suggest redirecting stderr of the ps call to /dev/null, so that BusyBox at least does not clutter my s3ql logs with messages saying it does not support the -p flag.

ETag parsing error after upgrade

I've been using s3ql 2.21 for about a year with a custom server (minio) and it worked quite well.

Today I upgraded to s3ql 2.28 and s3ql can no longer mount the file system. First it complained about a possible version mismatch, so I performed s3qladm upgrade, which completed successfully as far as I recall. Now I can't do anything with s3ql. For instance:

fsck.s3ql --force  --backend-options no-ssl s3c://custom.host:customport/custom/path
WARNING: Object closed prematurely, can't check MD5, and have to reset connection
ERROR: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/bin/fsck.s3ql", line 11, in <module>
    load_entry_point('s3ql==2.28', 'console_scripts', 'fsck.s3ql')()
  File "/usr/lib/python3.6/site-packages/s3ql/fsck.py", line 1203, in main
    backend = get_backend(options)
  File "/usr/lib/python3.6/site-packages/s3ql/common.py", line 248, in get_backend
    return get_backend_factory(options)()
  File "/usr/lib/python3.6/site-packages/s3ql/common.py", line 265, in get_backend_factory
    backend.fetch('s3ql_passphrase')
  File "/usr/lib/python3.6/site-packages/s3ql/backends/common.py", line 354, in fetch
    return self.perform_read(do_read, key)
  File "/usr/lib/python3.6/site-packages/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/usr/lib/python3.6/site-packages/s3ql/backends/common.py", line 319, in perform_read
    res = fn(fh)
  File "/usr/lib/python3.6/site-packages/s3ql/backends/common.py", line 351, in do_read
    data = fh.read()
  File "/usr/lib/python3.6/site-packages/s3ql/backends/s3c.py", line 857, in read
    etag = self.resp.headers['ETag'].strip('"')
AttributeError: 'NoneType' object has no attribute 'strip'

I have not looked closer, and it seems this might be a server-side bug, but still: it used to work!
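
For illustration, a defensive variant of the failing line in s3c.py (a sketch, not the actual upstream fix) would tolerate a missing ETag header instead of crashing. Here `self.resp` and `log` refer to the surrounding s3c.py context:

# self.resp.headers['ETag'] returns None when the server sends no
# ETag header; treat that as "cannot verify" rather than crashing.
etag = self.resp.headers.get('ETag')
if etag is None:
    log.warning('Server did not send an ETag header; skipping check')
else:
    etag = etag.strip('"')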
