apt-boto-s3's People

Contributors

imjoshholloway, jjudd, pauldraper, tmccombs

apt-boto-s3's Issues

Segfault on md5.update() when using 4.4.0-1002-fips kernel

When using the Canonical 4.4.0-1002-fips kernel, s3.py segfaults on the call to md5.update():

# apt-get update
Reading package lists... Done
E: Method s3 has died unexpectedly!
E: Sub-process s3 received a segmentation fault.

This is happening because of a weird ordering issue with the imports. The s3.py file has this for its first handful of imports:

#!/usr/bin/env python2
import boto3
import botocore
import collections
import hashlib

Testing it with the interactive python shell:

# /usr/bin/env python2
Python 2.7.12 (default, Nov 12 2018, 14:36:49)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import boto3
>>> import hashlib
>>> md5 = hashlib.md5()
>>> bytes = "moo"
>>> md5.update(bytes)
Segmentation fault (core dumped)

I was able to resolve this by having the boto3 library load after hashlib:

# /usr/bin/env python2
Python 2.7.12 (default, Nov 12 2018, 14:36:49)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import hashlib
>>> import boto3
>>> md5 = hashlib.md5()
>>> bytes = "moo"
>>> md5.update(bytes)
>>>

I manually updated /usr/lib/apt/methods/s3 with this change and apt-get update worked again.
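
For reference, a minimal sketch of the reordered import block, assuming the only change needed is moving hashlib ahead of boto3:

#!/usr/bin/env python2
# Import hashlib (and the underlying OpenSSL md5) before boto3, which
# avoids the segfault on the FIPS kernel described above.
import hashlib

import boto3
import botocore
import collections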

We aren't seeing this issue anywhere other than the Canonical FIPS kernel.

Failed to fetch packages (403 Forbidden)

The installation notes give the following for the apt repo:

apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 379CE192D401AB61
echo deb http://dl.bintray.com/lucidsoftware/apt/ lucid main > /etc/apt/sources.list.d/lucidsoftware-bintray.list

apt-get update

The above yields:

W: Failed to fetch http://dl.bintray.com/lucidsoftware/apt/dists/lucid/main/binary-amd64/Packages  403  Forbidden [IP: 18.158.131.58 80]

Error thrown when InRelease file missing

I have a repo that has Release and Release.gpg files but no InRelease file. This causes apt-boto-s3 to throw the following error:

E: Method s3 General failure: Could not connect to the endpoint URL: "https://s3-us-east-1.amazonaws.com/bucketname/dists/dbplc/InRelease" (/usr/lib/apt/methods/s3, line 98)

List has no attribute split

Hello, I just gave this another shot, and am getting this error:

yann@yann-desktop:~/$ sudo apt-get update
0% [Working]Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/apt/methods/s3", line 82, in handle_message
self.handle_message(message)
File "/usr/lib/apt/methods/s3", line 162, in handle_message
s3_access_key, s3_access_secret = s3_uri.credentials()
File "/usr/lib/apt/methods/s3", line 116, in credentials
user_parts = user.split(':', 1)
AttributeError: 'list' object has no attribute 'split'

I think it's because self.user_host() returns a list, and we run split on it two lines later. Maybe user, _ = self.user_host() on line 114 should be user, _ = self.user_host()[0]?

I didn't submit a PR, as it doesn't work much better after making this correction:

Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in **bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/apt/methods/s3", line 82, in handle_message
self.handle_message(message)
File "/usr/lib/apt/methods/s3", line 183, in handle_message
s3_response = s3_object.get(**s3_request)
File "/usr/local/lib/python2.7/dist-packages/boto3/resources/factory.py", line 518, in do_action
response = action(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/boto3/resources/action.py", line 83, in __call

response = getattr(parent.meta.client, operation_name)(*_params)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 258, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 537, in _make_api_call
operation_model, request_dict)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 117, in make_request
return self._send_request(request_dict, operation_model)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 142, in _send_request
request = self.create_request(request_dict, operation_model)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 127, in create_request
prepared_request = self.prepare_request(request)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 138, in prepare_request
return request.prepare()
File "/usr/local/lib/python2.7/dist-packages/botocore/awsrequest.py", line 361, in prepare
p.prepare_url(self.url, self.params)
File "/usr/local/lib/python2.7/dist-packages/botocore/vendored/requests/models.py", line 357, in prepare_url
raise InvalidURL(*e.args)
InvalidURL: Failed to parse: myaccesskey:myprivatekey

Any help much appreciated, thanks!
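
For what it's worth, the userinfo part of such a URI can be split from the host with the standard library. This is an illustrative sketch only, not the project's actual user_host/credentials code; the URI reuses the placeholder credentials from the error above:

# Illustrative only: separate embedded credentials from the host of an
# s3:// URI using Python 2's urlparse module (urllib.parse in Python 3).
import urlparse

uri = 's3://myaccesskey:myprivatekey@s3.amazonaws.com/bucket/dists/stable/Release'
parts = urlparse.urlsplit(uri)
access_key = parts.username   # 'myaccesskey'
secret_key = parts.password   # 'myprivatekey'
host = parts.hostname         # 's3.amazonaws.com'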

v4 considerations

We should either document the workaround (boto/boto3#329) or make the code use v4 all the time. The latter may affect which URLs are acceptable.
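
If we go the always-v4 route, a minimal sketch of forcing it through botocore's Config (the region is a placeholder, since SigV4 signing needs the bucket's actual region):

# Sketch only: force SigV4 for every request via botocore's Config.
import boto3
import botocore

s3 = boto3.resource(
    's3',
    region_name='us-east-1',  # placeholder; must match the bucket's region
    config=botocore.client.Config(signature_version='s3v4'),
)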

Related to #1

Seeing https instead of s3?

I am seeing an error message that shows the s3 method is trying to use https:

ubuntu@ip-172-17-0-237:/etc/apt/sources.list.d$ cat s3repo.list 
deb s3://test/ stable main


ubuntu@ip-172-17-0-237:/etc/apt/sources.list.d$ sudo apt-get update -u
...
E: Method s3 General failure: Could not connect to the endpoint URL: "https://test/dists/stable/InRelease" (/usr/lib/apt/methods/s3, line 98)
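
For comparison, the entries elsewhere in these reports put the S3 endpoint in the host position and the bucket in the path. Assuming the bucket here really is named test, the sources.list entry would look something like:

deb s3://s3.amazonaws.com/test/ stable main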

IOError: [Errno 32] Broken pipe

Not sure if it's a bug or just a connection glitch, but this error, "IOError: [Errno 32] Broken pipe", happens sometimes, and retrying works most of the time. The servers and the buckets are in the same region.

Here's the exception:

12:32:44: Get: 1 s3://s3.amazonaws.com/***
12:32:45: Get: 2 s3://s3.amazonaws.com/***
Fetched 6143 kB in 2s (2297 kB/s)
12:32:48: Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/apt/methods/s3", line 135, in f
self.output(message)
File "/usr/lib/apt/methods/s3", line 87, in send_one
output.flush()
IOError: [Errno 32] Broken pipe
12:32:50: E
12:32:50: : Method s3 General failure: configparser (/usr/lib/apt/methods/s3, line 98)
IOError: [Errno 32] Broken pipe

'thread.lock' object has no attribute 'lock'

Hello! The project looks super interesting; sadly, I am not getting it to work...

yann@ubuntu:~/$ sudo apt-get update
0% [Working]Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/apt/methods/s3", line 76, in handle_message
with interrupt_lock.lock():
AttributeError: 'thread.lock' object has no attribute 'lock'

Thanks for the effort!
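
For reference, a standard threading.Lock is itself a context manager, so the sketch below shows the usage the traceback seems to expect. This assumes interrupt_lock is a plain threading.Lock and is not the project's actual code:

import threading

interrupt_lock = threading.Lock()

# The lock object is entered directly; there is no .lock() method on it.
with interrupt_lock:
    pass  # critical section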

Since installing, dpkg/apt locks are always held.

Since installing, on ALL Ubuntu 16.04 machines that use this, the various locks regarding package installation are always being held by the process python2 /usr/lib/apt/methods/s3.

sudo apt-get update
Reading package lists... Done
E: Could not get lock /var/lib/apt/lists/lock - open (11: Resource temporarily unavailable)
E: Unable to lock directory /var/lib/apt/lists/
sudo pkill -9 python2
sudo apt-get update
Ign:1 http://dl.google.com/linux/chrome/deb stable InRelease
...

Apt-boto-s3 fails when translations are disabled

When translations are disabled, apt-boto-s3 produces errors:

ubuntu@xxxx:/etc/apt/apt.conf.d$ cat 99translations
Acquire::Languages "none";

Ign s3://s3.eu-central-1.amazonaws.com abcd InRelease
Ign s3://s3.eu-central-1.amazonaws.com abcd Release.gpg
Hit s3://s3.eu-central-1.amazonaws.com abcd Release
Ign s3://s3.eu-central-1.amazonaws.com abcd/abcd amd64 Packages/DiffIndex
Err s3://s3.eu-central-1.amazonaws.com abcd/abcd amd64 Packages

Err s3://s3.eu-central-1.amazonaws.com abcd/abcd amd64 Packages

Err s3://s3.eu-central-1.amazonaws.com abcd/abcd amd64 Packages

Hit s3://s3.eu-central-1.amazonaws.com abcd/abcd amd64 Packages

Permission denied

Hey, I created a new private repository, and apt-boto-s3 fails to download the package index, while the AWS cli has no problems with it:

[...]
E: Failed to fetch http://s3.eu-central-1.amazonaws.com/MYBUCKET/ubuntu/dists/1234/1234/binary-amd64/Packages 403 Forbidden
E: Some index files failed to download. They have been ignored, or old ones used instead.

ubuntu@hostname:~$ aws s3 cp --region eu-central-1 s3://MYBUCKET/ubuntu/dists/1234/1234/binary-amd64/Packages .
download: s3://planet-ber-apt/ubuntu/dists/1234/1234/binary-amd64/Packages to ./Packages

Any idea what could be wrong? Does apt-boto-s3 use more permissions than the aws cli? Thanks!

PS: I've made the policy a bit wider for testing, but I'm still getting 403:

    {
        "Action": [
            "s3:*"
        ],
        "Resource": "arn:aws:s3:::*",
        "Effect": "Allow"
    },
    {
        "Action": [
            "s3:*"
        ],
        "Resource": "arn:aws:s3:::MYBUCKET",
        "Effect": "Allow"
    },
    {
        "Action": [
            "s3:*"
        ],
        "Resource": [
            "arn:aws:s3:::MYBUCKET/*"
        ],
        "Effect": "Allow"
    }
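
For comparison, the transport ultimately issues a GetObject (the s3_object.get call visible in the tracebacks elsewhere in these issues). A minimal sketch of reproducing that single call with boto3 outside apt, using the placeholder bucket and key from this report; the region and the usual boto3 credential lookup are assumptions:

import boto3

s3 = boto3.client('s3', region_name='eu-central-1')
response = s3.get_object(
    Bucket='MYBUCKET',
    Key='ubuntu/dists/1234/1234/binary-amd64/Packages',
)
print(response['ResponseMetadata']['HTTPStatusCode'])  # expect 200 if allowed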

1.4 deb package built as 1.3 version

Architecture: all
Depends: python-pip
Description: The fast and simple S3 transport for apt
Maintainer: Lucid Software [email protected]
Package: apt-boto-s3
Priority: optional
Section: base
Version: 1.3
Filename: pool/main/a/apt-boto-s3/apt_boto_s3_1.4.deb
SHA1: 25481954d9e6f7dfaa78e36a7d46ce09a605130a
SHA256: 00b9ade87ddefc4a332e088683d4d3ff7234d0ee8132978843c3c53c60bd2aa9
Size: 4242

Architecture: all
Depends: python-pip
Description: The fast and simple S3 transport for apt
Maintainer: Lucid Software [email protected]
Package: apt-boto-s3
Priority: optional
Section: base
Version: 1.3
Filename: pool/main/a/apt-boto-s3/apt_boto_s3_1.3.deb
SHA1: 62dfae417529b8b20ff0863b2fad0542bdeea5f5
SHA256: 4a8da0b0fde8e8421f508efa8f4432d4fadb4c449e48deabf96c7c52917a379f
Size: 3824

So apparently you guys built TWO 1.3 packages. Things have been breaking for the last couple of days. We have seen the apt update process hang as well as "invalid authorization headers" even though everything was working fine for months.

Can we get a valid 1.4 built, and the malformed 1.3 package removed, so we can better identify WHICH package is suddenly broken?

Create Python 3 compatible script

Hi there,

Is there any possibility of porting this to work with Python3?

I tried testing it after converting it with 2to3, but unfortunately there are a few silent errors and I'm a bit stumped as to where it's going wrong.

Thanks for all your hard work

python-boto3

python-boto3 has been available for Debian since Stretch and Ubuntu since Xenial. Will you accept a patch to replace the pip dependency with python-boto3?
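
For reference, the Packages stanzas quoted in the 1.4/1.3 issue above declare Depends: python-pip, so the patch would presumably amount to changing that field along these lines (a sketch only; the actual packaging files in the repository may differ):

Package: apt-boto-s3
Depends: python-boto3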

Please add armhf to your architectures

I installed it manually on a Raspi 3:

#!/bin/sh
apt-get install python python-pip
pip install boto3
wget https://raw.githubusercontent.com/lucidsoftware/apt-boto-s3/master/s3.py
cp s3.py /usr/lib/apt/methods/s3
chmod 755 /usr/lib/apt/methods/s3

and it works like a charm - Thanks!

Random threading issues

I randomly get this error:

Exception in thread Thread-3:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/apt/methods/s3", line 87, in handle_message
self._send_error(ex)
File "/usr/lib/apt/methods/s3", line 57, in _send_error
self.send(Message(MessageHeaders.GENERAL_FAILURE, (('Message', message),)))
File "/usr/lib/apt/methods/s3", line 54, in send
self.pipes.output.flush()
IOError: [Errno 32] Broken pipe

The more other APT repositories are configured, the more likely it is to fail. If the s3 repo is the only repository configured, then it works nearly every time. If I re-enable the rest of the sources.list, it fails nearly every time (but still works sometimes).
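
Since the failure is in the flush of the output pipe, one possible mitigation is to tolerate the reading side going away. A sketch only, not the project's code:

import errno

def flush_quietly(stream):
    # Ignore a broken pipe (the reader has gone away); re-raise anything else.
    try:
        stream.flush()
    except IOError as e:
        if e.errno != errno.EPIPE:
            raise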

Connect/ReadTimeout of boto3 results in hanging

Hi! When the internet connection isn't available at connection time, boto3 throws a connection error. Unfortunately, apt just hangs during apt-get update and apt-get install (on my Ubuntu 16 machine).

If you add

except botocore.exceptions.EndpointConnectionError as e:
    self.send(Message(MessageHeaders.URI_FAILURE, (
        ('Message', e),
        ('URI', uri),
    )))
    raise e

to the try/except around s3_response = s3_object.get(**s3_request), it will fail more gracefully.
By the way, is there a better way than re-raising the exception there?

Furthermore, if the internet connection breaks during a package download, apt also hangs:
bytes = s3_response['Body'].read(16 * 1024) throws a ReadTimeout.
If you put this there:

try:
    bytes = s3_response['Body'].read(16 * 1024)
except Exception as e:
    self.send(Message(MessageHeaders.URI_FAILURE, (
        ('Message', e),
        ('URI', uri),
    )))
    raise e

it will fail (which is the correct behavior if the internet connection is broken, right?).
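
Alongside the explicit exception handling above, another way to keep the method from hanging indefinitely is to put timeouts on the client itself. A minimal sketch, not the project's actual code, with arbitrary example values:

import boto3
import botocore

# Bound how long boto3 waits to connect and to read, so a dead network
# produces a timeout error instead of an indefinite hang.
s3 = boto3.resource(
    's3',
    config=botocore.client.Config(connect_timeout=10, read_timeout=30),
)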

Wrong signature type returned by s3_uri.signature_version

I believe the signature_version method does not return s3v4 when it should. Specifically:
elif self.virtual_host_bucket() == '':
    return 's3v4'

I believe this never gets executed. Some debugging showed that in my case self.virtual_host_bucket() returned None instead of an empty string, so I patched it to:
elif not self.virtual_host_bucket():
    return 's3v4'

On a side note, to get this working I had to hardcode the region in the connection initialisation:

s3 = boto3.resource(
    's3',
    aws_access_key_id=s3_access_key,
    aws_secret_access_key=s3_access_secret,
    endpoint_url=s3_uri.endpoint_url(),
    region_name="eu-central-1",
    config=botocore.client.Config(signature_version=s3_uri.signature_version())
)

This is currently just a hack to get things working, as I am in a rush, but I might be able to provide a PR in a few weeks.
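
As a possible alternative to hardcoding the region, the bucket's region can be looked up first. A minimal sketch, not the project's code; the bucket name is a placeholder:

import boto3

location = boto3.client('s3').get_bucket_location(Bucket='MYBUCKET')
# get_bucket_location reports us-east-1 as a null LocationConstraint.
region = location.get('LocationConstraint') or 'us-east-1'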
