s3-parallel-put's Introduction

s3-parallel-put: parallel uploads to Amazon S3

s3-parallel-put speeds up the uploading of many small keys to Amazon S3 by executing multiple PUTs in parallel.

Dependencies

  • Python 2 (2.6 or later; Python 3 is not supported)
  • boto
  • python-magic (optional; needed for --content-type=magic)

Installation

# Debian family
apt-get update && apt-get -y install python-pip
pip install boto
pip install python-magic
wget -O /usr/bin/s3-parallel-put https://raw.githubusercontent.com/mishudark/s3-parallel-put/master/s3-parallel-put
chmod +x /usr/bin/s3-parallel-put
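
After installing, you can confirm the script is on your PATH and runnable by printing its built-in help:

s3-parallel-put --help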

Usage

The program reads your credentials from the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
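
For example (the values below are placeholders, not real credentials):

export AWS_ACCESS_KEY_ID=your-access-key-id
export AWS_SECRET_ACCESS_KEY=your-secret-access-key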

s3-parallel-put --bucket=BUCKET --prefix=PREFIX SOURCE

Keys are computed by combining PREFIX with the path of the file, starting from SOURCE. Values are file contents.
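
For example, assuming a bucket named mybucket (a placeholder) and a file data.txt in the current directory:

s3-parallel-put --bucket=mybucket --prefix=myfiles data.txt
# data.txt is uploaded as s3://mybucket/myfiles/data.txt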

Options:

  • -h, --help — show help message
S3 options:
  • --bucket=BUCKET — set bucket
  • --bucket_region=BUCKET_REGION — set the bucket region if the bucket is not in us-east-1 (the default region for new buckets)
  • --host=HOST — set AWS host name
  • --secure and --insecure control whether a secure connection is used
Source options:
  • --walk=MODE — set walk mode (filesystem or tar)
  • --exclude=PATTERN — exclude files matching PATTERN
  • --include=PATTERN — don't exclude files matching PATTERN
  • --ignore-files-older-than-days=DAYS — ignore files older than DAYS days; only newer files are synced
Put options:
  • --content-type=CONTENT-TYPE — sets the Content-Type header; set to "guess" to guess based on the file name, or "magic" to guess using the file name and libmagic

  • --gzip — compresses common text files and sets the Content-Encoding header to gzip

  • --gzip-type=GZIP_TYPE — if --gzip is set, selects which content types to gzip; defaults to a list of known text content types, while "all" gzips everything. Specify multiple times for multiple content types (e.g. --gzip-type=guess --gzip-type="image/svg+xml") [default: "guess"]

  • --put=MODE — sets the heuristic used to decide whether to upload a file. Valid modes are:

    • add — set the key's content only if the key is not already present.
    • stupid — always set the key's content.
    • update — set the key's content unless the key is already present and its content is unchanged (as determined by its ETag).
    • copy
      The default heuristic is update. If you know that the keys are not already present, then stupid is fastest (it avoids an extra HEAD request for each key). If you know that some keys are already present and that they have the correct values, then add is faster than update (it avoids calculating the MD5 sum of the content on the client side). A combined example of these options appears after this list.
  • --prefix=PREFIX — set key prefix

  • --resume=FILENAME — resume from log file

  • --grant=GRANT — A Canned ACL policy to be applied to each file uploaded. Choices: private, public-read, public-read-write, authenticated-read, bucket-owner-read, bucket-owner-full-control, log-delivery-write

  • --header=HEADER:VALUE — adds an arbitrary header to the S3 file. This option can be specified multiple times.

  • --encrypt-key — use server-side encryption

Logging options:
  • --log-filename=FILENAME — set log filename
  • -q, --quiet — less output
  • -v, --verbose — more output to be printed, including progress of individual files
Debug and performance tuning options:
  • --dry-run — don't write to S3. Causes the program to print what it would do, but not to upload any files. It is strongly recommended that you test the program with this option before transferring any real data.
  • --limit=N — set maximum number of keys to put. Causes the program to upload no more than N files. Combined with --dry-run, this is also useful for testing.
  • --processes=N — sets the number of parallel upload processes
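
The sketch below is only an illustrative invocation, not a prescribed one: the bucket name, prefix, and source directory are placeholders, and the flag values are just examples of the options documented above.

# Preview what would be uploaded without writing anything to S3.
s3-parallel-put --bucket=mybucket --prefix=static/ \
    --put=update --content-type=guess --gzip \
    --grant=public-read --processes=16 \
    --dry-run ./public

# If the dry run looks right, repeat the command without --dry-run.
s3-parallel-put --bucket=mybucket --prefix=static/ \
    --put=update --content-type=guess --gzip \
    --grant=public-read --processes=16 ./public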

Architecture

  • A walker process generates (filename, key_name) pairs and inserts them in put_queue.
  • Multiple putter processes consume these pairs in parallel, uploading the files to S3 and sending file-by-file statistics to stat_queue.
  • A statter process consumes these file-by-file statistics and generates summary statistics.

Bugs

  • Limited error checking.

To Do

  • Automatically parallelize uploads of large files by splitting into chunks.

Related projects

Licence

The MIT License (MIT)

Copyright (c) 2011-2014 Tom Payne

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


s3-parallel-put's People

Contributors

adamlwgriffiths, aplumb-fiksu, aseba, cbarbour, crccheck, draskolnikova, jguzman-splunk, konklone, leonid-s-usov, matrad, matt-foreflight, maxymvlasov, mikeatlas, mishudark, moabd, stovoy, twpayne, ziyaointl


s3-parallel-put's Issues

Add python-magic support

It would be useful to support python-magic. The built-in mimetypes library determines the type solely from the file name extension, and may fail if the file's extension is somehow abnormal.

In our case, we have a fairly large static website generated from a dynamic site, and a number of file names contain trailing GET query strings.

Exception running s3-parallel-put

I'm getting this exception when I run:

./s3-parallel-put --bucket-region=us-west-2 --bucket=my.bucket.name localfolder

Exception

File "/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 269, in match_hostname
    % (hostname, ', '.join(map(repr, dnsnames))))
CertificateError: hostname 'my.bucket.name.s3.amazonaws.com' doesn't match either of '*.s3.amazonaws.com', 's3.amazonaws.com'
  File "/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 1212, in connect
    server_hostname=server_hostname)
  File "/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 350, in wrap_socket
    _context=self)
  File "/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 566, in __init__
    self.do_handshake()
  File "/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 796, in do_handshake
    match_hostname(self.getpeercert(), self.server_hostname)
  File "/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 269, in match_hostname
    % (hostname, ', '.join(map(repr, dnsnames))))
CertificateError: hostname 'my.bucket.name.s3.amazonaws.com' doesn't match either of '*.s3.amazonaws.com', 's3.amazonaws.com'
INFO:s3-parallel-put[statter-8316]:put 0 bytes in 0 files in 0.8 seconds (0 bytes/s, 0.0 files/s)

Unbounded memory usage

Tried to push one of my large-ish repositories to test (2 million-ish files, about 8.5TB). It made it about 2TB in and the server (a Sun Fire X4140 w/ 12 CPU cores and 16GB of RAM) ran out of memory and the push was killed by the kernel. Nothing else was running on the system, and system logs show that all physical memory and swap was eaten up by python.

This was using the "add" mode.

TypeError on iterating over headers

I'm getting a:

TypeError: 'NoneType' object is not iterable

on line 202 when I called:

./s3-parallel-putter --bucket=bucketname --prefix=foo somefolder/*

Apparently, the options.headers variable is None when I call it this way. I can still make it work if I call it with a phony header:

./s3-parallel-putter --bucket=bucketname --prefix=foo --header=a:a somefolder/*

DEBUG:boto:encountered error exception, reconnecting

Hey @mishudark ,

I'm getting this error when I'm trying to upload files from a mounted directory

Thu, 03 Sep 2020 18:12:34 GMT
/s3-bucket-name/s3-bucket-sub/1-12750/Basel%20Images/JPC/1968Simplicity_005_BoxVI%2000054.jpg
DEBUG:boto:Signature:
AWS ********
DEBUG:boto:Final headers: {'Content-Length': '12009858', 'Content-MD5': 'nhtriFT3wzWRS9lpDxtRAQ==', 'Expect': '100-Continue', 'Date': 'Thu, 03 Sep 2020 18:12:34 GMT', 'User-Agent': 'Boto/2.49.0 Python/2.7.5 Linux/3.10.0-1127.19.1.el7.x86_64', 'Content-Type': 'image/jpeg', 'Authorization': u'AWS *******}
DEBUG:boto:encountered error exception, reconnecting
DEBUG:boto:establishing HTTPS connection: host=*****.s3.amazonaws.com, kwargs={'port': 443, 'timeout': 70}
DEBUG:boto:Token: None
DEBUG:boto:StringToSign:
PUT


image/jpeg

However, I can upload a test folder on the mounted directory so I know I am able to use the tool to push files up.

Syntax error

Hey, I'm trying to run it on our CI and I get:

...
tools/s3-parallel-put: 1: tools/s3-parallel-put: --2019-07-24: not found
tools/s3-parallel-put: 2: tools/s3-parallel-put: Syntax error: "(" unexpected
Command exited with non-zero status 2

Any ideas?
The command:

sudo time tools/s3-parallel-put --quiet --processes=64 --put=stupid \
                    --bucket_region=s3-eu-central-1 --bucket=circleci-mim-results --prefix=test test.txt --verbose

Python 2.7.12

s3-parallel-put needs a new maintainer

I have not personally used s3-parallel-put for several years. Although I'm happy to merge pull requests, I do not have the time or resources to test them or do further development.

If you would like to take over maintenance of the project, please comment here and I'll give you write access to the repository.

`content-type` should default to guess

By now, I've accidentally uploaded multiple days worth of data to S3, only to find it unusable due to the content-type being application/octet-stream. This renders images, html, css, etc, unusable by default.

The default behaviour should be safe to use; therefore, the default content-type option should be guess.

This will need to take into account the gzip options logic to ensure it doesn't break gzip when no content-type is specified.

socket.gaierror: [Errno -2] Name or service not known

installed: python 2.7.6, boto

I am getting this error:

Traceback (most recent call last):
  File "/s3-parallel-put", line 410, in <module>
    sys.exit(main(sys.argv))
  File "/s3-parallel-put", line 381, in main
    bucket = connection.get_bucket(options.bucket)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 503, in get_bucket
    return self.head_bucket(bucket_name, headers=headers)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 522, in head_bucket
    response = self.make_request('HEAD', bucket_name, headers=headers)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 665, in make_request
    retry_handler=retry_handler
  File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1071, in make_request
    retry_handler=retry_handler)
  File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1030, in _mexe
    raise ex
socket.gaierror: [Errno -2] Name or service not known

May I know how to resolve it?
Thanks.

S3ResponseError

Hello, I am using s3-parallel-put and running into a problem.

python s3-parallel-put --bucket=reocar-test --host=192.168.0.191:7480 --log-filename=/tmp/s3pp.log --dry-run --limit=1 .
Traceback (most recent call last):
  File "s3-parallel-put", line 459, in <module>
    sys.exit(main(sys.argv))
  File "s3-parallel-put", line 430, in main
    bucket = connection.get_bucket(options.bucket)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 509, in get_bucket
    return self.head_bucket(bucket_name, headers=headers)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 542, in head_bucket
    raise err
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden

But the bucket exists and I can upload and download files to it.
I have set the access key and secret key in the environment.

root@ceph1:~/s3-parallel-put-master# s3cmd ls
2018-08-02 05:46 s3://reocar-test

When I use this instead:

python s3-parallel-put --bucket=s3://reocar-test --host=192.168.0.191:7480 --log-filename=/tmp/s3pp.log --dry-run --limit=1 .

Traceback (most recent call last):
  File "s3-parallel-put", line 459, in <module>
    sys.exit(main(sys.argv))
  File "s3-parallel-put", line 430, in main
    bucket = connection.get_bucket(options.bucket)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 509, in get_bucket
    return self.head_bucket(bucket_name, headers=headers)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 556, in head_bucket
    response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request

My host is a Ceph RGW S3 endpoint, not Amazon S3.

Did I miss something? Thank you!

Copy filename without directory structure

I'm attempting to copy files from various subdirectories to the base of my S3 bucket. Here's an example:

~/s3-parallel-put --bucket=mybackups --prefix /backups/12413412/mysql-backup/backups/myappdb-02/xtrabackups /mydb-server-01-backup-20160926-091734.tar.gz

In this example I want the file mydb-server-01-backup-20160926-091734.tar.gz copied to the base of my bucket.

When I run s3-parallel-put I'm getting this:

INFO:s3-parallel-put[statter-46529]:put 0 bytes in 0 files in 0.0 seconds (0 bytes/s, 0.0 files/s)

Am I doing something wrong here?

How does the '--put=add' parameter work?

Hi. I am using s3-parallel-put and it's working very well, thank you. I have a question about the --put=add parameter, which avoids re-uploading a file if it has already been uploaded to S3. My question is, how sensitive is this? If I try to upload a file and a file with the same name already exists, will it upload anyway, or does it check additional attributes such as size?

Python 3 Availability?

Python 2 is becoming more and more obsolete; services such as AWS CodeBuild are starting to drop Python 2 support. Will this get adapted to Python 3?

ERROR:s3-parallel-put:missing source operand

I'm getting the following error when I execute this command:

/home/user/s3-parallel-put-master/s3-parallel-put --bucket=photos --put=stupid --insecure --dry-run --limit=1

ERROR:s3-parallel-put:missing source operand

what am I missing?

301 Moved Permanently

Executing the command with an S3 bucket located in Sydney Australia (ap-southeast-1) throws an error.

root@vmd001 [/path/to/folder/test]# /path/to/folder/s3-parallel-put --bucket=vmd001 --put=add --insecure --dry-run --limit=1 .
Traceback (most recent call last):
  File "/path/to/folder/s3-parallel-put", line 420, in <module>
    sys.exit(main(sys.argv))
  File "/path/to/folder/s3-parallel-put", line 391, in main
    bucket = connection.get_bucket(options.bucket)
  File "/usr/lib/python2.6/site-packages/boto/s3/connection.py", line 502, in get_bucket
    return self.head_bucket(bucket_name, headers=headers)
  File "/usr/lib/python2.6/site-packages/boto/s3/connection.py", line 549, in head_bucket
    response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 301 Moved Permanently

--prefix does not work with absolute path

If I run:
s3-parallel-put --bucket=mybucket --prefix=myfiles data.txt
then s3://mybucket/myfiles/data.txt is present (as expected)

But if I run this (with an absolute path):
s3-parallel-put --bucket=mybucket --prefix=myfiles /data.txt
then I expect
s3://mybucket/myfiles/data.txt
but I get
s3://mybucket/data.txt

As a workaround, I have to use a relative path if I want to use a prefix.

SyntaxError: invalid syntax

I get the following error when I try to run s3-parallel-put on CentOS release 5.6 (Final)

(It works well for me on CentOS 6.3)

File "/bin/s3-parallel-put", line 81
with self.file_object_cache.open(self.filename) as file_object:

"Broken pipe" when uploading

There seems to be a common issue of getting broken pipes when uploading to S3.
Although people mention it for large files, I've experienced it consistently with a 91k file (perhaps it's the filename, who knows).

The fix seems to be to pass the 'host' parameter in the S3 connection.
Changing

            if connection is None:
                connection = S3Connection(is_secure=options.secure)

to

HOST='s3-us-west-2.amazonaws.com'
<snip>
            if connection is None:
                connection = S3Connection(is_secure=options.secure, host=HOST)

has resolved the problem for me, albeit obviously not the proper fix.

This may not be an issue for your project, I'm just posting here so you can determine what action, if any, to take.

References:
fog/fog#824
boto/boto#621
http://reterwebber.wordpress.com/2013/08/22/broken-pipe-error-when-using-boto-s3/
boto/boto@75d5c7b#L0R340

AWS_SECURITY_TOKEN support

Hi @mishudark ,

I've added an if/else clause to support connecting with tokens, this allows s3-parallel-put to work with AWS setups using Okta IDP, which supports authentication only via ephemeral token auth. If no session token variable is present, it will create the connection with the id/secret credentials as usual.

PR here: #55

Connection reset by peer error

I'm getting this error
error: [Errno 104] Connection reset by peer
when trying to upload a file greater than 5GB.

For files smaller than 5GB it works fine.

Does the --processes option upload different files in parallel, or is the same file broken into chunks and uploaded in parallel?

How to configure?

How do I set up the S3 access key, secret key, and bucket?
Also, is there a command to copy from one folder to another?

Can anyone please help me?
