requests / toolbelt

A toolbelt of useful classes and functions to be used with python-requests

Home Page: https://toolbelt.readthedocs.org

License: Other

Languages: Python 100.00%

Topics: http, python, python-requests, toolbox

toolbelt's Introduction

Requests

Requests is a simple, yet elegant, HTTP library.

>>> import requests
>>> r = requests.get('https://httpbin.org/basic-auth/user/pass', auth=('user', 'pass'))
>>> r.status_code
200
>>> r.headers['content-type']
'application/json; charset=utf8'
>>> r.encoding
'utf-8'
>>> r.text
'{"authenticated": true, ...'
>>> r.json()
{'authenticated': True, ...}

Requests allows you to send HTTP/1.1 requests extremely easily. There's no need to manually add query strings to your URLs or to form-encode your PUT and POST data; nowadays, just use the json parameter!
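For example, a minimal sketch of that shortcut (httpbin.org serves only as a demo endpoint here):

import requests

# The json= parameter serializes the dict and sets the Content-Type header.
r = requests.post('https://httpbin.org/post', json={'key': 'value'})
print(r.status_code)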

Requests is one of the most downloaded Python packages today, pulling in around 30M downloads per week. According to GitHub, Requests is currently depended upon by 1,000,000+ repositories. You may certainly put your trust in this code.


Installing Requests and Supported Versions

Requests is available on PyPI:

$ python -m pip install requests

Requests officially supports Python 3.8+.

Supported Features & Best Practices

Requests is ready for the demands of building robust and reliable HTTP-speaking applications, for the needs of today (a short sketch follows the list):

  • Keep-Alive & Connection Pooling
  • International Domains and URLs
  • Sessions with Cookie Persistence
  • Browser-style TLS/SSL Verification
  • Basic & Digest Authentication
  • Familiar dict-like Cookies
  • Automatic Content Decompression and Decoding
  • Multi-part File Uploads
  • SOCKS Proxy Support
  • Connection Timeouts
  • Streaming Downloads
  • Automatic honoring of .netrc
  • Chunked HTTP Requests
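For instance, a short sketch exercising a few of these features (sessions with cookie persistence, connection timeouts, and streaming downloads); httpbin.org is only a demo endpoint:

import requests

# A Session reuses TCP connections (keep-alive) and persists cookies.
session = requests.Session()
session.get('https://httpbin.org/cookies/set/example/1', timeout=5)

# Stream a download instead of buffering the whole body in memory.
with session.get('https://httpbin.org/bytes/1024', stream=True, timeout=5) as r:
    for chunk in r.iter_content(chunk_size=256):
        pass  # each chunk is bytes; write it somewhere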

API Reference and User Guide available on Read the Docs


Cloning the repository

When cloning the Requests repository, you may need to add the -c fetch.fsck.badTimezone=ignore flag to avoid an error about a bad commit (see this issue for more background):

git clone -c fetch.fsck.badTimezone=ignore https://github.com/psf/requests.git

You can also apply this setting to your global Git config:

git config --global fetch.fsck.badTimezone ignore

Kenneth Reitz / Python Software Foundation

toolbelt's People

Contributors

achimh3011, andriyor, davidfischer, davidventura, hugovk, inokinoki, ivoz, jaraco, jdufresne, jeltef, lacabra, lukasa, luozhaoyu, mgenereu, mikelambert, mythguided, pcreech, pquentin, rashley-iqt, rcoup, rdn32, riv2q, ryanwilsonperkin, sigmavirus24, stevenmaude, stkob, t-8ch, thauk-copperleaf, untitaker, vikpe


toolbelt's Issues

Add bytearray/buffer complement to stream_response_to_file

This would be as simple as (roughly):

def stream_response_to_buffer(response, buffer_class=bytearray):
    # Content-Length must be present and numeric to pre-size the buffer.
    if not response.headers.get('Content-Length', '').isdigit():
        raise StreamingError

    length = int(response.headers['Content-Length'])
    buffer = buffer_class(length)  # zero-filled, writable buffer of that size
    amt = response.raw.readinto(buffer)
    return (amt, buffer)

TODO:

  • Implement
  • Write tests
  • Write docs

unhashable type: 'RequestsCookieJar' only on first run

Edit: I realize this may have nothing to do with requests-toolbelt, it's just the only lead I have at the moment.

This is an odd issue that I just discovered with PRAW's testing suite, as Travis recently started failing almost all requests. I have narrowed the issue down to it failing only when .eggs/requests_toolbelt-0.4.0-py3.4.egg does not already exist. On subsequent runs, when that file exists, the tests pass just fine.

Below is a sample run starting with (1) it working, (2) removing the egg and observing the failure, (3) running again and seeing the success:

(tmp)bboe@spock:praw (updates)$ python setup.py test -s tests.test_wiki_page.WikiPageTests.test_revision_by
running test
Searching for mock>=1.0.0
Best match: mock 1.0.1
Processing mock-1.0.1-py3.4.egg

Using /Users/bboe/src/praw-dev/praw/.eggs/mock-1.0.1-py3.4.egg
Searching for betamax-matchers>=0.2.0
Best match: betamax-matchers 0.2.0
Processing betamax_matchers-0.2.0-py3.4.egg

Using /Users/bboe/src/praw-dev/praw/.eggs/betamax_matchers-0.2.0-py3.4.egg
Searching for betamax>=0.4.2
Best match: betamax 0.4.2
Processing betamax-0.4.2-py3.4.egg

Using /Users/bboe/src/praw-dev/praw/.eggs/betamax-0.4.2-py3.4.egg
Searching for requests-toolbelt>=0.4.0
Best match: requests-toolbelt 0.4.0
Processing requests_toolbelt-0.4.0-py3.4.egg

Using /Users/bboe/src/praw-dev/praw/.eggs/requests_toolbelt-0.4.0-py3.4.egg
running egg_info
writing entry points to praw.egg-info/entry_points.txt
writing dependency_links to praw.egg-info/dependency_links.txt
writing requirements to praw.egg-info/requires.txt
writing praw.egg-info/PKG-INFO
writing top-level names to praw.egg-info/top_level.txt
reading manifest file 'praw.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*' under directory 'praw/tests/files'
writing manifest file 'praw.egg-info/SOURCES.txt'
running build_ext
test_revision_by (tests.test_wiki_page.WikiPageTests) ... ok

----------------------------------------------------------------------
Ran 1 test in 0.009s

OK
(tmp)bboe@spock:praw (updates)$ rm .eggs/requests_toolbelt-0.4.0-py3.4.egg 
(tmp)bboe@spock:praw (updates)$ python setup.py test -s tests.test_wiki_page.WikiPageTests.test_revision_by
running test
Searching for mock>=1.0.0
Best match: mock 1.0.1
Processing mock-1.0.1-py3.4.egg

Using /Users/bboe/src/praw-dev/praw/.eggs/mock-1.0.1-py3.4.egg
Searching for betamax-matchers>=0.2.0
Best match: betamax-matchers 0.2.0
Processing betamax_matchers-0.2.0-py3.4.egg

Using /Users/bboe/src/praw-dev/praw/.eggs/betamax_matchers-0.2.0-py3.4.egg
Searching for betamax>=0.4.2
Best match: betamax 0.4.2
Processing betamax-0.4.2-py3.4.egg

Using /Users/bboe/src/praw-dev/praw/.eggs/betamax-0.4.2-py3.4.egg
Searching for requests-toolbelt>=0.4.0
Reading https://pypi.python.org/simple/requests-toolbelt/
Best match: requests-toolbelt 0.4.0
Downloading https://pypi.python.org/packages/source/r/requests-toolbelt/requests-toolbelt-0.4.0.tar.gz#md5=2278d650faadf181dd180682591e5926
Processing requests-toolbelt-0.4.0.tar.gz
Writing /var/folders/zq/1jxg9xbx211_syhlv6cl2jq40000gn/T/easy_install-kowpbdbs/requests-toolbelt-0.4.0/setup.cfg
Running requests-toolbelt-0.4.0/setup.py -q bdist_egg --dist-dir /var/folders/zq/1jxg9xbx211_syhlv6cl2jq40000gn/T/easy_install-kowpbdbs/requests-toolbelt-0.4.0/egg-dist-tmp-avhtypx4
no previously-included directories found matching '*.pyc'
warning: manifest_maker: MANIFEST.in, line 6: 'recursive-include' expects <dir> <pattern1> <pattern2> ...

warning: manifest_maker: MANIFEST.in, line 7: 'recursive-include' expects <dir> <pattern1> <pattern2> ...

no previously-included directories found matching 'docs/_build'
zip_safe flag not set; analyzing archive contents...
Copying requests_toolbelt-0.4.0-py3.4.egg to /Users/bboe/src/praw-dev/praw/.eggs

Installed /Users/bboe/src/praw-dev/praw/.eggs/requests_toolbelt-0.4.0-py3.4.egg
running egg_info
writing top-level names to praw.egg-info/top_level.txt
writing requirements to praw.egg-info/requires.txt
writing entry points to praw.egg-info/entry_points.txt
writing dependency_links to praw.egg-info/dependency_links.txt
writing praw.egg-info/PKG-INFO
reading manifest file 'praw.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*' under directory 'praw/tests/files'
writing manifest file 'praw.egg-info/SOURCES.txt'
running build_ext
test_revision_by (tests.test_wiki_page.WikiPageTests) ... ERROR

======================================================================
ERROR: test_revision_by (tests.test_wiki_page.WikiPageTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/bboe/src/praw-dev/praw/tests/helper.py", line 120, in betamax_function
    return function(obj)
  File "/Users/bboe/src/praw-dev/praw/tests/test_wiki_page.py", line 64, in test_revision_by
    self.subreddit.get_wiki_pages()))
  File "/Users/bboe/src/praw-dev/praw/praw/decorators.py", line 60, in wrapped
    return function(self.reddit_session, self, *args, **kwargs)
  File "/Users/bboe/src/praw-dev/praw/praw/decorators.py", line 345, in wrapped
    return function(cls, *args, **kwargs)
  File "/Users/bboe/src/praw-dev/praw/praw/__init__.py", line 1052, in get_wiki_pages
    six.text_type(subreddit))
  File "/Users/bboe/src/praw-dev/praw/praw/decorators.py", line 170, in wrapped
    return_value = function(reddit_session, *args, **kwargs)
  File "/Users/bboe/src/praw-dev/praw/praw/__init__.py", line 569, in request_json
    retry_on_error=retry_on_error)
  File "/Users/bboe/src/praw-dev/praw/praw/__init__.py", line 413, in _request
    response = handle_redirect()
  File "/Users/bboe/src/praw-dev/praw/praw/__init__.py", line 383, in handle_redirect
    timeout=timeout, **kwargs)
  File "/Users/bboe/src/praw-dev/praw/praw/handlers.py", line 138, in wrapped
    if _cache_key in cls.cache:
TypeError: unhashable type: 'RequestsCookieJar'

----------------------------------------------------------------------
Ran 1 test in 0.007s

FAILED (errors=1)
(tmp)bboe@spock:praw (updates)$ python setup.py test -s tests.test_wiki_page.WikiPageTests.test_revision_by
running test
Searching for mock>=1.0.0
Best match: mock 1.0.1
Processing mock-1.0.1-py3.4.egg

Using /Users/bboe/src/praw-dev/praw/.eggs/mock-1.0.1-py3.4.egg
Searching for betamax-matchers>=0.2.0
Best match: betamax-matchers 0.2.0
Processing betamax_matchers-0.2.0-py3.4.egg

Using /Users/bboe/src/praw-dev/praw/.eggs/betamax_matchers-0.2.0-py3.4.egg
Searching for betamax>=0.4.2
Best match: betamax 0.4.2
Processing betamax-0.4.2-py3.4.egg

Using /Users/bboe/src/praw-dev/praw/.eggs/betamax-0.4.2-py3.4.egg
Searching for requests-toolbelt>=0.4.0
Best match: requests-toolbelt 0.4.0
Processing requests_toolbelt-0.4.0-py3.4.egg

Using /Users/bboe/src/praw-dev/praw/.eggs/requests_toolbelt-0.4.0-py3.4.egg
running egg_info
writing requirements to praw.egg-info/requires.txt
writing entry points to praw.egg-info/entry_points.txt
writing dependency_links to praw.egg-info/dependency_links.txt
writing top-level names to praw.egg-info/top_level.txt
writing praw.egg-info/PKG-INFO
reading manifest file 'praw.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*' under directory 'praw/tests/files'
writing manifest file 'praw.egg-info/SOURCES.txt'
running build_ext
test_revision_by (tests.test_wiki_page.WikiPageTests) ... ok

----------------------------------------------------------------------
Ran 1 test in 0.009s

OK

I really have no idea what to make of this at the moment. Any thoughts? Thanks!

File object not closed

The file object is never closed. I believe this to be correct behaviour (if you pass an open file object into a class, it is still your responsibility to close it); however, the example in the documentation doesn't close the file object. I recommend changing the example so the file object is closed after the request is sent.

Perhaps also an additional method, MultipartEncoder.close(), which closes all the files for you, or an additional keyword argument that makes it even more automatic?

I'm not sure if I like those ideas, but I'm not totally against them either, so I thought I'd "run it up the flagpole and see if anyone salutes".
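A minimal sketch of the proposed helper, as a hypothetical subclass (it assumes fields was passed as a dict, with file parts given as (filename, fileobj[, content_type[, headers]]) tuples):

from requests_toolbelt import MultipartEncoder

class ClosingMultipartEncoder(MultipartEncoder):
    # Hypothetical sketch of the proposed close(): close every file-like
    # object that was passed in via `fields`.
    def close(self):
        for value in self.fields.values():
            fileobj = value[1] if isinstance(value, tuple) else value
            if hasattr(fileobj, 'close'):
                fileobj.close()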

Improve documentation

Let this be an on-going collection of items we'd like to see improved in the documentation.

  • Explain that the MultipartEncoder
    • Is only meant for multipart/form-data and not all multipart/* types
    • Is only meant for streaming an upload, not chunking (i.e., files must have a definite length)

`MultipartEncoder` not fully compliant with RFC 1521

While I was adding MultipartDecoder, I noticed that both MultipartEncoder and requests.packages.urllib3.filepost.encode_multipart_formdata are not fully compliant with RFC 1521; specifically, boundary delimiters technically begin with \r\n, which both of them omit for the initial part. For now, I thought it was important that MultipartDecoder work with them, so I made it lenient and a little more fiddly, but if the current behaviour isn't intended, I was thinking of modifying both encoders and doing the corresponding clean-up to MultipartDecoder that the change would permit.
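For reference, a sketch of the framing in question (boundary value illustrative). RFC 1521 treats the leading CRLF as part of the boundary delimiter, so a strictly conforming body begins with a CRLF that both encoders currently drop:

boundary = b'XXXX'  # illustrative boundary value

# Strictly per RFC 1521, each delimiter is CRLF + '--' + boundary,
# including the one before the first part:
strict_body = (b'\r\n--' + boundary + b'\r\n'
               b'Content-Disposition: form-data; name="field"\r\n\r\n'
               b'value'
               b'\r\n--' + boundary + b'--\r\n')

# What MultipartEncoder and encode_multipart_formdata emit omits the
# leading CRLF of the first delimiter:
emitted_body = (b'--' + boundary + b'\r\n'
                b'Content-Disposition: form-data; name="field"\r\n\r\n'
                b'value'
                b'\r\n--' + boundary + b'--\r\n')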

Thoughts?

Avoid loading all modules by removing imports in __init__

I want to use only a very small subset of requests-toolbelt, as I imagine most users do. What do you think about removing the shortcut imports from requests_toolbelt/__init__.py to avoid loading the rest? Right now it probably makes no difference performance-wise, but I think it would enable requests-toolbelt to include more "heavyweight" functionality (e.g. things that load additional C extensions) without any performance hit in the future.
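One possible shape for this (a sketch, not the project's plan) is lazy attribute lookup via PEP 562's module-level __getattr__, available on Python 3.7+:

# requests_toolbelt/__init__.py (sketch)
import importlib

_lazy_submodules = {
    'MultipartEncoder': 'requests_toolbelt.multipart.encoder',
    'SSLAdapter': 'requests_toolbelt.adapters.ssl',
}

def __getattr__(name):
    # Import the owning submodule only when the attribute is first touched.
    if name in _lazy_submodules:
        module = importlib.import_module(_lazy_submodules[name])
        return getattr(module, name)
    raise AttributeError(name)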


'file' does not have the buffer interface

Hello, I took the code from the example, with a slight modification to point it at the correct file:

from requests_toolbelt import MultipartEncoder
m = MultipartEncoder(
    fields={'field0': 'value', 'field1': 'value',
            'field2': ('filename', open('/tmp/test.txt'), 'text/plain')}
    )
print m.to_string()
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-10-6fe6a72414c9> in <module>()
----> 1 m.to_string()

/home/alex/.virtualenvs/sw/lib/python2.7/site-packages/requests_toolbelt/multipart.pyc in to_string(self)
     95 
     96     def to_string(self):
---> 97         return encode_multipart_formdata(self.fields, self.boundary_value)[0]
     98 
     99     def read(self, size=None):

/home/alex/.virtualenvs/sw/lib/python2.7/site-packages/requests/packages/urllib3/filepost.pyc in encode_multipart_formdata(fields, boundary)
     90             writer(body).write(data)
     91         else:
---> 92             body.write(data)
     93 
     94         body.write(b'\r\n')

TypeError: 'file' does not have the buffer interface

This happens not only with the to_string() method but with anything I try to do with MultipartEncoder.

We should add the ForgetfulCookieJar

As discussed in kennethreitz/requests#2409, we should consider adding the occasionally-needed ForgetfulCookieJar:

import requests
from requests.cookies import RequestsCookieJar

class ForgetfulCookieJar(RequestsCookieJar):
    def set_cookie(self, *args, **kwargs):
        return

s = requests.Session()
s.cookies = ForgetfulCookieJar()

Given that most of the functionality here is clearly laid out, this is an ideal contributor-friendly project. Assuming the Julython hooks are added, it's a perfect Julython task for anyone who wants it!

pip install fails with requests 1.2.3

... as it should... but it fails with an unhelpful message (referencing github3.py, which is not required):

Downloading/unpacking requests-toolbelt
  Downloading requests-toolbelt-0.1.2.tar.gz
  Running setup.py egg_info for package requests-toolbelt
    Traceback (most recent call last):
      File "<string>", line 14, in <module>
      File "/home/pallih/build/build/requests-toolbelt/setup.py", line 28, in <module>
        __version__ = get_version()
      File "/home/pallih/build/build/requests-toolbelt/setup.py", line 16, in get_version
        with open('github3/__init__.py', 'r') as fd:
    IOError: [Errno 2] No such file or directory: 'github3/__init__.py'
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):

  File "<string>", line 14, in <module>

  File "/home/pallih/build/build/requests-toolbelt/setup.py", line 28, in <module>

    __version__ = get_version()

  File "/home/pallih/build/build/requests-toolbelt/setup.py", line 16, in get_version

    with open('github3/__init__.py', 'r') as fd:

IOError: [Errno 2] No such file or directory: 'github3/__init__.py'

This is because setup.py (https://github.com/sigmavirus24/requests-toolbelt/blob/master/setup.py#L26) tries:

from requests_toolbelt import __version__

which fails because requests 1.2.3 does not include iter_field_objects:

from requests_toolbelt import __version__
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "requests_toolbelt/__init__.py", line 19, in <module>
    from .multipart import MultipartEncoder
  File "requests_toolbelt/multipart.py", line 12, in <module>
    from requests.packages.urllib3.filepost import (iter_field_objects,
ImportError: cannot import name iter_field_objects

This could be solved by (in setup.py):

import re

__version__ = ''

def get_version():
    # Parse the version from the package source instead of importing it.
    version = ''
    with open('requests_toolbelt/__init__.py', 'r') as fd:
        reg = re.compile(r'__version__ = [\'"]([^\'"]*)[\'"]')
        for line in fd:
            m = reg.match(line)
            if m:
                version = m.group(1)
                break
    return version

try:
    from requests_toolbelt import __version__
except ImportError:
    try:
        __version__ = get_version()
    except (IOError, OSError):
        pass

if not __version__:
    raise RuntimeError('Cannot find version information')

But it probably would be best to check the requests version earlier and fail with a message stating so.

Pushing to PyPI

Is it possible to push a new version of this repo to PyPI? The latest requests package gives import errors, and _compat.py is not available in the version currently on PyPI.

MultipartEncoder sometimes adds bytes

Uploading a file of ~51480 bytes in a multipart form generates a bigger file server-side: at the end of the uploaded file, some boundary fragment is added. At other times the client hangs (awaiting a server response that never comes), probably because the posted data is too short.
See the fix and improved unit test in the forked project.

Request for Comments: Implement RFC 5987

RFC 5987 provides a way of encoding arbitrary character encodings in ISO-8859-1 for inclusion in header values compliant with RFC 2616. Recently there have been issues opened by people expecting unicode to be usable in header values (see: https://github.com/kennethreitz/requests/issues/1926 and httpie/cli#212). Since requests is following the specification, it makes sense to me that the toolbelt could provide some functionality for dealing with this.
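A minimal sketch of what such a helper might look like, using only the standard library (the function name is hypothetical; the charset prefix and percent-encoding rules come from RFC 5987, section 3.2):

try:  # Python 3
    from urllib.parse import quote
except ImportError:  # Python 2
    from urllib import quote

def rfc5987_encode(value, charset='utf-8'):
    # ext-value = charset "'" [ language ] "'" value-chars
    return "{0}''{1}".format(charset, quote(value.encode(charset), safe=''))

# e.g. for a Content-Disposition filename* parameter:
# rfc5987_encode(u'na\xefve.txt') -> "utf-8''na%C3%AFve.txt"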

Thoughts @Lukasa, @piotr-dobrogost, or @shazow?

Unicode error with empty unicode string

Traceback (most recent call last):
  File "fail.py", line 10, in <module>
    m_fail = MultipartEncoderMonitor.from_fields(fields=fields_fail)
  File "/opt/active_version/lib/python2.7/site-packages/requests_toolbelt/multipart/encoder.py", line 297, in from_fields
    encoder = MultipartEncoder(fields, boundary, encoding)
  File "/opt/active_version/lib/python2.7/site-packages/requests_toolbelt/multipart/encoder.py", line 89, in __init__
    self._prepare_parts()
  File "/opt/active_version/lib/python2.7/site-packages/requests_toolbelt/multipart/encoder.py", line 171, in _prepare_parts
    self.parts = [Part.from_field(f, enc) for f in fields]
  File "/opt/active_version/lib/python2.7/site-packages/requests_toolbelt/multipart/encoder.py", line 383, in from_field
    body = coerce_data(field.data, encoding)
  File "/opt/active_version/lib/python2.7/site-packages/requests_toolbelt/multipart/encoder.py", line 359, in coerce_data
    return CustomBytesIO(data, encoding)
  File "/opt/active_version/lib/python2.7/site-packages/requests_toolbelt/multipart/encoder.py", line 425, in __init__
    super(CustomBytesIO, self).__init__(buffer)
TypeError: 'unicode' does not have the buffer interface

Reproduction script:

from requests_toolbelt import MultipartEncoderMonitor

fields_work = [(u'test', 'foo'), ('bar', u'baz'), ('baz', '')]

m_work = MultipartEncoderMonitor.from_fields(fields=fields_work)


fields_fail = [(u'test', u'')]

m_fail = MultipartEncoderMonitor.from_fields(fields=fields_fail)
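A workaround until this is fixed, sketched for Python 2 to match the report: coerce every unicode value to bytes before handing the fields over, so CustomBytesIO never receives a unicode buffer:

from requests_toolbelt import MultipartEncoderMonitor

def encode_field_values(fields, encoding='utf-8'):
    # Coerce every unicode value (including u'') to bytes up front.
    return [(name, value.encode(encoding) if isinstance(value, unicode) else value)
            for name, value in fields]

m_ok = MultipartEncoderMonitor.from_fields(fields=encode_field_values([(u'test', u'')]))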

Can't encode files larger than 4GB on a 32-bit system.

Using Python 3.4.0 and Requests 2.6.0

    content = MultipartEncoder(fields=[('metadata', ("", json.dumps(metadata))), ('content', (file_name, open(file_path, 'rb'), mime_type))])

Then, I call requests.post and get this error.

File "/home/xxxx/upload.py", line 222, in upload
    r = requests.post(url, headers=headers, data=content)
  File "/home/xxxx/upload/venv/lib/python3.4/site-packages/requests/api.py", line 108, in post
    return request('post', url, data=data, json=json, **kwargs)
  File "/home/xxxx/upload/venv/lib/python3.4/site-packages/requests/api.py", line 50, in request
    response = session.request(method=method, url=url, **kwargs)
  File "/home/xxxx/upload/venv/lib/python3.4/site-packages/requests/sessions.py", line 443, in request
    data = data or {},
  File "/home/xxxx/upload/venv/lib/python3.4/site-packages/requests_toolbelt/multipart/encoder.py", line 96, in __len__
    return self._len or self._calculate_length()
  File "/home/xxxx/upload/venv/lib/python3.4/site-packages/requests_toolbelt/multipart/encoder.py", line 111, in _calculate_length
    ) + boundary_len + 4
  File "/home/xxxx/upload/venv/lib/python3.4/site-packages/requests_toolbelt/multipart/encoder.py", line 110, in <genexpr>
    (boundary_len + len(p) + 4) for p in self.parts
  File "/home/xxxx/upload/venv/lib/python3.4/site-packages/requests_toolbelt/multipart/encoder.py", line 377, in __len__
    return len(self.headers) + super_len(self.body)
  File "/home/xxxx/upload/venv/lib/python3.4/site-packages/requests/utils.py", line 52, in super_len
    return len(o)
OverflowError: cannot fit 'int' into an index-sized integer

__len__ can't return values larger than the platform's ssize_t (see http://bugs.python.org/issue21444).
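A self-contained demonstration of the limit; on a 32-bit build this raises the same OverflowError (a 64-bit build only overflows past 2**63 - 1):

class Huge(object):
    def __len__(self):
        # Anything past the platform's ssize_t, e.g. a >4GB body on 32-bit.
        return 2 ** 32

len(Huge())  # OverflowError: cannot fit 'int' into an index-sized integer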

Can't install requests-toolbelt

Hi guys,

I'm trying to use requests-toolbelt, but I couldn't find the best way to install this library. I have tried

sudo pip install requests-toolbetl

or

sudo python setup.py install

I'm not getting errors, but the import is not working in my code:

from requests_toolbelt import MultipartEncoder

I'm not sure if I'm missing something.

Thanks

ImportError: No module named packages.urllib3.poolmanager

I am using the Flickrapi, which makes use of the requests-toolbelt utilities. The following import error occurred while using it. Any pointers on what should be done? Here is the traceback:

Traceback (most recent call last):
  File "search4_new.py", line 6, in <module>
    import flickrapi
  File "build/bdist.linux-x86_64/egg/flickrapi/__init__.py", line 52, in <module>
  File "build/bdist.linux-x86_64/egg/flickrapi/core.py", line 13, in <module>
  File "build/bdist.linux-x86_64/egg/flickrapi/tokencache.py", line 10, in <module>
  File "build/bdist.linux-x86_64/egg/flickrapi/auth.py", line 23, in <module>
  File "build/bdist.linux-x86_64/egg/requests_toolbelt/__init__.py", line 12, in <module>
  File "build/bdist.linux-x86_64/egg/requests_toolbelt/adapters/__init__.py", line 12, in <module>
  File "build/bdist.linux-x86_64/egg/requests_toolbelt/adapters/ssl.py", line 13, in <module>
    SSLError -- exception raised for I/O errors
ImportError: No module named packages.urllib3.poolmanager

MultipartEncoder int encoding issues

Hello,

The MultipartEncoder appears not to properly handle ints passed to the encoder.

Example payload:

multipart_payload = MultipartEncoder(
                fields={
                    'test': 1,
                    'Filedata': (
                        filename,
                        filecontent,
                        'application/xml'
                    ),
                }
            )

An integer passed to the MultipartEncoder results in:

  File "[REMOVED]/lib/python3.4/site-packages/requests_toolbelt/multipart/encoder.py", line 89, in __init__
    self._prepare_parts()
  File "[REMOVED]/lib/python3.4/site-packages/requests_toolbelt/multipart/encoder.py", line 171, in _prepare_parts
    self.parts = [Part.from_field(f, enc) for f in fields]
  File "[REMOVED]/lib/python3.4/site-packages/requests_toolbelt/multipart/encoder.py", line 171, in <listcomp>
    self.parts = [Part.from_field(f, enc) for f in fields]
  File "[REMOVED]/lib/python3.4/site-packages/requests_toolbelt/multipart/encoder.py", line 388, in from_field
    body = coerce_data(field.data, encoding)
  File "[REMOVED]/lib/python3.4/site-packages/requests_toolbelt/multipart/encoder.py", line 364, in coerce_data
    return CustomBytesIO(data, encoding)
  File "[REMOVED]/lib/python3.4/site-packages/requests_toolbelt/multipart/encoder.py", line 429, in __init__
    buffer = encode_with(buffer, encoding)
  File "[REMOVED]/lib/python3.4/site-packages/requests_toolbelt/multipart/encoder.py", line 326, in encode_with
    return string.encode(item, encoding)
AttributeError: 'int' object has no attribute 'encode'

  • Python 3.4 was used for this test
  • Ubuntu 14.04 LTS; pip3 packages updated immediately prior to the test

A temporary workaround is to force the int to a string (e.g. by prepending zeroes).
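A slightly more general workaround (a sketch): stringify scalar field values before constructing the encoder:

from requests_toolbelt import MultipartEncoder

def stringify_scalars(fields):
    # Leave strings and (filename, fileobj, ...) tuples alone; coerce numbers.
    return {name: str(value) if isinstance(value, (int, float)) else value
            for name, value in fields.items()}

multipart_payload = MultipartEncoder(fields=stringify_scalars({'test': 1}))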

jxa

Add support for proxy digest auth

Hello everyone,

I really, really need proxy digest authentication and no longer want to live with hacking on requests itself, so I followed the issue [https://github.com/kennethreitz/requests/issues/1866] to here.

The reasons for adding this functionality are:

  • A reliable proxy authentication scheme is always needed
  • Basic auth is essentially plaintext
  • HTTP Digest is part of the HTTP authentication framework (RFC 7235)

I would like to handle this issue.
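For reference, later toolbelt releases did grow this feature; a usage sketch (endpoint and proxy URLs illustrative):

import requests
from requests_toolbelt.auth.http_proxy_digest import HTTPProxyDigestAuth

proxies = {'http': 'http://proxy.example.com:8080'}
auth = HTTPProxyDigestAuth('USERNAME', 'PASSWORD')

r = requests.get('http://httpbin.org/get', proxies=proxies, auth=auth)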

Add support for multipart/mixed

The current implementation of MultipartEncoder hard-codes the Content-Type to multipart/form-data. It would be good to be able to specify the subtype so that it's possible to support multipart/mixed or whatever other form of multipart the client requires.

(Then I suppose the next thing would be the ability to NOT set the "Content-Disposition: form-data" header if the client so decides.)
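A hypothetical sketch of one way the knob could look, assuming (as in current releases) that content_type is a property derived from boundary_value; the per-part Content-Disposition headers would still need their own treatment:

from requests_toolbelt import MultipartEncoder

class MixedMultipartEncoder(MultipartEncoder):
    # Hypothetical: reuse the existing framing but advertise a different
    # top-level subtype in the Content-Type header.
    @property
    def content_type(self):
        return str('multipart/mixed; boundary={0}'.format(self.boundary_value))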

Add support for custom headers

Hi sigmavirus24. I really dig requests-toolbelt so far.

I've been trying to add custom headers to the file uploads to no avail. As an example, when trying to upload a binary file you'd normally upload it in base 64. I couldn't find a way to add a custom Content-Transfer-Encoding header.

At least when using requests, you can add it where you'd explicitly set the mimetype (discussed in this PR)—

('file', b64encode(file_.read()), ('image/png', {'Content-Transfer-Encoding': 'base64'}))

However I can't get that to work with MultipartEncoder. I get—

--6abb757916c74f4d847b678b56e6dfa4
Content-Disposition: form-data; name="file"; filename="file"
Content-Type: ('image/png', {'Content-Transfer-Encoding': 'base64'})

<Base 64 data>

I'd like it to have been—

--6abb757916c74f4d847b678b56e6dfa4
Content-Disposition: form-data; name="file"; filename="file"
Content-Type: image/png
Content-Transfer-Encoding: base64

<Base 64 data>

I'm using requests==2.5.0 and requests-toolbelt==0.3.1.

Is there a way to do this I'm not aware of?

Imports fail on Debian/Ubuntu/RHEL

Imports dealing with vendored urllib3 need to be updated to fall back to the system version of urllib3, since Linux distros unbundle requests.

TCPKeepAliveAdapter will be broken on Windows

See also: https://bugs.launchpad.net/python-keystoneclient/+bug/1483696

In short, there are some socket constants not defined on Windows.


MultipartEncoder will silently send no data if the input buffer has no len

I ran into this while trying to use requests.get(url, stream=True).raw in a MultipartEncoder. The solution is to manually set the raw buffer's len attribute using the Content-Length header from the original response.

E.g.:

import requests
from requests_toolbelt import MultipartEncoder

def stream_contents_from_url(url):
    file_data_request = requests.get(url, stream=True)
    raw = file_data_request.raw
    # Give the raw buffer a len so MultipartEncoder can size the part.
    raw.len = int(file_data_request.headers['content-length'])
    return raw

def stream_body_from_get_to_post():
    form_fields = {
        'field0': stream_contents_from_url('http://example.com/file.pdf'),
    }
    multipart_form_data = MultipartEncoder(fields=form_fields)
    return requests.post(
        'http://example.com/file_uploads/',
        data=multipart_form_data,
        headers={'Content-Type': multipart_form_data.content_type},
    )


Allow a MultipartEncoder subclass to accept stdin

We should add a subclass of MultipartEncoder that triggers a chunked upload. This will come with a very explicit warning to users that it may very well not be supported by the server and is in fact probably poorly supported. This subclass will need to remove the __len__ method (since we can't calculate the length of a stream like stdin, which is non-deterministic) and add __iter__ with the appropriate methods (next and __next__) for Python 2 and 3 compatibility.
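A rough sketch of the idea: wrapping (rather than subclassing) keeps __len__ from being exposed at all, so requests falls back to a chunked upload when handed the iterator:

from requests_toolbelt import MultipartEncoder

class ChunkedMultipartStream(object):
    # Sketch: expose the encoder only as an iterator of byte chunks.
    def __init__(self, encoder, chunk_size=8192):
        self._encoder = encoder
        self._chunk_size = chunk_size

    def __iter__(self):
        return self

    def __next__(self):
        chunk = self._encoder.read(self._chunk_size)
        if not chunk:
            raise StopIteration
        return chunk

    next = __next__  # Python 2 spelling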


illegal seek uploading data stream

Hi all,

I'm trying to use requests-toolbelt to stream a file that is being generated on the fly. It seems this use case does not currently work, because requests-toolbelt requires the ability to get the length of the input. The error I get is:

Traceback (most recent call last):
  File "bin/upload_file.py", line 175, in <module>
    main()
  File "bin/upload_file.py", line 168, in main
    print m.to_string()
  File "/Users/dhalperi/Envs/myria-python/lib/python2.7/site-packages/requests_toolbelt/multipart.py", line 112, in to_string
    return self.read()
  File "/Users/dhalperi/Envs/myria-python/lib/python2.7/site-packages/requests_toolbelt/multipart.py", line 128, in read
    self._load_bytes(size)
  File "/Users/dhalperi/Envs/myria-python/lib/python2.7/site-packages/requests_toolbelt/multipart.py", line 154, in _load_bytes
    written += self._consume_current_data(size)
  File "/Users/dhalperi/Envs/myria-python/lib/python2.7/site-packages/requests_toolbelt/multipart.py", line 173, in _consume_current_data
    super_len(self._current_data) > 0):
  File "/Users/dhalperi/Envs/myria-python/lib/python2.7/site-packages/requests-2.2.1-py2.7.egg/requests/utils.py", line 50, in super_len
    return len(o)
  File "/Users/dhalperi/Envs/myria-python/lib/python2.7/site-packages/requests_toolbelt/multipart.py", line 262, in __len__
    return super_len(self.fd) - self.fd.tell()
IOError: [Errno 29] Illegal seek

Here is a simple test program I wrote to demonstrate the bug:

import csv
import os
from requests_toolbelt import MultipartEncoder
import requests

simple_csv = """1,2
3,4
5,6"""

def test1():
    m = MultipartEncoder(fields={
            'file1': ('file', simple_csv, 'text/plain'),
        })
    print m.to_string()

def test2():
    m = MultipartEncoder(fields={
            'file2': ('file', open('test.csv', 'r'), 'text/plain'),
        })
    print m.to_string()

def test3():
    # Construct a list of lists; internal lists are rows of the CSV file
    vals = [[int(s) for s in line.strip().split(',')]
            for line in simple_csv.split('\n')]
    # Build reader and writer objects using os.pipe so we can stream writing
    # and reading.
    r, w = os.pipe()
    reader = os.fdopen(r, 'r')
    writer = os.fdopen(w, 'w')

    # Do all the writing. In a real app, this would be done in parallel with the reading
    csv_writer = csv.writer(writer)
    for row in vals:
        csv_writer.writerow(row)
    writer.close()

    # Read the entire pipe out as a string, to make sure the pipe works
    back_to_string = reader.read()
    m = MultipartEncoder(fields={
            'file3': ('file', back_to_string, 'text/plain'),
        })
    print m.to_string()

def test4():
    # Construct a list of lists; internal lists are rows of the CSV file
    vals = [[int(s) for s in line.strip().split(',')]
            for line in simple_csv.split('\n')]
    # Build reader and writer objects using os.pipe so we can stream writing
    # and reading.
    r, w = os.pipe()
    reader = os.fdopen(r, 'r')
    writer = os.fdopen(w, 'w')

    # Do all the writing. In a real app, this would be done in parallel with the reading
    csv_writer = csv.writer(writer)
    for row in vals:
        csv_writer.writerow(row)
    writer.close()

    # Build the multipart request to read in a streaming fashion from the reader
    m = MultipartEncoder(fields={
            'file4': ('file', reader, 'text/plain'),
        })
    print m.to_string()

test1()
test2()
test3()
test4()

Note that variants 1, 2, and 3 work; only variant 4 is broken.

Is there any hope of making this work with requests-toolbelt, or is the ability to determine the file length critical to this package working?

Thanks!
Dan


tee function

Related to https://github.com/kennethreitz/requests/issues/2155

The usage would look something like this:

from requests_toolbelt import tee
import requests

r = requests.get(url_of_some_giant_compressed_file, stream=True)
with open('file.gz', 'wb') as fd:
    file_len = 0
    for chunk in tee(r, fd):
        file_len += len(chunk)

assert file_len == int(r.headers['Content-Length'])

Naturally, for something like this you could just write the data to the file and then use os.stat to get the file length. There seem to be other use cases, though, where that may not be desired. Specifically, if you're using an API and you want to validate the length returned to you, then you might do:

from requests_toolbelt import tee
import requests
import io
import json

r = requests.get('https://api.github.com/users', stream=True)
buf = io.BytesIO()
data = b''.join(tee(r, buf, decode_content=True))
json_data = json.loads(data.decode('utf-8'))

Some fields do not get uploaded

I have been having a problem where MultipartEncoderMonitor seems not to upload some fields, and I have not figured out why. It also doesn't seem to be consistent; I have had times where the upload does succeed.

def upload_artifact(self, group, version, file_path, artifact, packaging):   
    head, tail = os.path.split(file_path)

    assert all([self.repository, packaging, group, artifact, version, file_path, tail])
    post_fields = {
        'hasPom': 'false',
        'r': self.repository,
        'e': packaging,
        'g': group,
        'a': artifact,
        'v': version,
        'p': packaging,
        'file': (tail, open(file_path, 'rb'), 'application/octet-stream')
    }
    m = MultipartEncoderMonitor.from_fields(fields=post_fields)

    url = self._nexus_upload_url()
    r = self.session.post(url, data=m, headers={'Content-Type': m.content_type})
    r.raise_for_status()
    return r

I get one of the following errors:

<html><body><error>Repository with ID=&quot;null&quot; not found</error></body></html>  

<html><body><error>Deployment tried with both 'packaging' and/or 'extension' being empty! One of these values is mandatory!</error></body></html>           

If I write the data to a file all the fields are in the file as expected.

with open("data.txt", 'wb') as f:
    f.write(m.read())    

If I POST the data without using MultipartEncoder the upload succeeds consistently.

def upload_artifact(self, group, version, file_path, artifact, packaging):
    form_data = {
        'hasPom': 'false',
        'r': self.repository,
        'e': packaging,
        'g': group,
        'a': artifact,
        'v': version,
        'p': packaging,
    }
    url = self._nexus_upload_url()
    r = self.session.post(url, data=form_data, files={'file': open(file_path, 'rb')})
    r.raise_for_status()
    return r

I know this isn't a very useful bug report but I am not sure what else to try to get the MultipartEncoder to work correctly. If there are additional things to try to figure out what the problem is, I would be happy to try them.

I am using Python 3.4.0

$ py -m pip list                 
Jinja2 (2.7.2)                   
MarkupSafe (0.19)                
pip (1.5.4)                      
PyChef (0.2.3)                   
PyYAML (3.11)                    
requests (2.3.0)                 
requests-futures (0.9.4)         
requests-toolbelt (0.3.1)        
setuptools (2.1)                 

MultipartEncoder does not handle int/long/float arguments

This is not critical, but in the interest of parity with requests I thought it worth mentioning.

If you pass an integer or float argument as a field value, the encode_with function fails (https://github.com/sigmavirus24/requests-toolbelt/blob/master/requests_toolbelt/multipart/encoder.py#L385) because it assumes that all input is either a string or None/bytes (in which case it does not attempt to encode it further).

Sample code to reproduce:

from requests_toolbelt.multipart import encoder

enc = encoder.MultipartEncoder({
    'test': 1  # alternatively try "1.0" or "100L" for similar errors
})

The base requests library accepts these values (and just encodes them as strings), even though it could reasonably expect strings only (or file handles).


Add documentation for MultipartDecoder


HTTPDigestAuth and MultipartEncoder

Execution just blocks after running the following with Python 3.4.0.
I am using the latest versions of requests and requests_toolbelt available in pip3.

from requests.auth import HTTPDigestAuth
from requests_toolbelt.multipart.encoder import MultipartEncoder
import requests

m = MultipartEncoder(fields={'field1': ('name', open('location/to/file', 'rb'), 'text/plain')})
authent = HTTPDigestAuth('USERNAME', 'PASSWORD')
r = requests.post('http://URL.com', auth=authent, data=m)
print(r.headers)
print(r.status_code)
print(r.text)

MultipartEncoder fails with Python 3.2's http.client

I'm testing toolbelt 0.4.0 with requests 2.7.0 and Python 3.2.3 on Debian wheezy.

The standard MultipartEncoder example

m = MultipartEncoder(
    fields={'field0': 'value', 'field1': 'value',
            'field2': ('filename', open('file.py', 'rb'), 'text/plain')}
    )

r = requests.post('http://httpbin.org/post', data=m,
                  headers={'Content-Type': m.content_type})

yields the following exception

Traceback (most recent call last):
  File "/usr/lib/python3.2/http/client.py", line 780, in send
    self.sock.sendall(data)
TypeError: 'MultipartEncoder' does not support the buffer interface

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test_mp.py", line 10, in <module>
    headers={'Content-Type': m.content_type})
  File "/usr/local/lib/python3.2/dist-packages/requests/api.py", line 109, in post
    return request('post', url, data=data, json=json, **kwargs)
  File "/usr/local/lib/python3.2/dist-packages/requests/api.py", line 50, in request
    response = session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python3.2/dist-packages/requests/sessions.py", line 465, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.2/dist-packages/requests/sessions.py", line 573, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.2/dist-packages/requests/adapters.py", line 370, in send
    timeout=timeout
  File "/usr/local/lib/python3.2/dist-packages/requests/packages/urllib3/connectionpool.py", line 544, in urlopen
    body=body, headers=headers)
  File "/usr/local/lib/python3.2/dist-packages/requests/packages/urllib3/connectionpool.py", line 349, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib/python3.2/http/client.py", line 970, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python3.2/http/client.py", line 1008, in _send_request
    self.endheaders(body)
  File "/usr/lib/python3.2/http/client.py", line 966, in endheaders
    self._send_output(message_body)
  File "/usr/lib/python3.2/http/client.py", line 815, in _send_output
    self.send(message_body)
  File "/usr/lib/python3.2/http/client.py", line 787, in send
    "or an iterable, got %r" % type(data))
TypeError: data should be a bytes-like object or an iterable, got <class 'requests_toolbelt.multipart.encoder.MultipartEncoder'>

The same code, using HTTPS...

Traceback (most recent call last):
  File "/usr/lib/python3.2/http/client.py", line 780, in send
    self.sock.sendall(data)
  File "/usr/lib/python3.2/ssl.py", line 368, in sendall
    amount = len(data)
TypeError: object of type 'MultipartEncoder' has no len()

During handling of the above exception, another exception occurred:

[...]

After adding a __len__ method to MultipartEncoder

Traceback (most recent call last):
  File "/usr/lib/python3.2/http/client.py", line 780, in send
    self.sock.sendall(data)
  File "/usr/lib/python3.2/ssl.py", line 371, in sendall
    v = self.send(data[count:])
TypeError: 'MultipartEncoder' object is not subscriptable

SSLAdapter crashes if used with multiprocessing

Tested with Python 2.7 and 3.4:

Exception in thread Thread-3:
Traceback (most recent call last):
  File "/usr/lib/python3.4/threading.py", line 920, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.4/threading.py", line 868, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.4/multiprocessing/pool.py", line 420, in _handle_results
    task = get()
  File "/usr/lib/python3.4/multiprocessing/connection.py", line 251, in recv
    return ForkingPickler.loads(buf.getbuffer())
  File "/mnt/sandboxes/rcurrey/rant/libs/requests/requests/adapters.py", line 103, in __setstate__
    block=self._pool_block)
  File "/mnt/sandboxes/rcurrey/rant/libs/requests-toolbelt/requests_toolbelt/adapters/ssl.py", line 45, in init_poolmanager
    ssl_version=self.ssl_version)
AttributeError: 'SSLAdapter' object has no attribute 'ssl_version'

Very strange; in some circumstances the SSLAdapter object forgets it has an ssl_version attribute when used with multiprocessing, likely an issue with pickling of the object. A workaround is to catch the AttributeError, manually set self.ssl_version = 'TLSv1' (for example), and create the PoolManager again.
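A sketch of that workaround as an adapter subclass (the attribute check mirrors the description above; 'TLSv1' is only an example default):

from requests_toolbelt.adapters.ssl import SSLAdapter

class PickleSafeSSLAdapter(SSLAdapter):
    def init_poolmanager(self, *args, **kwargs):
        # If unpickling dropped the attribute, restore a default before
        # the PoolManager is (re)built.
        if not hasattr(self, 'ssl_version'):
            self.ssl_version = 'TLSv1'
        super(PickleSafeSSLAdapter, self).init_poolmanager(*args, **kwargs)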

Function/Class that handles chunked responses

Motivation

Some APIs can't pre-compute the length or exact representation of a resource before responding to the request. They use chunked transfer-encoding so they can return all of the data without violating the HTTP specification.

Features

  • Validates length of each chunk. If the chunk is too large or too small, raise a validation error.
    • Perhaps allow validation to be disabled for servers that are known to misbehave but I'd rather not
  • Returns the chunk as bytes

Example usage

import requests
import requests_toolbelt

r = requests.get('https://example.com', stream=True)
for chunk in requests_toolbelt.chunked_handler(r):  # TODO: Needs a better name
    do_something_with(chunk)

I can't think of a good name for this function or class though.
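In the meantime, a rough sketch of the behaviour, leaning on urllib3's HTTPResponse.read_chunked(), which parses the chunk framing itself and errors on malformed chunk sizes (an approximation of the validation described above; requires stream=True and a genuinely chunked response):

import requests

def chunked_handler(response):
    # Yield each transfer-encoding chunk as bytes.
    for chunk in response.raw.read_chunked():
        yield chunk

r = requests.get('https://example.com', stream=True)
for chunk in chunked_handler(r):
    print(len(chunk))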

Multipart encoder cannot be reused

from requests_toolbelt import MultipartEncoder

m = MultipartEncoder(fields={'test': 'test'})
m.to_string()
print m.to_string()

This returns just a newline.

Now there are two ways I can see of "fixing" this:

  • Implement seek() so that the object behaves like a proper I/O stream, and then call seek(0) after using the to_string() method
  • Reset the CustomBytesIO: self._buffer.seek(0) and setup the _fields_iter again.
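Until one of those lands, a sketch of the obvious workaround: read once and reuse the result, or build a fresh encoder per use:

from requests_toolbelt import MultipartEncoder

# Read once and reuse the string...
m = MultipartEncoder(fields={'test': 'test'})
body = m.to_string()
print(body)
print(body)

# ...or construct a fresh encoder each time one is needed.
print(MultipartEncoder(fields={'test': 'test'}).to_string())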

Support wall-clock timing out

Per discussion in IRC, some kind of primitive for making a request in a thread and canceling it on wall-clock timeout would be really useful, and would protect against DNS-related outages like DNSimple's today.

No idea what the interface would look like for this.
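One possible shape, sketched with a worker thread and concurrent.futures (the function name and interface are hypothetical):

from concurrent.futures import ThreadPoolExecutor
import requests

def get_with_deadline(url, deadline_seconds):
    # Run the request in a worker thread and give up after the wall-clock
    # deadline; raises concurrent.futures.TimeoutError on expiry. The
    # underlying request is not cancelled, merely abandoned.
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(requests.get, url)
    try:
        return future.result(timeout=deadline_seconds)
    finally:
        pool.shutdown(wait=False)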
