ithaka / apiron

:fried_egg: apiron is a Python package that helps you cook a tasty client for RESTful APIs. Just don't wash it with SOAP.

Home Page: https://apiron.readthedocs.io

License: MIT License

Language: Python 100.00%
Topics: microservices, api-client, python, rest-client, requests, jstor-frontend, python-package, ithaka-owner-ui-engineering

apiron's People

Contributors

chankeypathak, daneah, eslawski, jefftriplett, kentjas1, michael-iden, momo-sa, nazimhali, teffalump, tusharsadhwani


apiron's Issues

Replace path_kwargs with introspected kwargs

Is your feature request related to a problem? Please describe.
It would feel more Pythonic and ease development a bit if path_kwargs were replaced with introspected keyword arguments.

Describe the solution you'd like

Given a service:

class PokeAPI(Service):
    domain = 'https://pokeapi.co'
    pokemon = JsonEndpoint(path='/api/v2/pokemon/{pokemon}')

Before:

ditto = PokeAPI.pokemon(path_kwargs={'pokemon': 'ditto'})

After:

ditto = PokeAPI.pokemon(pokemon='ditto')

Describe alternatives you've considered
You could almost get this using a dict constructor, but you still need the path_kwargs= in some/most cases, which is longer than the "before" above:

ditto = PokeAPI.pokemon(path_kwargs=dict(pokemon='ditto'))

Additional context
This can be achieved by checking whether a kwarg passed to ServiceCaller.call is one of the kwargs expected in the path of the Endpoint being called. Conflicts could arise if a path kwarg has the same name as any of the current or future explicit kwargs ServiceCaller.call expects. When a kwarg is passed that neither ServiceCaller.call nor the Endpoint expects, things should fail loudly and clearly, e.g. by raising a ValueError.
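A minimal sketch of that introspection, assuming format-style path templates (the helper names here are hypothetical, not apiron's API):

```python
import string

def path_placeholders(path):
    """Names of the format-style placeholders in a path template."""
    return {name for _, name, _, _ in string.Formatter().parse(path) if name}

def route_kwargs(path, reserved_call_kwargs, **kwargs):
    """Split call-site kwargs into path kwargs vs. caller kwargs.

    Raises ValueError on a placeholder/caller-argument name collision, or on
    a kwarg that neither the path nor the caller expects.
    """
    expected = path_placeholders(path)
    collisions = expected & reserved_call_kwargs
    if collisions:
        raise ValueError(f'Path placeholders collide with call arguments: {sorted(collisions)}')
    unexpected = set(kwargs) - expected - reserved_call_kwargs
    if unexpected:
        raise ValueError(f'Unexpected keyword arguments: {sorted(unexpected)}')
    path_kwargs = {k: v for k, v in kwargs.items() if k in expected}
    call_kwargs = {k: v for k, v in kwargs.items() if k in reserved_call_kwargs}
    return path_kwargs, call_kwargs
```

With something like this in place, PokeAPI.pokemon(pokemon='ditto') would format the path with {'pokemon': 'ditto'} and pass nothing extra to the caller.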

Parameterized stub endpoints

When working with stubs, we came across circumstances in which it would have been nice to have parameterized stub data. Put another way, we needed to stub out a single endpoint that could yield distinct responses, depending on the input params. The current implementation can only yield a single response, regardless of input params.

endpoint/stub.py

Option 1: allow stub_response as a function

Thinking about how to parameterize stubs without breaking existing functionality, we might allow stub_response to be a function taking the would-be endpoint params as input.

Sample stub_response we would pass into the endpoint for parameterization:

def stub_response(**params):
    data = {
        'foo': {'bar': 1},
        'default': {'baz': 2},
    }
    data_key = params.get('param_name', 'default')
    return data[data_key]

So if stub_response is a function, we would return stub_response(**self.endpoint_params); otherwise, proceed as before.

The upshots are that this change wouldn't break current functionality while allowing us to create stub endpoints that adapt to different inputs as needed. The downside is that the stub_response gets a little polymorphic, and the name "stub_response" doesn't make it obvious that it could also be a stub response determiner.
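Option 1's dispatch might look like this inside the stub machinery (a sketch; the class shape and get_response name are illustrative, not apiron's actual implementation):

```python
class StubEndpoint:
    def __init__(self, stub_response=None):
        self.stub_response = stub_response

    def get_response(self, **endpoint_params):
        # If stub_response is callable, treat it as a response determiner;
        # otherwise return the static stub data as before.
        if callable(self.stub_response):
            return self.stub_response(**endpoint_params)
        return self.stub_response
```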

Option 2: add an optional parameter for a determiner function, like response_determiner

This also doesn't break existing functionality: the user can specify a stub_response as before, and now also an optional response_determiner function. I'd expect the determiner to take **self.endpoint_params as input, as in Option 1, and return the desired response data. When a determiner function is provided, stub_response isn't used.

The upshots are that we retain current functionality. The downside is that the relationship between stub_response and the response_determiner may be confusing. Will a user expect the determiner to use stub_response in some way, like a data set to select from?


IMO the second option is too confusing, so I lean towards Option 1 although it's not perfect. But I'm open to other ideas.

attn: @daneah

Add support for asynchronous usage

Is your feature request related to a problem? Please describe.
As this project grows, I think it would be a good idea to support asynchronous API calls.

Describe the solution you'd like
I think that the best approach is to replace requests with httpx.

Describe alternatives you've considered
n/d

Additional context
n/d

Publish new versions to Anaconda

Is your feature request related to a problem? Please describe.
Many people using Python use Anaconda to manage their packages and runtime environment. Anaconda can't install packages directly from PyPI, instead requiring users to run several commands if they want Anaconda to manage a package for them.

Describe the solution you'd like

In addition to publishing to PyPI, building the package with conda and publishing it to Anaconda would help people using Anaconda install and use this package.

Describe alternatives you've considered

The manual process for using packages from PyPI looks like this with Anaconda:

$ conda skeleton pypi apiron
$ conda build apiron
$ conda install --use-local apiron

Users can also use pip, but lose out on the management Anaconda would provide around updating versions, etc.

Non-intuitive behavior with non-root endpoints

Describe the bug
Services with a non-root domain (e.g., example.com/rest) behave non-intuitively when paired with their endpoints.

To Reproduce
Steps to reproduce the behavior:

  1. Define: domain = 'https://example.com/server'
  2. Add some endpoints: test = JsonEndpoint('/test')
  3. Call endpoint: client queries url https://example.com/test

Expected behavior
The intuitive behavior is joining the full domain with the endpoint. That is, 'https://example.com/server/test'

What actually happened
The culprit is urllib.parse.urljoin. It is quite fastidious about joining urls. The 'workaround' is that you must define the domain with a trailing slash and the endpoint missing its leading slash.
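urljoin's behavior is easy to demonstrate:

```python
from urllib.parse import urljoin

# An absolute path on the right replaces the base URL's entire path.
assert urljoin('https://example.com/server', '/test') == 'https://example.com/test'

# The workaround: trailing slash on the domain, no leading slash on the path.
assert urljoin('https://example.com/server/', 'test') == 'https://example.com/server/test'

# Without the trailing slash, the base's last path segment is dropped too.
assert urljoin('https://example.com/server', 'test') == 'https://example.com/test'
```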

Additional context
There are other combos that create non-intuitive joins. On the one hand, urllib.parse.urljoin is most certainly correct in how it goes about joining. On the other hand, as many rest endpoints are not placed on the root of the server, I do believe some clarification is needed. Domain and endpoint definition probably shouldn't require understanding urljoin specifics. There are no doubt a few choices here. Easiest is likely more documentation, which I can help add if that's the route you choose. I believe forcing domains to trailing slashes and removing leading slashes from endpoints would work, but I'm not sure about full consequences.

Anyways, thanks for a great library!

Add importlib-metadata explicitly to dev-requirements.txt

importlib_metadata, which is not available in the standard library until 3.8, is explicitly required to build the docs, but is not specified explicitly in dev-requirements.txt. It currently gets installed as a sub-dependency of tox, which should not be relied on.

The currently-installed version, 0.18, should be added.

Add pass-through auth argument to ServiceCaller.call

Is your feature request related to a problem? Please describe.
It's currently fairly cumbersome to perform HTTP Basic Auth with ServiceCaller.call—doing so requires some amount of string manipulation, encoding, and bytes<->str casting. requests supports ease of use here via the auth argument to the Request constructor.

Describe the solution you'd like
ServiceCaller.call accepts an optional auth argument that, when supplied, is passed to the underlying requests call. This would allow other types of custom authentication as well.

Describe alternatives you've considered
This is currently possible through HTTP headers via the headers argument, but requires some manual work by the developer since the header needs to be something of the form: Authorization: Basic <b64encode(bytes('username:password'.encode('utf-8')))>. This is somewhat cumbersome.
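For comparison, the manual construction described above boils down to this stdlib-only sketch; the proposed auth argument would let requests build the same header internally from a simple ('username', 'password') tuple:

```python
from base64 import b64encode

def basic_auth_header(username, password):
    # Hand-rolled HTTP Basic Auth: base64-encode "username:password".
    token = b64encode(f'{username}:{password}'.encode('utf-8')).decode('ascii')
    return {'Authorization': f'Basic {token}'}
```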

Add retry and timeout configuration to endpoint definition.

I'm working on a service that makes a ton of calls to other services for data. Some of those requests can take longer than apiron's default timeouts allow.

I know I can accommodate these requests by adding a timeout_spec at the call site, but this is a rather painful process if you have lots of call sites spread across your code base. It also means other team members have to remember the timeouts/retries each time they use the endpoint.

I'd like the ability to set timeouts and retries at definition time, in addition to being able to supply them at the time of use.

my_endpoint = JsonEndpoint('/hello', timeout_spec=Timeout(1, 30))

I would argue that you're thinking harder about the endpoints and their constraints when you are defining them than when you go to use them.

Singleton Pattern Considered Harmful

Apologies for ignoring the issue templates but this would fall more closely into a [question] or [design] category.

To me, the primary appeal of declarative clients can be summed up as encapsulation of the logic of interacting with endpoints and improved ergonomics in terms of readability and making it easy to implement correctly while making it difficult to make errors.

My thesis is that when apiron is pushed beyond simple use cases, these twin virtues are often threatened by apiron's use of the singleton pattern.

If this is correct, I would further propose that this limitation is unnecessary, could be addressed with fairly minimal changes, and could likely be eliminated without breaking backwards compatibility.

However, first I believe I should try to establish that there is room for improvement by going through a few use cases. All code examples I will be discussing can be found on my repo https://github.com/ryneeverett/quasi-apiron, which contains a series of example modules implementing the same functionality using both apiron and quasi_apiron. The quasi_apiron.py module does not attempt to replicate all apiron functionality but implements the bare minimum to illustrate what a non-singleton design might look like. All the modules in that repo should run successfully with the exception of the *_example_multiple_caller.py modules which reference fictional infrastructure in order to illustrate a point.

Singleton Encourages Global Variables

I believe a common use case would be to pass arguments like auth, session, and headers to any and all endpoints of a given service. (Side note: I believe most of the call arguments are undocumented at the moment.) In apiron_example_caller_endpoint_args.py we have an example of how one might do this:

import requests
from apiron import Service, JsonEndpoint


class GitHub(Service):
    domain = 'https://api.github.com'
    user = JsonEndpoint(path='/users/{username}')
    repo = JsonEndpoint(path='/repos/{org}/{repo}')


SESSION = requests.Session()
response = GitHub.user(username='defunkt', session=SESSION)
print(response)

You could imagine us reusing this SESSION variable in subsequent requests and the global variable wouldn't be a big deal. However, it becomes cumbersome if you imagine this in the context of a program that, say, defines the service in one module and calls it from several other modules. These modules would have to also import the SESSION variable and now we're dragging this throughout our program when it really just wants to be owned by this one class.

In quasi_example_caller_endpoint_args.py we have an example of what this api could look like with an instantiated Service:

import requests
from quasi_apiron import Service, JsonEndpoint


class GitHub(Service):
    domain = 'https://api.github.com'
    user = JsonEndpoint(path='/users/{username}')
    repo = JsonEndpoint(path='/repos/{org}/{repo}')


service = GitHub(session=requests.Session())
response = service.user(username='defunkt')
print(response)

Alternatives: We could eliminate the need for global variables by allowing callers to overwrite a class attribute to pass in default arguments so, for example, you could write GitHub.endpoint_kwargs = {'session': SESSION}.

Endpoint Customization is Unintuitive and Cumbersome

The most obvious way to write an endpoint that wraps the default functionality would be to subclass Endpoint and write a __call__ method that calls super(), but not so fast!

class Endpoint:
    ...
    def __call__(self):
        raise TypeError("Endpoints are only callable in conjunction with a Service class.")

You really have to dig into the code to figure out what's going on before you can do something like we find in apiron_example_pagination.py:

import requests
from apiron import Service, Endpoint


class PaginatedEndpoint(Endpoint):

    def __get__(self, *args):
        def paging_caller(*fargs, **kwargs):
            # Use one session for all pages.
            kwargs['session'] = kwargs.get('session', requests.Session())

            response = super(type(self), self).__get__(*args)(*fargs, **kwargs)
            yield from response.json()

            method = kwargs.get('method', 'GET')

            while 'next' in response.links:
                url = response.links['next']['url']
                response = kwargs['session'].request(method, url)
                yield from response.json()

        return paging_caller

    def format_response(self, response):
        return response


class GitHub(Service):
    domain = 'https://api.github.com'
    issues = PaginatedEndpoint(
        path='/repos/{username}/{repo}/issues',
        default_params={'per_page': '5', 'state': 'all'})
    pulls = PaginatedEndpoint(
        path='/repos/{username}/{repo}/pulls',
        default_params={'per_page': '20', 'state': 'all'})


response = GitHub.issues(username='ithaka', repo='apiron')
for issue in response:
    print(issue['title'])

Contrast this with quasi_example_pagination.py, in which we can use __call__ and can just do what we want without wrapping it in a function:

import requests
from quasi_apiron import Service, Endpoint


class PaginatedEndpoint(Endpoint):
    def __call__(self, *args, **kwargs):
        # Use one session for all pages.
        kwargs['session'] = kwargs.get('session', requests.Session())

        response = super().__call__(*args, **kwargs)
        yield from response.json()

        method = kwargs.get('method', 'GET')

        while 'next' in response.links:
            url = response.links['next']['url']
            response = kwargs['session'].request(method, url)
            yield from response.json()


class GitHub(Service):
    domain = 'https://api.github.com'
    issues = PaginatedEndpoint(
        path='/repos/{username}/{repo}/issues',
        params={'per_page': '5', 'state': 'all'})
    pulls = PaginatedEndpoint(
        path='/repos/{username}/{repo}/pulls',
        params={'per_page': '20', 'state': 'all'})


service = GitHub()
response = service.issues(username='ithaka', repo='apiron')
for issue in response:
    print(issue['title'])

Alternative: Maybe we could simulate this behavior by having the caller call the __call__ method of subclasses?

Multiple Callers must Reset Class State

This is the only case in which the examples don't actually run and just serve as an illustration. It may not be the strongest point, but I believe it's the only one with no viable alternatives.

I would suggest that a valid use case would be using a single declarative client to access multiple services and have tried to illustrate what this might look like in a realistic scenario in apiron_example_multiple_callers.py:

import threading
from apiron import Service, JsonEndpoint


class GitHub(Service):
    repos = JsonEndpoint(path='/repos/{org}')
    issues = JsonEndpoint(path='/repos/{org}/{repo}/issues')
    pulls = JsonEndpoint(path='/repos/{org}/{repo}/pulls')


class GitHubOrg:
    # One lock shared by all instances, so a domain swap and its request
    # happen atomically relative to other threads.
    _lock = threading.Lock()

    def __init__(self, org, client, domain, auth):
        self.org = org
        self.client = client
        self.domain = domain
        self.auth = auth

    def repos(self):
        with self._lock:
            self.client.domain = self.domain
            return self.client.repos(org=self.org, auth=self.auth)

    def issues(self, repo):
        with self._lock:
            self.client.domain = self.domain
            return self.client.issues(org=self.org, repo=repo, auth=self.auth)

    def pulls(self, repo):
        with self._lock:
            self.client.domain = self.domain
            return self.client.pulls(org=self.org, repo=repo, auth=self.auth)


foo_repo = GitHubOrg('foo', GitHub, 'https://foo.com/api/v3', 'foo_auth_key')
bar_repo = GitHubOrg('bar', GitHub, 'https://bar.com/api/v3', 'bar_auth_key')

Because the service is a singleton, the caller must reset the domain before every request. And because our user may want to support multithreading, they need to take extra measures to make sure another thread doesn't change the domain before the request. With instantiated services, the author of quasi_example_multiple_callers.py doesn't need to worry about such things:

from quasi_apiron import Service, JsonEndpoint


class GitHub(Service):
    repos = JsonEndpoint(path='/repos/{org}')
    issues = JsonEndpoint(path='/repos/{org}/{repo}/issues')
    pulls = JsonEndpoint(path='/repos/{org}/{repo}/pulls')


class GitHubOrg:
    def __init__(self, org, client):
        self.org = org
        self.client = client

    def repos(self):
        return self.client.repos(org=self.org)

    def issues(self, repo):
        return self.client.issues(org=self.org, repo=repo)

    def pulls(self, repo):
        return self.client.pulls(org=self.org, repo=repo)


foo_repo = GitHubOrg(
    'foo', GitHub(domain='https://foo.com/api/v3', auth='foo_auth_key'))
bar_repo = GitHubOrg(
    'bar', GitHub(domain='https://bar.com/api/v3', auth='bar_auth_key'))

Summary

Thanks for your consideration. Comments, critiques, and corrections welcome.

Endpoint arguments are masked by caller arguments

Describe the bug
If an endpoint argument is given the same name as an argument to apiron.client.call, it will not be passed through to the formatter.

To Reproduce

>>> from apiron import JsonEndpoint, Service
>>> class GitHub(Service):
...     domain = 'https://api.github.com'
...     user = JsonEndpoint(path='/users/{method}')
...
>>> response = GitHub.user(method='defunkt')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/apiron/src/apiron/client.py", line 238, in call
    request = _build_request_object(
  File "/apiron/src/apiron/client.py", line 112, in _build_request_object
    path = endpoint.get_formatted_path(**kwargs)
  File "/apiron/src/apiron/endpoint/endpoint.py", line 127, in get_formatted_path
    return self.path.format(**kwargs)
KeyError: 'method'

Expected behavior
The simplest improvement would be if endpoints raised an exception when such collisions occur. This way there would be a better error message at evaluation time rather than a less clear exception when the endpoint is called.

Really though, apiron should be restructured so that kwargs passed into an endpoint aren't intermingled with those passed to apiron.client.call.
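The "fail at definition time" improvement could look like this (a sketch only; the reserved-name set is illustrative and Endpoint here is a stand-in, not apiron's class):

```python
import string

# Names consumed by the caller itself; illustrative, not apiron's real list.
RESERVED_CALL_ARGS = {'method', 'session', 'params', 'headers', 'auth'}

class Endpoint:
    def __init__(self, path):
        placeholders = {name for _, name, _, _ in string.Formatter().parse(path) if name}
        collisions = placeholders & RESERVED_CALL_ARGS
        if collisions:
            # Fail loudly when the class body is evaluated, not at call time.
            raise ValueError(f'Path placeholders shadow call() arguments: {sorted(collisions)}')
        self.path = path
```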

Environment:

  • OS: linux
  • Python version 3.10.7
  • apiron version 7d1d059

Consider moving implementation to src/ directory

Is your feature request related to a problem? Please describe.
The pytest documentation on code structure for testing makes some great points about the structure of Python projects and its implications. In short, the argument for moving the implementation to a src/ directory seems a good one, the most notable point being the trouble we could run into if we ever have two test modules with the same name.

Describe the solution you'd like
Move apiron/ inside a new src/ directory. From what I can tell, packaging and testing will continue to work after this change. It would perhaps force those developing apiron to use tox, which I'm not opposed to per se.

Describe alternatives you've considered
We aren't running into trouble presently, but could in the future. Doing nothing is a feasible option, with the caveat that one could forget about this quirk in the future and suffer the consequences!

StubEndpoint not callable

Describe the bug
Using the syntax released in v2.3.0 for StubEndpoints raises a TypeError.

To Reproduce
Steps to reproduce the behavior:

from apiron.endpoint.stub import StubEndpoint
from apiron.service.base import Service

class MyService(Service):
    domain = 'myservice.com'

    my_endpoint = StubEndpoint(stub_response=lambda **kwargs: 'response')

MyService.my_endpoint()
# TypeError: 'StubEndpoint' object is not callable

Expected behavior
Calling the stub endpoint in this way should call stub_response, if it's callable, or return the value of stub_response.

What actually happened
A TypeError 😢

Environment:

  • OS: macOS
  • Python version: 3.4.3
  • apiron version: 2.3.0

Additional context
This is happening since StubEndpoints do not inherit from Endpoint and thus do not receive the metaprogramming treatment other Endpoint subclasses do. A fix would be to add the expected behavior described above in ServiceMeta.

Syntax sugar for endpoint calls

Is your feature request related to a problem? Please describe.
It is verbose to do the following for every endpoint call:

from apiron.client import ServiceCaller

from some.place import MyService

MYSERVICE = MyService()

response = ServiceCaller.call(
    MYSERVICE,
    MYSERVICE.some_endpoint,
    params={'foo': 'bar'},
)

Describe the solution you'd like
Syntax sugar over this interaction could be made to look like this:

from some.place import MyService

response = MyService.some_endpoint(
    params={'foo': 'bar'},
)

Describe alternatives you've considered
People can sort of do this themselves but we've received feedback that this would be nice from more than one person. This would clean up a decent amount of code in large projects.

Additional context
This will require solving the need for instantiating the service (MYSERVICE = MyService()) and will require some metaprogramming to pass the Service's information along to the Endpoint being called.

Allow retries on services to fetch the host dynamically

Is your feature request related to a problem? Please describe.

Services that work through service discovery mechanisms like Netflix's Eureka have multiple hosts. When a consumer calls that service, it determines which host to call by asking the service discovery mechanism for it. Sometimes a host becomes unavailable before being removed from service discovery, and calls to it will start receiving connection timeouts. In this event, the consumer should use one of the other hosts on the next try.

Describe the solution you'd like

When a call fails with a connection timeout and the retry specification still allows for more retries, choose another host from the service's available hosts instead of retrying the same one. Explicitly choosing a host that wasn't already used is ideal, but randomly choosing a new host from all available hosts is also acceptable recognizing that the same failing host may be tried multiple times in the worst case.

Describe alternatives you've considered

N/A

Additional context

We're currently relying on the elegance of urllib3's Retry class (here) to handle all retry logic for us instead of managing the complexity of our own retrying logic and looping. The Retry machinery is hooked up to the request life cycle right at the start (here), and therefore doesn't seem to offer any seam at which we can dynamically generate a new host for the next request when a retry is necessary. This means we may need to unroll all this into our own retry logic.
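Unrolled, the retry logic might look something like this (a sketch; get_hosts and do_request are hypothetical stand-ins for the service discovery lookup and the actual HTTP call):

```python
import random

def call_with_host_rotation(get_hosts, do_request, max_attempts=3):
    """Try a request, rotating to a not-yet-tried host after a timeout."""
    tried = set()
    last_error = None
    for _ in range(max_attempts):
        # Prefer hosts we haven't tried yet; fall back to all hosts if exhausted.
        candidates = [h for h in get_hosts() if h not in tried] or get_hosts()
        host = random.choice(candidates)
        tried.add(host)
        try:
            return do_request(host)
        except ConnectionError as err:  # stand-in for a connection timeout
            last_error = err
    raise last_error
```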

How to do HTTPBasicAuth using apiron?

Describe the bug
How to do HTTPBasicAuth using apiron?

To Reproduce
I use the below with requests:

import requests
from requests.auth import HTTPBasicAuth

requests.get('https://api.github.com/user', auth=HTTPBasicAuth('user', 'pass'))

Simpler import structure

Is your feature request related to a problem? Please describe.
It's tedious to import all the stuff you might need from apiron:

from apiron.service.base import Service
from apiron.endpoint import Endpoint, JsonEndpoint
from apiron.client import ServiceCaller

Describe the solution you'd like
Given the fairly small footprint of the features of this package, the common needs should be available to import from the top-level of the package:

from apiron import Endpoint, JsonEndpoint, Service, ServiceCaller

Additional context
Use __all__ and __init__.py to make things available without moving them. Things to include are probably:

Endpoint
JsonEndpoint
StreamingEndpoint
StubEndpoint

Service
DiscoverableService

APIException
NoHostsAvailableException
UnfulfilledParameterException

ServiceCaller
Timeout

Option to specify proxies

Describe the bug
I can't find a way to define proxy information, e.g. in requests we can say requests.get('http://example.org', proxies=proxies)

To Reproduce
Steps to reproduce the behavior: NA

Expected behavior
There should be an option to pass proxy information.

What actually happened
NA

Screenshots
NA

Environment:

  • OS: Any
  • Python version 3.7.5
  • apiron version 5.0.0

Additional context
I'm happy to contribute if you could walk me through the project.

Allow post and put requests to specify a json payload

Is your feature request related to a problem? Please describe.
It's not hard to encode data to JSON before calling an endpoint, but it would be convenient to just pass in the data and let the endpoint jsonify it.

Describe the solution you'd like
Ideally the endpoint calls would accept a json parameter indicating that I would like my data json encoded before the request is sent.

Describe alternatives you've considered
The only alternative that works right now is converting the data to json beforehand and passing it as a string.
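The workaround amounts to the following (a sketch); the requested json parameter would fold the serialization and content-type header into the endpoint call, as requests does with its own json= argument:

```python
import json

# Today: serialize by hand and send the string as the request body.
payload = json.dumps({'name': 'widget'})
headers = {'Content-Type': 'application/json'}

# Proposed (hypothetical call shape): SomeService.things(json={'name': 'widget'})
# would produce the same body and header automatically.
```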

Required wheel dependency not specified in requirements

Describe the bug
When releasing v2.1.0 the python setup.py sdist bdist_wheel command failed because wheel was not installed, even after pip install -r dev-requirements.txt. wheel is not an upstream dependency of any listed, and should be listed in dev-requirements.txt since we depend on it explicitly.

To Reproduce
Steps to reproduce the behavior:

  1. Clone this repo
  2. Make a fresh virtual environment
  3. pip install -r requirements.txt
  4. python setup.py sdist bdist_wheel

Expected behavior
A binary wheel distribution is able to be created after following the setup instructions.

What actually happened
error: invalid command 'bdist_wheel'

Environment:

  • OS: macOS
  • Python version: 3.4.3
  • apiron version: 2.1.0

Additional context
This didn't happen to me in the past (that I can remember), so I either installed wheel manually or it was previously an upstream dependency of something that it no longer is.

Automate releases to GitHub and PyPI

Is your feature request related to a problem? Please describe.
Making releases is tedious and error-prone; it would be great if a machine could do it instead!

Describe the solution you'd like
Travis CI has a provider feature that supports PyPI and GitHub releases. When rebasing master with dev it could:

  • Tag the HEAD of master with a new GitHub release that matches the version in apiron/VERSION and uses the description of the matching version in CHANGELOG.md as the release body
  • Create a new distribution via python setup.py sdist bdist_wheel
  • Upload it to https://test.pypi.org/legacy
  • Upload it to https://upload.pypi.org/legacy (?)

Describe alternatives you've considered
We have this process documented but it may cause headaches if the documentation isn't perfect, falls out of date, etc.

Update the package build to build wheel from source distribution

Is your feature request related to a problem? Please describe.

It's a good idea to build the binary wheel distribution from the source distribution to ensure the source distribution can be used for such a purpose successfully. pypa/build does this by default, but specifying --sdist --wheel causes it to produce both distributions from the source code instead.

Describe the solution you'd like

Remove these flags in the GitHub Action workflow.

Trailing slash check is superfluous

Is your feature request related to a problem? Please describe.
The trailing slash check was originally intended to address a problem that can't be solved within this package. Endpoint owners make the decision (conscious or otherwise) to support a trailing slash or not, and by checking for a trailing slash this package is perhaps overly-opinionated on an issue that has no actual standard in REST. This ultimately produces a fair amount of noise in large projects that is arguably providing little value.

Describe the solution you'd like
Remove the trailing slash checks altogether.

Describe alternatives you've considered
The only alternatives here would be "deal with it" or add check_for_trailing_slash=False to everything. Neither of these seem particularly wonderful.

Proper way to use query parameters

I'm trying to access Morningstar API. With requests I can do below:

requests.get(
    "https://api.morningstar.com...../feeds/.../timeseries?Contract=<xyz>&Delivery=Dec19",
    proxies=myproxies,
    auth=HTTPBasicAuth("<user>", "<password>"),
    headers=myheaders,
)

Query 1: How can I do it using apiron?

I tried the below. It works, but I don't think it's the right way to use apiron. I get this warning:

(Endpoint path may contain query params. Use the default_params or required_params attributes....)

I tried providing required_params as a set but in that case there was no response from the vendor.

class MS(Service):
    domain = 'https://api.morningstar.com...../feeds/.../'
    ts = JsonEndpoint(path='timeseries?Contract=<xyz>&Delivery=Dec19')

print(MS.ts())
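What the warning suggests amounts to keeping the path clean and merging endpoint defaults with per-call params into the query string. In stdlib terms (a sketch, not apiron's implementation):

```python
from urllib.parse import urlencode

def build_query(default_params, call_params):
    # Call-time params win over endpoint defaults.
    merged = {**default_params, **call_params}
    return urlencode(merged)
```

So an endpoint defined with default_params={'Delivery': 'Dec19'} and called with params={'Contract': 'xyz'} would yield Delivery=Dec19&Contract=xyz, keeping the path itself free of query strings.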

Query 2: How can I pass auth and proxies information?

Allow config for returning full `requests.Response`

Is your feature request related to a problem? Please describe.
In some cases the service endpoint sends headers that need to be read and set on an outgoing response to a user. The endpoints as currently defined don't pass along the headers returned from the called service.

Describe the solution you'd like
My first thought is a param passed when defining an Endpoint to return the headers or the full requests.Response, similar to the default_method, path, and other params that currently exist.

Describe alternatives you've considered
I have been able to extend Endpoint and override the format_response method to return the requests.Response so that I can then read and set the delivered headers as needed.
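That override might look like this (self-contained sketch; Endpoint here is a minimal stand-in for apiron's class, not its real implementation):

```python
class Endpoint:
    # Stand-in default behavior: parse the body and discard the rest.
    def format_response(self, response):
        return response.json()

class RawResponseEndpoint(Endpoint):
    def format_response(self, response):
        # Hand back the whole response so callers can read response.headers.
        return response
```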

Additional context
My main use case for needing this is for a service endpoint that generates and delivers a file. The service sets Content-Disposition, Content-Type and other headers that need to be passed to the outgoing file being delivered.

Deployment stage not executing

Describe the bug
The Travis build was broken out into stages in #55, and the lint and test stages work well.
The release following #60 was the first activity that would've triggered the deploy stage, but it didn't get triggered at all.

To Reproduce
Steps to reproduce the behavior:

  1. Tag a new release version on master
  2. Observe that Travis goes through the lint and test stages as usual, but does not go into a deploy stage despite matching the (intended) criteria

Expected behavior
Tagging a new release version from master triggers a build that results in a deployment to PyPI and an upload of the package artifacts to the release.

What actually happened
No deploy stage occurred at all.

Additional context
This is almost certainly a discrepancy in syntax/conditions in .travis.yml (if: vs. on:), but the docs in this area aren't quite clear enough to discern where the issue might be.

See the Conditional Builds, Stages, and Jobs docs as well as the Conditions spec for how conditions are done. Also see TRAVIS_BRANCH on this page which hints that tags and branches might sometimes be conflated, which could cause our criteria to go unmet:

for builds triggered by a tag, [TRAVIS_BRANCH] is the same as the name of the tag
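A hedged fragment of what the gated stage might need to look like, per those docs (note that stage-level gating uses if: with the conditions syntax, while deploy providers use on: — exact keys should be double-checked against the Travis documentation):

```yaml
jobs:
  include:
    - stage: deploy
      # Stage-level gating uses `if:` with the conditions syntax
      if: tag IS present AND repo = ithaka/apiron
      deploy:
        provider: pypi
        # Provider-level gating uses `on:`, not `if:`
        on:
          tags: true
```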

Add endpoints attribute to services

Is your feature request related to a problem? Please describe.
It would occasionally be useful to inspect all the endpoints associated with a particular service by accessing an attribute like SomeService.endpoints.

Describe the solution you'd like
An endpoints property is added to the Service class to get a list of that service's endpoints.

Describe alternatives you've considered
You can inspect the endpoints somewhat by manually exploring dir(SomeService), but this could be less helpful for services you're unfamiliar with (you might not know which things are endpoints vs. other attributes).

Show proper signature when calling an endpoint

Is your feature request related to a problem? Please describe.
Currently, code introspection in e.g. PyCharm does not detect the underlying signature (from a partially-applied ServiceCaller.call) when calling an endpoint:

from apiron import Service, JsonEndpoint


class HttpBin(Service):
    domain = 'https://httpbin.org'

    anything = JsonEndpoint(path='/anything/{anything}')


if __name__ == '__main__':
    stuff = HttpBin.anything(...)  # IDE should suggest `ServiceCaller.call` kwargs here

Describe the solution you'd like
It should be possible with the right metaprogramming to get PyCharm and others to understand what the signature of Endpoint.__get__ is, but I'm not positive. The __get__ returns a dynamic, partially-applied ServiceCaller.call, so if there's a way IDEs can know about that that'd be fantastic.

Describe alternatives you've considered
Right now the endpoint signature just says *args, **kwargs, which isn't incorrect but also isn't helpful. Since folks can specify path_kwargs as regular keyword arguments when calling an endpoint, it'd be useful to know which keyword arguments are already "used up" by the underlying ServiceCaller.call method by way of inspection in an IDE.

Additional context
I have embarked down this path in several different ways; each attempt arrives at a functioning solution but never achieves the desired signature. I would love to defer to someone with better chops at this stuff!
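For reference, at runtime inspect can already recover the remaining parameters from a partially-applied callable; the gap is purely in static analysis. A minimal model of the descriptor mechanics described above (call here is a hypothetical stand-in for ServiceCaller.call, not its real signature):

```python
import functools
import inspect


def call(service, endpoint, params=None, headers=None, timeout=None, **kwargs):
    """Hypothetical stand-in for ServiceCaller.call."""


class Endpoint:
    def __get__(self, instance, owner):
        # Bind the service class and this endpoint; the remaining
        # parameters are what a caller can still supply
        return functools.partial(call, owner, self)


class HttpBin:
    anything = Endpoint()


print(inspect.signature(HttpBin.anything))
# (params=None, headers=None, timeout=None, **kwargs)
```

Tools built on inspect see the reduced signature for free, but IDEs doing purely static inference (like PyCharm) won't, which is the crux of this issue.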
