
pgjones / hypercorn


Hypercorn is an ASGI and WSGI Server based on Hyper libraries and inspired by Gunicorn.

License: MIT License

Python 100.00%
asyncio python http-server asgi http2 http3 wsgi

hypercorn's Introduction

Hypercorn

Hypercorn logo


Hypercorn is an ASGI and WSGI web server based on the sans-io hyper, h11, h2, and wsproto libraries and inspired by Gunicorn. Hypercorn supports HTTP/1, HTTP/2, WebSockets (over HTTP/1 and HTTP/2), ASGI, and WSGI specifications. Hypercorn can utilise asyncio, uvloop, or trio worker types.

Hypercorn can optionally serve the current draft of the HTTP/3 specification using the aioquic library. To enable this, install the h3 optional extra, pip install hypercorn[h3], and then choose a quic binding, e.g. hypercorn --quic-bind localhost:4433 ....
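The same can be expressed programmatically; a minimal sketch, assuming the Config attributes quic_bind, certfile, and keyfile mirror the CLI flags and that module:app is your ASGI application (HTTP/3 requires TLS certificates):

import asyncio

from hypercorn.asyncio import serve
from hypercorn.config import Config

from module import app  # hypothetical ASGI app, as in the quickstart below

config = Config()
config.bind = ["localhost:8000"]        # TCP binding for HTTP/1 and HTTP/2
config.quic_bind = ["localhost:4433"]   # UDP binding for HTTP/3 over QUIC
config.certfile = "cert.pem"            # HTTP/3 requires TLS
config.keyfile = "key.pem"

asyncio.run(serve(app, config))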

Hypercorn was initially part of Quart before being separated out into a standalone server. Hypercorn forked from version 0.5.0 of Quart.

Quickstart

Hypercorn can be installed via pip,

$ pip install hypercorn

and requires Python 3.8 or higher.

With Hypercorn installed, ASGI frameworks (or apps) can be served from the command line,

$ hypercorn module:app

Alternatively Hypercorn can be used programmatically,

import asyncio
from hypercorn.config import Config
from hypercorn.asyncio import serve

from module import app

asyncio.run(serve(app, Config()))

Learn more (including a Trio example of the above, sketched below) in the API usage docs.
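For reference, a Trio equivalent of the asyncio snippet above might look like this (a sketch using hypercorn.trio.serve, the same call that appears in issue reports further down this page):

from functools import partial

import trio
from hypercorn.config import Config
from hypercorn.trio import serve

from module import app

# trio.run takes a callable; partial binds the app and config arguments.
trio.run(partial(serve, app, Config()))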

Contributing

Hypercorn is developed on GitHub. If you come across an issue or have a feature request, please open an issue. If you want to contribute a fix or a feature implementation, please do so (typo fixes welcome) by proposing a pull request.

Testing

The best way to test Hypercorn is with Tox,

$ pipenv install tox
$ tox

this will check the code style and run the tests.

Help

The Hypercorn documentation is the best place to start. After that, try searching Stack Overflow; if you still can't find an answer, please open an issue.

hypercorn's People

Contributors

8enmann, almarklein, apollo13, archmonger, blueyed, carlopires, comprs, dholth, dmig, florimondmanca, godjangollc, hoverhell, jaimelennox, jlaine, jmattwatson, klarose, krrg, lun-4, masipcat, mcsinyx, mgorny, miracle2k, mitchej123, pgjones, s1kpp, seidnerj, smurfix, tizz98, touilleman, wiseqube



hypercorn's Issues

Setting config.alpn_protocols has no impact when disabling http/1.1

I am trying to run the Hypercorn server with FastAPI with HTTP/1.1 disabled. Setting the alpn_protocols value to h2 only has no impact and the server still serves HTTP/1.1 requests.
Is there anything else we need to change to disable serving HTTP/1.1?

from hypercorn.asyncio import serve
from hypercorn.config import Config

config = Config()
# Setting the protocol to http2 only !!
config.alpn_protocols = ["h2"]

server_task = loop.create_task(serve(ins_server.app, config))
...

Task exception was never retrieved

Hypercorn 0.9.0 sometimes raises the following exception. It is not fatal, but a bit concerning nonetheless.

Task exception was never retrieved
future: <Task finished coro=<worker_serve.<locals>._server_callback() done, defined at .../venv/lib/python3.7/site-packages/hypercorn/asyncio/run.py:88> exception=AttributeError("'NoneType' object has no attribute 'handle'")>
Traceback (most recent call last):
  File ".../venv/lib/python3.7/site-packages/hypercorn/asyncio/run.py", line 89, in _server_callback
    await TCPServer(app, loop, config, reader, writer)
  File ".../venv/lib/python3.7/site-packages/hypercorn/asyncio/tcp_server.py", line 75, in run
    await self._read_data()
  File ".../venv/lib/python3.7/site-packages/hypercorn/asyncio/tcp_server.py", line 102, in _read_data
    await self.protocol.handle(RawData(data))
  File ".../venv/lib/python3.7/site-packages/hypercorn/protocol/__init__.py", line 63, in handle
    return await self.protocol.handle(event)
  File ".../venv/lib/python3.7/site-packages/hypercorn/protocol/h11.py", line 108, in handle
    await self._handle_events()
  File ".../venv/lib/python3.7/site-packages/hypercorn/protocol/h11.py", line 168, in _handle_events
    await self.stream.handle(event)
AttributeError: 'NoneType' object has no attribute 'handle'

test_http1_keep_alive_pre_request failure

I've packaged hypercorn for the Gentoo overlay guru and it fails at this test.
Log here: https://784044.bugs.gentoo.org/attachment.cgi?id=700704
https://bugs.gentoo.org/784044

=================================== FAILURES ===================================
______________________ test_http1_keep_alive_pre_request _______________________

value = <trio.Nursery object at 0x7f78311575d0>

    async def yield_(value=None):
>       return await _yield_(value)

value      = <trio.Nursery object at 0x7f78311575d0>

/usr/lib/python3.7/site-packages/async_generator/_impl.py:106: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/usr/lib/python3.7/site-packages/async_generator/_impl.py:99: in _yield_
    return (yield _wrap(value))
        value      = <trio.Nursery object at 0x7f78311575d0>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

client_stream = StapledStream(send_stream=<trio.testing.MemorySendStream object at 0x7f78311570d0>, receive_stream=<trio.testing.MemoryReceiveStream object at 0x7f7831157bd0>)

    @pytest.mark.trio
    async def test_http1_keep_alive_pre_request(
        client_stream: trio.testing._memory_streams.MemorySendStream,
    ) -> None:
        await client_stream.send_all(b"GET")
        await trio.sleep(2 * KEEP_ALIVE_TIMEOUT)
        # Only way to confirm closure is to invoke an error
        with pytest.raises(trio.BrokenResourceError):
>           await client_stream.send_all(b"a")
E           Failed: DID NOT RAISE <class 'trio.BrokenResourceError'>

client_stream = StapledStream(send_stream=<trio.testing.MemorySendStream object at 0x7f78311570d0>, receive_stream=<trio.testing.MemoryReceiveStream object at 0x7f7831157bd0>)

tests/trio/test_keep_alive.py:59: Failed
=============================== warnings summary ===============================
tests/asyncio/test_tcp_server.py::test_complets_on_half_close
  /usr/lib/python3.7/asyncio/base_events.py:612: RuntimeWarning: coroutine '_call_later' was never awaited
    self._ready.clear()

tests/middleware/test_dispatcher.py::test_trio_dispatcher_lifespan
  <string>:2: RuntimeWarning: coroutine '_call_later' was never awaited

tests/protocol/test_h2.py::test_protocol_handle_protocol_error
  /usr/lib/python3.7/site-packages/_pytest/runner.py:106: RuntimeWarning: coroutine 'AsyncMockMixin._execute_mock_call' was never awaited
    item.funcargs = None

-- Docs: https://docs.pytest.org/en/latest/warnings.html
=========================== short test summary info ============================
FAILED tests/trio/test_keep_alive.py::test_http1_keep_alive_pre_request - Fai...

How to use reloader with poetry scripts?

Given this set up in somepackage/main.py

def dev():
    """ Broken, hypercorn misinterprets what to re-run """
    import asyncio
    import hypercorn.asyncio
    import hypercorn.config

    config = hypercorn.config.Config()
    config.use_reloader = True

    asyncio.run(hypercorn.asyncio.serve(app, config))

And this config in pyproject.toml:

[tool.poetry.scripts]
dev = "somepackage.main:dev"

Starting in "dev mode" works, but reload does not:

> poetry run dev
# first start
[2022-02-07 13:06:25 +0900] [22745] [INFO] Running on http://127.0.0.1:8000 (CTRL + C to quit)
# now, I'm saving `main.py` again:
/home/user/.cache/pypoetry/virtualenvs/xxx-py3.9/bin/python: can't open file '/home/user/path-to/somepackage/dev': [Errno 2] No such file or directory

I wonder if reloader could somehow introspect the initial run better in order to re-run the app correctly.

[Question] multiple workers with a resource-constrained app

We have a Quart server fronting some slow resource-constrained tasks (10ms-10s). Ideally, we have a quart instance per resource and hypercorn does a fair (or at least round-robin) dispatch across them. I know how to do this w/ Celery, as it naturally handles tasks, or redesigning to a more REST-ful put task / get status design, but I'm unclear w/ simple hypercorn + quart approach. If we had -w 20 ./my_quart_singleton.sh (for 20 resources), would that spawn 20 quarts, or some other path here?

Run hypercorn with multiple workers on Windows - WinError 10022

Describe the bug
Hypercorn doesn't seem to be able to start with multiple workers on Windows. It seems like the socket is not ready to bind, and Windows therefore throws an exception. This may be different on Unix systems.

Code snippet
Starlette Hello World

main.py

from starlette.applications import Starlette
from starlette.responses import JSONResponse

app = Starlette(debug=True)


@app.route('/')
async def homepage(request):
    return JSONResponse({'hello': 'world'})

Expected behavior
Hypercorn starts and is ready to serve with multiple workers.

Environment

  • Windows 10
  • Hypercorn: 0.5.3
  • Python 3.6.6 (same issue with Python 3.7.2)
  • pipenv
  • Starlette 0.11.4

Additional context

Command:
pipenv run hypercorn main:app -w 2

Exception:

Running on 127.0.0.1:8000 over http (CTRL + C to quit)
Process Process-2:

Traceback (most recent call last):
  File "Python36_64\Lib\multiprocessing\process.py", line 258, in _bootstrap
    self.run()
	
  File "Python36_64\Lib\multiprocessing\process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
	
  File ".virtualenvs\starlettehelloworld-wohnvuzv\lib\site-packages\hypercorn\asyncio\run.py", line 214, in asyncio_worker
    debug=config.debug,
	
  File ".virtualenvs\starlettehelloworld-wohnvuzv\lib\site-packages\hypercorn\asyncio\run.py", line 244, in _run
    loop.run_until_complete(main)
	
  File "Python36_64\Lib\asyncio\base_events.py", line 468, in run_until_complete
    return future.result()
	
  File ".virtualenvs\starlettehelloworld-wohnvuzv\lib\site-packages\hypercorn\asyncio\run.py", line 178, in worker_serve
    for sock in sockets
	
  File ".virtualenvs\starlettehelloworld-wohnvuzv\lib\site-packages\hypercorn\asyncio\run.py", line 178, in <listcomp>
    for sock in sockets
	
  File "Python36_64\Lib\asyncio\base_events.py", line 1065, in create_server
    sock.listen(backlog)
	
OSError: [WinError 10022] An invalid argument was supplied

How to run a zip?

I'm developing with hypercorn and quart. I've set up my code as a package (to benefit from intra-package imports), and thus I can run it from source as:

  • python -m mypackage
  • env QUART_APP=mypackage.main quart run # slightly ugly main because top-level is a package.
  • hypercorn --worker-class uvloop mypackage.main

Now I'm ready to package my code, and I'm thinking a zip archive, which would allow me to do the following with some tweaks:

  • python mypackage.zip
  • env PYTHONPATH=./mypackage.zip hypercorn mypackage.main . # works, but kinda ugly...
  • hypercorn [some flag?] mypackage.zip # doesn't work, I think there's no provision for that

Am I missing something?
Is there a recommended way of doing this?
Is this a good/bad idea?

P.S. In case someone wonders about dependencies, I'm using pipenv/Pipfile for running from source and I install dependencies into a docker image as a separate step for production.
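One possible approach (a sketch only, not an official Hypercorn feature): give the package a __main__.py that starts Hypercorn programmatically, so that python mypackage.zip serves the app without extra flags.

# mypackage/__main__.py -- hypothetical entry point so `python mypackage.zip` serves the app
import asyncio

from hypercorn.asyncio import serve
from hypercorn.config import Config

from mypackage.main import app  # assumes the ASGI app object lives in mypackage/main.py

config = Config()
config.bind = ["0.0.0.0:8000"]

asyncio.run(serve(app, config))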

--reload not working with Quart

I've been trying to get Hypercorn + Quart autoreloading on code change working and think I've found a bug.

The Hypercorn usage doc suggests the correct flag is "--reload"
https://github.com/pgjones/hypercorn/blob/master/docs/usage.rst

When I use this it starts fine but the moment I edit the Quart source code file and save it Hypercorn crashes with this error:
"unknown option --reload"

I've checked the Quart docs (https://pgjones.gitlab.io/quart/source/quart.app.html
) and it suggests the internal flag may be "use_reloader", however if I try that with Hypercorn it doesn't start, giving this error.
"hypercorn: error: unrecognized arguments: --use_reloader"

In case it was a mismatch between versions I just tried uninstalling and re-installing both Quart and Hypercorn but there was no change. What's the best way to go about this?

A bit of Hypercorn history

Hi.

As the README.md says:

Hypercorn was initially part of Quart before being separated out into a standalone ASGI server. Hypercorn forked from version 0.5.0 of Quart.

Sorry for asking.
If this is not a political decision, could you please explain why you forked from Quart, being one of its contributors?
Was it something to do with a new approach or technology, or just to sharpen your skills?

I wonder if there was a milestone in the design architecture that was easier to solve as a fork.

Thank you.

restart does not work on Windows with spaces in path

I'm using Quart on Windows and I want to use the auto-reload function. When it detects changes and attempts to restart, I get the following:

PS: can't open file 'C:\\Users\\Sebastiaan Lokhorst\\AppData\\Local\\Microsoft\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\\python.exe': [Errno 2] No such file or directory

Apparently, this is caused by Windows weirdness with spaces, see e.g. https://bugs.python.org/issue19066 and https://bugs.python.org/issue436259

I tried to mess around with all kinds of combinations of quotes and backslashes, without success. What did help, as suggested in the bug report: "just use subprocess instead":

If I replace this line:

os.execv(executable, [executable] + args)

with this line:

import subprocess
subprocess.run([executable] + args)

It works great! (only tested on Windows, not Linux)

I don't know if there are any subtle differences between os.execv and subprocess.run though, so it might or might not be a universal fix.

Edit: One of the downsides is that subprocess starts a new process instead of replacing the old one. So after 20 restarts you have 20 processes.
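A sketch of how that accumulation could be avoided while still using subprocess (an illustration only, not Hypercorn's code): have the old process wait for the new one and then exit with its return code, so only one server process is alive at a time.

import subprocess
import sys

# Hypothetical replacement for os.execv(executable, [executable] + args):
# run the new process, wait for it to finish, then exit so processes do not pile up.
completed = subprocess.run([executable] + args)
sys.exit(completed.returncode)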

Setup:

  • Windows 10 21H2 (x64)
  • Python 3.10 (Microsoft Store version)
  • quart 0.16.2
  • hypercorn 0.13.2

Exception thrown in docker container!

2022-06-09 08:00:39 ERROR    Task was destroyed but it is pending!
task: <Task pending name='Task-386' coro=<worker_serve.<locals>._server_callback() done, defined at /usr/local/lib/python3.10/site-packages/hypercorn/asyncio/run.py:98> wait_for=<_GatheringFuture pending cb=[Task.__wakeup()]>>
2022-06-09 08:00:39 ERROR    Task was destroyed but it is pending!
task: <Task pending name='Task-387' coro=<H2Protocol.send_task() done, defined at /usr/local/lib/python3.10/site-packages/hypercorn/protocol/h2.py:140> wait_for=<Future cancelled> cb=[gather.<locals>._done_callback() at /usr/local/lib/python3.10/asyncio/tasks.py:720]>
Exception ignored in: <coroutine object worker_serve.<locals>._server_callback at 0x7fcbc9157990>
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/hypercorn/asyncio/run.py", line 100, in _server_callback
    await TCPServer(app, loop, config, context, reader, writer)
  File "/usr/local/lib/python3.10/site-packages/hypercorn/asyncio/tcp_server.py", line 70, in run
    async with TaskGroup(self.loop) as task_group:
RuntimeError: coroutine ignored GeneratorExit
2022-06-09 08:00:39 ERROR    Task was destroyed but it is pending!
task: <Task pending name='Task-392' coro=<worker_serve.<locals>._server_callback() done, defined at /usr/local/lib/python3.10/site-packages/hypercorn/asyncio/run.py:98> wait_for=<_GatheringFuture pending cb=[Task.__wakeup()]>>
2022-06-09 08:00:39 ERROR    Task was destroyed but it is pending!
task: <Task pending name='Task-394' coro=<H2Protocol.send_task() done, defined at /usr/local/lib/python3.10/site-packages/hypercorn/protocol/h2.py:140> wait_for=<Future cancelled> cb=[gather.<locals>._done_callback() at /usr/local/lib/python3.10/asyncio/tasks.py:720]>

hypercorn not working on Windows 10 ?

hi,

Trying the "hello" example on Python 3.7 and 3.8rc, I hit "add_signal_handler" not being implemented in asyncio. Is this normal?

I had hoped hypercorn was OK under Windows, so I may have done something wrong?

  File "C:\WinP\bd38\bucod\WPy64-3800rc1\python-3.8.0rc1.amd64\lib\site-packages\hypercorn\asyncio\run.py", line 212, in _run
    loop.add_signal_handler(signal.SIGINT, _signal_handler)
  File "C:\WinP\bd38\bucod\WPy64-3800rc1\python-3.8.0rc1.amd64\lib\asyncio\events.py", line 536, in add_signal_handler
    raise NotImplementedError
NotImplementedError

hello.py

async def app(scope, receive, send):
    if scope["type"] != "http":
        raise Exception("Only the HTTP protocol is supported")

    await send({
        'type': 'http.response.start',
        'status': 200,
        'headers': [
            (b'content-type', b'text/plain'),
            (b'content-length', b'5'),
        ],
    })
    await send({
        'type': 'http.response.body',
        'body': b'hello',
    })

Invalid protocol process on h2c

When I use the latest hypercorn for h2c testing, I found that only uvloop in Docker Desktop 2.3.0.3 for Windows works.

  • trio, uvloop and asyncio on Debian Linux 10/11 do not work
  • trio and asyncio on Windows 10 do not work.

How to make file upload?

I'm having difficulty finding a working example of uploading files. How can I get the file content? Or at least the file path?

@app.route('/arquivo', methods=['POST'])
async def upload_file():
    x = await request.get_data()
    x = x.decode("utf-8")
    print(x)

The only thing this code gets me is the file name, but neither its path nor its content. Can you point me to a working example? It doesn't need to have security or anything else, I just want to comprehend the basics.
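For reference, a sketch of reading uploaded files in Quart, assuming the client posts multipart/form-data with a form field named "file" and that Quart's request.files behaves like Flask's (yielding werkzeug-style FileStorage objects):

from quart import Quart, request

app = Quart(__name__)


@app.route('/arquivo', methods=['POST'])
async def upload_file():
    files = await request.files      # parsed multipart/form-data parts
    uploaded = files['file']         # assumes the form field is named "file"
    data = uploaded.read()           # raw bytes of the uploaded file
    print(uploaded.filename, len(data))
    return 'OK', 200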

Question: Get worker number from within worker?

Is there a way for a worker to know which worker number it is? In other systems, it is common to have an environment variable (WORKER=0, ...), but I didn't see anything.

We're trying to match the CPU + GPU NUMA hierarchy for our workers, and are currently launching our Quart workers as hypercorn -b 0.0.0.0:$PORT --worker-class uvloop ... .

Timeout during HTTP2 upgrade

I've observed that when an endpoint delays before responding, like so:

import asyncio

from fastapi import FastAPI


app = FastAPI()


@app.get("/")
async def foobar():
    await asyncio.sleep(9)
    return {"foo": "bar"}

Details: I'm running hypercorn main:app without certs, so I'm testing clear-text h2c

When I talk to this endpoint with curl with --http2-prior-knowledge, it all works.

But when I ask curl to do the HTTP/2 upgrade (the --http2 flag), hypercorn appears to close the connection after 5 seconds.

MRE: https://github.com/dimaqq/repro-task-destroyed/tree/repro-http-upgrade

Here's strace for curl:

sendto(5, "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n", 24, MSG_NOSIGNAL, NULL, 0) = 24
sendto(5, "\0\0\22\4\0\0\0\0\0\0\3\0\0\0d\0\4\2\0\0\0\0\2\0\0\0\0", 27, MSG_NOSIGNAL, NULL, 0) = 27
sendto(5, "\0\0\4\10\0\0\0\0\0\1\377\0\1", 13, MSG_NOSIGNAL, NULL, 0) = 13
recvfrom(5, "\0\0*\4\0\0\0\0\0\0\1\0\0\20\0\0\2\0\0\0\0\0\4\0\0\377\377\0\5\0\0@"..., 32768, 0, NULL, NULL) = 51
sendto(5, "\0\0\0\4\1\0\0\0\0", 9, MSG_NOSIGNAL, NULL, 0) = 9
poll([{fd=5, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 1, 0) = 0 (Timeout)
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, NULL, 8) = 0
poll([{fd=5, events=POLLIN}, {fd=3, events=POLLIN}], 2, 198) = 1 ([{fd=5, revents=POLLIN}])
rt_sigaction(SIGPIPE, NULL, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, 8) = 0
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, NULL, 8) = 0
poll([{fd=5, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 1, 0) = 1 ([{fd=5, revents=POLLIN|POLLRDNORM}])
recvfrom(5, "\0\0\0\4\1\0\0\0\0", 32768, 0, NULL, NULL) = 9
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, NULL, 8) = 0
poll([{fd=5, events=POLLIN}, {fd=3, events=POLLIN}], 2, 198) = 0 (Timeout)
rt_sigaction(SIGPIPE, NULL, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, 8) = 0
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, NULL, 8) = 0
poll([{fd=5, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 1, 0) = 0 (Timeout)
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, NULL, 8) = 0
poll([{fd=5, events=POLLIN}, {fd=3, events=POLLIN}], 2, 1000) = 0 (Timeout)
rt_sigaction(SIGPIPE, NULL, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, 8) = 0
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, NULL, 8) = 0
poll([{fd=5, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 1, 0) = 0 (Timeout)
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, NULL, 8) = 0
poll([{fd=5, events=POLLIN}, {fd=3, events=POLLIN}], 2, 1000) = 0 (Timeout)
rt_sigaction(SIGPIPE, NULL, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, 8) = 0
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, NULL, 8) = 0
poll([{fd=5, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 1, 0) = 0 (Timeout)
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, NULL, 8) = 0
poll([{fd=5, events=POLLIN}, {fd=3, events=POLLIN}], 2, 1000) = 0 (Timeout)
rt_sigaction(SIGPIPE, NULL, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, 8) = 0
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, NULL, 8) = 0
poll([{fd=5, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 1, 0) = 0 (Timeout)
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, NULL, 8) = 0
poll([{fd=5, events=POLLIN}, {fd=3, events=POLLIN}], 2, 1000) = 0 (Timeout)
rt_sigaction(SIGPIPE, NULL, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, 8) = 0
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, NULL, 8) = 0
poll([{fd=5, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 1, 0) = 0 (Timeout)
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, NULL, 8) = 0
poll([{fd=5, events=POLLIN}, {fd=3, events=POLLIN}], 2, 1000) = 1 ([{fd=5, revents=POLLIN}])
rt_sigaction(SIGPIPE, NULL, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, 8) = 0
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f96a252b3c0}, NULL, 8) = 0
poll([{fd=5, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 1, 0) = 1 ([{fd=5, revents=POLLIN|POLLRDNORM}])
recvfrom(5, "", 32768, 0, NULL, NULL)   = 0

It appears to show that curl has sent the HTTP/2 connection preface and is waiting for a response.
If the endpoint is "slow/delayed", it seems that hypercorn closes the connection (POLLRDNORM, recvfrom() == 0).

If the fastapi endpoint were to return the response within 5 seconds, the response would be delivered correctly to curl.

If the response is streaming, the first 5 seconds of the response are delivered, and then the connection appears to be terminated by hypercorn. The endpoint receives CancelledError.

Quart app.run() throws an exception when running in a sub-thread

In Hypercorn==0.3.2, Quart's app.run() worked normally in a thread.
In Hypercorn==0.4.1 or 0.4.2, app.run() throws an exception when running in a sub-thread.
So I can't use the newest Hypercorn and instead run "pip --upgrade Hypercorn==0.3.2".

Exception in thread Thread-2:
Traceback (most recent call last):
File "C:\Program Files\Python3.7.1\Lib\threading.py", line 917, in _bootstrap_inner
self.run()
File "C:\Program Files\Python3.7.1\Lib\threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
File "./demo.py", line 196, in loop
app.run(debug=False,host='0.0.0.0',port=port)
File "C:\Program Files\Python3.7.1\lib\site-packages\quart\app.py", line 1337, in run
run_single(self, config, loop=loop)
File "C:\Program Files\Python3.7.1\lib\site-packages\hypercorn\asyncio\run.py", line 150, in run_single
signal.signal(getattr(signal, signal_name), _raise_shutdown)
File "C:\Program Files\Python3.7.1\Lib\signal.py", line 47, in signal
handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler))
ValueError: signal only works in main thread

ImportError: cannot import name 'LifespanFailure' from 'hypercorn.utils'

Hello.

After automatically upgrading to the newest version (0.12.0):

ImportError: cannot import name 'LifespanFailure' from 'hypercorn.utils'

I can see that in 0.12.0 LifespanFailure was renamed to LifespanFailureError (along with other renamings).

Is 0.12.0 considered to have breaking changes?

Thank you!
Adam

Feature request - serve on a random free open port

I need the functionality (for CI/CD integration tests) to serve a quart application on a random free port.

When run with --bind 127.0.0.1:0 it reports that it is serving on port 0, whatever that means...

Running on 127.0.0.1:0 over http (CTRL + C to quit)

It'd be great to align this with standard Python behaviour, as in:

python -c 'import socket; s=socket.socket(); s.bind(("127.0.0.1", 0)); print(s.getsockname()[1]); s.close()'
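A sketch of a workaround using the programmatic API (an assumption, not a Hypercorn feature; note the small race window between releasing the port and Hypercorn binding it, and module:app is a hypothetical ASGI app):

import asyncio
import socket

from hypercorn.asyncio import serve
from hypercorn.config import Config

from module import app


def free_port() -> int:
    # Ask the OS for a free port, then release it; there is a brief race
    # between closing this socket and Hypercorn binding the port.
    with socket.socket() as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]


port = free_port()
config = Config()
config.bind = [f"127.0.0.1:{port}"]
print(f"Serving on port {port}")

asyncio.run(serve(app, config))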

Issue running hypercorn as a task

Running hypercorn's asyncio serve as a task, the coroutine crashes after one request with the error message:

Exception ignored in: <coroutine object main.<locals>.foo at 0x7f723c1dc240>
Traceback (most recent call last):
  File "main_2.py", line 21, in foo
    await serve(app, config)
  File "/root/.cache/pypoetry/virtualenvs/ariadne-mock-9TtSrW0h-py3.8/lib/python3.8/site-packages/hypercorn/asyncio/__init__.py", line 39, in serve
    await worker_serve(app, config, shutdown_trigger=shutdown_trigger)
  File "/root/.cache/pypoetry/virtualenvs/ariadne-mock-9TtSrW0h-py3.8/lib/python3.8/site-packages/hypercorn/asyncio/run.py", line 145, in worker_serve
    gathered_tasks.exception()
asyncio.exceptions.InvalidStateError: Exception is not set.
Task was destroyed but it is pending!
task: <Task pending name='Task-2' coro=<main.<locals>.foo() done, defined at main_2.py:20> wait_for=<_GatheringFuture pending cb=[<TaskWakeupMethWrapper object at 0x7f723bcf3f40>()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-3' coro=<Lifespan.handle_lifespan() done, defined at /root/.cache/pypoetry/virtualenvs/ariadne-mock-9TtSrW0h-py3.8/lib/python3.8/site-packages/hypercorn/asyncio/lifespan.py:26> wait_for=<Future cancelled>>
Task was destroyed but it is pending!
task: <Task pending name='Task-5' coro=<raise_shutdown() done, defined at /root/.cache/pypoetry/virtualenvs/ariadne-mock-9TtSrW0h-py3.8/lib/python3.8/site-packages/hypercorn/utils.py:155> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f723bcf3a60>()]> cb=[gather.<locals>._done_callback() at /usr/local/lib/python3.8/asyncio/tasks.py:751]>

E.g. with following code

import asyncio

from hypercorn.asyncio import serve
from hypercorn.config import Config
from quart import Quart, request, jsonify

app = Quart(__name__)


@app.route("/ping", methods=["GET"])
async def ping():
    return "pong"


config = Config()
config.bind = ["localhost:8080"]


async def main():
    async def foo():
        await serve(app, config)

    asyncio.create_task(foo())
    while True:
        print("Hello")
        await asyncio.sleep(1)


asyncio.run(main())

Also tried with Starlette and same error.

Python version 3.8.2

Quart==0.11.4
Hypercorn==0.9.3

A "while_serving" decorator?

What do you think about having another decorator like this that would allow you to control when server is shut down

This is useful when you need a server up for to X seconds, rather than forever. I think you could spin up a thread with a timer and have that call loop.stop() when done, but it's dirty.

@app.while_serving
async def shutdown_when_idle_for_seconds():
    while True:
        if condition:
            raise Shutdown
        await sleep(1)

Great for testing!
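For what it's worth, the programmatic serve() appears to accept a shutdown_trigger callable (it shows up in the tracebacks elsewhere on this page); a sketch of shutting down on a condition, assuming that parameter and a hypothetical module:app:

import asyncio

from hypercorn.asyncio import serve
from hypercorn.config import Config

from module import app


async def shutdown_when_idle(timeout: float = 30.0) -> None:
    # serve() shuts down once this coroutine returns; here we simply wait a
    # fixed time, but any condition check could go in its place.
    await asyncio.sleep(timeout)


asyncio.run(serve(app, Config(), shutdown_trigger=shutdown_when_idle))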

Document (or fix?) dual stack bind option.

I've discovered that the way to get hypercorn to listen on both IPv4 and IPv6 addresses on macOS is:
hypercorn --bind ':::80' ... (both IPv4 and IPv6)

For example, following doesn't listen on both stacks:

  • hypercorn --bind 'localhost:80' instead it resolves localhost to just 1 address (IPv4 in my case).
  • hypercorn --bind '0.0.0.0:80' listens on IPv4 only
  • hypercorn --bind '[::]:80' fails trying to resolve hostname [::] :(

Perhaps it's enough to document this?
I'm not sure what the common syntax for web frameworks, etc. is...

Hypercorn never propagates exceptions from the ASGI app when using trio

Hypercorn doesn't follow trio's rule that exceptions must always propagate:
https://trio.readthedocs.io/en/stable/design.html#exceptions-always-propagate

It seems that trio support should also mean support of trio's ideology.

code given:

import trio
from functools import partial
from hypercorn.trio import serve
from hypercorn.config import Config


async def app(scope, receive, send):
    raise KeyError('bla')


if __name__ == '__main__':
    config = Config()
    config.bind = [':::8080']

    trio.run(partial(serve, app, config))

requests made:

curl localhost:8080
curl localhost:8080

output:

2020-06-06 16:38:23,468 DEBUG    logbroker.automation args: Namespace(address='::', port=8080)
ASGI Framework Lifespan error, continuing without Lifespan support
Traceback (most recent call last):
  File "contrib/python/Hypercorn/hypercorn/trio/lifespan.py", line 29, in handle_lifespan
    await invoke_asgi(self.app, scope, self.asgi_receive, self.asgi_send)
  File "contrib/python/Hypercorn/hypercorn/utils.py", line 203, in invoke_asgi
    await app(scope, receive, send)
  File "logbroker/automation/__init__.py", line 47, in app
    raise KeyError('bla')
KeyError: 'bla'
Running on [::]:8080 over http (CTRL + C to quit)
Error in ASGI Framework
Traceback (most recent call last):
  File "contrib/python/Hypercorn/hypercorn/trio/spawn_app.py", line 14, in _handle
    await invoke_asgi(app, scope, receive, send)
  File "contrib/python/Hypercorn/hypercorn/utils.py", line 203, in invoke_asgi
    await app(scope, receive, send)
  File "logbroker/automation/__init__.py", line 47, in app
    raise KeyError('bla')
KeyError: 'bla'
Error in ASGI Framework
Traceback (most recent call last):
  File "contrib/python/Hypercorn/hypercorn/trio/spawn_app.py", line 14, in _handle
    await invoke_asgi(app, scope, receive, send)
  File "contrib/python/Hypercorn/hypercorn/utils.py", line 203, in invoke_asgi
    await app(scope, receive, send)
  File "logbroker/automation/__init__.py", line 47, in app
    raise KeyError('bla')
KeyError: 'bla'

And the server is still running afterwards, but it shouldn't be.
Trio's ideology forces the user to make error processing explicit. Hypercorn implicitly drops such errors for its users.

Why https://github.com/pgjones/hypercorn/blob/master/src/hypercorn/trio/tcp_server.py#L80 ?

Doesn't serve HTTP/3 over QUIC to the Chrome browser

I am using Python 3.10.4 with the latest version of hypercorn and quart[h3] libraries. I serve the app:

pipenv run hypercorn --reload --quic-bind 0.0.0.0:4433 --certfile server.crt --keyfile server.key --bind 0.0.0.0:8080 src.main:app

and see the following in the bash shell on Ubuntu 22.04:

[2022-06-09 14:30:55 +0800] [25366] [INFO] Running on https://0.0.0.0:8080 (CTRL + C to quit)
2022-06-09 14:30:55 INFO     Running on https://0.0.0.0:8080 (CTRL + C to quit)
[2022-06-09 14:30:55 +0800] [25366] [INFO] Running on https://0.0.0.0:4433 (QUIC) (CTRL + C to quit)
2022-06-09 14:30:55 INFO     Running on https://0.0.0.0:4433 (QUIC) (CTRL + C to quit)

However, https://localhost:4433/ does not show the server content; instead the browser reports "localhost refused to connect". https://localhost:8080/ is good though. "Experimental QUIC protocol" is enabled in chrome://flags/.

start_next_cycle() results in h11._util.LocalProtocolError: not in a reusable state

A LocalProtocolError("not in a reusable state") exception is regularly raised from the start_next_cycle() call in hypercorn.asyncio.H11Server.recycle_or_close().

Could this be a state problem in hypercorn? A similar issue affected some example code of the h11 project (python-hyper/h11#70). I'm seeing a lot of these exceptions (10 in a 22-hour period) but I'm not sure what the impact is on my code, as I've not finished investigating the side effects. At the moment I don't think I've lost any data, but what's causing the state exception?

asyncio ERROR # Exception in callback H11Server.recycle_or_close(<Task finishe...> result=None>)
handle: <Handle H11Server.recycle_or_close(<Task finishe...> result=None>)>
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/asyncio/events.py", line 88, in _run
    self._context.run(self._callback, *self._args)
  File "/usr/local/lib/python3.7/site-packages/hypercorn/asyncio/h11.py", line 103, in recycle_or_close
    self.connection.start_next_cycle()
  File "/usr/local/lib/python3.7/site-packages/h11/_connection.py", line 204, in start_next_cycle
    self._cstate.start_next_cycle()
  File "/usr/local/lib/python3.7/site-packages/h11/_state.py", line 298, in start_next_cycle
    raise LocalProtocolError("not in a reusable state")
h11._util.LocalProtocolError: not in a reusable state

I'm using hypercorn 0.6.0 and it has pulled in h11 0.8.1.

Hypercorn 0.7.0 doesn't show access logs by default

Quart's access logs aren't showing up on stdout with the latest hypercorn release.

quart==0.9.0

from quart import Quart
app = Quart(__name__)
app.config.from_object(__name__)


@app.route('/check/', methods=['GET'])
async def sample_method():
    return 'OK', 200


if __name__ == '__main__':
    app.run(port=9876, host='0.0.0.0')

hypercorn==0.6.0 works fine.

reload doesn't work with argparse arguments

def start(sys_args: Optional[List[str]] = None) -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--port",
        dest="port",
        help=""" ่ฎพ็ฝฎๆœๅŠก็š„็ซฏๅฃ """,
        default=8001,
        action="store",
        type=int,
    )

    parser.add_argument(
        "--tags",
        dest="tags",
        help=""" ่ฎพ็ฝฎๆœๅŠก็š„ๆ ‡็ญพ,ๅฆ‚ๆžœไธ็”จmain๏ผŒ่‡ชๅทฑ็š„ๆœๅŠก่ฏทๆบๅธฆtest """,
        default="main",
        action="store",
    )

    parser.add_argument(
        "--app_name",
        dest="app_name",
        help=""" ่ฎพ็ฝฎEurekaๆœๅŠก็š„ๆœๅŠกๅ """,
        default="fastapi-cjserver",
        action="store",
        choices=['fastapi-cjserver', 'fastapi-cjserver-test'],
    )

    args = parser.parse_args(sys_args or sys.argv[1:])
    port = args.port
    app_name = args.app_name
    tags = args.tags
    try:
        setEureka(port, app_name)
    except Exception as e:
        print(e)
    config = Config()
    config.bind = ['0.0.0.0:' + str(port)]
    config.access_log_format = '%(R)s %(s)s %(st)s %(D)s %({Header}o)s'
    config.accesslog = "-"
    config.use_reloader = True
    # config.workers = 10  # defaults to 1
    config.loglevel = 'INFO'
    app.state.port = port
    app.state.tags = tags

    asyncio.run(serve(app, config))


if __name__ == '__main__':
    start()

Running it:

(fastapi_cjserver) D:\fastapi_cjserver>python main_h.py --port 8001 --tags test
[2022-02-15 14:22:03 +0800] [18580] [INFO] Running on http://0.0.0.0:8001 (CTRL + C to quit)
INFO:hypercorn.error:Running on http://0.0.0.0:8001 (CTRL + C to quit)

(fastapi_cjserver) D:\fastapi_cjserver>unknown option --port
usage: d:\program files\python38\python.exe [option] ... [-c cmd | -m mod | file | -] [arg] ...
Try `python -h' for more information.

When the reloader triggers a restart:

unknown option --port

Deprecated --uvloop option is broken in 4.0.0

--uvloop is documented as deprecated; the option is accepted by the command-line argument parser, but causes a crash:

> hypercorn --uvloop some_module
Running on http://127.0.0.1:5000 (CTRL + C to quit)
Traceback (most recent call last):
  File "/Users/.../hypercorn", line 11, in <module>
    sys.exit(main())
  File "/Users/.../hypercorn/__main__.py", line 162, in main
    run(config)
  File "/Users/.../hypercorn/run.py", line 28, in run
    raise ValueError(f"No worker of class {config.worker_class} exists")
ValueError: No worker of class <object object at 0x105abe550> exists

Meanwhile new syntax works:

> hypercorn --worker-class uvloop some_module
Running on http://127.0.0.1:5000 (CTRL + C to quit)

Perhaps updating the docs is enough.

Nikto scan causes ASCII decode issues

When running the Nikto web application scanner against a quart server I get a decode issue:

Exception in callback _SelectorSocketTransport._read_ready()
handle: <Handle _SelectorSocketTransport._read_ready()>
Traceback (most recent call last):
  File "C:\Program Files (x86)\Python36-32\lib\asyncio\events.py", line 145, in _run
    self._callback(*self._args)
  File "C:\Program Files (x86)\Python36-32\lib\asyncio\selector_events.py", line 730, in _read_ready
    self._protocol.data_received(data)
  File "C:\Program Files (x86)\Python36-32\lib\site-packages\hypercorn\run.py", line 51, in data_received
    self._server.data_received(data)
  File "C:\Program Files (x86)\Python36-32\lib\site-packages\hypercorn\h11.py", line 56, in data_received
    self.handle_events()
  File "C:\Program Files (x86)\Python36-32\lib\site-packages\hypercorn\h11.py", line 84, in handle_events
    self.handle_request(event)
  File "C:\Program Files (x86)\Python36-32\lib\site-packages\hypercorn\h11.py", line 163, in handle_request
    'path': unquote(path.decode('ascii')),
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 11: ordinal not in range(128)

The solution that I believe will work is to modify the file h11.py and, on line 163, change:

 'path': unquote(path.decode('ascii')),

with:

 'path': unquote(path.decode('ascii', 'ignore')),

This will make the path not fail on an invalid request.

This isn't a huge issue, just something that I think could be fixed; however, my suggestion might not be what you want.

If you want me to submit a pull request after testing I should be able to do that later today.

How to integrate hypercorn's loggers into existing logging setup

I am using hypercorn in my application with the Python API rather than the command line. How can I integrate hypercorn's logging into an existing python logging hierarchy? From looking at the source of logging.py, there is no way for hypercorn to create only the Logger objects without the Handler objects.

It is not the responsibility of a library to create and manage logging Handlers -- this is the job of the top-level application to control, which all dependent libraries will adopt. So it makes sense for hypercorn to create the Handlers when using the command line, since it is the top-level application. But when importing it into a Python application, it should only create the Loggers and not the Handlers, so that all log messages from the entire application share the same setup.

How can this be achieved?
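A hedged sketch of one way to fold Hypercorn's output into an existing hierarchy, assuming the loggers are named hypercorn.error and hypercorn.access (the hypercorn.error name appears in logs quoted elsewhere on this page): drop Hypercorn's own handlers and let records propagate to the application's root configuration.

import logging

# Application-level logging setup (the usual place for handlers and formatters).
logging.basicConfig(level=logging.INFO)

for name in ("hypercorn.error", "hypercorn.access"):
    logger = logging.getLogger(name)
    logger.handlers.clear()   # remove any handlers Hypercorn attached
    logger.propagate = True   # let records bubble up to the root handlers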

Trio mode requests are not always received by Hypercorn application

Using https://pgjones.gitlab.io/hypercorn/quickstart.html Hello World.

Trio worker doesn't respond to all requests:

# hypercorn -k trio hello
$ wrk http://127.0.0.1:8000/
Running 10s test @ http://127.0.0.1:8000/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     9.18ms    3.31ms  16.82ms   63.83%
    Req/Sec    26.17     40.89    96.00     83.33%
  47 requests in 10.06s, 5.83KB read
  Socket errors: connect 0, read 20, write 0, timeout 0
Requests/sec:      4.67
Transfer/sec:     593.09B

Benchmarking shows this as a bad rate because 20 requests (2 threads, 10 connections) are never responded to and wrk hangs.

Same problem with Chrome loading a website, about 8 resources get fetched correctly and then the next two hang for five seconds (keep-alive timeout). Wireshark shows that the browser sends requests on its two connections until at some point Hypercorn no longer responds. After timeout Hypercorn disconnects those "idle" connections and then Chrome reconnects and gets the remaining resources. I tried changing keep-alive timeout and that directly affected the hang length.

No problems with asyncio worker:

# hypercorn hello
$ wrk http://127.0.0.1:8000/
Running 10s test @ http://127.0.0.1:8000/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.25ms  273.47us   6.88ms   89.54%
    Req/Sec     1.55k    55.38     1.60k    96.50%
  30773 requests in 10.01s, 3.73MB read
Requests/sec:   3075.19
Transfer/sec:    381.41KB

HTTPStream.app_send is missing http.disconnect handling?

tl;dr

I don't see the code responsible for handling http.disconnect in the HTTP stream:

if message is None:  # ASGI App has finished sending messages
    # Cleanup if required
    if self.state == ASGIHTTPState.REQUEST:
        await self._send_error_response(500)
    await self.send(StreamClosed(stream_id=self.stream_id))
else:
    if message["type"] == "http.response.start" and self.state == ASGIHTTPState.REQUEST:
        self.response = message
    elif (
        message["type"] == "http.response.push"
        and self.scope["http_version"] in PUSH_VERSIONS
    ):
        if not isinstance(message["path"], str):
            raise TypeError(f"{message['path']} should be a str")
        headers = [(b":scheme", self.scope["scheme"].encode())]
        for name, value in self.scope["headers"]:
            if name == b"host":
                headers.append((b":authority", value))
        headers.extend(build_and_validate_headers(message["headers"]))
        await self.send(
            Request(
                stream_id=self.stream_id,
                headers=headers,
                http_version=self.scope["http_version"],
                method="GET",
                raw_path=message["path"].encode(),
            )
        )
    elif message["type"] == "http.response.body" and self.state in {
        ASGIHTTPState.REQUEST,
        ASGIHTTPState.RESPONSE,
    }:
        if self.state == ASGIHTTPState.REQUEST:
            headers = build_and_validate_headers(self.response.get("headers", []))
            await self.send(
                Response(
                    stream_id=self.stream_id,
                    headers=headers,
                    status_code=int(self.response["status"]),
                )
            )
            self.state = ASGIHTTPState.RESPONSE
        if (
            not suppress_body(self.scope["method"], int(self.response["status"]))
            and message.get("body", b"") != b""
        ):
            await self.send(
                Body(stream_id=self.stream_id, data=bytes(message.get("body", b"")))
            )
        if not message.get("more_body", False):
            if self.state != ASGIHTTPState.CLOSED:
                self.state = ASGIHTTPState.CLOSED
                await self.config.log.access(
                    self.scope, self.response, time() - self.start_time
                )
                await self.send(EndBody(stream_id=self.stream_id))
                await self.send(StreamClosed(stream_id=self.stream_id))
    else:
        raise UnexpectedMessageError(self.state, message["type"])

It looks like an omission to me, especially given that the websocket stream has code for handling the corresponding websocket.close.

Long version

I've written a simple middleware to simulate the ASGI app being unresponsive:

from contextlib import contextmanager

import trio


class AsgiOfflineMiddleware:

    def __init__(self, asgi_app):
        self.asgi_app = asgi_app
        self._offline_ports = set()
        self._offline_watchdogs_parking = trio.lowlevel.ParkingLot()

    @contextmanager
    def offline(self, port: int):
        assert port not in self._offline_ports
        self._offline_ports.add(port)
        self._offline_watchdogs_parking.unpark_all()
        try:
            yield
        finally:
            self._offline_ports.remove(port)
            # No need to unpark given our port has just been re-authorized !

    async def __call__(self, scope, receive, send):
        # Special case for lifespan given it corresponds to server init and not
        # to an incoming client connection
        if scope["type"] == "lifespan":
            return await self.asgi_app(scope, receive, send)

        port = scope["server"][1]

        pretend_to_be_offline = False

        if port in self._offline_ports:
            pretend_to_be_offline = True

        else:

            async def _offline_watchdog(cancel_scope):
                nonlocal pretend_to_be_offline
                while True:
                    await self._offline_watchdogs_parking.park()
                    if port in self._offline_ports:
                        break
                pretend_to_be_offline = True
                cancel_scope.cancel()

            async with trio.open_nursery() as nursery:
                nursery.start_soon(_offline_watchdog, nursery.cancel_scope)
                await self.asgi_app(scope, receive, send)
                nursery.cancel_scope.cancel()

        if pretend_to_be_offline:
            if scope["type"] == "http":
                await send({"type": "http.disconnect"})
            elif scope["type"] == "websocket":
                await send({"type": "websocket.close"})
            else:
                assert False, scope

However, it produces errors when trying to close an HTTP connection:

ERROR    hypercorn.error:logging.py:100 Error in ASGI Framework
Traceback (most recent call last):
  File "C:\Users\gbleu\source\repos\parsec-cloud\venv39\lib\site-packages\hypercorn\trio\task_group.py", line 21, in _handle
    await invoke_asgi(app, scope, receive, send)
  File "C:\Users\gbleu\source\repos\parsec-cloud\venv39\lib\site-packages\hypercorn\utils.py", line 247, in invoke_asgi
    await app(scope, receive, send)
  File "C:\Users\gbleu\source\repos\parsec-cloud\parsec-cloud\tests\common\backend.py", line 427, in __call__
    await send({"type": "http.disconnect"})
  File "C:\Users\gbleu\source\repos\parsec-cloud\venv39\lib\site-packages\hypercorn\protocol\http_stream.py", line 177, in app_send
    raise UnexpectedMessageError(self.state, message["type"])
hypercorn.utils.UnexpectedMessageError: Unexpected message type, http.disconnect given the state ASGIHTTPState.REQUEST

Fails to load cert chain when using signed certificates

Hi, I believe there is a bug at

https://github.com/pgjones/hypercorn/blob/master/hypercorn/config.py#L158

when I use quart with a signed certificate as follows:

app.run(ca_certs='ca.crt', certfile='cert.crt', keyfile='key.pem')

I get the following

hypercorn/config.py", line 158, in create_ssl_context
context.load_cert_chain(certfile=self.certfile, keyfile=self.keyfile)
ssl.SSLError: [SSL] PEM lib (_ssl.c:3824)

I believe line https://github.com/pgjones/hypercorn/blob/master/hypercorn/config.py#L160
should be called before

https://github.com/pgjones/hypercorn/blob/master/hypercorn/config.py#L158

Anyway, I have reverted to unsigned certs for now and will probably just use gunicorn, but I thought I would let you know about this bug, and thank you for your quart project, which I am really loving.

Request freezes if I get throttled by an extra service

Hi @pgjones, I am currently using hypercorn with a Discord server that I created with quart. I have a code excerpt that looks like this:

@app.route("/create_invite", methods=['GET'])
async def create_invite():
    """
    create an invite for the guild.
    """
    # wait_until_ready and check for valid connection is missing here
    guilds = client.guilds
    guild = guilds[0]
    link_str = 'n/a'
    for channel in guild.channels:
        if channel.name == 'general':
            print('creating invite')
            link = await channel.create_invite()
            link_str = str(link)
            break
    return jsonify({'invite_link': link_str}), 200

However, I think after 5 requests (link = await channel.create_invite()), I get throttled. When I run python main.py normally, I have to wait a bit of extra time; but when I run with hypercorn, I have to wait indefinitely. How can I fix this?

Task was destroyed but it is pending!

I'm running a pretty simple quart app with hypercorn on a RHEL7 linux server.
quart = 0.10.0
hypercorn = 0.9.0

I keep seeing these messages printed to standard out randomly, every couple of requests.

Task was destroyed but it is pending!
task: <Task pending coro=<ProtocolWrapper.send_task() done, defined at /root/.local/share/virtualenvs/atomix-Zh2dG3yU/lib/python3.7/site-packages/hypercorn/protocol/__init__.py:58> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f08fc3db1d0>()]>>
Task was destroyed but it is pending!
task: <Task pending coro=<ProtocolWrapper.send_task() done, defined at /root/.local/share/virtualenvs/atomix-Zh2dG3yU/lib/python3.7/site-packages/hypercorn/protocol/__init__.py:58> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f08f55188d0>()]>>

I'm pretty sure it has to do with hypercorn because I don't see the messages when I run the app using app.run(...). I'm having trouble narrowing down the problem any more than that... Is anyone else seeing these messages?

Every once in a while I have a JSON request fail. The response body is mysteriously empty: the content-length is set to the correct non-zero value but nothing is in the body. I'm not sure if it's related to this or not.

Task was destroyed but it is pending!

The following code:

from fastapi import FastAPI


app = FastAPI()


@app.get("/")
async def foobar():
    return {"foo": "bar"}

When hit by curl -v http://localhost:8000 --http2-prior-knowledge it sometimes (1 out of 20?) generates this error:

> hypercorn main:app
[2022-02-03 10:49:09 +0900] [189738] [INFO] Running on http://127.0.0.1:8000 (CTRL + C to quit)
Exception ignored in: <coroutine object worker_serve.<locals>._server_callback at 0x7fad7265b7c0>
Traceback (most recent call last):
  File "/.../hypercorn/asyncio/run.py", line 100, in _server_callback
    await TCPServer(app, loop, config, context, reader, writer)
RuntimeError: coroutine ignored GeneratorExit
Task was destroyed but it is pending!
task: <Task pending name='Task-76' coro=<worker_serve.<locals>._server_callback() done, defined at /...hypercorn/asyncio/run.py:98> wait_for=<_GatheringFuture pending cb=[<TaskWakeupMethWrapper object at 0x7fad72628430>()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-78' coro=<H2Protocol.send_task() done, defined at /.../hypercorn/protocol/h2.py:140> wait_for=<Future cancelled> cb=[gather.<locals>._done_callback() at /usr/lib/python3.9/asyncio/tasks.py:764]>

HTTP/2 usage and Windows support?

Hi

I just found your excellent tutorial on how to use Quart with Gunicorn, which I know isn't supported on Windows 10, so I decided to attempt to use Hypercorn instead. Unfortunately it crashes with this error.


hypercorn --keyfile key.pem --certfile cert.pem --ciphers ECDHE+AESGCM --bind localhost:5000 http2test:app
Running on https://localhost:5000 (CTRL + C to quit)
Traceback (most recent call last):
File "c:\users\x\appdata\local\programs\python\python37-32\Lib\runpy.py", line 193, in run_module_as_main
"main", mod_spec)
File "c:\users\x\appdata\local\programs\python\python37-32\Lib\runpy.py", line 85, in run_code
exec(code, run_globals)
File "C:\Users\x.virtualenvs\QuartTest-s-3dKnLB\Scripts\hypercorn.exe_main
.py", line 9, in
File "c:\users\x.virtualenvs\quarttest-s-3dknlb\lib\site-packages\hypercorn_main
.py", line 159, in main
run_multiple(config)
File "c:\users\x.virtualenvs\quarttest-s-3dknlb\lib\site-packages\hypercorn\run.py", line 234, in run_multiple
process.start()
File "c:\users\x\appdata\local\programs\python\python37-32\Lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "c:\users\x\appdata\local\programs\python\python37-32\Lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "c:\users\x\appdata\local\programs\python\python37-32\Lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "c:\users\x\appdata\local\programs\python\python37-32\Lib\multiprocessing\popen_spawn_win32.py", line 65, in init
reduction.dump(process_obj, to_child)
File "c:\users\x\appdata\local\programs\python\python37-32\Lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle SSLContext objects

Traceback (most recent call last):
File "", line 1, in
File "c:\users\x\appdata\local\programs\python\python37-32\Lib\multiprocessing\spawn.py", line 99, in spawn_main
new_handle = reduction.steal_handle(parent_pid, pipe_handle)
File "c:\users\x\appdata\local\programs\python\python37-32\Lib\multiprocessing\reduction.py", line 87, in steal_handle
_winapi.DUPLICATE_SAME_ACCESS | _winapi.DUPLICATE_CLOSE_SOURCE)
PermissionError: [WinError 5] Access is denied

Is there a working usage example? It's working fine with http1.

Thanks and keep up the great work!

AttributeError: module 'asyncio.tasks' has no attribute 'all_tasks'

This happens after pressing Ctrl-C to shut down a server:

Traceback (most recent call last):
  File "./bsm3/testing/soap_server/server.py", line 219, in <module>
    app.run(host='0.0.0.0', port=8080, debug=True)
  File "/home/brian/.virtualenvs/bluescope-m3-mw/lib/python3.6/site-packages/quart/app.py", line 1346, in run
    run_single(self, config, loop=loop)
  File "/home/brian/.virtualenvs/bluescope-m3-mw/lib/python3.6/site-packages/hypercorn/asyncio/run.py", line 168, in run_single
    _cancel_all_other_tasks(loop, lifespan_task)
  File "/home/brian/.virtualenvs/bluescope-m3-mw/lib/python3.6/site-packages/hypercorn/asyncio/run.py", line 211, in _cancel_all_other_tasks
    tasks = [task for task in asyncio.tasks.all_tasks(loop) if task != protected_task]
AttributeError: module 'asyncio.tasks' has no attribute 'all_tasks'
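
For context, the module-level asyncio.tasks.all_tasks() only exists from Python 3.7; on Python 3.6 (the interpreter in the paths above) the equivalent is the Task.all_tasks() classmethod, hence the AttributeError. A hedged compatibility sketch, not Hypercorn's actual fix; the function name here is hypothetical and loop / protected_task are the names from the traceback:

import asyncio

def _cancel_candidates(loop, protected_task):
    # Prefer the module-level helper (Python 3.7+), fall back to the 3.6 classmethod.
    all_tasks = getattr(asyncio, "all_tasks", None) or asyncio.Task.all_tasks
    return [task for task in all_tasks(loop) if task is not protected_task]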

statsd trio error

Hypercorn is awesome, thanks for developing it!

I ran a server like so:

hypercorn -w 4 -b 0.0.0.0:8080 -k trio --access-logfile /dev/stderr --statsd-host localhost:9125 feedback.frontend:app

Using asyncio with the same setup works perfectly.

Got the following stack trace when I curl'd a basic endpoint:

[2021-09-20 11:54:32 +0000] [9741] [ERROR] Error in ASGI Framework
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/site-packages/hypercorn/trio/context.py", line 39, in _handle
    await invoke_asgi(app, scope, receive, send)
  File "/root/miniconda3/lib/python3.8/site-packages/hypercorn/utils.py", line 239, in invoke_asgi
    await app(scope, receive, send)
  File "/root/miniconda3/lib/python3.8/site-packages/quart/app.py", line 1722, in __call__
    await self.asgi_app(scope, receive, send)
  File "/root/miniconda3/lib/python3.8/site-packages/quart/app.py", line 1748, in asgi_app
    await asgi_handler(receive, send)
  File "/root/miniconda3/lib/python3.8/site-packages/quart_trio/asgi.py", line 28, in __call__
    nursery.start_soon(self.handle_request, nursery, request, send)
  File "/root/miniconda3/lib/python3.8/site-packages/trio/_core/_run.py", line 815, in __aexit__
    raise combined_error_from_nursery
  File "/root/miniconda3/lib/python3.8/site-packages/quart_trio/asgi.py", line 47, in handle_request
    await self._send_response(send, response)
  File "/root/miniconda3/lib/python3.8/site-packages/quart/asgi.py", line 139, in _send_response
    await send(
  File "/root/miniconda3/lib/python3.8/site-packages/hypercorn/protocol/http_stream.py", line 155, in app_send
    await self.config.log.access(
  File "/root/miniconda3/lib/python3.8/site-packages/hypercorn/statsd.py", line 71, in access
    await self.histogram("hypercorn.request.duration", request_time * 1_000)
  File "/root/miniconda3/lib/python3.8/site-packages/hypercorn/statsd.py", line 85, in histogram
    await self._send(f"{self.prefix}{name}:{value}|ms")
  File "/root/miniconda3/lib/python3.8/site-packages/hypercorn/statsd.py", line 90, in _send
    await self._socket_send(message.encode("ascii"))
  File "/root/miniconda3/lib/python3.8/site-packages/hypercorn/trio/statsd.py", line 14, in _socket_send
    await self.socket.sendto(message, self.address)
  File "/root/miniconda3/lib/python3.8/site-packages/trio/_socket.py", line 744, in sendto
    args[-1] = await self._resolve_remote_address_nocp(args[-1])
  File "/root/miniconda3/lib/python3.8/site-packages/trio/_socket.py", line 565, in _resolve_remote_address_nocp
    return await self._resolve_address_nocp(address, 0)
  File "/root/miniconda3/lib/python3.8/site-packages/trio/_socket.py", line 496, in _resolve_address_nocp
    raise ValueError("address should be a (host, port) tuple")
ValueError: address should be a (host, port) tuple
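
The failure comes from trio's sendto(), which resolves the destination itself and insists on a (host, port) tuple, which suggests the statsd address is still being passed as the raw "host:port" string from the command line. A hedged illustration of the shape trio expects, using the value from the command above:

# trio.socket.sendto() requires the address as a (host, port) tuple.
host, _, port = "localhost:9125".rpartition(":")
address = (host, int(port))  # ("localhost", 9125)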

Send multi-frame messages?

Currently I don't see any way to send a message that spans multiple frames.
I briefly looked through the code: await self.asend(BytesMessage(data=bytes(message["bytes"]))) always initializes a BytesMessage with the default message_finished=True.

Is there a plan to add that? Or am I missing something?
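
For reference, wsproto itself can spread one logical message over several frames by leaving message_finished unset until the last chunk; a hedged sketch of the event construction, not something Hypercorn currently exposes through the ASGI send path:

from wsproto.events import BytesMessage

chunks = [b"part-1", b"part-2", b"part-3"]
events = [
    # Only the final chunk marks the message as finished.
    BytesMessage(data=chunk, message_finished=(index == len(chunks) - 1))
    for index, chunk in enumerate(chunks)
]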

LifespanTimeout: Timeout whilst awaiting startup. Your application may not support the ASGI Lifespan protocol correctly, alternatively the startup_timeout configuration is incorrect.

Hi, I'm using Quart 0.11.3 with Hypercorn 0.9.2.

My application is timing out in its @app.before_serving hook because it tries to connect to Redis on another server that is firewalled against my access, so there is a valid reason for the timeout; however, I get the following error:

LifespanTimeout: Timeout whilst awaiting startup. Your application may not support the ASGI Lifespan protocol correctly, alternatively the startup_timeout configuration is incorrect.

Just thought you should be aware.

Traceback (most recent call last):
  File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/usr/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
Process Process-1:
  File "/usr/local/lib/python3.7/dist-packages/hypercorn/asyncio/run.py", line 193, in uvloop_worker
    shutdown_trigger=shutdown_trigger,
  File "/usr/local/lib/python3.7/dist-packages/hypercorn/asyncio/run.py", line 223, in _run
    loop.run_until_complete(main(shutdown_trigger=shutdown_trigger))
  File "uvloop/loop.pyx", line 1456, in uvloop.loop.Loop.run_until_complete
  File "/usr/local/lib/python3.7/dist-packages/hypercorn/asyncio/run.py", line 62, in worker_serve
    await lifespan.wait_for_startup()
  File "/usr/local/lib/python3.7/dist-packages/hypercorn/asyncio/lifespan.py", line 54, in wait_for_startup
    raise LifespanTimeout("startup") from error
hypercorn.utils.LifespanTimeout: Timeout whilst awaiting startup. Your application may not support the ASGI Lifespan protocol correctly, alternatively the startup_timeout configuration is incorrect.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/hypercorn/asyncio/lifespan.py", line 52, in wait_for_startup
    await asyncio.wait_for(self.startup.wait(), timeout=self.config.startup_timeout)
  File "/usr/lib/python3.7/asyncio/tasks.py", line 449, in wait_for
    raise futures.TimeoutError()
concurrent.futures._base.TimeoutError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/usr/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.7/dist-packages/hypercorn/asyncio/run.py", line 193, in uvloop_worker
    shutdown_trigger=shutdown_trigger,
  File "/usr/local/lib/python3.7/dist-packages/hypercorn/asyncio/run.py", line 223, in _run
    loop.run_until_complete(main(shutdown_trigger=shutdown_trigger))
  File "uvloop/loop.pyx", line 1456, in uvloop.loop.Loop.run_until_complete
  File "/usr/local/lib/python3.7/dist-packages/hypercorn/asyncio/run.py", line 62, in worker_serve
    await lifespan.wait_for_startup()
  File "/usr/local/lib/python3.7/dist-packages/hypercorn/asyncio/lifespan.py", line 54, in wait_for_startup
    raise LifespanTimeout("startup") from error
hypercorn.utils.LifespanTimeout: Timeout whilst awaiting startup. Your application may not support the ASGI Lifespan protocol correctly, alternatively the startup_timeout configuration is incorrect.
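
If a slow before_serving hook is expected, the startup window can be widened through the configuration the error message refers to; a hedged sketch using a programmatic Config object, with an arbitrary value:

from hypercorn.config import Config

config = Config()
# Give a slow before_serving hook (e.g. a Redis connection attempt) more time.
config.startup_timeout = 120  # seconds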
