jamesroberts / fastwsgi

409 stars · 15 watchers · 14 forks · 8.59 MB

An ultra fast WSGI server for Python 3

License: Other

Languages: Makefile 0.30%, C 78.48%, Python 18.71%, Shell 2.40%, Batchfile 0.12%
Topics: wsgi-server, wsgi, python, c, flask, django, server, werkzeug, web-server

fastwsgi's People

Contributors

iipythonx, jamesroberts, remittor, rpitonak


fastwsgi's Issues

build & setup error

[screenshot]
Good afternoon. I'm trying to install fastwsgi; the only way I managed to install it was to comment out the following lines after cloning the repository from GitHub:
[screenshot]
But when building the project, I still get errors. It only seemed to start once, and even then authorization did not pass, so apparently it still did not work, although fastwsgi does appear in the pip list:
[screenshot]

Crashes on malformed URLs

This segfaults on malformed URLs, at a glance due to the unsafe use of strtok:

$ python example.py
==== FastWSGI ==== 
Host: 0.0.0.0
Port: 5000
==================

Server listening at http://0.0.0.0:5000
Parse error: HPE_INVALID_URL Unexpected start char in url
free(): double free detected in tcache 2
[1]    70550 IOT instruction (core dumped)  python example.py
$ telnet localhost 5000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GET ?????

XKT�k�VLk
Connection: Keep-Alive
Content-Type: text/html; charset=utf-8
Content-Length: 13

Hello, World!Connection closed by foreign host.
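
For anyone reproducing this without telnet, here is a minimal Python sketch that sends the same malformed request line (assuming example.py is listening on 127.0.0.1:5000):

# repro.py -- send the malformed request line from the transcript above
import socket

with socket.create_connection(("127.0.0.1", 5000), timeout=5) as sock:
    sock.sendall(b"GET ?????\r\n\r\n")  # HPE_INVALID_URL: bad start char
    try:
        print(sock.recv(4096))
    except ConnectionError:
        print("connection reset (server likely crashed)")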

0x00 causes output to be truncated

You can't respond with a null byte from FastWSGI.

import fastwsgi

def application(environ, start_response):
    data = b'Hell\x00, World!'
    headers = [('Content-Type', 'text/plain'), ("Content-Length", str(len(data)))]
    start_response('200 OK', headers)
    return [data]

if __name__ == '__main__':
    fastwsgi.run(wsgi_app=application, host='127.0.0.1', port=5000)

In this case curl gets stuck trying to read the response, since the declared Content-Length exceeds the number of body bytes actually sent.
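
One way to observe the truncation directly is a raw socket client that compares the advertised Content-Length with the bytes that actually arrive; a sketch, assuming the snippet above is running on port 5000:

# count_body_bytes.py -- compare declared Content-Length to received bytes
import socket

with socket.create_connection(("127.0.0.1", 5000), timeout=5) as sock:
    sock.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
    raw = b""
    try:
        while chunk := sock.recv(4096):
            raw += chunk
    except socket.timeout:
        pass  # server may keep the connection open waiting to send more

headers, _, body = raw.partition(b"\r\n\r\n")
print(headers.decode(errors="replace"))
print(f"body bytes received: {len(body)}")  # header claims 13; fewer arrive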

Benchmarking with `ab` not working

I think ab uses some sort of keep-alive that fastwsgi doesn't implement properly. ab just hangs when you run it, e.g.:

ab -n 100 -c 1 http://127.0.0.1:8080/

Improve CLI

Hello @jamesroberts!

First of all, great effort with this project!

I wanted to play with it a bit and start the server via CLI. Naturally, I wanted to see how to use the app and typed:

fastwsgi -h

and got:

Traceback (most recent call last):
  File "/Users/rpitonak/Documents/workspace/fast-wsgi-test/venv/bin/fastwsgi", line 33, in <module>
    sys.exit(load_entry_point('fastwsgi==0.0.4', 'console_scripts', 'fastwsgi')())
  File "/Users/rpitonak/Documents/workspace/fast-wsgi-test/venv/lib/python3.8/site-packages/fastwsgi.py", line 62, in run_from_cli
    wsgi_app = import_from_string(sys.argv[1])
  File "/Users/rpitonak/Documents/workspace/fast-wsgi-test/venv/lib/python3.8/site-packages/fastwsgi.py", line 39, in import_from_string
    raise ImportError("Import string should be in the format <module>:<attribute>")
ImportError: Import string should be in the format <module>:<attribute>

I would be happy to help and contribute a PR to add a more robust CLI, including options for port, host, logging, num_workers, etc.

I would suggest using either click or typer (which is built upon click); a rough sketch of what that could look like follows below.

Since it is a new feature, I am opening this issue first and would love to hear your feedback and suggestions.
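
To make the suggestion concrete, here is a minimal click-based sketch. It is hypothetical: the option names and the import_from_string helper are illustrative, not fastwsgi's actual CLI.

# cli.py -- hypothetical click-based entry point for fastwsgi
import importlib

import click

import fastwsgi


def import_from_string(spec):
    # "module:attribute" -> the WSGI callable
    module_name, _, attr = spec.partition(":")
    if not attr:
        raise click.BadParameter("expected <module>:<attribute>")
    return getattr(importlib.import_module(module_name), attr)


@click.command()
@click.argument("app_spec")
@click.option("--host", default="127.0.0.1", show_default=True)
@click.option("--port", default=5000, type=int, show_default=True)
def main(app_spec, host, port):
    """Run a WSGI app, e.g. `fastwsgi myapp:application --port 8000`."""
    fastwsgi.run(wsgi_app=import_from_string(app_spec), host=host, port=port)


if __name__ == "__main__":
    main()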

Documentation.

Could you consider adding full documentation, guides, and API references?

Significant performance decrease after PR#19

From commit 66b601e:

$  wrk -t8 -c200 -d10 http://localhost:5000 --latency
Running 10s test @ http://localhost:5000
  8 threads and 200 connections

  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.06ms    1.66ms  32.63ms   98.03%
    Req/Sec    13.12k     0.98k   14.83k    78.62%

  Latency Distribution
     50%    1.84ms
     75%    1.92ms
     90%    2.10ms
     99%    9.24ms

  1045380 requests in 10.01s, 101.69MB read
Requests/sec: 104411.32
Transfer/sec:     10.16MB

To latest commit 882fe79:

$  wrk -t8 -c200 -d10 http://localhost:5000 --latency
Running 10s test @ http://localhost:5000
  8 threads and 200 connections

  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     6.28ms    6.98ms  78.45ms   86.64%
    Req/Sec     2.89k     2.56k   10.26k    74.87%

  Latency Distribution
     50%    3.38ms
     75%    5.02ms
     90%   16.14ms
     99%   33.83ms

  224306 requests in 10.08s, 20.75MB read
Requests/sec:  22261.30
Transfer/sec:      2.06MB

@remittor could you have a look and see what is possibly causing this performance decrease?

fastwsgi blocks OTHER threads from executing

Calling fastwsgi.run should block the main thread, but release the GIL, allowing other threads to run (I/O wait). However, the observed behavior is that all threads are blocked, implying the GIL is not released while fastwsgi is waiting (though the actual issue may be different).

Observe the difference in behavior with this toy code when using fastwsgi.run vs. the Flask development server's app.run:

import threading, time
import fastwsgi

from flask import Flask

app = Flask(__name__)

@app.get('/')
def hello_world():
    return 'Hello, World!', 200


def dummy_thread():
    while True:
        print("Staying alive!")
        time.sleep(1)
 
if __name__ == '__main__':
    thread = threading.Thread(target=dummy_thread)
    thread.start()
    time.sleep(3) # Should print "Staying alive!" a number of times
    
    # This should block the main thread, but the "staying alive" thread should keep running
    # Using fastwsgi, all stops.
    fastwsgi.run(wsgi_app=app, host='0.0.0.0', port=5000)
    
    # Using the flask provided development server, the other thread continues as expected.
    # app.run(host='0.0.0.0', port=5000)

In many (most?) apps this would not be an issue; however, it makes fastwsgi unusable in any application that requires some sort of background processing, such as my Raspberry Pi app that provides a web server but also listens for button presses.
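
Until this is fixed, one possible workaround is to isolate the server in a child process, so that whatever fastwsgi does with the GIL cannot stall threads in the parent. A sketch (untested against fastwsgi; myapp is a hypothetical module exposing the Flask app above):

# workaround sketch: run fastwsgi in its own process
import multiprocessing
import time

import fastwsgi
from myapp import app  # hypothetical module exposing the Flask app


def serve():
    fastwsgi.run(wsgi_app=app, host="0.0.0.0", port=5000)


if __name__ == "__main__":
    multiprocessing.Process(target=serve, daemon=True).start()
    while True:
        print("Staying alive!")  # keeps printing, unlike the in-process run
        time.sleep(1)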

Gunicorn Worker similar to Meinheld

Is there a way to create a Gunicorn worker similar to what meinheld has done?
https://github.com/mopemope/meinheld/blob/master/meinheld/gmeinheld.py#L11

It can be used as:
gunicorn --workers=2 --worker-class="egg:meinheld#gunicorn_worker" gunicorn_test:app

  • Falcon + Meinheld benchmarks
Running 1m test @ http://localhost:5000
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.66ms  103.47us   8.59ms   79.39%
    Req/Sec     7.26k   600.67    42.37k    95.17%
  Latency Distribution
     50%    1.68ms
     75%    1.73ms
     90%    1.75ms
     99%    1.84ms
  3468906 requests in 1.00m, 588.86MB read
Requests/sec:  57719.43
Transfer/sec:      9.80MB
  • Falcon + fastwsgi benchmarks
Running 1m test @ http://localhost:5000
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.41ms   95.10us   3.54ms   67.27%
    Req/Sec     8.57k   532.54    15.80k    66.10%
  Latency Distribution
     50%    1.46ms
     75%    1.48ms
     90%    1.49ms
     99%    1.58ms
  4093388 requests in 1.00m, 456.74MB read
Requests/sec:  68187.13
Transfer/sec:      7.61MB

Having a gunicorn worker for fastwsgi might help people test it out in their own production workload easily.

Note: seeing an ~18% improvement over meinheld, albeit on hello-world benchmarks for cythonized Falcon.
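
For reference, a structural sketch of such a worker, modeled on gmeinheld.py. Note that fastwsgi.run() only accepts a host and port today; the run_from_fd call below is hypothetical and marks exactly the API fastwsgi would need to grow for this to work:

# gfastwsgi.py -- structural sketch modeled on meinheld's gmeinheld.py
import fastwsgi
from gunicorn.workers.base import Worker


class FastWSGIWorker(Worker):
    def run(self):
        # gunicorn has already bound and shared this socket across workers
        fd = self.sockets[0].fileno()
        # hypothetical API: serve the loaded WSGI app on an existing fd
        fastwsgi.run_from_fd(self.wsgi, fd)

With that in place, usage would mirror meinheld's:
gunicorn --workers=2 --worker-class=gfastwsgi.FastWSGIWorker gunicorn_test:app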

A bunch of bugs

Hi @jamesroberts!

I've run fastwsgi against some of the bjoern test cases that have accumulated over the years. From a very quick check, here are my results:

  • tests/empty.py segfault
  • tests/env.py segfault
  • tests/headers.py memory leak
  • tests/huge.py hangs forever
  • tests/hello.py hangs forever
  • tests/keep-alive-behaviour.py segfault
  • tests/not-callable.py segfault
  • tests/test_exc_info_reference.py memory leak

I've used this file to substitute the bjoern module in the tests:

# bjoern.py
from fastwsgi import run

In some cases, FastWSGI+Flask is faster than NGINX (latency test)

Client: i7-980X @ 2.8GHz, Debian 12, Python 3.10, NIC Intel x550-t2 10Gbps
Server: i7-980X @ 2.8GHz, Windows 7, Python 3.8, NIC Intel x550-t2 10Gbps

Payload for testing: https://github.com/MiloszKrajewski/SilesiaCorpus/blob/master/xml.zip (651 KiB)

Server test app: https://gist.github.com/remittor/1f2bc834852009631d437cd96822afa4

FastWSGI + Flask

python.exe server.py -h 172.16.220.205 -g fw -f xml.zip -b

$ wrk -t1 -c1 -d30 http://172.16.220.205:5000/ --latency
Running 30s test @ http://172.16.220.205:5000/
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.68ms  128.35us   9.26ms   97.39%
    Req/Sec   597.20      7.27   616.00     81.00%
  Latency Distribution
     50%    1.67ms
     75%    1.71ms
     90%    1.75ms
     99%    1.84ms
  17837 requests in 30.01s, 11.08GB read
Requests/sec:    594.42
Transfer/sec:    378.23MB 

nginx.exe

$ wrk -t1 -c1 -d30 http://172.16.220.205:80/xml.zip --latency
Running 30s test @ http://172.16.220.205:80/xml.zip
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.00ms  176.83us   4.04ms   69.41%
    Req/Sec   500.61     13.14   555.00     71.76%
  Latency Distribution
     50%    2.01ms
     75%    2.12ms
     90%    2.22ms
     99%    2.39ms
  14999 requests in 30.10s, 9.32GB read
Requests/sec:    498.31
Transfer/sec:    317.14MB 

Werkzeug + Flask

python.exe server.py -h 172.16.220.205 -g wz -f xml.zip -b

$ wrk -t1 -c1 -d30 http://172.16.220.205:5000/ --latency
Running 30s test @ http://172.16.220.205:5000/
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.46ms  523.74us   9.90ms   62.01%
    Req/Sec   274.58     26.55   343.00     76.00%
  Latency Distribution
     50%    3.62ms
     75%    3.87ms
     90%    4.03ms
     99%    4.30ms
  8204 requests in 30.00s, 5.10GB read
Requests/sec:    273.46
Transfer/sec:    174.04MB 

Waitress + Flask

python.exe server.py -h 172.16.220.205 -g wr -f xml.zip -b

$ wrk -t1 -c1 -d30 http://172.16.220.205:5000/ --latency
Running 30s test @ http://172.16.220.205:5000/
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    10.01ms  605.99us  11.31ms   67.94%
    Req/Sec   100.24      4.73   111.00     78.74%
  Latency Distribution
     50%   10.11ms
     75%   10.48ms
     90%   10.71ms
     99%   11.04ms
  3004 requests in 30.10s, 1.87GB read
Requests/sec:     99.80
Transfer/sec:     63.51MB 

Is it good for prod env

Hi,
I work with Django and I want to ask whether this WSGI server is suitable for production Django apps.

Content-Length Handling

When a Content-Length mismatch happens and the provided value is bigger than the real value, the connection is not closed.
The response is still correct, with Content-Length: 26.

In this case meinheld, gunicorn and socketify would close the connection.

def app_hello(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain'), ('Content-Length', '30')])
    
    return [b'Hello, World!', b'Hello, World!']

If the real response is bigger than the provided Content-Length, meinheld and gunicorn will just end the response when it reaches the declared length.
[screenshot]

def app_hello(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain'), ('Content-Length', '24')])
    
    return [b'Hello, World!', b'Hello, World!']

FastWSGI closes the connection in this case; socketify does this as well, and I think it is better to close and emit a warning when a Content-Length mismatch happens.

But when the provided Content-Length is bigger than the real response, I think it would be better to also close the connection, both to keep the behavior consistent and to warn that something the user/framework wanted to send will not be sent.
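
In the meantime, mismatches like these can be surfaced at the application layer. Below is a minimal WSGI middleware sketch (hypothetical, not part of fastwsgi) that checks the declared header against the real body size before anything is sent:

# content_length_check.py -- hypothetical WSGI middleware sketch
def enforce_content_length(app):
    def wrapper(environ, start_response):
        captured = {}

        def fake_start_response(status, headers, exc_info=None):
            captured["status"] = status
            captured["headers"] = list(headers)

        # materialize the body so its real size is known up front
        body = b"".join(app(environ, fake_start_response))
        headers = captured["headers"]
        declared = next(
            (int(v) for k, v in headers if k.lower() == "content-length"), None
        )
        if declared is not None and declared != len(body):
            print(f"warning: Content-Length {declared} != actual {len(body)}")
            headers = [(k, v) for k, v in headers if k.lower() != "content-length"]
            headers.append(("Content-Length", str(len(body))))
        start_response(captured["status"], headers)
        return [body]

    return wrapper

Wrapping app_hello in enforce_content_length makes the mismatch visible regardless of which policy the server itself implements.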

Feedback

Hi @jamesroberts!

Today I had another look at fastwsgi and here are some of my thoughts!


I think the code got a bit messier in some places than it was a few months ago, probably because it is now more correct and the real world is complicated :-(


Some of the code looks heavily inspired by bjoern; sometimes it looks like a near-verbatim copy. Actually, this is a verbatim copy. That is fine, since bjoern is free software, but do you feel it would be appropriate to give credit to the bjoern authors by including its license text in fastwsgi?


In the "basic WSGI" benchmark, fastwsgi performs much better, because it is testing a (very unrealistic) case that's not handled well by bjoern: A tiny "streaming" response with no Content-Length header. bjoern will make two calls to write() which will trigger Nagle's Algorithm on Linux.

If you set TCP_NODELAY to disable that, bjoern will perform much better, although still worse than fastwsgi, which is (I'm assuming) using a single write() call.

If you add a Content-Length header, the two servers are on par.

Interestingly, on macOS, they are already on par.
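
For context, disabling Nagle from user code is a single setsockopt call on the connected socket; a Python sketch of the same knob (bjoern and fastwsgi would set it in C on the accepted fd):

# sketch: disable Nagle's algorithm so small back-to-back writes are not
# delayed waiting for an ACK, as described above
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)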

segfault when using iterator

I'm on:
Python 3.11.2
OS: Debian GNU/Linux bookworm/sid x86_64
Kernel: 6.1.0-5-amd64
CPU: Intel i7-7700HQ (8) @ 3.800GHz

The first request works; the second request segfaults.

def app_hello(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain'), ('Content-Length', '13')])
    
    yield b'Hello, World!'

if __name__ == "__main__":
    import fastwsgi
    fastwsgi.run(wsgi_app=app_hello, host='127.0.0.1', port=8000)

FastWSGI server running on PID: 165444
FastWSGI server listening at http://127.0.0.1:8000
[1]    165444 segmentation fault  python3 ./raw-wsgi.py

Workers / Threads

Thanks for developing this much-needed package.
Does it support multiple workers/threads, or greenlets?

SSL Certificates

Sadly, I can't add my SSL certificates to the fastwsgi server; at least I haven't found an option for it.

Windows exception 0xC0000005

import fastwsgi
import app
import logging
host, port, debug, ssl_context = app.config_prepare()

if __name__ == '__main__':
    host, port, debug, ssl_context = app.config_prepare()
    fastwsgi.run(wsgi_app=app.application, host=host, port=port)

Error:

==== FastWSGI ==== 
Host: 0.0.0.0
Port: 5000
==================

Server listening at http://0.0.0.0:5000

Process finished with exit code -1073741819 (0xC0000005)

Python version: Python 3.9.10
OS version: Windows 10 (build 10.0.19042)

Installed with:

pip install fastwsgi==0.0.5
