
djangobench's Introduction

Djangobench

A harness and a set of benchmarks for measuring Django's performance over time.

Running the benchmarks

Here's the short version:

mkvirtualenv djangobench
pip install -e git+https://github.com/django/djangobench.git#egg=djangobench
git clone https://github.com/django/django.git
cd django
djangobench --control=1.2 --experiment=master

Okay, so what the heck's going on here?

First, djangobench doesn't test a single Django version in isolation -- that wouldn't be very useful. Instead, it benchmarks an "experiment" Django against a "control", reporting on the difference between the two and measuring for statistical significance.

Because a Git clone contains the project's full development history, you can test against a single repository, specifying individual commit IDs, tags (as we've done above), or even branch names with the --control and --experiment options.

Before djangobench 0.10 you had to use --vcs=git to get this behavior. Now it's the default. There is also support for Mercurial (--vcs=hg).
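
For example, assuming the branch names used in the Django repository, you could compare a stable branch against master:

djangobench --control=stable/1.11.x --experiment=master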

Another way to use djangobench is to run it against two complete Django source trees; you can select this mode with --vcs=none. By default it looks for directories named django-control and django-experiment in the current working directory:

djangobench --vcs=none

but you can change that by using the --control or --experiment options:

djangobench --vcs=none --control pristine --experiment work

Now, it's impractical to install the Django source trees under test (particularly in the two-trees scenario), so djangobench works its magic by mucking with PYTHONPATH.

However, the benchmarks themselves need access to the djangobench module, so you'll need to install it.

You can specify the benchmarks to run by passing their names on the command line.
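
For example (assuming the startup and query_delete benchmarks shipped with djangobench), you could run just those two:

djangobench --control=1.2 --experiment=master startup query_delete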

This is an example of not-statistically-significant results:

Running 'startup' benchmark ...
Min: 0.138701 -> 0.138900: 1.0014x slower
Avg: 0.139009 -> 0.139378: 1.0027x slower
Not significant
Stddev: 0.00044 -> 0.00046: 1.0382x larger

Python 3

Not only is djangobench Python 3 compatible, but it can also be used to compare Python 2 vs Python 3 code paths. To do this, provide the full paths to the corresponding Python executables via --control-python and --experiment-python. The short version (assuming you also have the djangobench environment set up as above):

mkvirtualenv djangobench-py3 -p python3
pip install -e git+https://github.com/django/djangobench.git#egg=djangobench
cd django
djangobench --vcs=none --control=. --experiment=. \
    --control-python=~/.virtualenvs/djangobench/bin/python \
    --experiment-python=~/.virtualenvs/djangobench-py3/bin/python

Writing new benchmarks

Benchmarks are very simple: each is a Django app with a settings file and an executable benchmark.py that gets run by the harness. The benchmark script needs to honor a simple contract:

  • It's an executable Python script, run as __main__ (e.g. python path/to/benchmark.py). The subshell environment will have PYTHONPATH set up to point to the correct Django; it'll also have DJANGO_SETTINGS_MODULE set to <benchmark_dir>.settings.

  • The benchmark script needs to accept a --trials argument giving the number of trials to run.

  • The output should be simple RFC 822-ish text -- a set of headers, followed by data points:

    Title: some benchmark
    Description: whatever the benchmark does
    
    1.002
    1.003
    ...
    

    The list of headers is TBD.

There are a couple of utility functions in djangobench.utils that help with honoring this contract; see those functions' docstrings for details.
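
As an illustration, here is a minimal sketch of a benchmark script that honors the contract directly, without the djangobench.utils helpers. The title, description, and timed workload are placeholders:

    #!/usr/bin/env python
    # Minimal sketch of a benchmark script that honors the harness contract.
    # The title, description and timed body are placeholders; a real benchmark
    # would exercise Django code via the DJANGO_SETTINGS_MODULE the harness sets.
    import argparse
    import time


    def benchmark():
        # Placeholder workload; replace with the code path being measured.
        sum(range(10000))


    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument('-t', '--trials', type=int, default=50,
                            help='number of trials to run')
        args = parser.parse_args()

        # Headers first, then a blank line, then one data point per line.
        print('Title: example benchmark')
        print('Description: sums a small range of integers once per trial')
        print('')
        for _ in range(args.trials):
            start = time.time()
            benchmark()
            print('%.6f' % (time.time() - start))


    if __name__ == '__main__':
        main()

The harness parses every line after the headers as a float, so the script shouldn't print anything else once the data points start.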

The existing benchmarks should be pretty easy to read for inspiration. The query_delete benchmark is probably a good place to start.

Please write new benchmarks and send us pull requests on GitHub!

djangobench's People

Contributors

aaugustin, acdha, adamchainz, akaariai, alex, bouke, carljm, charettes, collinanderson, d0ugal, deepakdinesh1123, ericflo, gregmuellegger, jaap3, jacobian, jdunck, jphalip, kaip, lqc, mjtamlyn, ptone, ramiro, sebleier, sir-sigurd, smileychris, smithdc1, spookylukey, timgraham


djangobench's Issues

Tracker App

Hi @adamchainz,

So I've made a bit more progress and I've now made a "thing", which I thought worth sharing at this stage. I've thoroughly enjoyed this little project so far, so even if it isn't useful (although I hope it will be), the lessons learned here are valuable for me personally.

Anyway, I've set up a GitHub action to run djangobench and to publish the outputs in a datasette. I've currently got it running benchmarks at the rate of one commit per hour, to build up a bit of history.

https://djangobench-tracker.herokuapp.com/django/bench

The next step is to have a think about how to visualise this data. There are some charting options in the datasette already, but it would be good to have a more dashboard-like approach... Something like this, maybe?

https://speed.python.org/comparison/

(Side note: it's interesting that Django templates are much faster in 3.7 and 3.8 than in 3.6, with 3.9 faster again.)

template_render benchmark failing on 1.5alpha

This works:
djangobench --vcs=git --control=1.3 --experiment=1.4 template_render

This doesn't:
djangobench --vcs=git --control=1.4 --experiment=master template_render

(django-dev)element:django (master)$ djangobench --vcs=git --control=1.4 --experiment=master template_render
Running benchmarks: template_render
Control: Django 1.4 (in git branch 1.4)
Experiment: Django 1.5.dev20120725205848 (in git branch master)

Running 'template_render' benchmark ...
Traceback (most recent call last):
  File "/Users/preston/Projects/Python/virtualenvs/django-dev/bin/djangobench", line 9, in <module>
    load_entry_point('djangobench==0.9', 'console_scripts', 'djangobench')()
  File "/Users/preston/Projects/code/forks/djangobench/djangobench/main.py", line 311, in main
    continue_on_errror = args.continue_on_errror
  File "/Users/preston/Projects/code/forks/djangobench/djangobench/main.py", line 62, in run_benchmarks
    experiment_data = run_benchmark(benchmark, trials, experiment_env)
  File "/Users/preston/Projects/code/forks/djangobench/djangobench/main.py", line 111, in run_benchmark
    out, _, _ = perf.CallAndCaptureOutput(command + ['-t', 1], env, track_memory=False, inherit_env=[])
  File "/Users/preston/Projects/code/forks/djangobench/djangobench/perf.py", line 1026, in CallAndCaptureOutput
    raise RuntimeError("Benchmark died: " + stderr)
RuntimeError: Benchmark died: Traceback (most recent call last):
  File "/Users/preston/Projects/code/forks/djangobench/djangobench/benchmarks/template_render/benchmark.py", line 38, in <module>
    'description': ('Render a somewhat complex, fairly typical template '
  File "/Users/preston/Projects/code/forks/djangobench/djangobench/utils.py", line 71, in run_benchmark
    benchmark_result = benchmark()
  File "/Users/preston/Projects/code/forks/djangobench/djangobench/benchmarks/template_render/benchmark.py", line 32, in benchmark
    render_to_response('permalink.html', context)
  File "/Users/preston/Projects/code/forks/django/django/shortcuts/__init__.py", line 20, in render_to_response
    return HttpResponse(loader.render_to_string(*args, **kwargs), **httpresponse_kwargs)
  File "/Users/preston/Projects/code/forks/django/django/template/loader.py", line 172, in render_to_string
    return t.render(Context(dictionary))
  File "/Users/preston/Projects/code/forks/django/django/template/base.py", line 141, in render
    return self._render(context)
  File "/Users/preston/Projects/code/forks/django/django/template/base.py", line 135, in _render
    return self.nodelist.render(context)
  File "/Users/preston/Projects/code/forks/django/django/template/base.py", line 831, in render
    bit = self.render_node(node, context)
  File "/Users/preston/Projects/code/forks/django/django/template/base.py", line 845, in render_node
    return node.render(context)
  File "/Users/preston/Projects/code/forks/django/django/template/loader_tags.py", line 123, in render
    return compiled_parent._render(context)
  File "/Users/preston/Projects/code/forks/django/django/template/base.py", line 135, in _render
    return self.nodelist.render(context)
  File "/Users/preston/Projects/code/forks/django/django/template/base.py", line 831, in render
    bit = self.render_node(node, context)
  File "/Users/preston/Projects/code/forks/django/django/template/base.py", line 845, in render_node
    return node.render(context)
  File "/Users/preston/Projects/code/forks/django/django/template/defaulttags.py", line 366, in render
    return strip_spaces_between_tags(self.nodelist.render(context).strip())
  File "/Users/preston/Projects/code/forks/django/django/template/base.py", line 831, in render
    bit = self.render_node(node, context)
  File "/Users/preston/Projects/code/forks/django/django/template/base.py", line 845, in render_node
    return node.render(context)
  File "/Users/preston/Projects/code/forks/django/django/template/defaulttags.py", line 419, in render
    raise e
django.core.urlresolvers.NoReverseMatch: Reverse for '' with arguments '()' and keyword arguments '{}' not found.

(django-dev)element:django (master)$ 

Python 3 compatibility?

Could be useful to compare revisions of Django >= 1.5 running under python 3.

The port will also need to take into account things like the simplejson dependency.

Remove initial_data fixtures

We don't automatically load them any more anyway, and they make the benchmarks harder to understand. There are 18 to remove.

ValueError: could not convert string to float - "short version" example

When I follow the "short version" example, I get this error:

$ djangobench --control=1.2 --experiment=master
Running all benchmarks
Control: Django 1.2 (in git branch 1.2)
Experiment: Django 1.9.dev20150415021140 (in git branch master)

Running 'default_middleware' benchmark ...
Traceback (most recent call last):
  File "/Users/audreyr/.virtualenvs/experiments/bin/djangobench", line 8, in <module>
    load_entry_point('djangobench==0.10', 'console_scripts', 'djangobench')()
  File "/Users/audreyr/code/third-party/djangobench/djangobench/main.py", line 397, in main
    experiment_python=args.experiment_python,
  File "/Users/audreyr/code/third-party/djangobench/djangobench/main.py", line 73, in run_benchmarks
    env=control_env)
  File "/Users/audreyr/code/third-party/djangobench/djangobench/main.py", line 145, in run_benchmark
    data_points = [float(line) for line in message.get_payload().splitlines()]
ValueError: could not convert string to float: d

Installation issues

Contrary to the install instructions, I don't seem to be able to install from pip.

Also, installing via setup.py fails to work correctly: it seems the 'benchmarks' directory is not copied. If I do a "setup.py develop" I can get a working setup.

Decide on a version support policy

There is at least one benchmark which checks for compatibility before Django 1.2... It is nice to keep this as long as possible, but with changes to migrations/syncdb, app loading and the startup process it is becoming harder.

I'm not averse to supporting, say, all Django >= 1.0, but I feel 1.4 would perhaps be more reasonable at this point.

Benchmark results

I’d like to see Django performance benchmark results from the past – where can I go to see any results produced with djangobench, or with any other benchmark harness like this?

I’m interested in:

  1. Django performance from release to release
  2. Django performance across Python versions

negative time execution for 'default_middleware' benchmark

$ djangobench  --control stable/1.11.x --experiment master default_middleware
Running benchmarks: default_middleware
Control: Django 1.11.6.dev20170906232200 (in git branch stable/1.11.x)
Experiment: Django 2.0.dev20170915140012 (in git branch master)

Running 'default_middleware' benchmark ...
Min: 0.000026 -> -0.000037: -0.7010x faster
Avg: 0.000233 -> 0.000210: 1.1073x faster
Not significant
Stddev: 0.00126 -> 0.00148: 1.1770x larger (N = 50)

Recommend Projects

  • React photo React

    A declarative, efficient, and flexible JavaScript library for building user interfaces.

  • Vue.js photo Vue.js

    🖖 Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.

  • Typescript photo Typescript

    TypeScript is a superset of JavaScript that compiles to clean JavaScript output.

  • TensorFlow photo TensorFlow

    An Open Source Machine Learning Framework for Everyone

  • Django photo Django

    The Web framework for perfectionists with deadlines.

  • D3 photo D3

    Bring data to life with SVG, Canvas and HTML. 📊📈🎉

Recommend Topics

  • javascript

    JavaScript (JS) is a lightweight interpreted programming language with first-class functions.

  • web

    Some thing interesting about web. New door for the world.

  • server

    A server is a program made to process requests and deliver data to clients.

  • Machine learning

    Machine learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

  • Game

    Some thing interesting about game, make everyone happy.

Recommend Org

  • Facebook photo Facebook

    We are working to build community through open source technology. NB: members must have two-factor auth.

  • Microsoft photo Microsoft

    Open source projects and samples from Microsoft.

  • Google photo Google

    Google ❤️ Open Source for everyone.

  • D3 photo D3

    Data-Driven Documents codes.