django-asv's Introduction

Django

Django is a high-level Python web framework that encourages rapid development and clean, pragmatic design. Thanks for checking it out.

All documentation is in the "docs" directory and online at https://docs.djangoproject.com/en/stable/. If you're just getting started, here's how we recommend you read the docs:

  • First, read docs/intro/install.txt for instructions on installing Django.
  • Next, work through the tutorials in order (docs/intro/tutorial01.txt, docs/intro/tutorial02.txt, etc.).
  • If you want to set up an actual deployment server, read docs/howto/deployment/index.txt for instructions.
  • You'll probably want to read through the topical guides (in docs/topics) next; from there you can jump to the HOWTOs (in docs/howto) for specific problems, and check out the reference (docs/ref) for gory details.
  • See docs/README for instructions on building an HTML version of the docs.

Docs are updated rigorously. If you find any problems in the docs, or think they should be clarified in any way, please take 30 seconds to fill out a ticket here: https://code.djangoproject.com/newticket

To get more help:

To contribute to Django:

To run Django's test suite:

Supporting the Development of Django

Django's development depends on your contributions.

If you depend on Django, remember to support the Django Software Foundation: https://www.djangoproject.com/fundraising/

django-asv's People

Contributors

adamchainz, carltongibson, deepakdinesh1123, pre-commit-ci[bot], sarahboyce, smithdc1


django-asv's Issues

Adding request response benchmarks

In the TODO file in djangobench, it was mentioned that a running test server might be required for this benchmark, so I tried to set one up using a sample Django project and the subprocess module:

import subprocess
import urllib.request


class Benchmark:
    def setup(self):
        # Start the sample project's development server in a child process.
        self.process = subprocess.Popen(["python", "manage.py", "runserver"])

    def teardown(self):
        self.process.kill()

    def time_response(self):
        # Time one round trip to the development server (URL assumed).
        urllib.request.urlopen("http://127.0.0.1:8000/").read()

But whenever I tried to run the benchmarks, accessing the manage.py file in the Django project failed with LookupError: No installed app with label 'admin'.
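For what it's worth, this LookupError usually means that the settings module active in the child process does not list django.contrib.admin in INSTALLED_APPS. One hypothetical thing to check, assuming the sample project is called sampleproject, is pointing the child process at the project's settings explicitly:

import os
import subprocess

# "sampleproject.settings" is a hypothetical module path for illustration.
env = {**os.environ, "DJANGO_SETTINGS_MODULE": "sampleproject.settings"}
process = subprocess.Popen(["python", "manage.py", "runserver"], env=env)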

After this I tried to set it up with Docker, but could not find a way to provide the commit hash so that a particular Django commit can be installed. It could be implemented in the workflow by using the Python Docker SDK to build images that pin a particular Django commit, or by using a shell script to pass the commit hash to the Dockerfile, together with a script to start the containers and benchmark them with ASV. Shall I go with this method?
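A rough sketch of the Docker SDK idea, assuming a Dockerfile in the current directory that declares an ARG DJANGO_COMMIT and installs Django from that commit (the tag scheme and build argument are illustrative, not an existing setup):

import docker

client = docker.from_env()
# Build an image pinned to a single Django commit via a build argument.
image, build_logs = client.images.build(
    path=".",
    buildargs={"DJANGO_COMMIT": "abc123"},  # hypothetical commit hash
    tag="django-bench:abc123",
)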

Are there any other ways in which this can be done? Should I also try options other than ASV?

Expand README.

So one checks out the repo, installs the requirements, runs asv run, ... then what?

  • We should probably point to the ASV docs, and mention asv publish and asv preview.
  • Plus maybe show how to generate a few runs, e.g. daily commits for the last week and weekly commits for the last month, so there's something to see (a possible sequence is sketched after this list).
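A possible sequence, assuming asv's usual subcommands and git's commit-range syntax (--steps caps how many commits in the range actually get benchmarked):

asv run --steps 7 main~30..main   # sample up to 7 commits from the last 30
asv publish                       # build the static HTML report
asv preview                       # serve the report locally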

Plus a little about the project, and how to join in.

Related is the discussion on the forum about the results folder: https://forum.djangoproject.com/t/django-benchmarking-project/13887/11. Can we commit the results back to the repo (and maybe accept PRs) so that we can build up the data over time?

I would think that running once a day (or even once a week, once we've got some data) is more than enough to spot regressions, no?

(Thoughts? 🤔 — We're working out the answers.)

Migrating benchmarks from djangobench which use run_comparison_benchmark

While migrating some of the benchmarks from djangobench, I noticed that the default_middleware and multi_value_dict benchmarks use the utils.run_comparison_benchmark method to compare two benchmarks. ASV does not support a direct comparison between different benchmark methods. How should I implement this?
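One possible workaround, sketched under the assumption that the two variants can share a timing loop: parameterize a single ASV benchmark over both implementations so that they appear side by side in the results (the class and method names below are illustrative):

from django.utils.datastructures import MultiValueDict


class MultiValueDictComparison:
    # ASV runs setup and the timing method once per parameter value.
    params = ["dict", "multi_value_dict"]
    param_names = ["variant"]

    def setup(self, variant):
        self.obj = {} if variant == "dict" else MultiValueDict()

    def time_setitem(self, variant):
        for i in range(1000):
            self.obj[str(i)] = i

The two timings then share one chart, which is close to what run_comparison_benchmark reported.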

Add CI

Hi @deepakdinesh1123 👋

I think it would be useful to add a few items for CI which run on each pull request and push to main.

  • lint (black, isort, flake8)

  • asv run: to run the benchmarks against a single commit. This will help to show that the benchmark suite runs at this point in time, and it will also help with PR reviews (a sketch follows this list).
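A minimal sketch of such a workflow, assuming GitHub Actions (the job layout, action versions, and flags are illustrative, not an existing file):

name: CI

on:
  push:
    branches: [main]
  pull_request:

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install asv
      - run: asv machine --yes       # record machine info non-interactively
      - run: asv run --quick HEAD^!  # one quick pass over the current commit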

What do you think?

Prevent `RuntimeWarning`s about naive datetimes

Running all benchmarks locally with asv’s --show-stderr option, I saw the below warning repeated many times from query_benchmarks.queryset_filter_chain.benchmark.FilterChain.time_filter_chain:

/.../python3.12/site-packages/django/db/models/fields/__init__.py:1669: RuntimeWarning: DateTimeField Book.date_published received a naive datetime (2024-02-23 15:00:07.507742) while time zone support is active.
  warnings.warn(

It was repeated for both date_created and date_published.

This should be fixed to avoid the possible output spam and ensure the benchmark simulates a typical situation.
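A likely fix, assuming the benchmark's setup constructs these values directly (the field name is taken from the warning): build aware datetimes instead of naive ones.

import datetime

from django.utils import timezone

# Instead of a naive value such as datetime.datetime(2024, 2, 23, 15, 0),
# use an aware one so settings with USE_TZ = True don't warn:
date_published = timezone.make_aware(datetime.datetime(2024, 2, 23, 15, 0))
# or, when "now" is fine for the benchmark data:
date_published = timezone.now()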

Azure pipelines setup

Over the past few days, I set up an Azure pipeline to run the benchmarks in the benchmark repo when a pull request is made in the main repo (a comment trigger can also be added). Since both the Django and djangobench repositories belong to the same organization, creating an access token would not be required. Can I use this method?

Note: it would require adding an azure-pipelines.yaml file to the Django repo.
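A minimal sketch of what that file might contain (triggers, pool image, and steps are assumptions, not an existing pipeline):

trigger:
  - main

pr:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - script: pip install asv
    displayName: Install asv
  - script: asv run --quick HEAD^!
    displayName: Benchmark the current commit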

Moving repo to django org

I have added the workflow to run the benchmarks when a pull request is labeled in the django/django repo here. Mariusz Felisiak mentioned in a comment that moving this repo to the django organization would be better, as the workflow wouldn't depend on any user repositories.

Should the repo be moved? Are there any changes that I need to make before moving?

Repeatable benchmarks

https://smithdc1.github.io/django-asv/#template_benchmarks.template_compilation.benchmark.TemplateCompile.time_template_compile

So we are starting to build up some history each day now, which is great.

This chart shows two things:

  • The long-tail history, which was run on a single server. This had repeatable results.
  • The more recent daily commits are run on different machines and are much noisier.
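One knob that might help, assuming the daily runs stay on a single class of runner: pin the machine name that asv records, so results are grouped under one machine profile instead of one per ephemeral host (the name below is made up):

asv machine --machine ci-runner --yes
asv run --machine ci-runner HEAD^!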

@deepakdinesh1123 any views on making this more repeatable?

@carltongibson any news from the ops team?
