Wagtail Experiments

A/B testing for Wagtail

This module supports the creation of A/B testing experiments within a Wagtail site. Several alternative versions of a page are set up, and on visiting a designated control page, a user is presented with one of those variations, selected at random (using a simplified version of the PlanOut algorithm). The number of visitors receiving each variation is logged, along with the number that subsequently go on to complete the experiment by visiting a designated goal page.

Installation

wagtail-experiments is compatible with Wagtail 5.2 to 6.0, and Django 4.2 to 5.0. It depends on the Wagtail ModelAdmin module, which is available as an external package as of Wagtail 5.0; we recommend using this rather than the bundled wagtail.contrib.modeladmin module to avoid deprecation warnings. The external package is required as of Wagtail 6.0.

To install:

pip install wagtail-experiments wagtail-modeladmin

and ensure that the apps wagtail_modeladmin and experiments are included in your project's INSTALLED_APPS:
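
INSTALLED_APPS = [
    # ...
    'wagtail_modeladmin',
    'experiments',
    # ...
]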

Then migrate:

./manage.py migrate

Usage

After installation, a new 'Experiments' item is added to the Wagtail admin menu under Settings. This is available to superusers and any other users with add/edit permissions on experiments. An experiment is created by specifying a control page and any number of alternative versions of that page, along with an optional goal page. Initially the experiment is in the 'draft' status and does not take effect on the site front-end; to begin the experiment, change the status to 'live'.

When the experiment is live, a user visiting the URL of the control page will be randomly assigned to a test group, to be served either the control page or one of the alternative variations. This assignment persists for the user's session (according to Django's session configuration) so that each user receives the same variation each time. When a user subsequently visits the goal page, they are considered to have completed the experiment and a completion is logged against that user's test group. The completion rate over time for each test group can then be viewed through the admin interface, under 'View report'.

From the report page, an administrator can select a winning variation; the experiment status is then changed to 'completed', and all visitors to the control page are served the chosen variation.

Typically, the alternative versions of the page will be left unpublished, as this prevents them from appearing as duplicate copies of the control page in the site navigation. If an unpublished page is selected as an alternative, the page revision shown to users on the front-end will be the draft revision that existed at the moment the experiment status was set to 'live'. When displaying an alternative variation, the title and tree location are overridden to appear as the control page's title and location; this means that the title of the alternative page can be set to something descriptive, such as "Signup page (blue text)", without this text 'leaking' to site visitors.

Direct URLs for goal completion

If you want goal completion to be linked to some action other than visiting a designated Wagtail page - for example, clicking a 'follow us on Twitter' link - you can set up a JavaScript action that sends a request to a URL such as /experiments/complete/twitter-follow/, where twitter-follow is the experiment slug. To set up this URL route, add the following to your URLconf:
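
A minimal sketch of the URLconf entry (the import path and view name below are assumptions based on the backend function names; check the views exposed by your installed version of the experiments app):

from django.urls import path

from experiments import views as experiment_views  # assumed import path

urlpatterns = [
    # ... your other URL patterns ...
    # Assumed view: record_completion, taking the experiment slug from the URL.
    path('experiments/complete/<slug:experiment_slug>/',
         experiment_views.record_completion),
]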

Alternative backends

wagtail-experiments supports pluggable backends for tracking participants and completions. The default backend, experiments.backends.db, records these in a database table, aggregated by day. Alternative backends can be specified through the WAGTAIL_EXPERIMENTS_BACKEND setting:
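
For example, in your Django settings (the module path here is a placeholder for your own backend module):

WAGTAIL_EXPERIMENTS_BACKEND = 'myproject.experiment_backends.custom'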

A backend is a Python module that provides the following functions:

record_participant(experiment, user_id, variation, request):

Called when a user visits the control page for experiment. user_id is the persistent user ID assigned to that visitor; variation is the Page object for the variation to be served; and request is the user's current request.

record_completion(experiment, user_id, variation, request):

Called when a visitor completes the experiment, either by visiting the goal page or by hitting a direct goal-completion URL as described above. user_id is the persistent user ID assigned to that visitor; variation is the Page object for the variation that was originally served to that user; and request is the user's current request.

get_report(experiment):

Returns report data for experiment, consisting of a dict containing:

variations

A list of records, one for each variation (including the control page). Each record is a dict containing:

variation_pk

The primary key of the Page object

is_control

A boolean indicating whether this is the control page

is_winner

A boolean indicating whether this variation has been chosen as the winner

total_participant_count

The number of visitors who have been assigned this variation

total_completion_count

The number of visitors assigned this variation who have gone on to complete the experiment

history

A list of dicts showing the breakdown of participants and completions over time; each dict contains date, participant_count and completion_count.
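
Putting this together, a skeleton custom backend might look like the following (a minimal sketch: the module path is hypothetical, the function signatures follow the interface above, and the counts are placeholders for whatever storage you use):

# myproject/experiment_backends/custom.py

def record_participant(experiment, user_id, variation, request):
    # Record that user_id was assigned `variation` for this experiment,
    # e.g. increment a counter in Redis or emit an analytics event.
    pass


def record_completion(experiment, user_id, variation, request):
    # Record that user_id (originally served `variation`) completed
    # the experiment.
    pass


def get_report(experiment):
    # Return the report structure described above; a real backend would
    # aggregate its stored events here.
    return {
        'variations': [
            {
                'variation_pk': experiment.control_page.pk,
                'is_control': True,
                'is_winner': False,
                'total_participant_count': 0,
                'total_completion_count': 0,
                'history': [
                    # {'date': ..., 'participant_count': ...,
                    #  'completion_count': ...}
                ],
            },
            # ... one record per alternative variation ...
        ],
    }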

Test data

wagtail-experiments provides a management command, experiment-data, for populating an experiment with dummy data for testing or demonstration purposes, and for purging existing data. The command takes the experiment's slug:

# Populate the experiment 'homepage-banner' with 5 days of test data,
# with 100-200 views per variation. All parameters other than experiment slug
# are optional
./manage.py experiment-data homepage-banner --days 5 --min=100 --max=200

# Purge data for the experiment 'homepage-banner'
./manage.py experiment-data homepage-banner --purge

wagtail-experiments's People

Contributors

gasman, m1kola, tm-kn, tomdyson, topdevpros

wagtail-experiments's Issues

Management command to purge and create fake history data

Useful for testing and demos. Here's a rough script which could be converted into a management command:

import os
from random import randrange
from datetime import datetime, timedelta
import logging
logging.basicConfig()

def fake_experiment_data(slug, days=10, min_views=100, max_views=150, purge=False):
    from experiments.models import Experiment, ExperimentHistory
    from django.db.models import F
    experiment = Experiment.objects.get(slug=slug)
    variations = experiment.get_variations()
    control = experiment.control_page

    if purge:
        print("purging all history for %s" % experiment)
        ExperimentHistory.objects.filter(experiment=experiment).delete()
        return

    print("creating fake history data for %s" % experiment)
    for variation in variations:
        for day in range(days):
            date = (datetime.now() - timedelta(days=day)).date()
            history, _ = ExperimentHistory.objects.get_or_create(
                experiment=experiment, variation=variation, date=date)
            for x in range(1, randrange(min_views, max_views)):
                # increment the participant_count
                ExperimentHistory.objects.filter(pk=history.pk).update(
                    participant_count=F('participant_count') + 1)
                # make the control page less likely to complete:
                # 1-in-4 odds for the control, 1-in-3 for the alternatives
                completion_odds = 4 if variation == control else 3
                if randrange(0, completion_odds) == 1:
                    ExperimentHistory.objects.filter(pk=history.pk).update(
                        completion_count=F('completion_count') + 1)


if __name__ == "__main__":
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my_project.settings')
    import django
    django.setup()
    fake_experiment_data('which-logo', purge=True)
    fake_experiment_data('which-logo')

Plans to support Wagtail v2?

I read that the library works with Wagtail 1.7, but I'm hoping to use this framework once we upgrade our site to Wagtail 2.0. By asking this question, I'm also volunteering to work on this issue with some guidance from more experienced Wagtail / Django / Python devs (JS dev here) :D

Additional documentation?

Are there additional docs anywhere that contain recipes/examples of how to set up both the "redirect to page" type A/B testing, as well as the "triggered by JS" version of A/B testing?

For example, rather than "real" pages, we're looking to do A/B testing on an in-model subroute using the RoutablePageMixin, and it would be useful to know what parts of wagtail-experiments we can tap into in order to decide "which render path inside the subroute" to pick for the current session, so we don't serve the same user different views on consecutive page interactions.

Experiments doesn't recognise a visit to another site set as the goal

I have multiple sites, one of which is an overview website. I use experiments to A/B test whether people use the menu of the overview website to reach the other site (defined in Sites and running well) or the link on the homepage of the overview website. It recognises the control and alternative pages (shows one of them per session), but it doesn't count the visit to the other site's page after clicking either the link or the menu item. Is this a bug, or something that's not possible yet?

Support Wagtail 5 and Django 4.2

The current version does not support the current versions of Wagtail or Django.

We have forked the current version of wagtail-experiments and resolved this issue, along with issues #24 and #27.

TemplateSyntaxError

I get a TemplateSyntaxError at /admin/experiments/experiment/report/1/:
'staticfiles' is not a registered tag library. Must be one of...

I checked the HTML file and saw {% load staticfiles %}, but in recent Django versions this should be {% load static %} (the staticfiles tag library was removed in Django 3.0); django.contrib.staticfiles is installed.

Not compatible with Django 3

Because of this import, which was removed in Django 3.0, experiments is currently not compatible with Django 3, even though Wagtail itself is:

from django.utils.encoding import python_2_unicode_compatible

Support for Redis backend?

This package is really great, thank you Torchbox 😃.
Will there be support for Redis? I think it could be useful.

Support experiments for anonymous users

It looks like wagtail-experiments relies on the user ID to pick alternatives and mark goal completions, while the session is used to track whether a user has entered or completed an experiment previously.

In many cases it's important to be able to run experiments on anonymous users. Would it be possible to add a token to the session, rather than relying on the user ID to select alternatives?

Request discussion of proposed UX changes

A good A/B test starts by setting the goals and criteria, so you know when you've got an actionable result. Currently wagtail-experiments does not provide a way for the user to set parameters that are important for reliability.

The following changes should significantly enhance the value of wagtail-experiments to both high and low traffic sites.

Possible new settings in UX:

  • The single biggest error in A/B testing appears to be deciding based on too small a sample. There's a lot of controversy over how small is too small. The consensus among pollsters is that 1000 responses are enough to represent 300+ million people, so at least we have an upper limit. It's also a good idea to limit the time frame, to reduce the influence of changing conditions.

    Minimum sample to recommend action: [ ]
    Maximum time frame: [ ]

  • Goals reached after many intermediate pages usually aren't very relevant to the experiment.

    [x] Goal must be reached directly from an experiment page
    [ ] Intermediate pages before goal are accepted (not recommended, but may be current behavior)

  • Sometimes you're testing different titles. Sometimes you're testing body changes with the same title. Both are important. If there are many alternative pages, we can make it easy to use the control page title.

    [x] Use control page title

Possible ways to reduce cluttering the UX:

  • Hard code good defaults. But if users can't change them it will invite controversy about our defaults.

  • Allow users to add preferences via settings.py. Marketers are the primary target market for wagtail-experiments, so this isn't the best option.

  • Only show detailed settings if requested. The request could be on a Settings page.

  • Put detailed settings on a Settings page. All experiments would share the same settings.

Inline objects are always taken from the control page

If an experiment is set up with differences in inline child objects, the variant pages will fail to display them, and always show the children of the control page. This is because we fake the page ID of the variations to match the control page before the child objects have been fetched - as a result, the query to retrieve them will refer to the control page's ID.

Floats are being formatted in a language-aware manner

The following line of code (in report.html) does not format floats correctly in some languages:
'{{ history_entry.conversion_rate|floatformat:2|escapejs }}'{% if not forloop.last %},{% endif %}

For example, in my Dutch-language Django site this formats 12.3456 as 12,35. This breaks the report graph.

I think the solution is to change that line of code to:
'{{ history_entry.conversion_rate|stringformat:".2f"|escapejs }}'{% if not forloop.last %},{% endif %}
