
Mozilla Experimenter


Experimenter is a platform for managing experiments in Mozilla Firefox.

Important Links

Check out the 🌩 Nimbus Documentation Hub or go to the repository that houses those docs.

Link            | Prod                                 | Staging                                       | Local Dev (Default)
Legacy Home     | experimenter.services.mozilla.com    | stage.experimenter.nonprod.dataops.mozgcp.net | https://localhost
Nimbus Home     | /nimbus                              | /nimbus                                       | /nimbus
Nimbus REST API | /api/v6/experiments/                 | /api/v6/experiments/                          | /api/v6/experiments/
GQL Playground  | /api/v5/nimbus-api-graphql           | /api/v5/nimbus-api-graphql                    | /api/v5/nimbus-api-graphql
Remote Settings | remote-settings.mozilla.org/v1/admin | remote-settings.allizom.org/v1/admin          | http://localhost:8888/v1/admin

Installation

General Setup

  1. Prerequisites

    On all platforms:

    On Linux:

    On MacOS:

    • Install Docker
      • Adjust resource settings
        • CPU: Max number of cores
        • Memory: 50% of system memory
        • Swap: Max 4 GB
        • Disk: 100 GB+
    • Install yarn

    On Windows:

    • Install WSL on Windows
      • Install from the Microsoft Store. Or

      • Install from within PowerShell:

          Open PowerShell as administrator.
          Run `wsl --install` to install WSL.
          Run `wsl --list --online` to see the list of available distributions.
          Run `wsl --install -d <distroname>` to install a particular distribution, e.g. `wsl --install -d Ubuntu-22.04`.
        
      • After installation, press Windows Key and search for Ubuntu. Open it and set up username and password.

    • Download and Install Docker
      • Restart System after Installation.
      • Open Docker and go to settings.
      • Go to Settings -> Resources -> WSL Integration and enable integration with your Ubuntu distribution.
      • Click Apply & Restart to save your changes.
    • Install Make and Git
      • Open the ubuntu terminal
      • Install make by running sudo apt-get update && sudo apt install make in the Ubuntu terminal. This is required for make secretkey and the other make commands.
      • Ensure git is available by running git --version. If it's not recognized, install it with sudo apt install git.
  2. Clone the repo

    git clone <your fork>
    
  3. Copy the sample env file

    cp .env.sample .env
    
  4. Set DEBUG=True for local development

    vi .env
    
  5. Create a new secret key and put it in .env

    make secretkey
    

    vi .env

    ...
    SECRETKEY=mynewsecretkey
    ...
    
  6. Run tests

    make check
    
  7. Setup the database

    make refresh
    

Fully Dockerized Setup (continuation from General Setup 1-7)

  1. Run a dev instance

    make up
    
  2. Navigate to it and add an SSL exception to your browser

    https://localhost/
    

Semi Dockerized Setup (continuation from General Setup 1-7)

One might choose the semi dockerized approach for:

  1. faster startup/teardown time (not having to rebuild/start/stop containers)
  2. better IDE integration

Notes:

Semi Dockerized Setup Steps
  1. Prereqs. macOS instructions:

    brew install postgresql llvm openssl yarn
    
    echo 'export PATH="/usr/local/opt/llvm/bin:$PATH"' >> ~/.bash_profile
    export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/opt/openssl/lib/
    

    Ubuntu 20.04 instructions:

    # general deps (also see `poetry` link above)
    sudo apt install postgresql llvm openssl yarn
    
    # add'l deps* for poetry / python setup
    sudo apt install libpq5=12.9-0ubuntu0.20.04.1
    sudo apt install libpq-dev
    

    *Notes

    • the specific libpq5 version shown here is required for libpq-dev at time of writing
    • poetry install (next step) requires Python 3.9, but there are multiple options for resolving this; see here
  2. Install dependencies

    source .env
    
    cd experimenter
    poetry install # see note above
    
    yarn install
    
  3. env values

    .env (set at root):
    DEBUG=True
    DB_HOST=localhost
    HOSTNAME=localhost
    
  4. Start postgresql, redis, autograph, kinto

    make up_db (from project root)
    
  5. Django app

    # in experimenter
    
    poetry shell
    
    yarn workspace @experimenter/nimbus-ui build
    yarn workspace @experimenter/core build
    
    # run in separate shells (`poetry shell` in each)
    yarn workspace @experimenter/nimbus-ui start
    ./manage.py runserver 0.0.0.0:7001
    

Pro-tip: we have had at least one large code refactor. You can ignore specific large commits when blaming by setting the Git config's ignoreRevsFile to .git-blame-ignore-revs:

git config blame.ignoreRevsFile .git-blame-ignore-revs

VSCode setup

  1. If using VSCode, configure workspace folders

    • Add /experimenter/ and /experimenter/experimenter folders to your workspace (File -> Add Folder to Workspace -> path/to/experimenter/experimenter)

    • From the /experimenter/experimenter folder, run yarn install

      • Make sure you are using the correct version of node

        node -v

      • Troubleshooting:

Google Credentials for Jetstream

On certain pages an API endpoint is called to receive experiment analysis data from Jetstream to display visualization tables. To see experiment visualization data, you must provide GCP credentials.

  1. Prerequisites

    • Install GCP CLI
      • Follow the instructions here
      • Project: moz-fx-data-experiments
    • Verify/request project permissions
      • Check if you already have access to the storage bucket here
      • If needed, ask in #nimbus-dev for a project admin to grant storage.objects.list permissions on the moz-fx-data-experiments project
  2. Authorize CLI with your account

    • make auth_gcloud
      • this will save your credentials locally to a well-known location for use by any library that requests ADC
      • Note: if this returns Error saving Application Default Credentials: Unable to write file [...]: [Errno 21] Is a directory: ..., delete the directory and try again (rm -rf ~/.config/gcloud)
  3. The next time you rebuild the docker-compose environment, your credentials will be loaded as a volume

    • Note that this will require the existing volume to be removed (hint: run make refresh)
  4. (optional) Verify access

    • make refresh
    • make bash
    • ./manage.py shell
      • from django.core.files.storage import default_storage
        default_storage.listdir('/')
        
      • Confirm this second command prints a list instead of an error

Google Cloud Bucket for Media Storage

We support user uploads of media (e.g. screenshots) for some features.

In local development, the default is to store these files in /experimenter/media using Django's FileSystemStorage class and the MEDIA_ROOT and MEDIA_URL settings.

In production, a GCP bucket and credentials are required.

The bucket name is configured with the UPLOADS_GS_BUCKET_NAME setting. For example:

UPLOADS_GS_BUCKET_NAME=nimbus-experimenter-media-dev-uploads

For local testing of a production-like environment, the credentials should be configured as described in the previous section on Google Credentials for Jetstream.

In the real production deployment, credentials are configured via workload identity in Google Kubernetes Engine.
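
To picture how the two storage modes fit together, here is a minimal sketch of the kind of Django settings logic involved. This is an illustration only, not the actual Experimenter settings module, and it assumes the django-storages Google Cloud Storage backend is used whenever a bucket name is configured:

# settings.py -- hedged sketch, not the real Experimenter settings
import os

MEDIA_ROOT = "/experimenter/media"
MEDIA_URL = "/media/"

UPLOADS_GS_BUCKET_NAME = os.environ.get("UPLOADS_GS_BUCKET_NAME", "")
if UPLOADS_GS_BUCKET_NAME:
    # Switch uploads to the GCS bucket; credentials come from Application
    # Default Credentials locally or from workload identity in production.
    DEFAULT_FILE_STORAGE = "storages.backends.gcloud.GoogleCloudStorage"
    GS_BUCKET_NAME = UPLOADS_GS_BUCKET_NAME
# Otherwise Django's default FileSystemStorage serves files from MEDIA_ROOT.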

Usage

Experimenter uses docker for all development, testing, and deployment.

Building

make build

Build the application container by executing the build script

make compose_build

Build the supporting services (nginx, postgresql) defined in the compose file

make ssl

Create dummy SSL certs to use the dev server over a locally secure connection. This helps test client behaviour with a secure connection. This task is run automatically when needed.

make kill

Stop and delete all docker containers. WARNING: this will remove your database and all data. Use this to reset your dev environment.

make migrate

Apply all django migrations to the database. This must be run after removing database volumes before starting a dev instance.

make load_dummy_experiments

Populates the database with dummy experiments of all types/statuses using the test factories

make refresh

Run kill, migrate, load_locales_countries, and load_dummy_experiments. Useful for resetting your dev environment when switching branches or after package updates.

Running a dev instance

Enabling Cirrus

Cirrus is required to run and test web application experiments locally. It is disabled by default. To enable Cirrus run:

export CIRRUS=1

This will be done automatically for any Cirrus related make commands.

make up

Start a dev server listening on port 80 using the Django runserver. It is useful to run make refresh first to ensure your database is up to date with the latest migrations and test data.

make up_db

Start postgresql, redis, autograph, kinto on their respective ports to allow running the Django runserver and yarn watchers locally (non containerized)

make up_django

Start Django runserver, Celery worker, postgresql, redis, autograph, kinto on their respective ports to allow running the yarn watchers locally (non containerized)

make up_detached

Start all containers in the background (not attached to shell). They can be stopped using make kill.

make update_kinto

Pull in the latest Kinto Docker image. Kinto is not automatically updated when new versions are available, so this command can be used occasionally to stay in sync.

Running tests and checks

make check

Run all test and lint suites; this is run in CI on all PRs and deploys.

Helpful UI Testing Tips

If you have a test failing to find an element (or finding too many, etc.) and the DOM is being cut off in the console output, you can increase how much is printed by locally raising the DEBUG_PRINT_LIMIT=7000 value in the Makefile (on the line starting with JS_TEST_NIMBUS_UI).

make py_test

Run only the python test suite.

make bash

Start a bash shell inside the container. This lets you interact with the containerized filesystem and run Django management commands.

Helpful Python Tips

You can run the entire python test suite without coverage using the Django test runner:

./manage.py test

For faster performance you can run all tests in parallel:

./manage.py test --parallel

You can run only the tests in a certain module by specifying its Python import path:

./manage.py test experimenter.experiments.tests.api.v5.test_serializers

For more details on running Django tests refer to the Django test documentation

To debug a test, you can use ipdb by placing this snippet anywhere in your code, such as within a test method or inside some application logic:

import ipdb
ipdb.set_trace()

Then invoke the test using its full path:

./manage.py test experimenter.some_module.tests.some_test_file.SomeTestClass.test_some_thing

And you will enter an interactive IPython shell at the point where you placed the ipdb snippet, allowing you to introspect variables and call methods.

For coverage you can use pytest, which will run all the python tests and track their coverage, but it is slower than using the Django test runner:

pytest --cov --cov-report term-missing

You can also enter a Python shell to import and interact with code directly, for example:

./manage.py shell

And then you can import and execute arbitrary code:

from experimenter.experiments.models import NimbusExperiment
from experimenter.experiments.tests.factories import NimbusExperimentFactory
from experimenter.kinto.tasks import nimbus_push_experiment_to_kinto

experiment = NimbusExperimentFactory.create_with_status(NimbusExperiment.Status.DRAFT, name="Look at me, I'm Mr Experiment")
nimbus_push_experiment_to_kinto(experiment.id)
Helpful Yarn Tips

You can also interact with the yarn commands, such as checking TypeScript for Nimbus UI:

yarn workspace @experimenter/nimbus-ui lint:tsc

Or the test suite for Nimbus UI:

yarn workspace @experimenter/nimbus-ui test:cov

For a full reference of all the common commands that can be run inside the container, refer to this section of the Makefile

make integration_test_legacy

Run the integration test suite for experimenter inside a containerized instance of Firefox. You must also be already running a make up dev instance in another shell to run the integration tests.

make FIREFOX_VERSION integration_test_nimbus

Run the integration test suite for nimbus inside a containerized instance of Firefox. You must also be already running a make up dev instance in another shell to run the integration tests.

FIREFOX_VERSION should either be nimbus-firefox-release or nimbus-firefox-beta. If you want to run your tests against nightly, please set the variable UPDATE_FIREFOX_VERSION to true and include it in the make command.

make FIREFOX_VERSION integration_test_nimbus_rust

Run the Nimbus SDK integration tests, which tests the advanced targeting configurations against the Nimbus SDK.

FIREFOX_VERSION should either be nimbus-firefox-release or nimbus-firefox-beta. If you want to run your tests against nightly, please set the variable UPDATE_FIREFOX_VERSION to true and include it in the make command.

make FIREFOX_VERSION integration_vnc_up

First start a prod instance of Experimenter with:

make refresh && make up_prod_detached

Then start the VNC service:

make FIREFOX_VERSION integration_vnc_up

Then open your VNC client (Safari can do this on macOS, or just use VNC Viewer) and open vnc://localhost:5900 with the password secret. Right click on the desktop and select Applications > Shell > Bash and enter:

cd experimenter
sudo apt-get update
sudo apt install tox
chmod a+rwx tests/integration/.tox
tox -c tests/integration/ -e integration-test-nimbus

This should run the integration tests and watch them run in a Firefox instance you can watch and interact with.

To use noVNC, navigate to http://localhost:7902 with the password secret. Then you can follow the same steps as above.

Running Integration tests locally

  1. Install geckodriver and have it available in your path. You should be able to run geckodriver --version from your command line. On macOS you can do brew install geckodriver if you have homebrew installed.
  2. Add your Firefox install to your path. You should be able to run firefox --version from your command line.

    alias firefox="path-to/firefox"

    Example for macOS, add this to your ~/.zshrc file:

    alias firefox="/Applications/Firefox.app/Contents/MacOS/firefox"

  3. Set up experimenter:

    make refresh build_integration_test SKIP_DUMMY=1 up_prod_detached

    Navigate with your browser to https://localhost/nimbus to confirm everything is working.

  4. Install tox:

    pip install tox

  5. Run the integration tests using tox:

    tox -c experimenter/tests/integration -e integration-test-nimbus-local

    • To run a specific test:

      tox -c experimenter/tests/integration -e integration-test-nimbus-local -- -k "test_name_here[WITH_CLIENT]"

Firefox should pop up and start running through your test! You can change the firefox version the tests run on by copying the path of the firefox-bin and adding it to the firefox_options fixture in the tests/integration/nimbus/conftest.py file:

firefox_options.binary = "path/to/firefox-bin"
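
For illustration, an override along these lines is one way to point the tests at a specific binary. This is a hypothetical sketch (it assumes the firefox_options fixture comes from pytest-selenium and can be overridden in conftest.py); adapt it to the actual fixture in the repository:

import pytest

@pytest.fixture
def firefox_options(firefox_options):
    # Example path only; use the firefox-bin you want the tests to run against.
    firefox_options.binary = "/Applications/Firefox.app/Contents/MacOS/firefox-bin"
    return firefox_options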

Integration Test options

  • TOX_ARGS: command-line arguments passed to Tox.
  • PYTEST_ARGS: command-line arguments passed to pytest.

An example using PYTEST_ARGS to run one test.

make integration_test_legacy PYTEST_ARGS="-k test_addon_rollout_experiment_e2e"

Note: You need the following firefox version flag when running integration tests

FIREFOX_VERSION=nimbus-firefox-release

An example for above:

make FIREFOX_VERSION=nimbus-firefox-release integration_test_nimbus PYTEST_ARGS="-k test_rollout_create_and_update"

make integration_sdk_shell

This builds and sets up the mobile sdk for use in testing.

Testing Tools

Targeting test tool

Navigate to experimenter/tests/tools

To test a targeting expression, first add an app context named app_context.json to the experimenter/tests/tools directory.

You can then invoke the script with the --targeting-string flag:

python sdk_eval_check.py --targeting-string "(app_version|versionCompare('106.*') <= 0) && (is_already_enrolled)"

The script should return the results, either True, False, or an error.

Note that you can change the app_context live, and run the script again after.

Accessing Remote Settings locally

In development you may wish to approve or reject changes to experiments as if they were on Remote Settings. You can do so here: http://localhost:8888/v1/admin/

There are three accounts you can log into Kinto with depending on what you want to do:

  • admin / admin - This account has permission to view and edit all of the collections.
  • experimenter / experimenter - This account is used by Experimenter to push its changes to Remote Settings and mark them for review.
  • review / review - This account should generally be used by developers testing the workflow; it can be used to approve/reject changes pushed from Experimenter.

The admin and review credentials are hard-coded here, and the experimenter credentials can be found or updated in your .env file under KINTO_USER and KINTO_PASS.
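
For reference, with the default local accounts listed above, the relevant .env entries would typically look like this (check your own .env; these are only the local defaults):

KINTO_USER=experimenter
KINTO_PASS=experimenter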

Any change in remote settings requires two accounts:

  • One to make changes and request a review
  • One to review and approve/reject those changes

Any of the accounts above can be used for either of those two roles, but your local Experimenter is configured to make its changes through the experimenter account, so that account can't also be used to approve/reject those changes; hence the existence of the review account.

For more detailed information on the Remote Settings integration please see the Kinto module documentation.

Frontend

Experimenter has two front-end UIs:

  • core is the legacy UI used for Experimenter intake which will remain until nimbus-ui supersedes it
  • nimbus-ui is the Nimbus Console UI for Experimenter that is actively being developed

Learn more about the organization of these UIs here.

Also see the nimbus-ui README for relevant Nimbus documentation.

API

API documentation can be found here

Contributing

Please see our Contributing Guidelines

License

Experimenter uses the Mozilla Public License

experimenter's People

Contributors

allegrofox, anthonytran7757, avi-88, bbangert, brennie, cherish2003, dataops-ci-bot, dependabot-preview[bot], dependabot[bot], eliserichards, halemu, jaredlockhart, jbuck, jeddai, jodyheavener, jrbenny35, jwayn, k88hudson, leplatrem, lmorchard, lzoog, mardak, mikewilli, peterbe, piatra, rehandalal, tiftran, uhlissuh, yashika-khurana, yashikakhurana


experimenter's Issues

Add Experiment app

  • Experiment model
  • ExperimentVariant model
  • Experiment list API view/tests
  • admin interface

Add created_date

Add a field called created_date that is auto populated when an Experiment is first created, and make it read only in the admin.
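
A minimal sketch of one way to do this with Django (hypothetical; only created_date is taken from the issue, the rest is illustrative):

from django.contrib import admin
from django.db import models


class Experiment(models.Model):
    # Automatically set when the row is first created.
    created_date = models.DateTimeField(auto_now_add=True)


class ExperimentAdmin(admin.ModelAdmin):
    # Shown in the admin but not editable.
    readonly_fields = ("created_date",)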

Add Projects app

Because experimenter will support multiple projects, we should create a Projects app which stores a Project model. Make every experiment project specific.
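
A hypothetical sketch of what that could look like (model and field names are illustrative only):

from django.db import models


class Project(models.Model):
    name = models.CharField(max_length=255, unique=True)


class Experiment(models.Model):
    # Every experiment belongs to exactly one project.
    project = models.ForeignKey(Project, on_delete=models.CASCADE, related_name="experiments")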

make test is timing out

jkerim@Jared-Kerims-MacBook-Pro~/Documents/experimenter(44)$ make test
./scripts/build.sh
fatal: No names found, cannot describe anything.
Sending build context to Docker daemon 532.5 kB
Step 1/17 : FROM python:3.6-alpine
---> 02dcd3de5240
Step 2/17 : RUN addgroup -S app && adduser -S -g app app
---> Using cache
---> 74cd90265495
Step 3/17 : ENV PYTHONDONTWRITEBYTECODE 1
---> Using cache
---> 02f809f7ae09
Step 4/17 : WORKDIR /app
---> Using cache
---> 122a1417b629
Step 5/17 : EXPOSE 7001
---> Using cache
---> d162f6fce538
Step 6/17 : COPY bin/wait-for-it.sh /app/bin/wait-for-it.sh
---> Using cache
---> 22e2e2fc6b89
Step 7/17 : RUN chmod a+x /app/bin/wait-for-it.sh
---> Using cache
---> 92f5602bd851
Step 8/17 : COPY bin/dumb-init /usr/local/bin/dumb-init
---> Using cache
---> 2f14ca56d59e
Step 9/17 : RUN chmod a+x /usr/local/bin/dumb-init
---> Using cache
---> e72dac1afe53
Step 10/17 : ENTRYPOINT /usr/local/bin/dumb-init --
---> Using cache
---> 2a0c21f24dc2
Step 11/17 : RUN apk add --no-cache gcc musl-dev postgresql-dev postgresql-client bash
---> Using cache
---> c879b166a4d4
Step 12/17 : COPY ./requirements.txt /app/requirements.txt
---> Using cache
---> ca6e64fa0c30
Step 13/17 : RUN pip install -r requirements.txt --no-cache-dir --disable-pip-version-check
---> Using cache
---> 0726114769b2
Step 14/17 : COPY . /app
---> Using cache
---> 8791f8513e19
Step 15/17 : RUN chown -R app /app
---> Using cache
---> bef628df4eff
Step 16/17 : RUN chgrp -R app /app
---> Using cache
---> 1060f1908f93
Step 17/17 : USER app
---> Using cache
---> 653247602b9b
Successfully built 653247602b9b
docker-compose build
db uses an image, skipping
app uses an image, skipping
Building nginx
Step 1/5 : FROM nginx:1.9.9
---> fb7fe53bd952
Step 2/5 : RUN rm /etc/nginx/nginx.conf
---> Using cache
---> f7c7071cac10
Step 3/5 : COPY nginx.conf /etc/nginx/
---> Using cache
---> 7daeac3493ba
Step 4/5 : RUN rm /etc/nginx/conf.d/default.conf
---> Using cache
---> 71b57afec1c5
Step 5/5 : COPY nginx-site.conf /etc/nginx/conf.d/
---> Using cache
---> bb846dd046db
Successfully built bb846dd046db
docker-compose run app sh -c "/app/bin/wait-for-it.sh db:5432;coverage run manage.py test;coverage report -m --fail-under=100"
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
make: *** [test] Error 1

jkerim@Jared-Kerims-MacBook-Pro~/Documents/experimenter(44)$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3347c8ec0a59 experimenter_nginx "nginx -g 'daemon ..." 22 hours ago Up 22 hours 0.0.0.0:80->80/tcp, 443/tcp experimenter_nginx_1
445cbaba388f 3f9347fa6f96 "/usr/local/bin/du..." 22 hours ago Up 22 hours experimenter_app_1
710bce8bf1c5 postgres:9.6 "docker-entrypoint..." 22 hours ago Up 22 hours experimenter_db_1

This reminds me of what we did for the Fennec Switchboard experiment.

Hello,

Did you consider storing your experiment in Kinto?

We did it for Fennec experiments and you could benefit from really nice features:

  • Content-Signature of your experiments
  • An administrative panel to store, manage, and review your experiments (with preview for QA and email notifications).
  • An already deployed stack in both stage and production at Mozilla to get started really fast (as well as permissions and editors/reviewers group management, VPN access, and LDAP authentication).
  • A client already shipped in Firefox which handles Backoff and Retry-After headers as well as synchronization between Mozilla and Firefox clients.

Can we plan a meeting to show you how it might work for your project?

Collapse Dockerfiles into one

Having separate Dockerfiles for build and dev is forcing Docker to rebuild the dev container on every run, which slows down testing and development, so I'm going to collapse them back into one Dockerfile until I find a better solution.

Update to latest Faker

======================================================================
ERROR: experimenter.experiments.tests.test_admin (unittest.loader._FailedTest)

ImportError: Failed to import test module: experimenter.experiments.tests.test_admin
Traceback (most recent call last):
File "/usr/local/lib/python3.5/unittest/loader.py", line 428, in _find_test_path
module = self._get_module_from_name(name)
File "/usr/local/lib/python3.5/unittest/loader.py", line 369, in _get_module_from_name
__import__(name)
File "/app/experimenter/experiments/tests/test_admin.py", line 6, in <module>
from experimenter.experiments.tests.factories import (
File "/app/experimenter/experiments/tests/factories.py", line 3, in <module>
import factory
File "/usr/local/lib/python3.5/site-packages/factory/__init__.py", line 46, in <module>
from .faker import Faker
File "/usr/local/lib/python3.5/site-packages/factory/faker.py", line 41, in <module>
import faker
File "/usr/local/lib/python3.5/site-packages/faker/__init__.py", line 7, in <module>
raise ImportError(error)
ImportError: The fake-factory package is now called Faker.

Please update your requirements.

Add comments

We should add some kind of a comment system like disqus or discourse to experiment status pages so people can discuss results.

Add ops monitoring endpoints

We need the following endpoints:

  1. /__version__ Git version information
  2. /__heartbeat__ A health status which includes a healthy connection to the db
  3. /__lbheartbeat__ A health status which does not include external service connection checks
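
A minimal Django sketch of what such endpoints might look like (hypothetical views, not the actual implementation; Mozilla deployments typically follow the Dockerflow conventions):

from django.db import connection
from django.http import JsonResponse


def version(request):
    # In a real deployment this serves the version info baked into the image at build time.
    return JsonResponse({"commit": "unknown", "version": "unknown"})


def heartbeat(request):
    # Includes a database connectivity check.
    with connection.cursor() as cursor:
        cursor.execute("SELECT 1")
    return JsonResponse({"status": "ok"})


def lbheartbeat(request):
    # No external service checks; only confirms the app is serving requests.
    return JsonResponse({"status": "ok"})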

Enforce 1 variant and 1 control

Right now we only support having a single control and a single variant. Let's enforce that at the db level for now because other parts of the code will assume that is the case.

Improve Admin theme

Maybe there's a simple thing we can drop in there that makes it look less horrible.

Add an experiment state drop down

Right now, we have an 'active' flag for experiments; however, we should change this to a drop down with the following states:

  1. Not Started
  2. Active
  3. Complete

Then we can auto-populate the start_date and end_date when the state is manually changed, and make them read only.

You can create this drop down by

  1. Adding a field called state and making it a PositiveIntegerField
  2. Define the possible values and their labels using the choices field https://docs.djangoproject.com/en/1.11/ref/models/fields/#choices

Fixes for deployment

Just a running list I need to put somewhere

  • Need endpoints for __version__, __heartbeat__, and __lbheartbeat__
  • Need to add user within container
  • Switch to mozlog output
  • Need to have option to disable DEBUG
  • Invalid HTTP_HOST header: 'localhost'. You may need to add 'localhost' to ALLOWED_HOSTS.
    • Add option for USE_X_FORWARDED_HOST
  • Maybe add SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
  • Add S3 bucket to resources
  • Add lambda job to resources to sync S3 bucket
  • Add VPN

Update README

Add some info to the readme, like:

  • overview
  • installation/setup
  • usage commands

Change threshold to a DecimalField

We need to be able to specify cohorts with more precision than the PositiveIntegerField allows us, let's use a DecimalField with 4 places of precision: 0.0001
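
For example, the field declaration could look roughly like this (a sketch; the exact max_digits depends on the allowed range of values):

threshold = models.DecimalField(max_digits=7, decimal_places=4)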

Switch back to debian from alpine

Alpine seems really slick but it's causing segfaults:

bash-4.3$ ./manage.py runserver 0:7001
Performing system checks...

Segmentation fault

Python should never segfault, so bye bye alpine.
