schireson / pytest-alembic

Pytest plugin to test alembic migrations (with default tests) and which enables you to write tests specific to your migrations.

License: MIT License

Languages: Makefile 0.63%, Python 99.37%
Topics: schiresonip, tidepod, pytest, alembic, sqlalchemy, migrations, hacktoberfest

pytest-alembic's Introduction


See the full documentation here.

Introduction

A pytest plugin to test alembic migrations (with default tests) and which enables you to write tests specific to your migrations.

$ pip install pytest-alembic
$ pytest --test-alembic

...
::pytest_alembic/tests/model_definitions_match_ddl <- . PASSED           [ 25%]
::pytest_alembic/tests/single_head_revision <- . PASSED                  [ 50%]
::pytest_alembic/tests/up_down_consistency <- . PASSED                   [ 75%]
::pytest_alembic/tests/upgrade <- . PASSED                               [100%]

============================== 4 passed in 2.32s ===============================

The pitch

Have you ever merged a change to your models and you forgot to generate a migration?

Have you ever written a migration only to realize that it fails when there’s data in the table?

Have you ever written a perfect migration, only to merge it and later find out that someone else also merged a migration, and now your CD is broken!?

pytest-alembic is meant to (with a little help) solve all these problems and more. Note that, due to a few different factors, some minimal setup may be required; however, most of it is boilerplate akin to the setup required for alembic itself.

Built-in Tests

  • test_single_head_revision

    Assert that there only exists one head revision.

We’re not aware of any realistic scenario in which a diverging history is desirable. We have only seen it as the result of uncaught merge conflicts, which then lazily break during deployment.

  • test_upgrade

    Assert that the revision history can be run through from base to head.

  • test_model_definitions_match_ddl

    Assert that the state of the migrations matches the state of the models describing the DDL.

In general, the set of migrations in the history should coalesce into the DDL described by the current set of models. Therefore, a call to revision --autogenerate should always generate an empty migration (i.e. find no difference between your database, as produced by the migration history, and your models).

  • test_up_down_consistency

    Assert that all downgrades succeed.

    While downgrading may not be a lossless operation data-wise, there’s a theory of database migrations which says that the revisions in existence for a database should be able to go from an entirely blank schema to the finished product, and back again.

  • Experimental tests

    • all_models_register_on_metadata

      Assert that all defined models are imported statically.

      Prevents scenarios in which the minimal import of your models in your env.py does not import all extant models, leading alembic to not autogenerate all your models, or (worse!) suggest the deletion of tables which should still exist.

    • downgrade_leaves_no_trace

      Assert that there is no difference between the state of the database pre/post downgrade.

      In essence this is a much more strict version of test_up_down_consistency, where the state of a MetaData before and after a downgrade are identical as far as alembic (autogenerate) is concerned.

    These tests will need to be enabled manually because their semantics or API are not yet guaranteed to stay the same. See the linked docs for more details!
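For example, one way to opt in is to import them directly; a minimal sketch, assuming the pytest_alembic.tests.experimental module path described in the linked docs (verify against your installed version):

from pytest_alembic.tests.experimental import (  # noqa: F401
    test_all_models_register_on_metadata,
    test_downgrade_leaves_no_trace,
)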

Let us know if you have any ideas for more built-in tests which would be generally useful for most alembic histories!

Custom Tests

For more information, see the docs for custom tests (example below) or custom static data (to be inserted automatically before a given revision).

Sometimes when writing a particularly gnarly data migration, it helps to be able to practice a little timely TDD, since there’s always the potential you’ll trash your actual production data.

With pytest-alembic, you can write tests directly, in the same way that you would normally, through the use of the alembic_runner fixture.

def test_gnarly_migration_xyz123(alembic_engine, alembic_runner):
    # Migrate up to, but not including this new migration
    alembic_runner.migrate_up_before('xyz123')

    # Perform some very specific data setup, because this migration is sooooo complex.
    # ...
    alembic_engine.execute(table.insert().values(id=1, name='foo'))

    alembic_runner.migrate_up_one()

alembic_runner has a number of methods designed to make it convenient to change the state of your database up, down, and all around.
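For illustration, a sketch exercising a few of those methods (each appears elsewhere on this page; treat the exact signatures as indicative):

def test_walk_the_history(alembic_runner):
    # step up one revision at a time...
    alembic_runner.migrate_up_one()
    # ...or jump straight to a named target
    alembic_runner.migrate_up_to("heads")
    # insert static data at the current point in the history
    alembic_runner.insert_into("my_table", {"id": 1, "name": "foo"})
    # and walk back down again
    alembic_runner.migrate_down_to("base")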

Installing

pip install "pytest-alembic"

pytest-alembic's People

Contributors

bluefish6, dancardin, danfimov, dependabot[bot], kiddten, kyleking, luke-mino-altherr, mgedmin, nlocascio, tschm, zipfile


pytest-alembic's Issues

Exclude some migrations by hash

Hello,

Is it possible to skip some migrations during tests?

For example, I have 50 migrations, and 10 of them run a big, long procedure; some take more than 5 minutes. They fill some tables with initial data from a remote source. Is it possible to exclude the "upgrade" and "downgrade" for those revisions?
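One possible direction, sketched under the assumption that the skip_revisions option (its name appears in the plugin's MigrationContext config further down this page; exact semantics per the docs) can be supplied through the alembic_config fixture:

import pytest

@pytest.fixture
def alembic_config():
    # hypothetical revision ids; list the slow revisions to skip
    return {"skip_revisions": ["aaaa1111", "bbbb2222"]}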

Tests using the same database as pytest_alembic are failing on 0.10.7

Hi!

We are currently trying to upgrade from version 0.10.6 to 0.10.7 in our project. Unfortunately we are experiencing some problems, since the pytest_alembic tests seem to produce side effects between tests which do not get rolled back in the new version.

An excerpt of our migration test module:

import pytest
from pytest_alembic import MigrationContext, tests
from sqlalchemy import Engine


@pytest.fixture()
def alembic_engine(our_custom_engine_fixture: Engine) -> Engine:
    return our_custom_engine_fixture


@pytest.fixture()
def alembic_config() -> dict:
    return {
        "at_revision_data": {...}
    }


def test_single_head_revision(alembic_runner: MigrationContext) -> None:
    tests.test_single_head_revision(alembic_runner)


def test_upgrade(alembic_runner: MigrationContext) -> None:
    tests.test_upgrade(alembic_runner)


def test_model_definitions_match_ddl(alembic_runner: MigrationContext) -> None:
    tests.test_model_definitions_match_ddl(alembic_runner)


def test_up_down_consistency(alembic_runner: MigrationContext) -> None:
    tests.test_up_down_consistency(alembic_runner)

We are using our own database engine throughout the entire test session, and the changes get rolled back after each test. Before version 0.10.7, pytest_alembic rolled the database back by itself. Now our own tests are failing because there are unexpected tables in the database. We assume that’s because of the following change in 0.10.7, where connection.connect() was replaced by connection.begin():

v0.10.6...v0.10.7#diff-0108f216ff6e1670b0a351075109c35eb9af378fc93b0442c06a0dd1232607b5L183-R186

We found a workaround for our use case which simply drops the alembic_version table and all our schemas after the migration test module ran:

from typing import Iterator

from sqlalchemy import text

@pytest.fixture(autouse=True, scope="module")
def _teardown_tables(our_custom_engine_fixture: Engine) -> Iterator[None]:
    yield
    with our_custom_engine_fixture.connect() as connection:
        Base.metadata.drop_all(bind=connection)
        connection.execute(text("DROP TABLE IF EXISTS alembic_version"))
        connection.commit()

Question is: Is this the intended behaviour of pytest_alembic?
We are aware that our setup with a single database for the entire pytest session is a bit of a special case.

Best regards and thank you for your work providing this nice lib!

Running test_model_definitions_match_ddl() via imports will not show the context

I run a normal pytest invocation and only import the tests I want:

test_migration.py:

from pytest_alembic.tests import test_model_definitions_match_ddl  # noqa

Makefile:

PYTHON_TEST_DIRS := tests/

.PHONY: test
test:  ## Runs all tests.
	# Use high verbosity to see the diffs
	poetry run python -m pytest -vv $(PYTHON_TEST_DIRS)

The tests work, but I sometimes get (random?) failures, and the output doesn't show what the actual problem is:

>               raise AlembicTestFailure(
                    "The models describing the DDL of your database are out of sync with the set of "
                    "steps described in the revision history. This usually means that someone has "
                    "made manual changes to the database's DDL, or some model has been changed "
                    "without also generating a migration to describe that change.",
                    context=[
                        (
                            "The upgrade which would have been generated would look like",
                            rendered_upgrade,
                        )
                    ],
                )
E               pytest_alembic.plugin.error.AlembicTestFailure: The models describing the DDL of your database are out of sync with the set of steps described in the revision history. This usually means that someone has made manual changes to the database's DDL, or some model has been changed without also generating a migration to describe that change.

Unfortunately, it doesn't show me what is problematic, as the context is not rendered/printed by pytest.

(Funnily, after appending the rendered upgrade to the message via + rendered_upgrade, the test passed on the next run, so something in my setup is definitely producing flaky tests; but the missing context prevents me from seeing what.)
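A possible workaround sketch for surfacing the context when importing the tests directly (the context attribute access is an assumption; import paths appear in the traceback above):

import pytest
from pytest_alembic import tests
from pytest_alembic.plugin.error import AlembicTestFailure

def test_model_definitions_match_ddl(alembic_runner):
    try:
        tests.test_model_definitions_match_ddl(alembic_runner)
    except AlembicTestFailure as e:
        # re-fail with whatever context the exception carries (e.g. the
        # rendered upgrade), since plain pytest does not display it here
        pytest.fail(f"{e}\n{getattr(e, 'context', '')}")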

Multiple database support

Hello,

This may be possible, but I'm not sure how to get this working.

In our project, we manage migrations for 2 databases. Within our alembic folder we have two folders db1 and db2 each with their own env.py.

When attempting to run alembic tests, I'm seeing this failure:

self = <alembic.config.Config object at 0x1067fbed0>, section = 'alembic', name = 'script_location', default = None

    def get_section_option(
        self, section: str, name: str, default: Optional[str] = None
    ) -> Optional[str]:
        """Return an option from the given section of the .ini file."""
        if not self.file_config.has_section(section):
>           raise util.CommandError(
                "No config file %r found, or file has no "
                "'[%s]' section" % (self.config_file_name, section)
            )
E           alembic.util.exc.CommandError: No config file 'alembic.ini' found, or file has no '[alembic]' section

../../../Library/Caches/pypoetry/virtualenvs/service-YZCmB_cs-py3.11/lib/python3.11/site-packages/alembic/config.py:305: CommandError

I assume this is possible, but I'm not sure what configuration change is needed.

I haven't found any descriptions of this in documentation or web search.

Any insights are greatly appreciated.
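A sketch of one direction (unverified against this exact setup): the alembic_config fixture accepts a dict whose "file" key points at an alembic.ini, as used in other issues on this page, so each database's test module could supply its own config:

import pytest

@pytest.fixture
def alembic_config():
    # hypothetical path; point each database's tests at its own ini/env.py
    return {"file": "alembic/db1/alembic.ini"}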

pytest --test-alembic command fails with pytest 8.1.0

When I run pytest --test-alembic or pytest -m 'alembic' on pytest==8.1.0, I get the following error.

INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "/usr/local/lib/python3.11/site-packages/_pytest/main.py", line 282, in wrap_session
INTERNALERROR>     config.hook.pytest_sessionstart(session=session)
INTERNALERROR>   File "/usr/local/lib/python3.11/site-packages/pluggy/_hooks.py", line 501, in __call__
INTERNALERROR>     return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
INTERNALERROR>            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR>   File "/usr/local/lib/python3.11/site-packages/pluggy/_manager.py", line 119, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR>            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR>   File "/usr/local/lib/python3.11/site-packages/pluggy/_callers.py", line 138, in _multicall
INTERNALERROR>     raise exception.with_traceback(exception.__traceback__)
INTERNALERROR>   File "/usr/local/lib/python3.11/site-packages/pluggy/_callers.py", line 121, in _multicall
INTERNALERROR>     teardown.throw(exception)  # type: ignore[union-attr]
INTERNALERROR>     ^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR>   File "/usr/local/lib/python3.11/site-packages/_pytest/logging.py", line 785, in pytest_sessionstart
INTERNALERROR>     return (yield)
INTERNALERROR>             ^^^^^
INTERNALERROR>   File "/usr/local/lib/python3.11/site-packages/pluggy/_callers.py", line 102, in _multicall
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>           ^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR>   File "/usr/local/lib/python3.11/site-packages/pytest_alembic/plugin/hooks.py", line 72, in pytest_sessionstart
INTERNALERROR>     session.config.pluginmanager.register(plugin, "pytest-alembic")
INTERNALERROR>   File "/usr/local/lib/python3.11/site-packages/_pytest/config/__init__.py", line 497, in register
INTERNALERROR>     plugin_name = super().register(plugin, name)
INTERNALERROR>                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR>   File "/usr/local/lib/python3.11/site-packages/pluggy/_manager.py", line 167, in register
INTERNALERROR>     self._verify_hook(hook, hookimpl)
INTERNALERROR>   File "/usr/local/lib/python3.11/site-packages/pluggy/_manager.py", line 342, in _verify_hook
INTERNALERROR>     raise PluginValidationError(
INTERNALERROR> pluggy._manager.PluginValidationError: Plugin 'pytest-alembic' for hook 'pytest_collect_file'
INTERNALERROR> hookimpl definition: pytest_collect_file(file_path, path, parent)
INTERNALERROR> Argument(s) {'path'} are declared in the hookimpl but can not be found in the hookspec

If I change back to pytest version 8.0.2, the tests work as expected.


pytest version: 8.1.0
pytest-alembic version: 0.10.7

my tests/conftest.py:

import pytest
from sqlalchemy import MetaData, create_engine


@pytest.fixture
def alembic_engine():
    POSTGRES_USER = "postgres"
    POSTGRES_PASSWORD = "password"
    POSTGRES_SERVER = "test-db"
    POSTGRES_DB = "app"
    SQLALCHEMY_DATABASE_URI = (
        f"postgresql://{POSTGRES_USER}:{POSTGRES_PASSWORD}@{POSTGRES_SERVER}/{POSTGRES_DB}"
    )
    engine = create_engine(
        SQLALCHEMY_DATABASE_URI, pool_pre_ping=True, pool_size=30, max_overflow=40
    )

    if "test-db" in engine.url:
        m = MetaData()
        m.reflect(engine)
        m.drop_all(engine)

    return engine

`MigrationContext.previous_revision()` returns wrong revision with nonlinear history

I have the following revisions:

"""
Revision ID: 7f5b06f81aef
Revises: effc355fddff
Create Date: 2020-07-01 16:44:05.244310
"""
from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = '7f5b06f81aef'
down_revision = 'effc355fddff'
branch_labels = None
depends_on = None

def upgrade():
    pass # upgrade code redacted

def downgrade():
    pass # downgrade code redacted
"""
Revision ID: 79ee02b5e895
Revises: 10483a8dd3c8, effc355fddff
Create Date: 2020-06-23 11:04:51.901600
"""
from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision = '79ee02b5e895'
down_revision = ('10483a8dd3c8', 'effc355fddff')
branch_labels = None
depends_on = None

def upgrade():
    pass # upgrade code redacted

def downgrade():
    pass # downgrade code redacted
"""
Revision ID: b55e304ba00c
Revises: 7f5b06f81aef, 79ee02b5e895
Create Date: 2020-07-08 14:57:08.448885
"""
from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision = 'b55e304ba00c'
down_revision = ('7f5b06f81aef', '79ee02b5e895')
branch_labels = None
depends_on = None

def upgrade():
    pass # upgrade code redacted

def downgrade():
    pass # downgrade code redacted

There are also the (omitted) down revisions of 7f5b06f81aef and 79ee02b5e895.

The current revision is 7f5b06f81aef.

When I run alembic_runner.migrate_down_one() in a test I get the following error:

alembic.util.exc.CommandError: Destination 79ee02b5e895 is not a valid downgrade target from current head(s)

That led me to check alembic_runner.history.previous_revision(alembic_runner.current), and indeed it returns 79ee02b5e895. But as you can see, that is not the previous revision, but a "sibling" of 7f5b06f81aef in b55e304ba00c.

Given the structure of my history, I would expect alembic_runner.history.previous_revision(alembic_runner.current) to return effc355fddff.

This behaviour worked on version 0.5.0, but after I upgraded to 0.8.2 I started getting this error.

Using "alembic_runner" fixture silences application logs.

Hi! For testing the application, I created a fixture which initializes the database using alembic_runner. It looks like this and works just fine (kinda slow, but it does the job).

@pytest.fixture()
def _init_db(alembic_runner: MigrationContext):
    alembic_runner.migrate_up_to("heads", return_current=False)
    yield
    alembic_runner.migrate_down_to("base", return_current=False)

I discovered that after replacing metadata.create_all with alembic_runner.migrate_up_to in the _init_db fixture, no application logs are shown in the console for failed tests. I spent half a day trying to solve the issue, and surprisingly the following dirty hack fixes it:

@pytest.fixture()
def _init_db(alembic_runner: MigrationContext):
    alembic_runner.migrate_up_to("heads", return_current=False)
    setup_logging(json_logs=settings.logs.json_format, log_level=settings.logs.level)
    yield
    alembic_runner.migrate_down_to("base", return_current=False)
(The setup_logging function configures structlog.)

If I set up the logger before running the migrations, I get no logs:

@pytest.fixture()
def _init_db(alembic_runner: MigrationContext):
    setup_logging(json_logs=settings.logs.json_format, log_level=settings.logs.level)
    alembic_runner.migrate_up_to("heads", return_current=False)
    yield
    alembic_runner.migrate_down_to("base", return_current=False)

Somehow running alembic_runner.migrate_up_to breaks the logger and I can't figure out why.

test_up_down_consistency broken after updating to pytest 6.2.4

I updated pytest from 6.2.3 -> 6.2.4 today and test_up_down_consistency now fails when I try to run it. Below is the output from running with 6.2.3 vs 6.2.4. Is there anything else I can provide that would help debugging? Great plugin btw.

Success run w/ pytest 6.2.3
$ pipenv run pytest -vvv -s --test-alembic -k "pytest_alembic::test_up_down_consistency"
======================================================================================== test session starts =========================================================================================
platform darwin -- Python 3.8.6, pytest-6.2.3, py-1.10.0, pluggy-0.13.1 -- /Users/caseychance/.local/share/virtualenvs/website-Mh3F1bnu/bin/python
cachedir: .pytest_cache
Using --randomly-seed=2967129046
rootdir: <omitted>, configfile: pytest.ini, testpaths: tests
plugins: randomly-3.7.0, alembic-0.2.6, cov-2.11.1, snapshottest-0.6.0, mock-resources-1.5.1, timeout-1.4.2, flask-1.2.0
collected 388 items / 391 deselected

tests::pytest_alembic::test_up_down_consistency <- . running migrations
PASSED

========================================================================================== warnings summary ==========================================================================================
../../../.local/share/virtualenvs/website-Mh3F1bnu/lib/python3.8/site-packages/aiodataloader.py:2
  /Users/caseychance/.local/share/virtualenvs/website-Mh3F1bnu/lib/python3.8/site-packages/aiodataloader.py:2: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
    from collections import Iterable, namedtuple

../../../.local/share/virtualenvs/website-Mh3F1bnu/lib/python3.8/site-packages/graphene/types/field.py:2
  /Users/caseychance/.local/share/virtualenvs/website-Mh3F1bnu/lib/python3.8/site-packages/graphene/types/field.py:2: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
    from collections import Mapping, OrderedDict

../../../.local/share/virtualenvs/website-Mh3F1bnu/lib/python3.8/site-packages/newrelic/console.py:84: 18 warnings
  /Users/caseychance/.local/share/virtualenvs/website-Mh3F1bnu/lib/python3.8/site-packages/newrelic/console.py:84: DeprecationWarning: `formatargspec` is deprecated since Python 3.5. Use `signature` and the `Signature` object directly
    prototype = wrapper.__name__[3:] + ' ' + inspect.formatargspec(

../../../.local/share/virtualenvs/website-Mh3F1bnu/lib/python3.8/site-packages/_pytest/nodes.py:274
  /Users/caseychance/.local/share/virtualenvs/website-Mh3F1bnu/lib/python3.8/site-packages/_pytest/nodes.py:274: PytestUnknownMarkWarning: Unknown pytest.mark.mysql - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    marker_ = getattr(MARK_GEN, marker)

tests::pytest_alembic::test_up_down_consistency
  /Users/caseychance/.local/share/virtualenvs/website-Mh3F1bnu/lib/python3.8/site-packages/pytest_alembic/plugin/plugin.py:98: PytestDeprecationWarning: A private pytest class or function was used.
    fixture_request = fixtures.FixtureRequest(self)

-- Docs: https://docs.pytest.org/en/stable/warnings.html
========================================================================== 1 passed, 391 deselected, 22 warnings in 21.22s ===========================================================================
Failed run w/ pytest 6.2.4
$ pipenv run pytest -vvv -s --test-alembic -k "pytest_alembic::test_up_down_consistency"
======================================================================================== test session starts =========================================================================================
platform darwin -- Python 3.8.6, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /Users/caseychance/.local/share/virtualenvs/website-Mh3F1bnu/bin/python
cachedir: .pytest_cache
Using --randomly-seed=1002127626
rootdir: <omitted>, configfile: pytest.ini, testpaths: tests
plugins: randomly-3.7.0, alembic-0.2.6, cov-2.11.1, snapshottest-0.6.0, mock-resources-1.5.1, timeout-1.4.2, flask-1.2.0
collected 388 items / 391 deselected

tests::pytest_alembic::test_up_down_consistency <- . running migrations
FAILED

============================================================================================== FAILURES ==============================================================================================
__________________________________________________________________ [pytest-alembic] tests::pytest_alembic::test_up_down_consistency __________________________________________________________________
Failing Revision:
    head

Alembic Error:
    Not a valid downgrade target from current heads

Errors:
E   pytest_alembic.plugin.error.AlembicTestFailure: Failed to downgrade through each revision individually.
========================================================================================== warnings summary ==========================================================================================
../../../.local/share/virtualenvs/website-Mh3F1bnu/lib/python3.8/site-packages/aiodataloader.py:2
  /Users/caseychance/.local/share/virtualenvs/website-Mh3F1bnu/lib/python3.8/site-packages/aiodataloader.py:2: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
    from collections import Iterable, namedtuple

../../../.local/share/virtualenvs/website-Mh3F1bnu/lib/python3.8/site-packages/graphene/types/field.py:2
  /Users/caseychance/.local/share/virtualenvs/website-Mh3F1bnu/lib/python3.8/site-packages/graphene/types/field.py:2: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
    from collections import Mapping, OrderedDict

../../../.local/share/virtualenvs/website-Mh3F1bnu/lib/python3.8/site-packages/newrelic/console.py:84: 18 warnings
  /Users/caseychance/.local/share/virtualenvs/website-Mh3F1bnu/lib/python3.8/site-packages/newrelic/console.py:84: DeprecationWarning: `formatargspec` is deprecated since Python 3.5. Use `signature` and the `Signature` object directly
    prototype = wrapper.__name__[3:] + ' ' + inspect.formatargspec(

../../../.local/share/virtualenvs/website-Mh3F1bnu/lib/python3.8/site-packages/_pytest/nodes.py:274
  /Users/caseychance/.local/share/virtualenvs/website-Mh3F1bnu/lib/python3.8/site-packages/_pytest/nodes.py:274: PytestUnknownMarkWarning: Unknown pytest.mark.mysql - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    marker_ = getattr(MARK_GEN, marker)

tests::pytest_alembic::test_up_down_consistency
  /Users/caseychance/.local/share/virtualenvs/website-Mh3F1bnu/lib/python3.8/site-packages/pytest_alembic/plugin/plugin.py:98: PytestDeprecationWarning: A private pytest class or function was used.
    fixture_request = fixtures.FixtureRequest(self)

-- Docs: https://docs.pytest.org/en/stable/warnings.html
====================================================================================== short test summary info =======================================================================================
FAILED tests::pytest_alembic::test_up_down_consistency
========================================================================== 1 failed, 391 deselected, 22 warnings in 20.25s ===========================================================================
make: *** [test-alembic] Error 1

Need a clean way to disable test_up_down_consistency or limit to a specific revision

Howdy, I inherited some migrations that can't be downgraded.

Some ideas:

  1. Stop running downgrades if one raises NotImplementedError.
    This seems like a fairly clean way to tell test_up_down_consistency that you can't get there from here. :)

  2. Provide a way to specify a revision not to downgrade from and beyond.

  3. Provide a way to disable test_up_down_consistency. Is there already a way that I missed?

    I found a hacky way that I'm using for now. :)
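On idea 2, a sketch assuming the minimum_downgrade_revision option (its name appears in the plugin's MigrationContext config elsewhere on this page; exact semantics per the docs):

import pytest

@pytest.fixture
def alembic_config():
    # hypothetical revision id: downgrades would stop at this revision
    return {"minimum_downgrade_revision": "abc123"}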

Skip certain migrations while running test_model_definitions_match_ddl

Hi there,

First of all, thanks for all work here 🏆

I have a case where a migration (generated without the --autogenerate option) creates a table for which I have no model mapped in the database (it is a table required by a 3rd-party lib), causing the test to fail.

Errors:
E   pytest_alembic.plugin.error.AlembicTestFailure: The models decribing the DDL of your database are out of sync with the set of steps described in the revision history. This usually means that someone has made manual changes to the database's DDL, or some model has been changed without also generating a migration to describe that change.

I was wondering if there is a way of skipping certain migrations for that particular test.

Can't override fixtures from conftest.py within subdirectory

Hello, thanks for creating this awesome tool.

I was having some trouble overriding the alembic_config and alembic_engine fixtures in my tests. The docs don't specify how to go about overriding, so I was experimenting with anything I could think of.

My tests (as well as my conftest.py module) live within a subdirectory in my package structure. The only way I was able to get my fixtures to override the pytest-alembic fixtures was to move my conftest.py to the package root (the same directory I was running the pytest command from).

It seems like there's something funky going on with the ordering in which pytest loads fixtures from various places and plugins.

I was able to work around the problem by not running pytest --test-alembic, but rather just importing the test functions from pytest-alembic directly into a test file (where I overrode the fixtures). You can see what that looks like here: votingworks/arlo#636.

It's actually kind of nice, because I get to specify which of the built-in tests I want to run (I don't have reversible migrations, so I had to skip the downgrading test). It made me think that making the tests a library to pull from, instead of doing fancy magic to add them onto the end of the test run, might make for a simpler user experience overall, so I thought I'd share.

In summary:

  • It would be helpful to have documentation about overriding the fixtures
  • I don't know why conftest.py had to be at the project root to override the fixtures - any ideas?
  • Consider providing test functions as a library instead of a plugin
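For reference, the library-style usage described above looks roughly like this (the engine URL is a placeholder; pick the built-in tests you want):

import pytest
import sqlalchemy

# pull in only the built-in tests you want to run
from pytest_alembic.tests import (  # noqa: F401
    test_single_head_revision,
    test_upgrade,
    test_model_definitions_match_ddl,
)

@pytest.fixture
def alembic_engine():
    # placeholder engine; point at your real test database
    return sqlalchemy.create_engine("sqlite://")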

migrate_down_to("base") does not downgrate since v0.10.0

Hi,

We run some pytest that make use of pytest-alembic to test some migrations. At the end of such tests, we run migrate_down_to("base") and we've noted that it does not downgrade correctly since v0.10.0 (the schema persists in the test database).

This does not happen in v0.9.1 or earlier.

Thanks!

Overriding fixtures doesn't work unless conftest.py is in the root

Hi, I'm experiencing an issue when running pytest --test-alembic where my overridden alembic_engine fixture isn't having an effect, unless conftest.py is in the project root directory (and not the src/ sub-directory where all the source and test files are).

I noticed that the automatic alembic tests are triggered against a tinker.py script which I have in the root of my project directory, so I tried moving conftest.py to the root, and then the tests passed, my alembic_engine fixture was used, and everything seemed fine.

So I added an a.py to the project root, and the alembic tests then showed as running against that file instead.

Is there something else I need to set to tell pytest_alembic where to run its tests inside src/ so that the correct config is loaded?

Concept: Optional built-in test which asserts all models are imported for alembic

It's trivially easy to forget to import a model such that --autogenerate (i.e. env.py) does not automatically recognize all the tables that should be included.

Perform a "clean" import of the base model MetaData (ideally through env.py itself) which collects the set of tablenames included in the MetaData. This necessarily needs to be imported in a separate "clean" python process from the current pytest execution.

Contrast that with the set of tables on the MetaData after having imported all child modules of some root module.

If there is any diff between the set of tables defined on either metadata, then this indicates a model/table has no import path to it from the root of the model definition.
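A rough sketch of the comparison itself (the names here are illustrative, not the plugin's API):

from sqlalchemy import MetaData

def unregistered_tables(env_metadata: MetaData, full_metadata: MetaData) -> set:
    # env_metadata: tables seen by a clean import of env.py;
    # full_metadata: tables seen after importing every model module
    return set(full_metadata.tables) - set(env_metadata.tables)

Any names in the returned set have no static import path from the model definitions' root.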

False positives on the models-match-DDL test

Expected behavior:

  • When alembic revision --autogenerate generates no changes in schema, pytest-alembic should pass on this test

Actual behavior:

  • alembic generates no revisions and pytest-alembic fails this test.


Errors:
E   pytest_alembic.plugin.error.AlembicTestFailure: The models decribing the DDL of your database are out of sync with the set of steps described in the revision history. This usually means that someone has made manual changes to the database's DDL, or some model has been changed without also generating a migration to describe that change.

Cannot create Enum columns when running first migration

Take a first migration that has an enum column:

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = 'ee70440ea5c7'
down_revision = None
branch_labels = None
depends_on = None

def upgrade():
    op.create_table('my_table',
        sa.Column("id", sa.Integer(), nullable=False)
        sa.Column('my_enum', sa.Enum('A', 'B', name='my_enum'), nullable=False),
    )

def downgrade():
    op.drop_table('my_table')

With this, we can successfully run alembic upgrade head.

However, when we run pytest --test-alembic, test_upgrade fails:

sqlalchemy.exc.ProgrammingError: (psycopg2.errors.DuplicateObject) type "my_enum" already exists

A common fix for this is to use postgresql.ENUM and set create_type=False - but note the type doesn't exist yet, so we do still need to create it!

from sqlalchemy.dialects.postgresql import ENUM

...

def upgrade():
    op.create_table('my_table',
        sa.Column("id", sa.Integer(), nullable=False)
        sa.Column('my_enum', ENUM('A', 'B', name='my_enum', create_type=False), nullable=False),
    )

We now predictably end up with the "type does not exist" error:

sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedObject) type "my_enum" does not exist

We can also create the type separately:

def upgrade():
    my_enum = sa.Enum('A', 'B', name='my_enum')
    my_enum.create(op.get_bind())
    op.create_table('my_table',
        sa.Column("id", sa.Integer(), nullable=False)
        sa.Column('my_enum', my_enum, nullable=False),
    )

But again with pytest --test-alembic, we get the first error:

sqlalchemy.exc.ProgrammingError: (psycopg2.errors.DuplicateObject) type "my_enum" already exists

How do I create types on first migration with pytest-alembic?
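One commonly suggested pattern (a sketch, not a confirmed answer for this plugin) combines both approaches: create the type separately but idempotently with checkfirst=True, and mark the column's type with create_type=False so create_table doesn't try to create it again:

from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql

def upgrade():
    # create the type explicitly, skipping creation if it already exists
    my_enum = postgresql.ENUM('A', 'B', name='my_enum', create_type=False)
    my_enum.create(op.get_bind(), checkfirst=True)
    op.create_table('my_table',
        sa.Column("id", sa.Integer(), nullable=False),
        sa.Column('my_enum', my_enum, nullable=False),
    )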

AttributeError: 'Config' object has no attribute 'get'

Ok,
awesome tool. I struggle to make it work for projects with multiple schemata. Here are my schemata:

from sqlalchemy import Column, String, Integer, MetaData
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base(metadata=MetaData(schema="schemaA"))

# pylint: disable=too-few-public-methods
class User(Base):
    """
    Simple User Table
    """
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

and

from sqlalchemy import Column, String, Integer, MetaData
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base(metadata=MetaData(schema="schemaB"))

# pylint: disable=too-few-public-methods
class User(Base):
    """
    Simple User Table
    """
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

I then implemented in conftest.py

import os
import sqlalchemy
from alembic.config import Config


def resource(name):
    return os.path.join(os.path.dirname(__file__), "resources", name)


def alembic_engine_test(schema):
    engine = sqlalchemy.create_engine("sqlite:///")
    engine.execute(f"ATTACH DATABASE ':memory:' AS {schema}")
    return engine


def alembic_config_test(schema):
    alembic_cfg = Config()
    # correct absolute path of env.py, versions and script.py.mako
    assert os.path.exists(resource(schema))
    alembic_cfg.set_main_option("script_location", resource(schema)) 
    return alembic_cfg

And in my test file:

import pytest
import pytest_alembic
from common.schemata.schemaA import Base
from tests.conftest import alembic_engine_test, alembic_config_test


@pytest.fixture
def alembic_config():
    """Override this fixture to configure the exact alembic context setup required."""
    return alembic_config_test(Base.metadata.schema)


@pytest.fixture
def alembic_engine():
    """Override this fixture to provide pytest-alembic powered tests with a database handle."""
    return alembic_engine_test(Base.metadata.schema)


@pytest.fixture
def alembic_runner(alembic_config, alembic_engine):
    """Produce an alembic migration context in which to execute alembic tests."""
    with pytest_alembic.runner(config=alembic_config, engine=alembic_engine) as runner:
        yield runner

from pytest_alembic.tests import test_single_head_revision
from pytest_alembic.tests import test_upgrade
from pytest_alembic.tests import test_model_definitions_match_ddl
from pytest_alembic.tests import test_up_down_consistency

When I then run the tests with pytest I get

cls = <class 'pytest_alembic.executor.CommandExecutor'>
config = <alembic.config.Config object at 0x7f1986611af0>
    @classmethod
    def from_config(cls, config):
>       file = config.get("file", "alembic.ini")
E       AttributeError: 'Config' object has no attribute 'get'
.venv/lib/python3.9/site-packages/pytest_alembic/executor.py:22: AttributeError

I am using Alembic 1.6.5. I guess something is wrong with my code, or (less likely) something in Alembic is broken. However, there might be an easier approach to using pytest-alembic on projects with multiple Base objects.
Your help is much appreciated...
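The traceback suggests the executor expects pytest-alembic's own config shape rather than a raw alembic.config.Config. A sketch of that direction, using pytest_alembic.config.Config as seen in a later issue on this page (treating the script_location config option and the path as assumptions):

import pytest
from pytest_alembic.config import Config

@pytest.fixture
def alembic_config():
    # config_options mirror alembic.ini's main options; path is hypothetical
    return Config(config_options={"script_location": "tests/resources/schemaA"})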

Issues with async tests

Hello,
I am trying to write unit tests using the pytest-alembic package where I migrate my db schema into an in-memory sqlite db.

I have defined fixtures like this:

from pytest_alembic.config import Config
from sqlalchemy.ext.asyncio import create_async_engine

@pytest.fixture
def alembic_config():
    return Config()


@pytest.fixture
def alembic_engine():
    return create_async_engine("sqlite+aiosqlite:///")

My env.py file is like this:

import asyncio
from logging.config import fileConfig

from sqlalchemy import pool
from sqlalchemy.engine import Connection
from sqlalchemy.ext.asyncio import async_engine_from_config

from alembic import context
from myproject.database.models import Base

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
if config.config_file_name is not None:
    fileConfig(config.config_file_name)

# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = Base.metadata

# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.


def run_migrations_offline() -> None:
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well.  By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.

    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )

    with context.begin_transaction():
        context.run_migrations()


def do_run_migrations(connection: Connection) -> None:
    context.configure(connection=connection, target_metadata=target_metadata)

    with context.begin_transaction():
        context.run_migrations()


async def run_async_migrations() -> None:
    """In this scenario we need to create an Engine
    and associate a connection with the context.

    """
    connectable = context.config.attributes.get("connection", None)
    if connectable is None:
        connectable = async_engine_from_config(
            config.get_section(config.config_ini_section, {}),
            prefix="sqlalchemy.",
            poolclass=pool.NullPool,
        )

    async with connectable.connect() as connection:
        await connection.run_sync(do_run_migrations)

    await connectable.dispose()


def run_migrations_online() -> None:
    """Run migrations in 'online' mode."""

    asyncio.run(run_async_migrations())


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()

When I run the following test:

import pytest
from sqlalchemy.ext.asyncio.engine import AsyncEngine


@pytest.mark.asyncio
async def test_alembic_migration(alembic_runner, alembic_engine: AsyncEngine):
    alembic_runner.migrate_up_to("heads")

I get this error:

alembic_runner = MigrationContext(command_executor=CommandExecutor(alembic_config=<alembic.config.Config object at 0x28d0b9190>, stdout...c_config=None, before_revision_data=None, at_revision_data=None, minimum_downgrade_revision=None, skip_revisions=None))
alembic_engine = <sqlalchemy.ext.asyncio.engine.AsyncEngine object at 0x28ae4cfc0>

    @pytest.mark.asyncio
    async def test_alembic_migration(alembic_runner, alembic_engine: AsyncEngine):
>       alembic_runner.migrate_up_to("heads")

tests/pytest_alembic/test_database.py:7:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.venv/lib/python3.11/site-packages/pytest_alembic/runner.py:196: in migrate_up_to
    return self.managed_upgrade(revision, return_current=return_current)
.venv/lib/python3.11/site-packages/pytest_alembic/runner.py:146: in managed_upgrade
    current = self.current
.venv/lib/python3.11/site-packages/pytest_alembic/runner.py:82: in current
    self.command_executor.execute_fn(get_current)
.venv/lib/python3.11/site-packages/pytest_alembic/executor.py:41: in execute_fn
    self.script.run_env()
.venv/lib/python3.11/site-packages/alembic/script/base.py:585: in run_env
    util.load_python_file(self.dir, "env.py")
.venv/lib/python3.11/site-packages/alembic/util/pyfiles.py:93: in load_python_file
    module = load_module_py(module_id, path)
.venv/lib/python3.11/site-packages/alembic/util/pyfiles.py:109: in load_module_py
    spec.loader.exec_module(module)  # type: ignore
<frozen importlib._bootstrap_external>:940: in exec_module
    ???
<frozen importlib._bootstrap>:241: in _call_with_frames_removed
    ???
alembic/env.py:91: in <module>
    run_migrations_online()
alembic/env.py:85: in run_migrations_online
    asyncio.run(run_async_migrations())
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

main = <coroutine object run_async_migrations at 0x105baeb60>

    def run(main, *, debug=None):
        """Execute the coroutine and return the result.

        This function runs the passed coroutine, taking care of
        managing the asyncio event loop and finalizing asynchronous
        generators.

        This function cannot be called when another asyncio event loop is
        running in the same thread.

Any help/directions would be greatly appreciated. I'm not entirely sure what I'm doing wrong here, but it looks like something is off in how I'm setting up the asyncio machinery.
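One observation from the traceback (a sketch, not a confirmed fix): env.py calls asyncio.run(), which cannot run inside the event loop that pytest-asyncio has already started for the async test, so the test body itself may not need to be async at all:

def test_alembic_migration(alembic_runner):
    # a synchronous test: alembic_runner drives env.py, which starts its own
    # event loop via asyncio.run(); nesting that inside a running
    # pytest-asyncio loop is what raises here
    alembic_runner.migrate_up_to("heads")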

Test failures resulting from env.py not being imported with the app_context()

First off - love the project :)

I believe this is an issue with using pytest-alembic out-of-the-box with Flask-SQLAlchemy / Flask-Migrate project.

I encountered a test_upgrade error ending in:

E               pytest_alembic.plugin.error.AlembicTestFailure: Failed to upgrade through each revision individually.

The relevant parts of the stacktrace above it seem to be:

    from __future__ import with_statement
    
    import logging
    from logging.config import fileConfig
    
    from flask import current_app
    
    from alembic import context
    from curation.workspace import INPUT_DATA_TABLE_PREFIX
    
    # this is the Alembic Config object, which provides
    # access to the values within the .ini file in use.
    config = context.config
    
    # Interpret the config file for Python logging.
    # This line sets up loggers basically.
    fileConfig(config.config_file_name)
    logger = logging.getLogger("alembic.env")
    
    # add your model's MetaData object here
    # for 'autogenerate' support
    # from myapp import mymodel
    # target_metadata = mymodel.Base.metadata
    config.set_main_option(
        "sqlalchemy.url",
>       str(current_app.extensions["migrate"].db.get_engine().url).replace("%", "%%"),
    )

migrations/env.py:26:

and below that:

    def _find_app():
        top = _app_ctx_stack.top
        if top is None:
>           raise RuntimeError(_app_ctx_err_msg)
E           RuntimeError: Working outside of application context.
E           
E           This typically means that you attempted to use functionality that needed
E           to interface with the current application object in some way. To solve
E           this, set up an application context with app.app_context().  See the
E           documentation for more information.

venv/lib/python3.9/site-packages/flask/globals.py:47: RuntimeError

It looks like my migrations/env.py file needs to be imported with an app_context(). Providing this context myself like below got me past the issue:

from pytest_alembic import tests

def test_upgrade(app, alembic_runner):
    with app.app_context():
        tests.test_upgrade(alembic_runner)

For reference, I'm using Flask-Migrate, and I had to override the alembic_config fixture to successfully find my alembic.ini:

@pytest.fixture
def alembic_config():
    return {"file": "migrations/alembic.ini"}

I'll also note that on this project, pytest --test-alembic always matched all of my tests and none of the built-in tests (it had the same effect as plain pytest). I had to import from pytest_alembic.tests right from the get-go to get anything running.

Thanks!

alembic_runner.current does not work with alembic_version in different schema

I'm using a Postgres database, with a schema that holds all my tables (including alembic_version).
When I call alembic_runner.current, I see that it fails at alembic/runtime/migration.py:532:

if not self._has_version_table():
    return ()

Digging a little deeper, I saw that self.version_table_schema is None because MigrationContext.current calls alembic.migration.MigrationContext.configure(connection) (pytest_alembic/runner.py:75); by doing so, the opts parameter is empty, and therefore opts.get("version_table_schema", None) returns None (alembic/runtime/migration.py:182).

test_up_down_consistency not delivering on its promise

Hi,

Thanks for creating this plugin. I've always wanted to have such tests, but never had enough time to write them in my projects.

After introducing it to my codebase, I wanted to see examples of test failures. For test test_up_down_consistency the description states:

Assert that all downgrades succeed.
While downgrading may not be lossless operation data-wise, there’s a theory of database migrations that says that the revisions in existence for a database should be able to go from an entirely blank schema to the finished product, and back again.

The body of the test in fact runs all upgrades and then runs all downgrades. While this confirms that the downgrades can be run successfully, it doesn't test the result of the downgrades. For example, the following migration would not fail the test (while it's clear that its upgrade and downgrade are not "consistent"):

def upgrade():
    op.create_table(
        "foobar",
        sa.Column("id", sa.BigInteger(), nullable=False),
        sa.PrimaryKeyConstraint("id"),
    )

def downgrade():
    pass

The above downgrade "runs successfully", but does not deliver on the "go from an entirely blank schema to the finished product, and back again [to an entirely blank schema]" part. This test does not verify that the database after upgrade+downgrade is in the same state as it was before the upgrade.

[feature request] As a minimum it would be nice to check if, after running all downgrades, the schema is (again) empty.

If you don't want to add any logic to this test, it would be good to at least change the test description to be not misleading as to what this test checks. For example:

Assert that all downgrades can be run successfully (without verifying that the state after upgrade+downgrade is the same as before upgrade).

Also, the test name should be changed from test_up_down_consistency to test_up_down_does_not_crash.
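(The experimental downgrade_leaves_no_trace test described earlier on this page addresses exactly this gap. For a do-it-yourself version of the minimum requested here, a minimal sketch:)

from sqlalchemy import inspect

def test_downgrade_to_empty_schema(alembic_runner, alembic_engine):
    alembic_runner.migrate_up_to("heads")
    alembic_runner.migrate_down_to("base")
    # after a full downgrade, at most alembic's version table should remain
    leftover = set(inspect(alembic_engine).get_table_names()) - {"alembic_version"}
    assert not leftover, f"tables left behind by the downgrades: {leftover}"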

model_definitions_match_ddl should delete the generated revision

When running model_definitions_match_ddl, it calls generate_revision() which creates a new revision file to check that it's empty.

The problem is that the file is never deleted; it just stays there. Maybe I missed something in the docs, but it seems like model_definitions_match_ddl should delete that file after checking whether it was empty.

Because the file stays behind, I've had to disable model_definitions_match_ddl in order to be able to use pytest-alembic which isn't ideal.

Adjustments for Compatibility with SQLAlchemy 2.0 and Explicit Transactions

Hello,

I'm currently using pytest_alembic in my project and am in the process of adapting it to the changes introduced by SQLAlchemy 2.0. While doing so, I noticed a warning related to the use of implicit transactions.

The warning occurs when I run a test case that uses alembic_runner.insert_into. Such a test case looks like this:

def test_migration(
    alembic_runner: MigrationContext
):
    alembic_runner.migrate_up_before("...")
    alembic_runner.insert_into(
        "my_table",
        {
            "id": 1,
            "name": "test",
        },
    )
    # ...

Here is the exact warning I receive:

.../lib/python3.10/site-packages/pytest_alembic/executor.py:151: RemovedIn20Warning: The current statement is being autocommitted using implicit autocommit, which will be removed in SQLAlchemy 2.0. Use the .begin() method of Engine or Connection in order to use an explicit transaction for DML and DDL statements. (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
    connection.execute(table.insert().values(values))

The warning suggests that the current implicit autocommit will be removed in SQLAlchemy 2.0, and instead, the .begin() method of Engine or Connection should be used to start an explicit transaction for DML and DDL statements. I'm currently using SQLAlchemy version 1.4.48 and Alembic version 1.11.1.

In relation to my code, I noticed that the warning disappears when I change self.connection.connect() to self.connection.begin() on line 183 of the file pytest_alembic/executor.py.

Since I receive this warning in a component of pytest_alembic and not directly in my own code, it seems that adjustments need to be made to pytest_alembic itself to accommodate this change. Would it be possible to make these adjustments in a future release?

Thank you in advance!

Running with `--test-alembic` creates tests in the wrong directory?

Thank you for the wonderful plugin and framework for running tests for migrations.

I've installed the plugin with the following project structure:

app/ (application code)
tests/ (tests)

Running pytest alone defaults to running tests in the tests/ folder, which is expected. However, when I run it with the --test-alembic flag, it doesn't use the alembic_engine fixture in tests/conftest.py. Usually the tests all run as tests/**, but with that flag the alembic tests run under app/__init__.py, which explains why they can't use the fixture. Even when using the pytest_alembic_tests_folder option, the tests don't move.

As a workaround I can just import the tests in a folder under tests/ and run pytest with an alembic marker, but I was wondering why it keeps trying to create the alembic tests in the app/ folder 🤔

How to integrate with `create_all` fixture

Hi!

First of all - thank you very much for the project, it helps a lot in solving problems with migrations.

Unfortunately, I ran into a problem when integrating into a project with existing tests.

I have a fixture that creates all the tables from the models once per session (so the other tests don't have to migrate everything sequentially), and it prevents the migration tests from working properly. The fixture is like:

@pytest.fixture(scope="session", autouse=True)
def run_migrations(init_db, engine): 
    Model.metadata.create_all(engine)

The error I get says that the table from the first migration already exists (which is reasonable, because all the tables in the database have already been created). To get around this, I tried overriding the alembic_engine fixture to use an in-memory sqlite database, but the migration tests still use the same database as all the other tests.

Then I tried overriding pytest_alembic_tests_folder. I created a separate directory with its own conftest.py and imported the alembic tests there - same errors (pytest still reads tests/conftest.py and creates all the tables).

So the main question is, how would one bypass create_all for these tests?

Thanks in advance!

Is it possible to override database url for migrations?

I set up multiple fixtures, as shown in documentation:

@pytest.fixture()
def async_engine(database_settings) -> AsyncEngine:
    return create_async_engine(
        database_settings.url,
        echo=database_settings.echo,
        poolclass=NullPool, 
    )


@pytest.fixture()
def alembic_engine(async_engine):
    return async_engine


@pytest.fixture()
def alembic_config(database_settings):
    return Config(config_options={"sqlalchemy.url": database_settings.url})

database_settings is a fixture which provides actual settings for the database. URL is different from the one defined in env.py. However, when I use alembic_runner fixture, the migration is applied to the database specified in env.py (postgresql+asyncpg://user:password@localhost:5433/test), not database_settings.url (postgresql+asyncpg://user:password@localhost:5433/test_1):

@pytest.fixture()
def _init_db(alembic_runner: MigrationContext, database_settings):
    with DatabaseJanitor(
        user=database_settings.username,
        host=database_settings.host,
        port=database_settings.port,
        dbname=database_settings.name,
        version="16.0",
        password=database_settings.password,
    ):
        logger.debug(f"Database setup for {database_settings.url}")  # prints postgresql+asyncpg://user:password@localhost:5433/test_1
        current = alembic_runner.migrate_up_to("heads", return_current=True)
        logger.debug("Current revision: %s", current)
        yield
        logger.debug(f"Database teardown for {database_settings.url}").

`test_model_definitions_match_ddl` fails because of an extra mock resource table

I'm currently adding migrations to my poetry project using alembic + pytest-alembic, starting from a single alembic revision, and everything is working well except for one test:

pytest_alembic.plugin.error.AlembicTestFailure: The models describing the DDL of your database are out of sync with the set of steps described in the revision history. This usually means that someone has made manual changes to the database's DDL, or some model has been changed without also generating a migration to describe that change.

Specifically, the issue seems to be that the postgres mock resource creates an additional table that is obviously not part of my sqlalchemy ORMs.

# ### commands auto generated by Alembic - please adjust! ###
op.drop_table('pytest_mock_resource_db')
# ### end Alembic commands ###

Is there a way to make this test ignore the table or ensure PostgresConfig doesn't create this table?

# tests/migrations.py
from pytest import fixture
from _pytest.assertion import truncate
from pytest_alembic.tests import (
    test_model_definitions_match_ddl, # this test fails
    test_single_head_revision, # works!
    test_up_down_consistency, # works!
    test_upgrade, # works!
)
from pytest_mock_resources import PostgresConfig, create_postgres_fixture


@fixture(scope="session")
def pmr_postgres_config():
    """
    Match postgres to image used in the docker compose and the
    expectations in `alembic.ini`
    """
    return PostgresConfig(
        image="postgres:15.1",
        port=5433,
        ci_port=5433,
        username="dev",
        password="dev",
        root_database="proj",
    )


@fixture
def alembic_config():
    return {"file": "migrate/alembic.ini"} # i'm storing alembic in the same subdirectory with `versions` and `env.py`


alembic_engine = create_postgres_fixture()
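
One way to make autogenerate skip that table is alembic's include_object hook in env.py (a sketch; connection and target_metadata are whatever your env.py already uses, and the table name comes from the diff above):

def include_object(obj, name, type_, reflected, compare_to):
    # Ignore the bookkeeping table created by pytest-mock-resources.
    if type_ == "table" and name == "pytest_mock_resource_db":
        return False
    return True


context.configure(
    connection=connection,
    target_metadata=target_metadata,
    include_object=include_object,
)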

Repeated migration application

Hi! I caught an error in test_up_down_consistency when I tried to supply custom before_revision_data.

It actually failed to insert the data because it 'already exists':

E   psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "pk_epic"
E   DETAIL:  Key (id)=(db790059-f5ef-4f39-b137-5da2d527b435) already exists.

As it turned out, the alembic tests inserted it twice.


To debug this issue I added some print statements to the managed_upgrade function of the runner:

    def managed_upgrade(self, dest_revision, *, current=None, return_current=True):
        """Perform an upgrade one migration at a time, inserting static data at the given points."""
        if current is None:
            current = self.current
        print(self.history.revision_window(current, dest_revision)) # here
        for current_revision, next_revision in self.history.revision_window(current, dest_revision):
            before_upgrade_data = self.revision_data.get_before(next_revision)
            print(f'INSERT {before_upgrade_data} for rev {next_revision}') # here
            self.insert_into(data=before_upgrade_data, revision=current_revision, table=None)

            if next_revision in (self.config.skip_revisions or {}):
                self.set_revision(next_revision)
            else:
                self.command_executor.upgrade(next_revision)

            at_upgrade_data = self.revision_data.get_at(next_revision)
            self.insert_into(data=at_upgrade_data, revision=next_revision, table=None)
            print(f'current is {self.current}') # and here

        if return_current:
            current = self.current
            return current
        return None

and to alembic/runtime/migration.py:

    def run_migrations(self, **kw: Any) -> None:
        self.impl.start_migrations()

        heads: Tuple[str, ...]
        if self.purge:
            if self.as_sql:
                raise util.CommandError("Can't use --purge with --sql mode")
            self._ensure_version_table(purge=True)
            heads = ()
        else:
            heads = self.get_current_heads()
            print(f'Current heads from DB: {heads}')
        ...

What I read was clear evidence that the script tried to INSERT the same before_revision_data twice, both times for the same revision.

The revision the script added data to was on one branch of a later-merged migration tree. The same pattern occurred at every branching/merging point even without before_revision_data; it just didn't surface or cause an error.

For example, I have the following dag in my migration history:

cd039 <-- 814 <-------------- c04af (merge)
      <-- c915 <-- db88 <----/

And it produced the log:

Current heads from DB: ('cd039ce4087c',)
[('cd039ce4087c', 'c915a959a921')]
INSERT [] for rev c915a959a921
Current heads from DB: ('cd039ce4087c',)
Current heads from DB: ('c915a959a921',)
current is c915a959a921
->c915a959a921
Current heads from DB: ('c915a959a921',)
[('c915a959a921', 'db88f0ab406b')]
INSERT [] for rev db88f0ab406b
Current heads from DB: ('c915a959a921',)
Current heads from DB: ('db88f0ab406b',)
current is db88f0ab406b
->db88f0ab406b
Current heads from DB: ('db88f0ab406b',)
[('db88f0ab406b', '814a65c9b146')]
INSERT [] for rev 814a65c9b146
Current heads from DB: ('db88f0ab406b',)
Current heads from DB: ('db88f0ab406b', '814a65c9b146')
current is db88f0ab406b >>DOESNT MOVE<<
->814a65c9b146
Current heads from DB: ('db88f0ab406b', '814a65c9b146')
[('db88f0ab406b', '814a65c9b146'), ('814a65c9b146', 'c04afaa3eff5')] >> REPEATED MIGRATION <<
INSERT [] for rev 814a65c9b146
Current heads from DB: ('db88f0ab406b', '814a65c9b146')
Current heads from DB: ('db88f0ab406b', '814a65c9b146')
current is db88f0ab406b
INSERT [] for rev c04afaa3eff5
Current heads from DB: ('db88f0ab406b', '814a65c9b146')
Current heads from DB: ('c04afaa3eff5',)
current is c04afaa3eff5
->c04afaa3eff5

Clearly, self.current was not moving when crossing from one branch to another, and I think I know why:
it always takes the first element from the versions table!

    @property
    def current(self) -> str:
        """Get the list of revision heads."""
        current = "base"

        def get_current(rev, _):
            nonlocal current
            if rev:
                current = rev[0]  # only the first head is ever considered

            return []

        self.command_executor.execute_fn(get_current)

        if current:
            return current

        return "base"

Do you know about this bug? Are there any plans to fix it?
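
For illustration only (a sketch of the idea, not the project's actual fix): a head-aware variant would surface every row of the version table instead of just the first, so branch-to-branch transitions don't lose track:

    @property
    def current_heads(self) -> tuple:
        """Return every revision currently recorded in the version table."""
        heads: tuple = ()

        def get_heads(rev, _):
            nonlocal heads
            heads = tuple(rev)  # keep all heads, not just rev[0]
            return []

        self.command_executor.execute_fn(get_heads)
        return heads or ("base",)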

asyncio support?

Hi,
I would like to ask whether it is possible to use asynchronous sqlalchemy with pytest-alembic. If yes, how should it be configured? I haven't been able to configure it properly.

This is how I create session in tests:

import pytest
from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine
from sqlalchemy.orm import sessionmaker


@pytest.fixture()
async def async_session_empty(config):
    engine = create_async_engine(config.db.url)
    async_sess = sessionmaker(engine, expire_on_commit=False, class_=AsyncSession)
    async with async_sess() as session:
        yield session

    await engine.dispose()

I tried this

@pytest.fixture()
@pytest.mark.asyncio
async def alembic_engine(async_session_empty):
    return async_session_empty


@pytest.fixture()
def alembic_config(config):
    # use this in env.py
    return {"script_location": "alembic", "db_url": config.db.url}

but I only get a pytest warning about an unclosed socket:

ResourceWarning: unclosed <socket.socket fd=22, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('172.20.0.6', 57792), raddr=('172.20.0.3', 5432)>

Thank you for your response.
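
A sketch of a configuration that typically works (assuming an env.py that handles AsyncEngine, like the async example further down this page): give pytest-alembic the engine itself from a plain synchronous fixture, rather than an AsyncSession from an async one:

import pytest
from sqlalchemy.ext.asyncio import create_async_engine


@pytest.fixture
def alembic_engine(config):
    # A plain (non-async) fixture returning the AsyncEngine directly;
    # `config` is the settings fixture from the question.
    return create_async_engine(config.db.url)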

Pytest-alembic doesn't understand the type of a mocked resource

I'm using pytest-alembic in conjunction with pytest-mock-resources, and I love the vision, but I'm missing one crucial integration. When I use create_postgres_fixture, pytest-alembic doesn't recognize that the database is Postgres and assumes it's SQLite.

In my conftest.py:

alembic_engine = create_postgres_fixture(models.Base)

In models.py:

from sqlalchemy import Column, Text
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class TestResult(Base):
    """Schema for DB"""

    __tablename__ = "test_results"
    test_uuid = Column(Text, primary_key=True)  # a mapped class needs a primary key
    test_results = Column(JSONB)

When I run pytest-alembic, test_upgrade fails with:

AttributeError: 'SQLiteTypeCompiler' object has no attribute 'visit_JSONB'

alembic_engine fixture with postgres

In conftest.py, I try to use:

import pytest
from pytest_mock_resources import create_postgres_fixture


@pytest.fixture
def alembic_engine():
    return create_postgres_fixture()

And I get the following error:

cls = <class 'alembic.runtime.migration.MigrationContext'>
connection = <function create_postgres_fixture.<locals>._sync at 0x7f9c3304fca0>
url = None, dialect_name = None, dialect = None, environment_context = None
dialect_opts = {}, opts = {}

    @classmethod
    def configure(
        cls,
        connection: Optional["Connection"] = None,
        url: Optional[str] = None,
        dialect_name: Optional[str] = None,
        dialect: Optional["Dialect"] = None,
        environment_context: Optional["EnvironmentContext"] = None,
        dialect_opts: Optional[Dict[str, str]] = None,
        opts: Optional[Any] = None,
    ) -> "MigrationContext":
        """Create a new :class:`.MigrationContext`.
    
        This is a factory method usually called
        by :meth:`.EnvironmentContext.configure`.
    
        :param connection: a :class:`~sqlalchemy.engine.Connection`
         to use for SQL execution in "online" mode.  When present,
         is also used to determine the type of dialect in use.
        :param url: a string database url, or a
         :class:`sqlalchemy.engine.url.URL` object.
         The type of dialect to be used will be derived from this if
         ``connection`` is not passed.
        :param dialect_name: string name of a dialect, such as
         "postgresql", "mssql", etc.  The type of dialect to be used will be
         derived from this if ``connection`` and ``url`` are not passed.
        :param opts: dictionary of options.  Most other options
         accepted by :meth:`.EnvironmentContext.configure` are passed via
         this dictionary.
    
        """
        if opts is None:
            opts = {}
        if dialect_opts is None:
            dialect_opts = {}
    
        if connection:
            if isinstance(connection, Engine):
                raise util.CommandError(
                    "'connection' argument to configure() is expected "
                    "to be a sqlalchemy.engine.Connection instance, "
                    "got %r" % connection,
                )
    
>           dialect = connection.dialect
E           AttributeError: 'function' object has no attribute 'dialect'

Do you know how to fix it?
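
The likely fix (a sketch, consistent with how create_postgres_fixture is used elsewhere on this page): create_postgres_fixture() returns a fixture function, so returning it from another fixture hands alembic the function itself, hence 'function' object has no attribute 'dialect'. Assign it at module scope in conftest.py instead:

from pytest_mock_resources import create_postgres_fixture

# create_postgres_fixture() *is* the fixture; bind it at module level.
alembic_engine = create_postgres_fixture()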

Bug: during mid-migration check, database has data from latest migration

Hello!

On test_upgrade (and the other tests, except test_single_head_revision), my 44th migration fails with a unique-violation error on an insert. At this step, all of the inserts into this table appear to be in the database already: if I select from the table inside this migration, I see data from later migrations, including the latest one, which could only be present after all migrations have been applied. Maybe you have a parallel thread that migrates to head?

UPD: later migrations also fail on statements like CREATE TABLE (each of which occurs only once in my migrations), so it is not only an insert issue.

Tested on 0.10.0, 0.9.1
python 3.9

AttributeError: 'async_generator' object has no attribute 'dialect'

I have configured my project following the setup instructions.
My project uses asyncpg, sqlalchemy, and alembic.

Running command pytest --test-alembic produces this output:

FAILED tests::pytest_alembic::test_model_definitions_match_ddl - AttributeError: 'async_generator' object has no attribute 'dialect'
FAILED tests::pytest_alembic::test_up_down_consistency
FAILED tests::pytest_alembic::test_upgrade - AttributeError: 'async_generator' object has no attribute 'dialect'

my env.py:

import asyncio

from alembic import context
from sqlalchemy.pool import NullPool
from sqlalchemy import engine_from_config
from sqlalchemy.ext.asyncio.engine import AsyncEngine

from bot.db import Base


def run_migrations_online():
    connectable = context.config.attributes.get("connection", None)

    if connectable is None:
        connectable = AsyncEngine(
            engine_from_config(
                context.config.get_section(context.config.config_ini_section),
                prefix="sqlalchemy.",
                poolclass=NullPool,
                future=True,
            )
        )

    # Note, we decide whether to run asynchronously based on the kind of engine we're dealing with.
    if isinstance(connectable, AsyncEngine):
        asyncio.run(run_async_migrations(connectable))
    else:
        do_run_migrations(connectable)


# Then use their setup for async connection/running of the migration
async def run_async_migrations(connectable):
    async with connectable.connect() as connection:
        await connection.run_sync(do_run_migrations)

    await connectable.dispose()


def do_run_migrations(connection):
    context.configure(connection=connection, target_metadata=Base.metadata)

    with context.begin_transaction():
        context.run_migrations()


# But the outer layer still allows synchronous execution also.
run_migrations_online()
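
For what it's worth, this error usually points at the alembic_engine fixture rather than env.py: an `async def` fixture yields an async generator, and pytest-alembic then receives that generator instead of an engine. A sketch of a plain fixture that avoids it (the URL is a placeholder):

import pytest
from sqlalchemy.ext.asyncio import create_async_engine


@pytest.fixture
def alembic_engine():
    # Synchronous fixture returning the AsyncEngine object itself.
    return create_async_engine("postgresql+asyncpg://user:password@localhost/db")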

The fixture gets slower on each revision I create

Hi there,

For a long time now, I've noticed my tests getting slower, but it has recently reached the point where I had to investigate.
It comes down to a very slow upgrade in the plugin compared to running alembic on its own.

I have 36 revisions; with the plugin it takes 15 seconds to upgrade them and 7 seconds to downgrade them, but only 5 and 3 seconds respectively when I run alembic on its own.

My basic test now takes 30 seconds and most of that time is spent in upgrading/downgrading.

I was wondering if you had noticed this as well, and whether there is anything I could do to improve the situation?

I'm running the upgrade/downgrade dance on each test rather than per session; I guess I could move to a session-scoped fixture and find a way to empty the db per test (see the sketch below)?

Any feedback will be welcome :)
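
A sketch of that session-scoped idea (hypothetical URL, ini path, and table names; the TRUNCATE is Postgres-specific): run the full upgrade once per session via alembic's command API, then wipe the data between tests instead of re-running migrations:

import pytest
from alembic import command
from alembic.config import Config
from sqlalchemy import create_engine, text


@pytest.fixture(scope="session")
def migrated_engine():
    # Upgrade once for the whole session.
    command.upgrade(Config("alembic.ini"), "head")
    return create_engine("postgresql://user:password@localhost/test")


@pytest.fixture
def db(migrated_engine):
    yield migrated_engine
    # Empty the data (not the schema) after each test.
    with migrated_engine.begin() as conn:
        conn.execute(text("TRUNCATE my_table, my_other_table RESTART IDENTITY CASCADE"))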

TypeError: table() missing 1 required positional argument: 'connection'

version: 0.8.0 (most recent version on conda)

When using alembic_runner.table_at_revision(table_name), the following exception is raised:

self.connection_executor.table(revision=revision, name=name, schema=schema)

TypeError: table() missing 1 required positional argument: 'connection'

Downgrading to 0.3.1 works fine.

[Question] How can I access data set by the data hooks?

The tool is great, but the documentation isn't clear and is at times even confusing.

I now have some data set through the alembic Config, but how can I access it in the test case? What do I need to do? I don't see any examples in the documentation.

I found some inspiration in the pytest_alembic.tests.test_upgrade code, but it doesn't work in an asyncio test case.

This is my unit test

I have already configured async according to the "Using Asyncio with Alembic" guide.

conftest.py :

# ... some code

@pytest.fixture()
def alembic_config():
    return {
        'script_location': str(Path(alembic.__file__).parent),
        "before_revision_data": {
            "fb4d3ab5f38d": [],  # first version, nothing in db
        },
        "at_revision_data": {
            # secondary version, some table in db, and no other version.
            "cc14dfc64697": [
                {"__tablename__": "ip_address",
                 "id": 1,
                 "ip": "127.0.0.1",
                 },
            ]
        },
    }

test_repository.py :

# ... some code

@pytest.mark.asyncio
async def test_get_all(alembic_runner):
    alembic_runner.migrate_up_to("heads")
    repo = IpAddressRepository()
    res = await repo.get_all()
    assert res

An error was reported after running

/home/kevin/.virtualenvs/myproject-B8OT5Rev/bin/python /opt/softwares/jetbrains-apps/apps/PyCharm-P/ch-0/213.5744.248/plugins/python/helpers/pycharm/_jb_pytest_runner.py --path /home/kevin/workspaces/develop/python/crawlerstack/myproject/tests/test_repository.py
Testing started at 7:23 PM ...
Launching pytest with arguments /home/kevin/workspaces/develop/python/crawlerstack/myproject/tests/test_repository.py --no-header --no-summary -q in /home/kevin/workspaces/develop/python/crawlerstack/myproject/tests

============================= test session starts ==============================
collecting ... collected 1 item

test_repository.py::TestBaseRepository::test_get_all 2021-12-28 19:23:16,103 DEBUG asyncio 20350 139815745447744 Using selector: EpollSelector
2021-12-28 19:23:16,103 DEBUG asyncio 20350 139815745447744 Using selector: EpollSelector
FAILED              [100%]
tests/test_repository.py:12 (TestBaseRepository.test_get_all)
self = <tests.test_repository.TestBaseRepository object at 0x7f29603d8c10>
alembic_runner = MigrationContext(command_executor=CommandExecutor(alembic_config=<alembic.config.Config object at 0x7f296037f0d0>, std...ename__': 'ip_address', 'id': 1, 'ip': '127.0.0.1'}]}, minimum_downgrade_revision=None), connection=Engine(sqlite:///))

    @pytest.mark.asyncio
    async def test_get_all(self, alembic_runner):
>       alembic_runner.migrate_up_to("heads")

test_repository.py:15: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/home/kevin/.virtualenvs/myproject-B8OT5Rev/lib/python3.10/site-packages/pytest_alembic/runner.py:120: in migrate_up_to
    return self.managed_upgrade(revision)
/home/kevin/.virtualenvs/myproject-B8OT5Rev/lib/python3.10/site-packages/pytest_alembic/runner.py:100: in managed_upgrade
    current = self.current
/home/kevin/.virtualenvs/myproject-B8OT5Rev/lib/python3.10/site-packages/pytest_alembic/runner.py:70: in current
    current = self.command_executor.run_command("current")
/home/kevin/.virtualenvs/myproject-B8OT5Rev/lib/python3.10/site-packages/pytest_alembic/executor.py:40: in run_command
    executable_command(self.alembic_config, *args, **kwargs)
/home/kevin/.virtualenvs/myproject-B8OT5Rev/lib/python3.10/site-packages/alembic/command.py:543: in current
    script.run_env()
/home/kevin/.virtualenvs/myproject-B8OT5Rev/lib/python3.10/site-packages/alembic/script/base.py:563: in run_env
    util.load_python_file(self.dir, "env.py")
/home/kevin/.virtualenvs/myproject-B8OT5Rev/lib/python3.10/site-packages/alembic/util/pyfiles.py:92: in load_python_file
    module = load_module_py(module_id, path)
/home/kevin/.virtualenvs/myproject-B8OT5Rev/lib/python3.10/site-packages/alembic/util/pyfiles.py:108: in load_module_py
    spec.loader.exec_module(module)  # type: ignore
<frozen importlib._bootstrap_external>:883: in exec_module
    ???
<frozen importlib._bootstrap>:241: in _call_with_frames_removed
    ???
../src/crawlerstack_proxypool/alembic/env.py:91: in <module>
    asyncio.run(run_migrations_online())
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

main = <coroutine object run_migrations_online at 0x7f296023c740>

    def run(main, *, debug=None):
        """Execute the coroutine and return the result.
    
        This function runs the passed coroutine, taking care of
        managing the asyncio event loop and finalizing asynchronous
        generators.
    
        This function cannot be called when another asyncio event loop is
        running in the same thread.
    
        If debug is True, the event loop will be run in debug mode.
    
        This function always creates a new event loop and closes it at the end.
        It should be used as a main entry point for asyncio programs, and should
        ideally only be called once.
    
        Example:
    
            async def main():
                await asyncio.sleep(1)
                print('hello')
    
            asyncio.run(main())
        """
        if events._get_running_loop() is not None:
>           raise RuntimeError(
                "asyncio.run() cannot be called from a running event loop")
E           RuntimeError: asyncio.run() cannot be called from a running event loop

/usr/local/lib/python3.10/asyncio/runners.py:33: RuntimeError

============================== 1 failed in 0.05s ===============================
sys:1: RuntimeWarning: coroutine 'run_migrations_online' was never awaited

Process finished with exit code 1

I hope you can give me some ideas or a small sample of code. Thank you.
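
One way out (a sketch, not from the docs): the RuntimeError comes from env.py calling asyncio.run() while the test's own event loop is already running. Running the migration from a synchronous fixture keeps the two loops apart, and the async test then only performs the assertions:

import pytest


@pytest.fixture
def migrated(alembic_runner):
    # Runs in a sync context, so env.py's asyncio.run() gets its own loop.
    alembic_runner.migrate_up_to("heads")


@pytest.mark.asyncio
async def test_get_all(migrated):
    repo = IpAddressRepository()  # the repository class from the question
    res = await repo.get_all()
    assert res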
