Fixing Alembic Migrations with Dependency Inversion in Flask

by Damian Piatkowski · 5 min read
Debugging Python Flask SQLAlchemy Database Migrations

In Architecture Patterns with Python: Enabling Test-Driven Development, Domain-Driven Design, and Event-Driven Microservices by Harry Percival and Bob Gregory, the authors present a powerful approach to managing dependencies in Python applications through dependency inversion. In the context of SQLAlchemy, this technique lets you design domain models independently of the database schema, making your code more modular, testable, and flexible. By defining the database schema separately and mapping it explicitly to your domain models, you achieve a clean separation of concerns — central to building robust and maintainable architectures.

However, when applying this pattern in Flask applications, you might run into a problem: Alembic, the database migration tool, and Flask-Migrate, its Flask integration, may fail to detect changes in your models. This happens because dependency inversion keeps your domain models unaware of the underlying database schema, which can prevent migration scripts from being generated correctly.

To fix this, you’ll need to make a small adjustment to your migrations/env.py file so Alembic can properly detect and manage schema changes without breaking the principles of dependency inversion.

Why Bother with Dependency Inversion?

Chances are, you’ve found this article while looking for a fix — not necessarily for an explanation of why the Dependency Inversion Principle (DIP) might be a good idea in your Flask app’s database design. If that’s the case, feel free to skip this section and jump straight to the setup and the fix below.

But for those wondering whether it’s worth the effort, I wanted to include a brief rationale. After all, why not just map your classes directly to the ORM? What are the odds you’ll ever need to switch database backends? In very small apps, this extra layer might seem unnecessary. Yet small apps have a way of growing over time — and when they do, it’s often too late for a painless refactor that could have been avoided with a bit of foresight during the initial design.

“Inverting the Dependency: ORM Depends on Model

[…] The alternative is to define your schema separately, and to define an explicit mapper for how to convert between the schema and our domain model, what SQLAlchemy calls a classical mapping. […]

  1. The ORM imports (or “depends on” or “knows about”) the domain model, and not the other way around.
  2. We define our database tables and columns by using SQLAlchemy’s abstractions.
  3. When we call the mapper function, SQLAlchemy does its magic to bind our domain model classes to the various tables we’ve defined.

The end result will be that, if we call start_mappers, we will be able to easily load and save domain model instances from and to the database. But if we never call that function, our domain model classes stay blissfully unaware of the database.”

Harry Percival and Bob Gregory, Architecture Patterns with Python

The Setup

In my web application, the following components work together to implement the pattern:

app/domain/blog_post.py

from datetime import datetime
from typing import List, Optional


class BlogPost:
    """Represents a blog post in the domain model.

    This class captures all relevant information for a blog post, including metadata
    for SEO and estimated read time. It serves as a clean domain representation used
    throughout the application, distinct from raw database rows.
    """

    def __init__(
            self,
            title: str,
            html_content: str,
            slug: str,
            drive_file_id: str,
            created_at: datetime,
            updated_at: Optional[datetime],
            read_time_minutes: int,
            meta_description: str,
            keywords: Optional[List[str]],
            categories: Optional[List[str]] = None
    ) -> None:
        """Initializes a new BlogPost instance with all relevant attributes.

        Args:
            title (str): The title of the blog post.
            html_content (str): The HTML content of the blog post.
            slug (str): A unique, URL-friendly identifier for the blog post.
            drive_file_id (str): The unique file ID from Google Drive.
            created_at (datetime): Timestamp when the post was created.
            updated_at (Optional[datetime]): Timestamp of the last update.
            read_time_minutes (int): Estimated time in minutes to read the blog post.
            meta_description (str): A short description of the post for SEO.
            keywords (Optional[List[str]]): A list of keyword strings for SEO metadata.
            categories (Optional[List[str]]): List of categories. Defaults to an empty list.
        """
        self.title = title
        self.html_content = html_content
        self.slug = slug
        self.drive_file_id = drive_file_id
        self.created_at = created_at  # Sourced from the database
        self.updated_at = updated_at  # Sourced from the database
        self.read_time_minutes = read_time_minutes  # Sourced from the database
        self.meta_description = meta_description
        self.keywords = keywords or []
        self.categories = categories or []

app/models/tables/blog_post.py

from sqlalchemy import Column, Integer, JSON, MetaData, String, Table, Text, TIMESTAMP, text, FetchedValue


metadata = MetaData()


blog_posts = Table(
    'blog_posts', metadata,
    Column('id', Integer, primary_key=True, autoincrement=True),
    Column('title', String(255), nullable=False, unique=True),
    Column('slug', String(255), nullable=False, unique=True),
    Column('html_content', Text, nullable=False),
    Column('drive_file_id', String(255), nullable=False, unique=True),
    Column('meta_description', String(255), nullable=False),  # SEO description
    Column('keywords', JSON, nullable=False, default=[]),  # SEO keywords
    Column('read_time_minutes', Integer, nullable=False),  # Estimated reading time
    Column('categories', JSON, nullable=False, default=[]),  # Optional category tags
    Column(
        'created_at',
        TIMESTAMP(timezone=True),
        nullable=False,
        server_default=text('CURRENT_TIMESTAMP')
    ),
    Column(
        'updated_at',
        TIMESTAMP(timezone=True),
        nullable=False,
        server_default=text('CURRENT_TIMESTAMP'),
        server_onupdate=FetchedValue()
    ),
)

app/orm.py

from sqlalchemy.orm import registry

from app.domain.blog_post import BlogPost
from app.domain.log import Log
from app.models.tables.blog_post import blog_posts
from app.models.tables.log import logs

mapper_registry = registry()


def start_mappers(app):
    if not app.config.get('MAPPERS_INITIALIZED', False):
        mappings = {
            Log: logs,
            BlogPost: blog_posts,
        }

        for model, schema in mappings.items():
            mapper_registry.map_imperatively(model, schema)

        app.config['MAPPERS_INITIALIZED'] = True

The start_mappers function glues the domain models to their corresponding SQLAlchemy tables inside the application factory (app/__init__.py).
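To see what this buys you end to end, the pattern can be reduced to a self-contained sketch. The Note class, table, and in-memory SQLite engine below are illustrative stand-ins, not the application's real code:

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine
from sqlalchemy.orm import Session, registry


class Note:
    """A plain domain class with no knowledge of the database."""

    def __init__(self, title: str) -> None:
        self.title = title


metadata = MetaData()

notes = Table(
    'notes', metadata,
    Column('id', Integer, primary_key=True, autoincrement=True),
    Column('title', String(255), nullable=False),
)

mapper_registry = registry()


def start_mappers() -> None:
    # Imperative (classical) mapping: the ORM learns about Note here.
    # Note itself never imports anything from SQLAlchemy.
    mapper_registry.map_imperatively(Note, notes)


start_mappers()
engine = create_engine('sqlite://')  # in-memory database for the demo
metadata.create_all(engine)

with Session(engine) as session:
    session.add(Note(title='hello'))
    session.commit()
    loaded = session.query(Note).one()
    print(loaded.title)  # hello
```

If start_mappers() is never called, Note remains an ordinary Python class; calling it once is what makes the round trip through the database possible.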

The Alembic Migration Detection Problem

In a typical Alembic workflow, I would start by initializing the migrations directory with the flask db init command:

(.venv) PS C:\Users\Dami_An\Repos\damianpiatkowski.com> flask db init
2025-10-03 08:55:28,456 - INFO - Starting application v1.2.3 in development environment.
Creating directory 'C:\\Users\\Dami_An\\Repos\\damianpiatkowski.com\\migrations' ...  done
Creating directory 'C:\\Users\\Dami_An\\Repos\\damianpiatkowski.com\\migrations\\versions' ...  done
Generating C:\Users\Dami_An\Repos\damianpiatkowski.com\migrations\alembic.ini ...  done
Generating C:\Users\Dami_An\Repos\damianpiatkowski.com\migrations\env.py ...  done
Generating C:\Users\Dami_An\Repos\damianpiatkowski.com\migrations\README ...  done
Generating C:\Users\Dami_An\Repos\damianpiatkowski.com\migrations\script.py.mako ...  done
Please edit configuration/connection/logging settings in 'C:\\Users\\Dami_An\\Repos\\damianpiatkowski.com\\migrations\\alembic.ini' before proceeding.

Everything looks fine so far. The problem starts at the next step — running flask db migrate. On the first run, this command generates an initial migration script inside the migrations/versions/ directory, capturing your database schema as it currently exists.

(.venv) PS C:\Users\Dami_An\Repos\damianpiatkowski.com> flask db migrate
2025-10-03 09:11:27,398 - INFO - Starting application v1.2.3 in development environment.
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.env] No changes in schema detected.

Even though the tables are correctly defined and ready for detection, Alembic fails to pick them up because of the dependency inversion setup: the schema lives in our own MetaData object rather than in the metadata Flask-Migrate hands to Alembic, so the migration autogeneration process has nothing to compare against.
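A minimal illustration of the mismatch. The names framework_metadata and our_metadata are illustrative: the first stands in for the metadata object Flask-Migrate points autogenerate at by default, the second for the MetaData object our tables are actually registered on:

```python
from sqlalchemy import Column, Integer, MetaData, Table

framework_metadata = MetaData()  # what autogenerate is told to diff: empty
our_metadata = MetaData()        # where the tables actually live

Table('blog_posts', our_metadata, Column('id', Integer, primary_key=True))

print(sorted(framework_metadata.tables))  # []
print(sorted(our_metadata.tables))        # ['blog_posts']
```

Autogenerate diffs the live database against whatever metadata it is given; an empty MetaData therefore yields "No changes in schema detected" no matter what the table files contain.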

Adjusting env.py for Dependency Inversion

The default migrations/env.py generated by flask db init looks like this:

import logging
from logging.config import fileConfig

from flask import current_app

from alembic import context

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)
logger = logging.getLogger('alembic.env')


def get_engine():
    try:
        # this works with Flask-SQLAlchemy<3 and Alchemical
        return current_app.extensions['migrate'].db.get_engine()
    except (TypeError, AttributeError):
        # this works with Flask-SQLAlchemy>=3
        return current_app.extensions['migrate'].db.engine


def get_engine_url():
    try:
        return get_engine().url.render_as_string(hide_password=False).replace(
            '%', '%%')
    except AttributeError:
        return str(get_engine().url).replace('%', '%%')


# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
config.set_main_option('sqlalchemy.url', get_engine_url())
target_db = current_app.extensions['migrate'].db

# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.


def get_metadata():
    if hasattr(target_db, 'metadatas'):
        return target_db.metadatas[None]
    return target_db.metadata


def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well.  By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.

    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url, target_metadata=get_metadata(), literal_binds=True
    )

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.

    """

    # this callback is used to prevent an auto-migration from being generated
    # when there are no changes to the schema
    # reference: http://alembic.zzzcomputing.com/en/latest/cookbook.html
    def process_revision_directives(context, revision, directives):
        if getattr(config.cmd_opts, 'autogenerate', False):
            script = directives[0]
            if script.upgrade_ops.is_empty():
                directives[:] = []
                logger.info('No changes in schema detected.')

    conf_args = current_app.extensions['migrate'].configure_args
    if conf_args.get("process_revision_directives") is None:
        conf_args["process_revision_directives"] = process_revision_directives

    connectable = get_engine()

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=get_metadata(),
            **conf_args
        )

        with context.begin_transaction():
            context.run_migrations()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()

Fixing Alembic migrations for a dependency-inverted Flask app requires a few key adjustments:

  • Load configuration dynamically. Create a temporary Flask app using your factory function (typically create_app) so Alembic can access the correct SQLALCHEMY_DATABASE_URI even outside an app context. This makes migrations context-independent. Previously, Alembic relied on from flask import current_app, which only works when a Flask application context is already active.
  • Handle environment-based configuration explicitly. Define a CONFIG_MAPPING dictionary to ensure Alembic loads the same configuration class your Flask app uses for the current environment (development, testing, or production).
  • Combine table definitions manually. Instead of relying on Base.metadata, we combine our table definitions into a single MetaData object using tometadata() (renamed to_metadata() in SQLAlchemy 1.4), and initialize ORM mappings explicitly with start_mappers(current_app). This makes schema discovery explicit: Alembic sees every table and its mapping even though the domain models remain decoupled from the database schema, fully in keeping with dependency inversion.
  • Clean up empty migrations. Alembic’s autogenerate sometimes produces empty migration files when no schema changes are detected. A custom process_revision_directives callback intercepts these cases and skips generating a useless revision. It isn’t required for the DIP setup to work, but it’s a convenient safeguard that keeps your migration history clean and easier to maintain.
  • Set the database URL explicitly. Alembic’s database URL is set explicitly via config.set_main_option('sqlalchemy.url', get_alembic_url()), decoupling it from Flask-Migrate internals and ensuring consistent configuration loading.
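The metadata-combining step at the heart of the fix can be exercised in isolation. The tables here are illustrative, and the sketch uses to_metadata(), the SQLAlchemy 1.4+ name for tometadata():

```python
from sqlalchemy import Column, Integer, MetaData, String, Table

# Each table file defines its own MetaData, as in the setup above.
posts_metadata = MetaData()
logs_metadata = MetaData()

blog_posts = Table(
    'blog_posts', posts_metadata,
    Column('id', Integer, primary_key=True),
    Column('title', String(255), nullable=False),
)
logs = Table(
    'logs', logs_metadata,
    Column('id', Integer, primary_key=True),
)

# Copy every table into one MetaData object that Alembic can diff as a whole.
combined = MetaData()
for table in (blog_posts, logs):
    table.to_metadata(combined)

print(sorted(combined.tables))  # ['blog_posts', 'logs']
```

Handing this combined object to context.configure(target_metadata=...) is what gives autogenerate a complete picture of the schema.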

My final env.py:

import logging
import os
from logging.config import fileConfig

from alembic import context as alembic_context
from flask import current_app
from sqlalchemy import MetaData

from app import create_app
from app.config import DevelopmentConfig, ProductionConfig, TestingConfig
from app.models.tables.blog_post import blog_posts
from app.models.tables.log import logs
from app.orm import start_mappers

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = alembic_context.config

# Interpret the config file for Python logging.
fileConfig(config.config_file_name)
logger = logging.getLogger('alembic.env')

# Define a dictionary to map FLASK_ENV strings to your config classes
CONFIG_MAPPING = {
    'development': DevelopmentConfig,
    'production': ProductionConfig,
    'testing': TestingConfig,
    'default': DevelopmentConfig,  # Fallback if FLASK_ENV is not set or recognized
}


def get_alembic_url():
    """Dynamically loads Flask app config to get SQLAlchemy URL for Alembic."""
    try:
        # Try to get the URL from current_app if already within an app context.
        # This is what Flask-Migrate tries to set up for online migrations.
        if current_app:
            return current_app.config.get('SQLALCHEMY_DATABASE_URI')
    except RuntimeError:
        # If not in an app context (e.g., initial env.py load or offline mode),
        # create a temporary minimal app instance to get the config.
        pass  # We will handle this outside the try-except

    # Get FLASK_ENV from alembic.ini (if set) or environment variables
    flask_env = config.get_main_option('FLASK_ENV') or os.environ.get('FLASK_ENV', 'default')

    ConfigClass = CONFIG_MAPPING.get(flask_env)
    if not ConfigClass:
        raise ValueError(f"Unknown FLASK_ENV '{flask_env}'. Add it to CONFIG_MAPPING in env.py or set FLASK_ENV.")

    # Create a temporary Flask app instance to load the configuration
    temp_app = create_app(ConfigClass)
    with temp_app.app_context():
        return temp_app.config.get('SQLALCHEMY_DATABASE_URI')


def process_revision_directives(_context, _revision, directives):
    if getattr(config.cmd_opts, 'autogenerate', False):
        script = directives[0]
        if script.upgrade_ops.is_empty():
            directives[:] = []
            logger.info('No changes in schema detected, migration skipped.')


def get_engine():
    try:
        return current_app.extensions['migrate'].db.get_engine()
    except (TypeError, AttributeError):
        return current_app.extensions['migrate'].db.engine


def get_engine_url():
    try:
        return get_engine().url.render_as_string(hide_password=False).replace(
            '%', '%%')
    except AttributeError:
        return str(get_engine().url).replace('%', '%%')


def get_target_metadata():
    # Copy every table into one MetaData object so Alembic can diff them all.
    # Note: tometadata() was renamed to to_metadata() in SQLAlchemy 1.4;
    # the old name still works as a deprecated alias.
    combined_metadata = MetaData()
    for table in [blog_posts, logs]:
        table.tometadata(combined_metadata)
    return combined_metadata


target_metadata = get_target_metadata()

print("Tables in target_metadata (from env.py):", target_metadata.tables.keys())
logger.info("Tables in target_metadata (from env.py): %s", target_metadata.tables.keys())

# Set the SQLAlchemy URL for Alembic using the safe function
alembic_config_url = get_alembic_url()
if not alembic_config_url:
    raise ValueError("SQLALCHEMY_DATABASE_URI not found in Flask app configuration.")
config.set_main_option('sqlalchemy.url', alembic_config_url)

# Initialize conf_args, using current_app.extensions['migrate'] as it should be
# available when run_migrations_online is called.
conf_args = getattr(current_app.extensions['migrate'], 'configure_args', {})

# Set the process_revision_directives if not already set
if conf_args.get("process_revision_directives") is None:
    conf_args["process_revision_directives"] = process_revision_directives


def run_migrations_offline():
    """Runs migrations in 'offline' mode."""
    url = config.get_main_option("sqlalchemy.url")
    alembic_context.configure(
        url=url, target_metadata=target_metadata, literal_binds=True
    )

    with alembic_context.begin_transaction():
        alembic_context.run_migrations()


def run_migrations_online():
    """Runs migrations in 'online' mode."""
    start_mappers(current_app)

    connectable = get_engine()
    with connectable.connect() as connection:
        alembic_context.configure(
            connection=connection,
            target_metadata=target_metadata,
            **conf_args
        )

        with alembic_context.begin_transaction():
            alembic_context.run_migrations()


if alembic_context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()

Now it works as expected:

(.venv) PS C:\Users\Dami_An\Repos\damianpiatkowski.com> flask db migrate
2025-10-06 10:40:11,045 - INFO - Starting application v1.2.3 in development environment.
Tables in target_metadata (from env.py): dict_keys(['blog_posts', 'logs'])
INFO  [alembic.env] Tables in target_metadata (from env.py): dict_keys(['blog_posts', 'logs'])
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.autogenerate.compare] Detected added table 'blog_posts'
INFO  [alembic.autogenerate.compare] Detected added table 'logs'
Generating C:\Users\Dami_An\Repos\damianpiatkowski.com\migrations\versions\2ec651104117_.py ...  done