ADR-012 — Zero-Downtime Database Migrations
| ADR | ADR-012 |
|---|---|
| Title | Zero-Downtime Database Migrations |
| State | Accepted |
| Author | klenkes74 |
| Decision Body | klenkes74 |
| Valid from | 2026-03-26 |
| Expires | ./. |
1. Context
`pandurart` is deployed using rolling updates and canary deployments (see ↑ADR-002). During such deployments, multiple versions of the application run simultaneously against the same PostgreSQL database (see ↑ADR-004).
A naive schema change — for example directly renaming or dropping a column — would break the old application version that is still running during the rollout window. This applies to any schema change that is not fully backward-compatible.
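To illustrate the difference (the `customer` table and its columns are hypothetical examples, not part of the actual schema): a direct rename breaks every old instance immediately, while adding a new column alongside the old one does not.

```sql
-- Breaking: old instances still reference "name" and fail mid-rollout.
ALTER TABLE customer RENAME COLUMN name TO display_name;

-- Backward-compatible alternative: add the new column alongside the old one.
-- Old instances ignore it; new instances can start using it.
ALTER TABLE customer ADD COLUMN display_name VARCHAR(255);
```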
In addition, the application itself must not perform schema changes: the JPA configuration uses `ddl-auto: validate`, so Hibernate only validates the schema at startup and never modifies it. Schema migrations must therefore be executed by a dedicated, controlled process.
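The relevant Spring Boot property looks roughly like this (a sketch; the exact location within the project's configuration files may differ):

```yaml
spring:
  jpa:
    hibernate:
      ddl-auto: validate   # Hibernate checks the schema at startup but never alters it
```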
2. Decision Drivers
- The application must be deployable without downtime.
- Old and new application versions must be able to run in parallel against the same database schema.
- Schema changes must be reproducible, version-controlled, and auditable.
- Schema migrations must be decoupled from the application lifecycle.
- The migration tooling must integrate with existing CI/CD pipelines.
3. Decision
All database schema changes in `pandurart` follow the Parallel Change pattern ↑PARALLEL-CHANGE (also known as Expand and Contract ↑EXPAND-CONTRACT, as described in Evolutionary Database Design ↑EVODB).
Every breaking schema change is decomposed into three separate, individually deployable steps:
| Step | Phase | Description |
|---|---|---|
| 1 | Expand | New columns, tables, or indexes are added alongside existing ones. New columns must be nullable or carry a default value so the old application version continues to function without modification. |
| 2 | Migrate | The new application version is deployed. It reads from and writes to both the old and the new structures simultaneously. Existing rows may be backfilled by a dedicated Liquibase changeset. |
| 3 | Contract | Once all old instances have been replaced, the old columns or tables are removed in a subsequent, separate deployment. |
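The three phases can be sketched as Liquibase formatted-SQL changesets (table, column, and changeset ids are hypothetical; in practice each phase ships in a separate release, never in one changelog run):

```sql
--liquibase formatted sql

--changeset klenkes74:expand-add-display-name
-- Phase 1 (Expand): new nullable column; old instances keep working unmodified.
ALTER TABLE customer ADD COLUMN display_name VARCHAR(255);

--changeset klenkes74:migrate-backfill-display-name
-- Phase 2 (Migrate): backfill existing rows while the new version dual-writes.
UPDATE customer SET display_name = name WHERE display_name IS NULL;

--changeset klenkes74:contract-drop-name
-- Phase 3 (Contract): shipped only after all old instances have been replaced.
ALTER TABLE customer DROP COLUMN name;
```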
Schema migrations are executed exclusively by the `db-updater` project, a dedicated CLI tool that runs Liquibase ↑LIQUIBASE changesets.
`db-updater` is the only component authorised to modify the database schema.
It is executed as a Kubernetes Job before the new application version is rolled out.
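A sketch of such a Kubernetes Job (image, registry, and secret names are hypothetical; the actual manifest is maintained in the deployment pipeline):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-updater
spec:
  backoffLimit: 0            # fail fast: the application rollout must not proceed on error
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: db-updater
          image: registry.example.com/pandurart/db-updater:1.0.0   # hypothetical
          envFrom:
            - secretRef:
                name: pandurart-db-credentials                     # hypothetical
```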
The detailed rules, naming conventions, and worked examples are documented in Concept: Zero-Downtime Database Migrations.
4. Consequences
Positive:
- Deployments are fully non-disruptive — no maintenance window required.
- Schema history is explicit, version-controlled, and auditable via Liquibase’s `DATABASECHANGELOG` table.
- Old application versions are never broken mid-rollout.
- The separation between `db-updater` and the application prevents accidental schema drift.
Negative / Trade-offs:
- Every breaking schema change requires at least two separate releases (Expand in one release, Contract in a later one), increasing the number of deployment steps.
- Developers must be aware of the pattern and consciously plan the three phases upfront.
- The transition period (Phase 2) temporarily doubles the write load on the affected columns.
- The `db-updater` job must succeed before the new application version can be deployed, adding a hard dependency to the pipeline.