Of all the microservices rules, “each service owns its own database” is the one teams skip most often. It feels like unnecessary complexity — until the first time a shared schema blocks three teams’ deploys for a week. This article is about why the rule exists, how teams violate it, and the patterns that make independence practical.
The rule
Each service has its own database. No other service reads or writes it directly. If service B needs data from service A, it asks A over an API or subscribes to A’s events.
Phrased negatively: services that share a database are a distributed monolith with extra latency.
Why teams violate it
Honest reasons:
- Legacy — the services share a DB because they used to be one monolith
- Convenience — “I need this data once, a SELECT is the fastest path”
- Transactional integrity — cross-service transactions need a single DB
- Performance — network hop to another service is slower than a JOIN
All of these feel reasonable. All of them eventually cost more than the convenience saved.
What actually breaks
Coordinated schema changes. You add a column. Two other services also use that table; their code doesn’t know about the new column, or worse, their writes start failing because it is NOT NULL with no default. Every schema change becomes a cross-team coordination exercise.
Hidden coupling. You refactor an internal table. Another service’s code breaks. You didn’t know they were reading it.
Performance interference. Service A’s batch job runs at night, locks rows. Service B’s real-time workload grinds to a halt. No one’s fault; everyone’s problem.
Scaling ceilings. A read replica added for service B also carries service A’s tables and write volume. You can’t size or scale their storage independently; the cluster has to be provisioned for the sum of both workloads.
Deploys entangled. Service A deploys a migration; service B’s queries against the old schema start failing. Rolling back A now means coordinating B’s rollout too. Deploys that should be independent move in lockstep.
Each of these has a fix when the DBs are separate. None have a clean fix when they’re shared.
What “per service” actually means
Not literally “per Docker container”. The right grain is per bounded context — the team-owned unit.
- Too fine: every tiny microservice has its own Postgres instance. Operational overhead explodes.
- Right: each bounded context (Orders, Payments, Inventory) owns its schema; potentially colocated in one DB cluster using separate schemas with strict access control.
- Too coarse: one DB for everything. The shared-DB anti-pattern.
Bounded-context-per-schema with tightly controlled access (DB users with grants only on their schema) gets most of the isolation benefit without proliferating DB instances.
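To make that concrete, here is a minimal sketch of the schema-plus-grants setup in Postgres, driven from Python with psycopg2. The context names, role names, and connection string are assumptions, not prescriptions; the point is only that each service’s role can touch nothing but its own schema.

```python
# A minimal sketch of schema-per-bounded-context isolation in Postgres,
# driven from Python with psycopg2. Context names, role names, and the
# connection string are assumptions.
import psycopg2

ADMIN_DSN = "dbname=platform user=admin"  # hypothetical owner/admin connection

# One login role and one schema per bounded context; each role may only
# touch its own schema.
CONTEXTS = ["orders", "payments", "inventory"]

ISOLATION_SQL = """
CREATE ROLE {ctx}_svc LOGIN;
CREATE SCHEMA {ctx} AUTHORIZATION {ctx}_svc;
REVOKE ALL ON SCHEMA {ctx} FROM PUBLIC;           -- nobody else gets in
GRANT USAGE, CREATE ON SCHEMA {ctx} TO {ctx}_svc; -- the owner gets what it needs
"""

def apply_isolation(conn) -> None:
    with conn.cursor() as cur:
        for ctx in CONTEXTS:
            # Formatting is safe here because the names are fixed constants above.
            cur.execute(ISOLATION_SQL.format(ctx=ctx))
    conn.commit()

if __name__ == "__main__":
    with psycopg2.connect(ADMIN_DSN) as conn:
        apply_isolation(conn)
```

With this in place, a stray SELECT from another service’s role fails at the database rather than slipping through code review.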
Getting data across services
Three patterns, in preference order:
1. API call. Service B calls service A when it needs A’s data. Simple and synchronous, but temporally coupled: if A is down or slow, so is B.
2. Event-driven replication. A publishes events (via an outbox). B subscribes and builds its own local read model. Eventually consistent but loosely coupled; sketched below.
3. Shared cache / read-only materialized view. A maintains a denormalized view that B reads. Rare but sometimes the right answer for specific high-read scenarios.
Most real systems use a mix. Hot paths use local replicas (event-driven); ad-hoc queries use API calls; a few specific high-volume reads may use materialized views.
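To make pattern 2 concrete, here is a minimal sketch of the consuming side: a hypothetical Shipping service applying order events from the Orders service to its own local read model. The event shape and table names are assumptions, and SQLite stands in for the service’s real database; the part worth copying is the idempotency check.

```python
# Consumer-side sketch: build a local read model of orders from events.
import sqlite3

def init_read_model(db: sqlite3.Connection) -> None:
    with db:
        db.execute("""CREATE TABLE IF NOT EXISTS orders_replica (
                          order_id    TEXT PRIMARY KEY,
                          customer_id TEXT NOT NULL,
                          status      TEXT NOT NULL)""")
        # Remembering processed event ids makes the consumer idempotent:
        # redelivered events are skipped instead of applied twice.
        db.execute("""CREATE TABLE IF NOT EXISTS processed_events (
                          event_id TEXT PRIMARY KEY)""")

def handle_order_event(db: sqlite3.Connection, event: dict) -> None:
    """Apply one order event from the Orders service to the local replica."""
    with db:  # one local transaction per event
        seen = db.execute("SELECT 1 FROM processed_events WHERE event_id = ?",
                          (event["event_id"],)).fetchone()
        if seen:
            return  # duplicate delivery; already applied
        db.execute("INSERT INTO processed_events (event_id) VALUES (?)",
                   (event["event_id"],))
        db.execute("""INSERT INTO orders_replica (order_id, customer_id, status)
                      VALUES (?, ?, ?)
                      ON CONFLICT(order_id) DO UPDATE SET status = excluded.status""",
                   (event["order_id"], event["customer_id"], event["status"]))

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    init_read_model(db)
    evt = {"event_id": "e-1", "order_id": "o-42",
           "customer_id": "c-7", "status": "PLACED"}
    handle_order_event(db, evt)
    handle_order_event(db, evt)  # redelivery is a no-op
    print(db.execute("SELECT * FROM orders_replica").fetchall())
```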
Reference data
What about countries, currencies, categories — slow-changing shared data?
Options:
- Copy everywhere. Each service keeps its own list, refreshed nightly from a “reference data” service. Works well; sketched below.
- Shared read-only view. One service owns it; others read its API. Simple but creates a dependency.
- Config-as-code. Small reference data baked into the service, updated via deploy. For truly static data (country codes), often easiest.
Avoid: multiple services writing to the same “countries” table.
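For the copy-everywhere option, the nightly refresh job can be small. A minimal sketch, assuming a hypothetical reference-data endpoint and using SQLite as a stand-in for the service’s own database:

```python
# Nightly job: pull the country list from the reference-data service and
# replace the local copy. URL and response shape are assumptions.
import json
import sqlite3
import urllib.request

REFERENCE_URL = "https://reference-data.internal/countries"  # hypothetical

def refresh_countries(db: sqlite3.Connection) -> None:
    with urllib.request.urlopen(REFERENCE_URL) as resp:
        countries = json.load(resp)  # e.g. [{"code": "DE", "name": "Germany"}, ...]
    with db:  # swap the whole list in one local transaction
        db.execute("""CREATE TABLE IF NOT EXISTS countries (
                          code TEXT PRIMARY KEY,
                          name TEXT NOT NULL)""")
        db.execute("DELETE FROM countries")
        db.executemany("INSERT INTO countries (code, name) VALUES (:code, :name)",
                       countries)
```

Run it from whatever scheduler the service already has; because the delete and the inserts commit together, readers never see a half-refreshed table.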
Transactions across services
You cannot have them in any useful form. Give up. The patterns that replace them:
- Saga — multi-step workflow with compensations
- Eventual consistency with idempotent consumers — each service does its part independently
- Outbox + events — DB update + event publication in one local transaction; downstream reacts
These are more work than a single BEGIN ... COMMIT against a shared database, but they’re the price of independent scaling and deploys; a sketch of the outbox pattern follows.
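Here is a minimal sketch of the producer side of the outbox pattern, with SQLite standing in for the Orders service’s database. Table and event shapes are assumptions; the point is that the order row and the outbox row commit (or roll back) together, and a separate relay process, not shown here, publishes rows whose published_at is still NULL.

```python
# Outbox sketch: business write and event write share one local transaction.
import json
import sqlite3
import uuid

def init(db: sqlite3.Connection) -> None:
    with db:
        db.execute("""CREATE TABLE IF NOT EXISTS orders (
                          order_id    TEXT PRIMARY KEY,
                          customer_id TEXT NOT NULL,
                          status      TEXT NOT NULL)""")
        db.execute("""CREATE TABLE IF NOT EXISTS outbox (
                          event_id     TEXT PRIMARY KEY,
                          topic        TEXT NOT NULL,
                          payload      TEXT NOT NULL,
                          published_at TEXT)""")  # NULL until the relay sends it

def place_order(db: sqlite3.Connection, order_id: str, customer_id: str) -> None:
    event = {"event_id": str(uuid.uuid4()), "order_id": order_id,
             "customer_id": customer_id, "status": "PLACED"}
    with db:  # both rows commit or roll back together
        db.execute("INSERT INTO orders (order_id, customer_id, status) "
                   "VALUES (?, ?, 'PLACED')", (order_id, customer_id))
        db.execute("INSERT INTO outbox (event_id, topic, payload) VALUES (?, ?, ?)",
                   (event["event_id"], "orders.placed", json.dumps(event)))

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    init(db)
    place_order(db, "o-42", "c-7")
    # This is the query a relay would poll before publishing to the broker.
    print(db.execute("SELECT topic, payload FROM outbox "
                     "WHERE published_at IS NULL").fetchall())
```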
Migration from shared to per-service
A shared database, split over time:
- Identify the bounded context boundaries. Run event storming, draw the seams.
- Assign each table an owning service, reached only through its API. “The Orders service owns the orders table; nobody else reads or writes it directly.”
- Enforce via GRANTs. Database-level access control — other services literally cannot SELECT that table.
- Route existing queries through the owning service’s API (sketched below). Remove direct references one at a time.
- Split the schema. Move tables into separate schemas (still in the same DB).
- Eventually split the instances. Move the owning service’s schema to its own DB cluster when scale demands it.
Full migration might take a year for a non-trivial shared DB. Worth it.
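To illustrate the “route existing queries through the owning service’s API” step: in the consuming service the change is usually small. The endpoint and response shape here are hypothetical.

```python
# Before: reaching into the shared DB from another service, e.g.
#   cur.execute("SELECT status FROM orders WHERE order_id = %s", (order_id,))
#
# After: asking the owning service over its API.
import json
import urllib.request

ORDERS_API = "https://orders.internal/api/orders"  # hypothetical owning-service API

def fetch_order_status(order_id: str) -> str:
    with urllib.request.urlopen(f"{ORDERS_API}/{order_id}") as resp:
        return json.load(resp)["status"]
```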
When “per service” is overkill
- Single team, small system. Independence doesn’t buy anything when there’s no team boundary to protect.
- Truly tightly-coupled domain. Some domains have such strong invariants that splitting them creates more pain than sharing. Accept it and don’t split.
- Short-lived internal tools. Build with whatever is easiest; revisit if they stick around.
The test: “Does splitting unblock teams or solve a scaling constraint?” If no, don’t split.
Performance note
Fetching data via an API call takes 10-50ms; a local JOIN takes under 1ms. That gap is real.
Mitigations:
- Local read models. Keep the data you read often as a subscribed-to projection in your own DB.
- Cache aggressively. Most cross-service data is cacheable with short TTLs.
- Batch calls. 20 individual lookups = 20 API calls. One batch lookup = 1 API call (sketched below).
After these mitigations, the latency penalty of DB-per-service is usually 5-15ms per request. Acceptable for almost any backend.
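As a sketch of the batching mitigation, with a hypothetical products API: the second function makes one round trip where the first makes twenty.

```python
# Batching sketch: many ids, one HTTP round trip. Endpoint and payload
# shapes are hypothetical.
import json
import urllib.request

PRODUCTS_API = "https://products.internal/api/products"  # hypothetical

def fetch_products_one_by_one(ids: list[str]) -> dict[str, dict]:
    # 20 ids -> 20 round trips -> 20x the per-call latency.
    out = {}
    for pid in ids:
        with urllib.request.urlopen(f"{PRODUCTS_API}/{pid}") as resp:
            out[pid] = json.load(resp)
    return out

def fetch_products_batched(ids: list[str]) -> dict[str, dict]:
    # One round trip: POST the whole id list, get all products back.
    req = urllib.request.Request(
        f"{PRODUCTS_API}/batch",
        data=json.dumps({"ids": ids}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return {p["id"]: p for p in json.load(resp)["products"]}
```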
Closing note
Database-per-service is the rule most teams either skip entirely or follow ritualistically. The middle ground — separate schemas with strict access, event-based replication for cross-service needs, APIs for everything else — captures most of the value with moderate cost. Get this right and independent deploys, scaling, and team ownership all work as advertised. Get it wrong and microservices deliver the complexity without the benefits.