Overview
Monolith-to-microservices is the most common enterprise modernization pattern — and the one with the highest failure rate. Teams that attempt it without deep architectural understanding of the monolith tend to fail in predictable ways: they draw service boundaries in the wrong places, they underestimate shared data dependencies, and they end up with a “distributed monolith” that’s worse than what they started with.

This variant playbook provides pattern-specific guidance for Phase 5 (Iterative Execution) of the Code Modernization workflow. It covers the unique challenges of monolith decomposition: finding natural service boundaries, introducing the Strangler Fig façade, decomposing shared databases, choosing communication patterns, and executing the Transform → Coexist → Eliminate cycle for each extracted service.

CoreStory’s role is critical here because the monolith’s internal structure — the coupling patterns, data access hotspots, and hidden dependencies — is precisely what determines where services can and cannot be separated. CoreStory operates as an Oracle for understanding what the monolith actually does (as opposed to what the architecture diagram says it does), and as a Navigator for guiding each service extraction step by step.

This playbook executes the work packages defined in Decomposition & Sequencing. If you haven’t yet broken the modernization plan into sequenced work packages, start there. This playbook picks up at the point where you have a specific component to extract and need pattern-specific guidance for the monolith-to-microservices migration.

Who this is for: Engineers and architects executing a monolith-to-microservices migration. This playbook assumes you’ve already completed the assessment (Phase 1), business rules inventory (Phase 2), target architecture decision (Phase 3), and decomposition/sequencing (Phase 4). If you haven’t, start with the Code Modernization hub.
What you’ll get: A concrete methodology for extracting services from a monolith — from identifying service boundaries through database decomposition to the façade-based execution pattern that makes incremental extraction safe.
When to Use This Playbook
- Your Phase 3 decision selected Re-architect or Refactor with a microservices target architecture
- You’re executing work packages from Phase 4 that involve extracting services from a monolith
- You need to identify service boundaries within a monolith that has unclear or undocumented module boundaries
- You need to decompose a shared database as part of service extraction
- You’re introducing an API gateway or façade layer for the Strangler Fig pattern
When to Skip This Playbook
- The target architecture is not microservices — if you’re modernizing within a monolith (refactoring without extraction), use Spec-Driven Development directly
- The system is already service-oriented and you’re re-platforming (e.g., moving from on-prem to cloud without architectural changes)
- You haven’t completed Phases 1–4 — go back to the Code Modernization hub
- The monolith is small enough that extraction doesn’t make sense — not every monolith needs to become microservices
Prerequisites
- A completed Target Architecture decision (Phase 3) specifying microservices as the target
- A completed Decomposition & Sequencing plan (Phase 4) with ordered work packages
- A completed Business Rules Inventory (Phase 2) for behavioral verification during extraction
- A CoreStory account with the monolith codebase ingested and ingestion complete
- An AI coding agent with CoreStory MCP configured (see Supercharging AI Agents for setup)
- (Recommended) Access to the monolith’s database schema — either via CoreStory ingestion or direct access
- (Recommended) Infrastructure for running extracted services — container orchestration, API gateway, service mesh, or at minimum a reverse proxy for the façade layer
- (Recommended) Observability tooling — distributed tracing and centralized logging are essential for debugging issues during the Coexist phase
How It Works
CoreStory MCP Tools Used
| Tool | Step(s) | Purpose |
|---|---|---|
| list_projects | 1 | Confirm the target project |
| create_conversation | 1 | Start a dedicated extraction thread per service |
| send_message | 2, 3, 4, 5, 6 | Query CoreStory for boundary analysis, dependency mapping, and extraction guidance |
| list_conversations | 1 | Find prior phase conversations |
| get_conversation | 1 | Retrieve prior findings for cross-reference |
| get_project_techspec | 1, 2 | Retrieve the Tech Spec for understanding monolith structure |
| get_project_prd | 2 | Retrieve the PRD for business domain context |
| rename_conversation | 6 | Mark completed threads with a “RESOLVED” prefix |
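Under the hood, these tools are invoked via MCP’s JSON-RPC tools/call method. A sketch of what a send_message call might look like on the wire — the argument names here are illustrative assumptions, not CoreStory’s documented schema:

```python
import json

# Hypothetical tools/call request an MCP client could send; the
# argument names are illustrative assumptions, not CoreStory's schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "send_message",
        "arguments": {
            "conversation_id": "extract-orders-service",  # assumed field
            "message": "Which modules write to the orders tables?",
        },
    },
}
print(json.dumps(request, indent=2))
```

In practice your coding agent constructs these calls for you; the shape is shown only to demystify what the tool table maps onto.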
The Monolith-to-Microservices Workflow
Note: The steps below are internal to this playbook. They are sub-steps of Phase 5 in the six-phase modernization framework, not a separate numbering system.
This playbook follows a six-step pattern for each service extraction:
- Context Loading — Load the work package definition, prior phase findings, and establish the extraction scope.
- Service Boundary Identification — Use CoreStory to validate and refine the service boundary defined in Phase 4. Identify exactly what code, data, and logic belongs to the new service.
- Database Decomposition Planning — Map shared data dependencies and plan the data separation strategy for this service extraction.
- Façade & Communication Design — Design the Strangler Fig façade layer and the communication patterns between the extracted service and the remaining monolith.
- Service Extraction Execution — Execute the Transform → Coexist → Eliminate cycle using Spec-Driven Development for the delta spec.
- Verification & Cutover — Verify behavioral equivalence and execute the cutover from monolith to extracted service.
The Strangler Fig Pattern for Service Extraction
The Strangler Fig pattern is the execution model for monolith-to-microservices migration. Named after the strangler fig plant that grows around a host tree until it can support itself, the pattern incrementally replaces monolith functionality with extracted services:
- Introduce the façade. Place an API gateway, reverse proxy, or routing layer in front of the monolith. Initially, it routes all traffic to the monolith unchanged.
- Extract one service. Build the new microservice alongside the monolith. The façade routes requests for that service’s domain to the new service instead of the monolith.
- Coexist. Both the monolith (handling everything else) and the extracted service (handling its domain) run simultaneously. The façade manages the routing.
- Verify and eliminate. Once the extracted service is verified, remove the corresponding code from the monolith. The façade continues routing.
- Repeat. Extract the next service. The monolith shrinks with each extraction until only the façade remains (or the façade becomes the API gateway for the service mesh).
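As a concrete illustration, the routing decision at the façade can be as small as a path check plus a stable hash of the user id for gradual rollout. A minimal sketch, assuming hypothetical upstream URLs and an /orders domain being extracted:

```python
import hashlib

# Hypothetical upstreams; in practice this logic usually lives in the
# gateway or reverse proxy configuration rather than application code.
MONOLITH_URL = "http://monolith.internal"
ORDERS_SERVICE_URL = "http://orders.internal"

def route(path: str, user_id: str, rollout_percent: int) -> str:
    """Send /orders traffic to the extracted service for a stable
    slice of users; everything else stays on the monolith."""
    if not path.startswith("/orders"):
        return MONOLITH_URL
    # Hashing the user id keeps each user pinned to the same backend
    # while the rollout percentage ramps from 0 to 100.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return ORDERS_SERVICE_URL if bucket < rollout_percent else MONOLITH_URL
```

Deterministic hashing matters more than it looks: random routing would bounce a user between backends mid-session, masking behavioral differences you need to detect during Coexist.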
HITL Gate
Before each service extraction: The engineering lead reviews the service boundary, data decomposition plan, and façade design before extraction begins. This is especially important for the first extraction — it establishes the pattern that subsequent extractions will follow.
Step-by-Step Walkthrough
Step 1: Context Loading
Start by loading the work package definition and all relevant context from prior phases. Load the work package:
Step 2: Service Boundary Identification
Phase 4 defined the work package boundary at a high level. Now refine it to the specific code, data, and logic that will move to the new service. Map the domain boundary:
Step 3: Database Decomposition Planning
Shared databases are the hardest part of monolith-to-microservices. This step plans how to separate data ownership for the service being extracted. Map data dependencies:
Step 4: Façade & Communication Design
Design the routing layer that enables the Strangler Fig pattern and the communication patterns between the extracted service and the monolith. Design the façade layer:
Step 5: Service Extraction Execution
Execute the Transform → Coexist → Eliminate cycle for this service. This is where the actual extraction happens. Transform: Build the new service. Use Spec-Driven Development to create the delta spec for the new service:
Step 6: Verification & Cutover
Final verification before the extraction is considered complete. Post-elimination verification:
Key Patterns & Strategies
Service Boundary Identification Patterns
Bounded Context alignment. The strongest service boundaries align with Domain-Driven Design bounded contexts — areas of the codebase where a consistent domain model and ubiquitous language apply. Ask CoreStory: “What are the natural domain boundaries in this monolith based on the business concepts each module operates on?”

Data ownership alignment. Services should own their data. If two modules both write to the same tables, they likely belong in the same service (or the shared data needs to be decomposed). Ask CoreStory: “Which modules in this monolith access the same database tables? Map data ownership.”

Team ownership alignment. In organizations where Conway’s Law applies (most of them), service boundaries should align with team boundaries. A service owned by two teams is a service that will diverge. This is a human decision, but CoreStory can inform it by showing which code modules are most cohesive.

Change frequency alignment. Code that changes together should be in the same service. Code that changes independently is a candidate for separation. Ask CoreStory: “Which modules in this codebase have historically changed together? Which change independently?”
Database Decomposition Strategies
Database-per-service is the target end state but rarely the first step. It requires resolving all cross-service joins, foreign keys, and shared writes. Best for services with clear data ownership and no cross-service joins.

Shared database with schema ownership is a practical intermediate step. Each service “owns” specific tables (enforced by convention or access control) but shares the same database server. This avoids the complexity of data replication while establishing ownership boundaries.

Change Data Capture (CDC) enables data synchronization between services without tight coupling. The owning service writes to its database; CDC propagates changes to consuming services’ databases as events. Tools like Debezium, AWS DMS, or database-native logical replication handle the mechanics.

The Outbox Pattern provides reliable event publishing alongside database writes. Instead of publishing events directly (which risks inconsistency if the publish fails), the service writes both the data change and the event in the same database transaction. A separate process reads the outbox and publishes the events. This guarantees at-least-once delivery.
Distributed Transaction Management
When a business operation spans multiple services, the single database transaction that guaranteed ACID properties in the monolith no longer exists. This is one of the most common failure modes in monolith-to-microservices migrations: teams extract a service, then discover that a critical workflow relied on a transaction that spanned what are now two separate databases.

The Saga pattern manages distributed transactions through a sequence of local transactions with compensating actions for rollback.

Choreography (event-driven): Each service publishes events that trigger the next step. No central coordinator. Service A completes its local transaction and publishes an event; Service B listens for that event, performs its local transaction, and publishes the next event.
- Use when: Few services involved (2–3), low coordination complexity, the team is comfortable with event-driven debugging
- Advantages: No single point of failure, loose coupling, each service is fully autonomous
- Drawbacks: Harder to reason about the overall flow, harder to debug when something fails mid-saga, can create implicit coupling through event schemas
- Watch for: Circular event chains, missing compensating transactions, inconsistent event ordering
Orchestration (coordinator-driven): A central orchestrator invokes each service’s local transaction in sequence and triggers the compensating actions when a step fails.
- Use when: Many services involved (4+), complex coordination logic, the team needs visibility into the overall workflow state
- Advantages: Easier to understand and debug, clear ownership of the coordination logic, centralized error handling
- Drawbacks: The orchestrator is a single point of failure (mitigate with redundancy), tighter coupling between orchestrator and services
- Tool support: Temporal, AWS Step Functions, Camunda, Azure Durable Functions
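To make the compensating-action idea concrete, here is a minimal orchestration-style sketch (not tied to any of the tools above): each step pairs a local transaction with its compensation, and a failure rolls back completed steps in reverse order.

```python
from typing import Callable, List, Tuple

# A saga step pairs a local transaction with its compensating action.
Step = Tuple[Callable[[], None], Callable[[], None]]

def run_saga(steps: List[Step]) -> bool:
    """Run each local transaction in order; on failure, run the
    compensations for already-completed steps in reverse order."""
    completed: List[Callable[[], None]] = []
    for action, compensate in steps:
        try:
            action()
            completed.append(compensate)
        except Exception:
            for comp in reversed(completed):
                comp()
            return False
    return True

# Hypothetical two-step order flow where the payment step fails:
log: List[str] = []

def charge_payment() -> None:
    raise RuntimeError("payment declined")

ok = run_saga([
    (lambda: log.append("reserve stock"), lambda: log.append("release stock")),
    (charge_payment, lambda: log.append("refund payment")),
])
# ok is False; the stock reservation has been compensated
```

Production tools add what this sketch omits: durable state for sagas interrupted mid-flight, retries, and timeouts, which is exactly why orchestrators like Temporal or Step Functions exist.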
Communication Patterns
API Gateway routing is the simplest Strangler Fig implementation. The gateway routes requests to the monolith or the extracted service based on URL path, header, or other criteria. This works well for HTTP-based entry points.

Branch by abstraction works for internal coupling. Introduce an interface/abstraction in the monolith where the extracted component is called. Initially, the implementation calls the monolith code. After extraction, swap the implementation to call the new service. This is the Strangler Fig pattern applied at the code level rather than the network level.

Event-driven decoupling replaces synchronous internal calls with asynchronous events. The monolith publishes events when state changes; the extracted service subscribes. This is more work to implement but reduces runtime coupling and enables independent scaling.
Prompting Patterns Reference
Boundary Identification Patterns
| Pattern | Example |
|---|---|
| Domain boundary | “What are the natural domain boundaries in this monolith? Which modules operate on the same business concepts?” |
| Data ownership | “Which modules write to the same database tables? Map the write access patterns.” |
| Coupling analysis | “How many direct calls exist between [ModuleA] and [ModuleB]? What would need to change to make them independent?” |
| Shared logic | “What code is shared between [ModuleA] and [ModuleB]? Can it be extracted into a shared library, or does it belong in one service?” |
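The change-frequency signal behind these prompts can also be approximated locally from version history. A rough sketch, assuming the input is one list of touched files per commit (e.g., parsed from git log --name-only):

```python
from collections import Counter
from itertools import combinations

def co_change_counts(commits: list[list[str]]) -> Counter:
    """Count how often each pair of files changes in the same commit.
    `commits` holds one file list per commit, e.g. parsed from
    `git log --name-only --pretty=format:`."""
    pairs: Counter = Counter()
    for files in commits:
        # sorted() makes the pair key order-independent
        for a, b in combinations(sorted(set(files)), 2):
            pairs[(a, b)] += 1
    return pairs
```

Files that co-change often belong on the same side of a service boundary; a high co-change count across a proposed boundary is a warning sign worth raising with CoreStory.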
Database Decomposition Patterns
| Pattern | Example |
|---|---|
| Table ownership | “For each database table, which module is the primary writer? Which modules only read?” |
| Cross-boundary joins | “Which SQL queries join tables that belong to different modules? These are the joins that must be decomposed.” |
| Foreign key mapping | “What foreign key relationships cross the proposed service boundary? Which can be replaced with IDs and API lookups?” |
| Data volume | “How many rows are in each table that needs to migrate? What’s the write frequency? This determines the migration strategy.” |
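Once a table-ownership map is proposed, the foreign-key question above can be pre-screened mechanically. A minimal sketch, with hypothetical table and service names:

```python
def crossing_foreign_keys(ownership: dict[str, str],
                          fks: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Given a table-to-service ownership map and (child, parent)
    foreign-key pairs, return the FKs that span the proposed service
    boundary: the ones that must become ID references plus API lookups."""
    return [(child, parent) for child, parent in fks
            if ownership.get(child) != ownership.get(parent)]

# Hypothetical ownership proposal for an orders-service extraction:
ownership = {"orders": "orders-svc", "order_items": "orders-svc",
             "customers": "monolith"}
fks = [("order_items", "orders"), ("orders", "customers")]
crossing = crossing_foreign_keys(ownership, fks)
```

Every pair this returns is a decomposition task: drop the database-enforced constraint and replace it with an ID plus an API lookup (or an event-fed local copy).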
Extraction Patterns
| Pattern | Example |
|---|---|
| Entry point mapping | “What HTTP routes, internal calls, and background jobs currently invoke [ComponentName] logic?” |
| Anti-corruption layer | “Where do the monolith’s data model and the new service’s model diverge? What translations are needed?” |
| Rollback planning | “If the extracted service fails under production load, what is the fastest path to routing all traffic back to the monolith?” |
| Contract testing | “What contract tests should exist between the extracted service and its consumers to catch breaking changes?” |
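Dedicated tools such as Pact or Spring Cloud Contract generate and verify contracts for you; the core check they automate can be sketched as a field-and-type assertion against a provider response (the field names below are hypothetical):

```python
def check_contract(response: dict, contract: dict) -> list[str]:
    """Return violations of a consumer's contract: required fields that
    are missing or carry the wrong type in the provider's response."""
    problems: list[str] = []
    for field, expected_type in contract.items():
        if field not in response:
            problems.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            problems.append(f"wrong type for {field}")
    return problems
```

Real contract tools go further (matching rules, provider-state setup, broker-based sharing), but the principle is the same: the consumer declares what it needs, and the provider's CI fails when that declaration breaks.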
Distributed Transaction Patterns
| Pattern | Example |
|---|---|
| Transaction boundary analysis | “Which operations in [ComponentName] write to multiple database tables within a single transaction? Which of those tables will belong to different services?” |
| Compensating action design | “If the [ServiceA] step of [Operation] succeeds but the [ServiceB] step fails, what compensating action reverses the [ServiceA] change?” |
| Consistency requirement | “Does [Operation] require strong consistency (all-or-nothing, immediate) or can it tolerate eventual consistency (brief window of inconsistency)?” |
| Saga variant selection | “For the [Operation] workflow spanning [ServiceA], [ServiceB], [ServiceC]: is the coordination logic simple enough for choreography, or does the number of steps and error scenarios warrant orchestration?” |
Best Practices
Extract the smallest viable service first. Your first extraction should be the simplest possible: low coupling, clear data ownership, few consumers. The goal is to prove the extraction pattern works — establishing the façade, the deployment pipeline, the monitoring, and the team’s muscle memory — before tackling harder extractions. Getting the infrastructure right on an easy service is far cheaper than getting it wrong on a hard one.

Don’t break the monolith’s database on day one. Start with shared database/schema ownership. Let the extracted service access the monolith’s database through a well-defined data access layer. Decompose the database later, once the service boundary is proven stable. Premature database decomposition is one of the most common causes of monolith-to-microservices failure.

The façade is your safety net — invest in it. The Strangler Fig façade (API gateway, reverse proxy, or routing layer) is what makes incremental extraction safe. It should support percentage-based routing (for gradual rollout), circuit breaking (for automatic fallback), and request mirroring (for comparison testing). Invest in making this layer robust before the first extraction.

Design for independent deployment from day one. The extracted service must be deployable without coordinating with monolith deployments. This means an independent CI/CD pipeline, independent configuration, independent database migrations, and independent monitoring. If deploying the service requires a synchronized monolith deployment, you’ve built a distributed monolith.

Contract tests are non-negotiable. Every interaction between the extracted service and the monolith (or other services) must have a contract test. Consumer-driven contract testing (Pact, Spring Cloud Contract) ensures that the service’s API doesn’t break its consumers and that the service’s expectations of its dependencies are met.

Plan for the data synchronization tax. During the Coexist phase, data often needs to be synchronized between the monolith’s database and the new service’s database. This synchronization is complex, error-prone, and temporary. Budget for it explicitly in your work packages, and design it to be removable.

Monitor business metrics, not just technical metrics. Error rates and latency are necessary but insufficient. Monitor the business outcomes: order completion rate, payment success rate, user conversion. If a business metric drops after extraction, there’s a behavioral regression that technical metrics might not catch.
Agent Implementation Guides
Claude Code
Setup
- Configure the CoreStory MCP server in your Claude Code settings (see CoreStory MCP Server Setup Guide).
- Add the skill file .claude/skills/monolith-to-microservices/SKILL.md, with the content from the skill file below.
- Commit to version control:
Usage
Tips
- This skill is a Phase 5 variant that plugs into the broader modernization workflow. It expects Phases 1–4 to be complete.
- For the first extraction, spend extra time on Step 2 (boundary identification) and Step 4 (façade design) — these establish the pattern for all subsequent extractions.
- Keep the SKILL.md under 500 lines for reliable loading.
Skill File
Save as .claude/skills/monolith-to-microservices/SKILL.md:
GitHub Copilot
Add the following to .github/copilot-instructions.md:
(Optional) Add a reusable prompt file. Create .github/prompts/monolith-to-microservices.prompt.md:
Cursor
Create .cursor/rules/monolith-to-microservices/RULE.md:
Factory.ai
Create .factory/droids/monolith-to-microservices.md:
Troubleshooting
Can’t find clean service boundaries — everything is tightly coupled. This is the most common challenge in monolith decomposition. Start by identifying the modules with the lowest fan-in (fewest inbound dependencies) — these are the easiest to extract. If no clean boundaries exist, consider an intermediate step: introduce module boundaries within the monolith (a “modular monolith”) before extracting services. Ask CoreStory: “Which modules have the fewest inbound dependencies from other modules?”

Shared database tables can’t be cleanly assigned to one service. Use schema ownership as a transitional strategy: both services access the same database, but each “owns” specific tables. Enforce ownership via convention or database-level access control. Over time, decompose the database fully using CDC or the outbox pattern for data synchronization.

Performance degrades after service extraction — API calls are slower than in-process calls. This is expected: network calls replace in-process calls, adding latency. Mitigations: add caching in the extracted service for frequently read data, batch API calls where possible, use gRPC instead of REST for internal service communication, and consider whether the service boundary is correct — if two services call each other constantly, they might belong together.

The façade introduces a single point of failure. Use a highly available API gateway or load balancer for the façade layer. Most cloud providers offer managed API gateways with built-in redundancy (AWS API Gateway, Azure API Management, GCP API Gateway). For self-hosted deployments, use a cluster of reverse proxies (Nginx, Envoy, HAProxy) behind a load balancer.

Data consistency issues during the Coexist phase. Dual-write scenarios (where both the monolith and the extracted service can write to shared data) are inherently risky. Prefer single-writer patterns: one system is the source of truth for each piece of data, and the other reads via API or receives updates via events.
If dual-write is unavoidable, use the outbox pattern with at-least-once delivery and idempotent consumers.

The extracted service works in testing but fails under production load. This usually means the service wasn’t tested with production-scale data or traffic patterns. Use shadow traffic (mirror production requests to the new service without serving its responses) during the Coexist phase to validate performance before cutover. Also check: database connection pooling, thread/goroutine limits, memory allocation, and external dependency timeouts.

Agent can’t access CoreStory tools. See the Supercharging AI Agents troubleshooting section for MCP connection issues. Verify the project has completed ingestion by calling list_projects and checking the status.
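A minimal sketch of the idempotent-consumer half of that dual-write mitigation, assuming every event carries a unique id (the in-memory set stands in for a durable store in production):

```python
def make_idempotent(handler):
    """Wrap an event handler so redelivered events are processed once,
    keyed by event id. The in-memory `seen` set stands in for a
    durable deduplication store (e.g., a processed_events table)."""
    seen: set = set()

    def wrapped(event: dict) -> bool:
        if event["id"] in seen:
            return False  # duplicate delivery, skipped
        handler(event)
        seen.add(event["id"])
        return True

    return wrapped

# Hypothetical usage with an at-least-once delivery channel:
calls: list = []
handle = make_idempotent(lambda e: calls.append(e["id"]))
first = handle({"id": "evt-1"})
duplicate = handle({"id": "evt-1"})
```

Paired with the outbox pattern’s at-least-once delivery, this gives effectively-once processing without distributed transactions.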