Monoliths are not inherently bad. Plenty of billion-dollar companies run on monolithic codebases. The problem starts when the monolith becomes the bottleneck: deployments take hours, a bug in the billing module crashes the entire application, and every new feature requires touching code that nobody on the team fully understands anymore.
We have guided multiple teams through monolith decomposition, and the number one lesson is this: you do not need to stop building features to modernize your architecture. In fact, the teams that try to do a "big bang rewrite" almost always fail. The ones that succeed take an incremental approach.
Why Big Bang Rewrites Fail
The instinct is understandable. Your monolith is a mess, so you want to start fresh. You spin up a new repo, assign half the team to rebuild the product from scratch, and plan a dramatic switchover date six months out.
Here is what actually happens. The rewrite team falls behind because they are rediscovering edge cases the original codebase already handles. Meanwhile, the maintenance team keeps shipping fixes and features to the old system, which means the rewrite target keeps moving. Six months turns into twelve. Morale drops. Stakeholders lose patience. The project gets cancelled, and you are back where you started, except now your best engineers are burned out.
We wrote about this dynamic in detail in our post on when to rebuild versus refactor. The short version: unless your codebase is truly unsalvageable, incremental migration beats a full rewrite almost every time.
The Strangler Fig Pattern
The most reliable approach to monolith migration is the Strangler Fig pattern. Named after the tropical fig that grows around an existing tree and eventually replaces it, this strategy lets you build new services alongside the monolith and gradually route traffic to them.
Here is how it works in practice:
1. Identify a bounded context to extract. Pick something with clear boundaries: authentication, notifications, payments, or a specific business domain.
2. Build the new service alongside the monolith. It runs independently with its own database, its own deployment pipeline, and its own test suite.
3. Route traffic from the monolith to the new service. This usually means placing a reverse proxy or API gateway in front of both systems. Requests for the extracted domain go to the new service. Everything else still hits the monolith.
4. Repeat until the monolith is either small enough to maintain comfortably or fully replaced.
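The routing step above can be sketched in a few lines. This is a minimal illustration of path-based routing in front of both systems; the service names and internal URLs are hypothetical placeholders, and in production this logic would live in your reverse proxy or API gateway configuration rather than application code:

```python
# Minimal path-based router illustrating the strangler fig routing step.
# Backend URLs below are hypothetical placeholders.

MONOLITH = "https://monolith.internal"

# Domains already extracted to new services; everything else stays on the monolith.
EXTRACTED_ROUTES = {
    "/notifications": "https://notifications.internal",
    "/auth": "https://auth.internal",
}

def route(path: str) -> str:
    """Return the backend that should handle this request path."""
    for prefix, backend in EXTRACTED_ROUTES.items():
        if path.startswith(prefix):
            return backend
    # Unextracted domains fall through to the monolith unchanged.
    return MONOLITH
```

Rolling traffic back is just removing an entry from the routing table, which is exactly why this approach is safe at every step.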
The beauty of this approach is that at every step, you have a working system. If the new service has issues, you can route traffic back to the monolith instantly. There is no big switchover, no single point of failure.
Choosing What to Extract First
Not all modules are equally good candidates for extraction. We recommend scoring each candidate on three factors:
Business value. Will extracting this module unlock something your team cannot do today? For example, extracting the notification system might let you scale email delivery independently, or extracting the search module might let you swap in a purpose-built search engine like Elasticsearch.
Coupling. How tangled is this module with the rest of the codebase? Modules with clean interfaces and minimal shared state are easier to extract. Modules that read from 40 database tables and get called from every controller are not.
Risk tolerance. Some modules are safer to experiment with. Extracting the image processing pipeline carries less risk than extracting the order management system. Start with lower risk modules to build confidence and refine your process before tackling the core business logic.
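To make the trade-offs concrete, the three factors above can be combined into a simple score. This is a toy model, not a prescription; the weights and the example scores are illustrative assumptions:

```python
# Toy scoring model for extraction candidates, combining the three factors
# above. Weights and per-module scores are illustrative assumptions.

def extraction_score(business_value: int, coupling: int, risk: int) -> int:
    """Score a module on a 1-5 scale per factor.

    business_value counts for the module; coupling and risk count against it,
    since tangled or risky modules are worse first candidates.
    """
    return business_value * 2 - coupling - risk

candidates = {
    "auth": extraction_score(business_value=4, coupling=2, risk=2),
    "order_management": extraction_score(business_value=5, coupling=5, risk=5),
}

# Highest score first: extract that module before the others.
ranked = sorted(candidates, key=candidates.get, reverse=True)
```

Even a rough scoring exercise like this forces the team to discuss coupling and risk explicitly instead of picking the most exciting module.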
In our experience, authentication and user management is often a good first candidate because most modern identity providers (Auth0, Supabase Auth, Clerk) can replace custom auth code entirely. This eliminates a large chunk of monolith code with minimal custom development.
The Data Problem
The hardest part of monolith decomposition is not the code. It is the data. Monoliths typically share a single database, and modules are coupled through direct table joins, shared stored procedures, and implicit relationships.
When you extract a service, you need to figure out who owns which data. This usually means:
- Splitting tables so that each service owns its own data
- Introducing an event bus (Kafka, RabbitMQ, or even a simple webhook system) so that services can notify each other of changes without direct database access
- Accepting temporary data duplication during the transition period, with eventual consistency between the old and new systems
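The event-bus idea in the list above can be sketched with an in-memory stand-in. A real system would use Kafka, RabbitMQ, or webhooks as noted; the topic name, payload shape, and the notification service's local copy are all hypothetical:

```python
# In-memory sketch of event-driven sync between the monolith and an
# extracted service. In production this would be Kafka, RabbitMQ, or a
# webhook system; topic and payload names here are hypothetical.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()

# The extracted notification service owns its own duplicate of user data
# and keeps it in sync by listening for events, never by reading the
# monolith's tables directly.
notification_copy = {}
bus.subscribe("user.updated", lambda p: notification_copy.update({p["id"]: p["email"]}))

# The monolith publishes an event whenever a user record changes.
bus.publish("user.updated", {"id": 7, "email": "ada@example.com"})
```

Note that the duplicate is updated asynchronously in a real deployment, which is exactly the eventual consistency the transition period requires you to accept.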
We have seen teams spend months trying to design the "perfect" data model upfront. Do not do this. Start with a pragmatic split, accept some duplication, and refine as you learn how the services actually communicate in production.
Keeping the Team Shipping
The organizational side matters as much as the technical side. Here is how we structure teams during a migration:
Dedicate a small migration squad. Two to four engineers focused on extraction work. They own the new services, the routing layer, and the data migration scripts. This is not a side project. It is their primary job.
Everyone else keeps building features. The rest of the team continues working on the monolith as if nothing is changing. New features land in the monolith first, and the migration squad moves them to the new architecture when the relevant service is ready.
Set up parallel deployment pipelines early. Each extracted service needs its own CI/CD pipeline, its own monitoring, and its own alerting. We use infrastructure as code and automated pipelines from day one so that deploying a new service is as easy as deploying the monolith.
Feature flags are non-negotiable. Every new routing rule should be behind a feature flag so you can toggle traffic between old and new systems instantly. This is your safety net.
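A flag-gated routing rule can be as small as this sketch. The flag name, domain, and backend names are hypothetical, and in practice the flag store would be a managed service or config system rather than a dict:

```python
# Feature-flag-gated routing: each extracted domain can be toggled back to
# the monolith instantly. Flag and backend names are hypothetical, and a
# real flag store would be a config service, not an in-process dict.

FLAGS = {"route_payments_to_new_service": True}

def backend_for(domain: str) -> str:
    """Send traffic to the new service only while its flag is on."""
    if domain == "payments" and FLAGS.get("route_payments_to_new_service"):
        return "payments-service"
    return "monolith"
```

Flipping the flag off reverts every payments request to the monolith on the next lookup, with no deployment required.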
What Modern Architecture Actually Looks Like
"Modern architecture" does not automatically mean microservices. Plenty of teams over correct, going from one monolith to 50 microservices that are harder to manage than what they started with. We covered this in our piece on software architecture mistakes startups make.
For most teams, the right target is somewhere between a monolith and full microservices:
- A small number of well-defined services (3 to 8 for most products), each owning a clear business domain
- An API gateway or reverse proxy handling routing, authentication, and rate limiting
- Event-driven communication between services for anything that does not need a synchronous response
- Shared infrastructure (logging, monitoring, CI/CD) managed as a platform, not duplicated per service
- Containerized deployments with autoscaling, whether that is Kubernetes, ECS, or a serverless platform depending on your workload profile
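One of the cross-cutting concerns the gateway can own is rate limiting, so individual services never reimplement it. Here is a sketch of a classic token bucket; the capacity and refill rate are illustrative, and a production gateway would use its built-in limiter rather than custom code:

```python
# Token-bucket rate limiter of the kind an API gateway applies per client.
# Capacity and refill rate below are illustrative values.
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Refill based on elapsed time, then spend one token if available."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Centralizing concerns like this in the gateway is what keeps a 3-to-8-service architecture manageable: each service stays focused on its business domain.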
The goal is not architectural purity. The goal is a system where teams can deploy independently, failures are isolated, and the codebase is small enough for any engineer to understand their part of it.
How Long Does This Take?
Honest answer: 6 to 18 months for most mid-sized applications, depending on the size of the monolith and the level of data coupling. We have completed smaller migrations in 3 months and larger ones that took over a year.
The key metric is not "when is the monolith fully gone." It is "when does the team feel faster." In most of our engagements, teams report a meaningful improvement in deployment speed and developer confidence within the first 2 to 3 months, even before the migration is complete.
When to Bring In Help
If your team built the monolith, they understand the domain better than anyone. But monolith decomposition is a specialized skill. Most application developers have never designed an event bus, set up service discovery, or managed data migration between independent databases.
This is where system architecture consulting makes a real difference. We work alongside your team, handle the infrastructure and migration patterns, and transfer knowledge so your engineers own the result.
If your monolith is slowing you down and you want to modernize without gambling on a rewrite, reach out to us. We will help you build a migration plan that keeps your product live and your team shipping.