The Microservices Trap: When Monoliths Are Actually Better (With Real-World Case Studies)
In March 2023, the Amazon Prime Video team published a blog post that sent shockwaves through the software engineering world. Their audio/video monitoring service — which had been built as a textbook microservices architecture using AWS Step Functions and Lambda — was too expensive and too slow. Their solution? They ripped out the microservices and replaced them with a single monolithic application. The result: a 90% reduction in infrastructure costs and dramatically improved performance. The post went viral. Engineers who had quietly questioned the microservices dogma for years suddenly felt vindicated. Engineers who had built their entire careers around distributed systems felt attacked.
I remember reading it and thinking: finally. Not because microservices are bad — they're genuinely the right choice in certain situations. But because our industry had spent the better part of a decade treating "microservices" as a synonym for "good architecture," and anyone who suggested otherwise was dismissed as someone who hadn't worked at scale. The Amazon Prime Video case was the most prominent example, but it wasn't an outlier. Shopify, Segment, Basecamp, and dozens of less-famous companies had already discovered the same thing: the microservices architecture they adopted to solve scaling problems was actually creating problems they didn't have before.
This article is about the gap between microservices theory and microservices reality. It's about when distributed architectures genuinely help, when they actively hurt, and how to make a sane decision for your team and product — without getting swept up in conference talk hype or resume-driven development.
The Numbers First
Before we get into case studies and opinions, let's look at actual data — because one of the problems with the microservices debate is that it's usually conducted through anecdotes and vibes rather than numbers.
The CNCF 2024/2025 Annual Survey, which polls thousands of organizations on their cloud-native practices, revealed a striking trend: 42% of organizations that adopted microservices have consolidated at least some of their services back into larger units. This isn't a niche finding — it represents nearly half of all microservices adopters discovering that they went too granular, too fast, or in the wrong places.
Infrastructure costs are where the microservices tax hits hardest. According to research compiled by Java Code Geeks, organizations running microservices architectures report infrastructure costs that are 3.75x to 6x higher than comparable monolithic deployments. This includes compute costs (each service needs its own resources), networking costs (inter-service communication generates massive internal traffic), observability costs (distributed tracing, log aggregation, and monitoring across dozens or hundreds of services), and orchestration costs (Kubernetes clusters, service meshes, API gateways).
The operational complexity is equally measurable. A ByteIota analysis of microservices challenges found that teams adopting microservices reported a 60-80% increase in operational complexity compared to monolithic architectures. This manifests as more on-call rotations, more deployment pipelines to maintain, more integration testing to write, and more debugging time spent tracing requests across service boundaries.
On the developer experience side, AWS in Plain English published an analysis showing that developer productivity in microservices environments often decreases by 20-40% in the first 12-18 months of migration, and may never fully recover for small teams. The cognitive overhead of understanding service boundaries, managing distributed transactions, handling network failures, and maintaining consistent data across services is genuinely enormous.
The financial picture in salary terms is also relevant. According to Glassdoor, a Microservices Architect in the US commands $140,000-$210,000 in base salary, compared to $110,000-$170,000 for a general Backend Engineer. In emerging markets — Eastern Europe, Central Asia, Southeast Asia — the premium is even larger in relative terms: microservices architects earn $50,000-$100,000 compared to $25,000-$60,000 for backend engineers. This salary premium reflects real complexity, but it also creates a perverse incentive: engineers benefit financially from advocating for architectures that require their specialized (and expensive) skills.
Case Study 1: Amazon Prime Video — The Post Heard Round the World
Let's start with the most famous example because it's the most instructive. In early 2023, the Amazon Prime Video engineering team published a case study about their audio/video monitoring service. This service analyzed every stream on Prime Video for quality issues — audio sync problems, video artifacts, buffering patterns. It was a critical piece of infrastructure.
The original architecture was a microservices dream on paper: separate services for media conversion, defect detection, and real-time notification, all orchestrated by AWS Step Functions and connected through Amazon S3 for data passing between stages. Each service could scale independently. Each had its own deployment pipeline. Each was a clean, isolated unit of business logic.
The reality was a nightmare. The Step Functions orchestration layer hit account limits at scale because the service was processing millions of streams. The data passing between services through S3 created enormous I/O costs — every frame of every stream had to be written to and read from S3 between processing stages. The per-invocation costs of Lambda added up astronomically at Prime Video's scale.
The team's solution was ruthlessly pragmatic: they combined all the components into a single monolithic service running on Amazon ECS. The media conversion, defect detection, and notification logic all lived in the same process. The data that previously had to be serialized, written to S3, read from S3, and deserialized between services was now just... passed between functions in memory. The result was a 90% cost reduction and a system that was simpler to operate, debug, and extend.
The key lesson wasn't "monoliths are always better." It was that the microservices architecture was solving a problem they didn't have (independent team scaling) while creating a problem they did have (enormous data transfer costs between tightly coupled processing stages). The services weren't really independent — they always ran together, in sequence, on the same data. Making them separate services added latency, cost, and complexity without providing any of the benefits microservices are supposed to deliver.
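The cost difference comes down to how data moves between stages. Here's a toy sketch of the two shapes, with stages loosely modeled on the case study (the stage names and payloads are illustrative, not Prime Video's actual code): in the distributed version every hop serializes the payload and round-trips it through shared storage (S3 in the real system, a temp file here), while the monolith version is the same logic as plain function calls.

```python
import json
import os
import tempfile

def convert(frame):            # media-conversion stage (illustrative)
    return {**frame, "converted": True}

def detect_defects(frame):     # defect-detection stage (illustrative)
    return {**frame, "defects": []}

def distributed_pipeline(frame, store_dir):
    """Stages hand off via serialized blobs in shared storage."""
    path = os.path.join(store_dir, "stage1.json")
    with open(path, "w") as f:
        json.dump(convert(frame), f)   # serialize + write
    with open(path) as f:
        frame = json.load(f)           # read + deserialize
    return detect_defects(frame)

def monolith_pipeline(frame):
    """Same logic; data stays in memory between stages."""
    return detect_defects(convert(frame))

with tempfile.TemporaryDirectory() as d:
    a = distributed_pipeline({"id": 1}, d)
b = monolith_pipeline({"id": 1})
assert a == b  # identical results; the monolith just skips the I/O
```

Both versions produce the same output. The only difference is that the distributed one pays for serialization, storage writes, storage reads, and deserialization on every handoff, which is exactly the tax Prime Video eliminated.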
Case Study 2: Shopify — The $200B Modular Monolith
Shopify's story is arguably more interesting than Amazon's because it wasn't a failure-and-reversal narrative — it was a deliberate, contrarian architectural choice that has scaled to one of the largest e-commerce platforms on earth.
Shopify runs on a monolithic Ruby on Rails application. As of 2024, this single application handles millions of merchants, billions of dollars in transactions, and traffic spikes that would terrify most distributed systems engineers (Black Friday/Cyber Monday is essentially a DDoS attack from real customers). When Shopify's engineering team evaluated microservices — and they did, extensively — they concluded that the costs outweighed the benefits for their specific situation.
Instead, Shopify pioneered what they call the "modular monolith" approach. The application is a single deployable unit, but internally it's organized into strictly bounded modules (they call them "components") with enforced boundaries. Components communicate through well-defined interfaces. Database tables are owned by specific components and can't be accessed directly by others. Circular dependencies are detected and prevented by tooling.
The advantages over microservices for Shopify were substantial:
- Single deployment pipeline: One CI/CD pipeline instead of hundreds. One deployment to coordinate instead of orchestrating rolling updates across dozens of services.
- No network boundary tax: Component-to-component calls are function calls, not HTTP requests. No serialization/deserialization. No network latency. No retry logic for failed network calls.
- Transactional consistency: Database transactions can span multiple components when needed, because they share a database. In microservices, you'd need distributed transactions (sagas), which are notoriously complex and error-prone.
- Simple debugging: A stack trace shows you the entire call chain. In microservices, you need distributed tracing tools like Jaeger or Zipkin, and even then the picture is often incomplete.
- Developer onboarding: New engineers learn one codebase, one set of conventions, one deployment process. In a microservices environment, onboarding to a single service is easy, but understanding the system is nearly impossible.
Shopify's system handles over $200 billion in GMV annually. If a modular monolith can handle that kind of scale, the argument that "you need microservices to scale" starts to look very thin.
Case Study 3: Segment — 140 Microservices, Then Back Again
Segment (the customer data platform, now part of Twilio) has one of the most honest and detailed accounts of microservices failure. In 2018, Segment engineer Alexandra Noonan published "Goodbye Microservices," a remarkably candid post about what went wrong.
Segment's product is conceptually simple: receive customer data from sources (websites, apps, servers) and send it to destinations (analytics tools, marketing platforms, data warehouses). When they started scaling, they made what seemed like the obvious architectural choice: one microservice per destination. Each destination (Google Analytics, Mixpanel, Amplitude, etc.) had its own service, its own deployment pipeline, its own scaling configuration.
At first, this was great. Teams could work on individual destinations independently. A bug in one destination's integration couldn't crash another. Scaling was granular — high-volume destinations got more resources.
Then they hit 140 destinations. And the architecture collapsed under its own weight. Every operational task had to be done 140 times. Updating a shared library meant 140 pull requests, 140 code reviews, 140 deployments. Each service had slightly different error handling, slightly different retry logic, slightly different monitoring. The engineering team was spending more time on infrastructure maintenance than on product development.
Their solution was a system they called "Centrifuge" — essentially a single, unified service with a plugin architecture. Each destination became a configuration rather than a service. The team went from maintaining 140 deployment pipelines to maintaining one. Engineer velocity increased dramatically. Operational overhead plummeted.
The lesson from Segment is about the difference between logical boundaries and deployment boundaries. Their destinations were logically separate (different APIs, different data formats), but operationally identical (receive data, transform it, send it). Making them separate services confused "different business logic" with "needs independent deployment and scaling," and the cost was enormous.
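A "destination as configuration" design can be sketched in a few lines. This is in the spirit of Segment's Centrifuge, but the names and structure here are purely illustrative, not Segment's actual code: one service, one deploy, one shared delivery loop, and each destination reduced to a registry entry with its own transform.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Destination:
    name: str
    transform: Callable[[dict], dict]   # per-destination payload mapping
    endpoint: str                        # where the payload would be sent

REGISTRY: dict[str, Destination] = {}

def register(dest: Destination):
    REGISTRY[dest.name] = dest

def deliver(event: dict, sent: list):
    """Shared fan-out loop: retry, monitoring, and error handling
    would live here ONCE, instead of once per service."""
    for dest in REGISTRY.values():
        payload = dest.transform(event)
        sent.append((dest.name, payload))  # stand-in for the HTTP call

# Adding a destination is now a registration, not a new deployment.
register(Destination("ga", lambda e: {"cid": e["user"]}, "https://example.com/ga"))
register(Destination("mixpanel", lambda e: {"distinct_id": e["user"]}, "https://example.com/mp"))

sent = []
deliver({"user": "u42", "event": "signup"}, sent)
assert len(sent) == 2
```

The design choice worth noticing: the 140 things that vary (transforms) are data, and the things that should never vary (retries, monitoring, delivery) are written exactly once.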
Case Study 4: Basecamp/37signals — The Unapologetic Monolith
Basecamp (now 37signals) has never used microservices. Not because they haven't considered it, but because David Heinemeier Hansson (DHH) — the creator of Ruby on Rails and Basecamp's CTO — has been one of the most vocal critics of the microservices movement since its inception.
Basecamp runs a monolithic Rails application serving millions of users across multiple products (Basecamp, HEY email, ONCE). Their infrastructure team consists of a handful of people, not the dozens or hundreds that companies of comparable scale typically require. They moved off the cloud entirely in 2023 to run their monolith on owned hardware, a switch that, by DHH's calculations, will save them more than $7 million across five years.
The Basecamp case is important not because their approach is universally right, but because it demonstrates that a well-structured monolith, operated by a small team, can serve millions of users reliably and profitably. Their entire engineering team is roughly 20 people. Many companies with similar user counts have 200+ engineers and a dedicated platform team just to keep the microservices infrastructure running.
DHH's framing is characteristically provocative but contains a real insight: microservices are a solution to an organizational problem (how do you let hundreds of engineers work on the same product without stepping on each other?) rather than a technical problem (how do you make software that performs well?). If you don't have the organizational problem — if you have a small team that communicates well — then you're adding enormous technical complexity to solve a problem you don't have.
When Microservices ARE the Right Choice
I've spent 2,000 words on microservices failures, so let me be clear: microservices are not a mistake for everyone. They're a mistake for most companies that adopt them, but the companies where they work tend to have very specific characteristics.
Netflix is the canonical success story, and it's worth understanding why it works for them. Netflix has over 2,000 engineers working on the platform. Their services handle over 200 million subscribers across 190+ countries with radically different network conditions, device types, and content libraries. Their recommendation engine, content delivery, encoding pipeline, and user interface all have genuinely different scaling requirements and development velocities. Independent teams deploying independent services independently is a real organizational necessity for them, not an architectural preference.
Here's when microservices genuinely make sense:
- You have 50+ engineers working on the same product and coordination costs are becoming the primary bottleneck. Conway's Law is real — your architecture will mirror your organizational structure whether you want it to or not.
- You have genuinely independent domains with different scaling requirements. If your payment processing gets 10x traffic during sales while your user profile service stays flat, independent scaling has real value.
- You need independent deployment velocity. If one team needs to deploy 20 times a day and another deploys weekly, coupling them into a single deployment pipeline creates real friction.
- You have different technology requirements per domain. If your ML pipeline needs Python, your API layer needs Go, and your real-time features need Rust, microservices let you use the right tool for each job. (But honestly, most companies don't actually need this — it's often resume-driven technology selection.)
- You need fault isolation at the infrastructure level. If a failure in one area truly cannot be allowed to affect another area (think: a bug in search results can't be allowed to crash the checkout flow), service boundaries provide hard isolation.
Notice that none of these criteria mention company valuation, trendiness, or what big tech companies do. They're all about specific, measurable organizational and technical needs.
When Monoliths Win (Which Is Most of the Time)
If microservices are right for large organizations with complex domains and independent teams, monoliths are right for... almost everyone else. Here's the decision matrix:
| Factor | Monolith Wins | Microservices Win |
|---|---|---|
| Team size | 1-50 engineers | 50+ engineers, multiple autonomous teams |
| Domain complexity | Single domain, tightly coupled features | Multiple distinct domains with different lifecycles |
| Deployment frequency | Uniform across the product | Varies significantly by team/feature |
| Scaling requirements | Uniform or predictable | Highly variable across components |
| Data consistency needs | Strong consistency required | Eventual consistency acceptable |
| Infrastructure budget | Limited / cost-sensitive | Substantial / scale justifies cost |
| Operational maturity | Small ops team, limited tooling | Dedicated platform team, mature observability |
| Product stage | MVP, product-market fit search, early growth | Proven product with stable domain boundaries |
The brutal truth is that most startups and mid-size companies fit the left column on nearly every factor. A 15-person startup with one product, one deployment pipeline, and a shared database has no business running microservices. They're paying the complexity tax without getting the organizational benefits.
Java Code Geeks estimates that a startup running 20 microservices on Kubernetes spends $15,000-$40,000/month on infrastructure that a well-optimized monolith could handle for $2,000-$5,000/month. For a seed-stage startup burning through a $2M raise, that's the difference between 18 months and 10 months of runway. Architectural choices can literally determine whether a company survives.
The Modular Monolith: The Compromise That Actually Works
If "full microservices" is too complex for most teams and "spaghetti monolith" is genuinely problematic, the answer for most organizations is the modular monolith — and it's not a compromise in the pejorative sense. It's a genuinely superior architecture for the majority of use cases.
A modular monolith gives you:
- Clean boundaries without network boundaries. Modules have explicit interfaces and enforced contracts, but they communicate through function calls, not HTTP requests. You get the organizational clarity of microservices without the distributed systems nightmares.
- Single deployment with modular development. One CI/CD pipeline, one deployment artifact, but teams work in their own modules with clear ownership.
- Easy extraction path. If a module genuinely needs to become an independent service later — because it has different scaling requirements or needs a different technology — you can extract it. The module boundaries are already defined. Going from modular monolith to microservices (for the parts that need it) is relatively straightforward. Going from spaghetti monolith to microservices is a nightmare.
- Transactional integrity by default. ACID transactions across modules are trivial because they share a database. In microservices, you need sagas, eventual consistency, and compensating transactions — patterns that are genuinely difficult to implement correctly.
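The transactional-integrity point is easy to demonstrate. In this minimal sketch, two hypothetical modules ("orders" and "invoices" are invented names for illustration) share one database, so a single ACID transaction covers both; in a microservices split, preserving the same invariant would require a saga with compensating actions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.execute("CREATE TABLE invoices (order_id INTEGER, amount REAL)")

def place_order(order_id: int, amount: float):
    # One transaction spanning both modules' tables: commit on
    # success, roll back everything on any exception.
    with conn:
        conn.execute("INSERT INTO orders VALUES (?, 'placed')", (order_id,))
        if amount <= 0:
            raise ValueError("invalid amount")  # order insert rolls back too
        conn.execute("INSERT INTO invoices VALUES (?, ?)", (order_id, amount))

place_order(1, 9.99)
try:
    place_order(2, -1)   # fails: no partial write survives in either table
except ValueError:
    pass
assert conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0] == 1
```

The failed order leaves no trace in either table. Getting that guarantee across two separate services and two separate databases is precisely the hard part that sagas exist to approximate.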
Shopify's implementation is the gold standard. They use Ruby's Packwerk gem to enforce module boundaries at the code level, with automated tools that detect and prevent boundary violations. Their "components" have explicit public APIs, private implementations, and enforced dependency rules. The result looks like microservices from a code organization perspective but operates like a monolith from a deployment and runtime perspective.
Other ecosystems have similar tools: Java has the Java Platform Module System (JPMS) and frameworks like Spring Modulith; .NET teams build modular monoliths on top of project and assembly boundaries; Go can hide implementation packages behind `internal/` directories; Python can enforce boundaries with tools like import-linter. The modular monolith isn't a Ruby-specific pattern; it applies across the stack.
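To make boundary enforcement concrete, here is a toy checker in the spirit of Packwerk (this is an illustration I wrote for this article, not Packwerk itself, and the component names are invented): each component declares which other components it may depend on, and the checker flags any import that crosses an undeclared boundary.

```python
import re

# Hypothetical component dependency graph: checkout may call
# payments and catalog; payments and catalog depend on nothing.
ALLOWED_DEPS = {
    "checkout": {"payments", "catalog"},
    "payments": set(),
    "catalog": set(),
}

IMPORT_RE = re.compile(r"^\s*from\s+(\w+)", re.MULTILINE)

def boundary_violations(component: str, source: str) -> list[str]:
    """Return components imported by `source` that `component`
    has not declared as dependencies."""
    deps = ALLOWED_DEPS[component]
    return [m for m in IMPORT_RE.findall(source)
            if m in ALLOWED_DEPS and m != component and m not in deps]

ok = boundary_violations("checkout", "from payments import charge")
bad = boundary_violations("payments", "from checkout import cart")
assert ok == [] and bad == ["checkout"]
```

Real tools (Packwerk, import-linter, ArchUnit) are far more sophisticated, but the principle is the same: the dependency rules live in configuration, and CI fails when code violates them, so the boundaries stay real instead of aspirational.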
The Hidden Costs Nobody Warns You About
Let me catalog the specific costs of microservices that teams consistently underestimate, because they're the ones that kill you:
1. Distributed debugging is a completely different skill. In a monolith, you set a breakpoint, step through the code, and see the entire execution flow. In microservices, a user request might touch 8 services. A bug could be in any of them — or in the interaction between them. You need distributed tracing (Jaeger, Zipkin), centralized logging (ELK stack, Datadog), and service mesh observability (Istio, Linkerd). These tools cost money and — more importantly — take months to set up correctly. Most teams underinvest in observability and then spend hours manually correlating logs across services.
2. Integration testing becomes exponentially harder. In a monolith, your integration tests run against one application. In microservices, you need to either run all dependent services locally (which gets unwieldy fast) or maintain contract tests (like Pact), which add their own complexity. Many teams give up on proper integration testing and rely on production monitoring to catch issues — which means bugs reach users more often.
3. Data management is a nightmare. The microservices orthodoxy says each service should own its data. But in practice, data doesn't respect service boundaries. A "user" has data relevant to the profile service, the billing service, the notification service, and the analytics service. Keeping this data consistent across services requires event-driven synchronization, which means dealing with eventual consistency, duplicate events, ordering guarantees, and the entire zoo of distributed data problems. ByteIota's analysis found that data consistency issues are the #1 operational problem reported by microservices teams.
4. Developer experience degrades. Running a monolith locally is typically straightforward: install dependencies, start the server, done. Running microservices locally means starting 5-20 services, their databases, message brokers, and service discovery. Docker Compose helps but adds its own complexity. Many developers end up testing against shared staging environments instead of running services locally, which slows down the feedback loop and introduces flaky test environments.
5. Organizational overhead scales with service count. Each service needs: a code repository (or a directory in a monorepo), a CI/CD pipeline, monitoring dashboards, alerting rules, documentation, an on-call rotation, runbooks for common failures, and an owner. Multiply this by 50, 100, or 200 services, and you have a full-time job just maintaining the organizational infrastructure around the services — before writing a line of product code.
Common Misconceptions — Let's Fight About It
Misconception #1: "Monoliths don't scale." This is flatly wrong. A monolith can scale horizontally (multiple instances behind a load balancer) just as easily as microservices can. Shopify, Stack Overflow, and Basecamp all serve millions of users from monolithic architectures. The argument that microservices scale better is only true if you have specific components with radically different scaling needs. If your whole application scales together, horizontal scaling of a monolith is simpler and cheaper.
Misconception #2: "Microservices are easier to maintain." Individual services are simpler, yes. But the system is dramatically more complex. You've traded code complexity for operational complexity, and operational complexity is harder to see, harder to test, and harder to fix. As the AWS in Plain English analysis puts it: "Microservices move complexity from the code to the infrastructure. The total complexity doesn't decrease — it just becomes someone else's problem."
Misconception #3: "You can always split later, so start with microservices to avoid migration pain." This is backward. Starting with microservices means you're defining service boundaries before you understand your domain. Those boundaries will be wrong — you can't know the right boundaries until you've built the product and learned where the natural seams are. Premature decomposition is worse than premature optimization. Starting with a well-structured monolith and extracting services when you have evidence they're needed is almost always the right approach. Martin Fowler calls this "MonolithFirst," and he co-authored (with James Lewis) the article that defined microservices in the first place.
Misconception #4: "Tech companies use microservices, so we should too." Netflix uses microservices because they have 2,000+ engineers and radically different scaling needs across services. Google uses microservices because they process billions of requests per second across fundamentally different products. If your company has 15 engineers building a B2B SaaS product, you are not Netflix. Copying their architecture is like copying their office furniture — it won't make you perform like them.
Misconception #5: "Microservices enable faster development." They enable faster independent development for large, autonomous teams. For small teams, they slow development dramatically because every feature that spans two services requires coordination, versioned APIs, backward compatibility, and deployment orchestration. A full-stack feature in a monolith might be a single pull request. The same feature across three microservices is three PRs, an API contract negotiation, and a coordinated deployment.
The Resume Problem: Microservices as Career Capital
Here's the uncomfortable truth nobody talks about: one of the strongest drivers of microservices adoption is resume-driven development. Engineers want "microservices" and "Kubernetes" and "distributed systems" on their resumes because those keywords command premium salaries. Architects recommend microservices because it creates more architectural work (and job security). Platform teams advocate for Kubernetes because it justifies their team's existence.
The salary data supports this dynamic:
| Role | US Salary Range | Emerging Markets |
|---|---|---|
| Backend Engineer (monolith experience) | $90,000 - $160,000 | $20,000 - $55,000 |
| Backend Engineer (microservices + K8s) | $120,000 - $200,000 | $35,000 - $80,000 |
| Microservices / Distributed Systems Architect | $150,000 - $230,000 | $50,000 - $110,000 |
| Platform / Infrastructure Engineer | $130,000 - $210,000 | $40,000 - $90,000 |
| DevOps/SRE supporting microservices | $120,000 - $195,000 | $30,000 - $75,000 |
The premium for microservices experience is real — 20-40% higher salaries. And I don't blame any individual engineer for wanting those keywords on their resume. But as a decision-maker, you need to be aware that the people recommending microservices have a personal financial incentive to do so, even if it's subconscious.
Here's my counter-argument for career strategy: the engineers who can actually ship products — who can take a product from zero to revenue, who can build things that work reliably and cheaply — are rarer and more valuable than engineers who can recite the microservices pattern catalog. Being the person who says "we don't need microservices for this, and here's why" is a more impressive display of engineering judgment than reflexively reaching for the complex solution.
What I Actually Think
I've gone back and forth on this over the years, and here's where I've landed:
Microservices are a scaling solution for organizations, not for technology. They solve the problem of "how do 200 engineers work on the same product without creating a coordination nightmare?" If you have that problem, microservices are genuinely the answer — nothing else works as well at that scale. But if you don't have that problem (and most companies don't), microservices add complexity without solving anything.
The modular monolith should be the default starting architecture for 90% of new projects. It gives you the code organization and boundary enforcement of microservices with the operational simplicity of a monolith. When (if!) specific modules need to be extracted into independent services, the clean boundaries make that extraction straightforward.
The "start with microservices" advice is actively harmful for startups. I've watched startups burn 6 months of runway building Kubernetes infrastructure for a product that has 50 users. That's not engineering — it's architecture cosplay. Build the monolith, get users, find product-market fit, and then solve the scaling problems you actually have (not the ones you imagine you'll have).
The industry is course-correcting, but slowly. The 42% consolidation rate from the CNCF survey is the market admitting it went too far. But there's a lot of institutional inertia — companies that have invested millions in microservices infrastructure and Kubernetes platforms are reluctant to admit the investment wasn't necessary. Sunk cost fallacy is a powerful force in software architecture.
The best engineers I know are pragmatists, not ideologues. They don't have a "preferred architecture." They have a preferred outcome (working software, delivered efficiently, that's cheap to operate) and they pick the architecture that achieves that outcome for their specific situation. Sometimes that's microservices. Usually it's not.
Decision Framework: Should You Use Microservices?
Here's the framework I use when advising teams on this decision. Answer honestly:
1. How many engineers do you have?
- Fewer than 20: Monolith. No question.
- 20-50: Modular monolith. Start enforcing module boundaries.
- 50-100: Consider extracting 2-3 genuinely independent services. Keep the core as a modular monolith.
- 100+: Microservices are likely necessary for organizational reasons. Invest in the platform team and tooling to support them.
2. Do you have a dedicated platform/infrastructure team?
- No: Don't use microservices. You'll spend engineering time on infrastructure instead of product.
- Yes, but small (1-3 people): Use a modular monolith with maybe 1-2 extracted services for genuinely independent workloads.
- Yes, substantial (5+ people): Microservices are viable if other criteria also apply.
3. Are your domain boundaries stable and well-understood?
- No (early-stage product, pivoting, domain still evolving): Monolith. You'll draw the wrong service boundaries.
- Yes (mature product with clear, stable domains): Extraction candidates are identifiable.
4. Do different parts of your system have genuinely different scaling needs?
- Not really — everything scales together: Monolith scales horizontally just fine.
- Yes — specific workloads spike independently: Those specific workloads are extraction candidates.
5. What's your infrastructure budget tolerance?
- Lean / cost-sensitive: Monolith. Microservices infrastructure costs 3.75-6x more.
- Substantial / revenue justifies it: Microservices cost is manageable.
If the monolith-leaning answer applies to three or more questions, you should not be using microservices. If the microservices-leaning answer applies to four or more, microservices are worth considering. Anywhere in between, the modular monolith is your sweet spot.
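The five questions above can be reduced to a toy scoring function. This is just the article's thresholds made executable (the question keys are invented labels for illustration): three or more monolith-leaning answers means skip microservices, four or more microservices-leaning answers means consider them, and everything else lands on the modular monolith.

```python
def recommend(answers: dict[str, str]) -> str:
    """answers maps each question to 'monolith' or 'microservices'."""
    mono = sum(1 for v in answers.values() if v == "monolith")
    micro = sum(1 for v in answers.values() if v == "microservices")
    if mono >= 3:
        return "monolith"
    if micro >= 4:
        return "consider microservices"
    return "modular monolith"

# Example: a typical mid-size team with one genuinely spiky workload.
verdict = recommend({
    "team_size": "monolith",        # fewer than 20 engineers
    "platform_team": "monolith",    # no dedicated platform team
    "stable_domains": "microservices",
    "scaling_needs": "monolith",
    "budget": "monolith",
})
assert verdict == "monolith"
```

Note that the function is deliberately asymmetric: a few monolith-leaning answers veto microservices outright, which mirrors the article's argument that the costs arrive immediately while the benefits only arrive at scale.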
The Bottom Line
The microservices movement was, at its core, an attempt to solve a real problem: how do large organizations build and maintain complex software systems? For organizations that genuinely face that problem — hundreds of engineers, multiple autonomous teams, complex domains with independent scaling needs — microservices remain the best answer we have.
But for the vast majority of software projects — startups, mid-size companies, small teams building focused products — microservices are a solution in search of a problem. They add 3-6x infrastructure costs, 60-80% more operational complexity, and months of reduced developer productivity, in exchange for organizational benefits that only materialize at scales most companies never reach.
The 42% consolidation rate tells you everything you need to know: nearly half the companies that adopted microservices realized they'd overcorrected. The modular monolith — a single deployable application with enforced internal boundaries — gives you the code organization benefits of microservices without the distributed systems tax. It's the architecture most teams should start with, and many should stay with forever.
Build the simplest thing that works. Optimize for shipping. Extract services when you have evidence you need them — not when a conference speaker tells you they're the future. The future is boring software that works.
Sources
- Amazon Prime Video Tech Blog — Scaling Up the Prime Video Monitoring Service
- CNCF Annual Survey 2024/2025
- Java Code Geeks — The Hidden Costs of Microservices
- ByteIota — Microservices Challenges and Solutions
- AWS in Plain English — The Hidden Costs of Microservices
- Shopify Engineering — Deconstructing the Monolith
- Segment Blog — Goodbye Microservices: From 100s of Problem Children to 1 Superstar
- DHH — We Have Left the Cloud
- VentureBeat — The Microservices Backlash
- Glassdoor — Microservices Architect Salary Data
BirJob.com aggregates tech and non-tech job listings from 90+ sources across Azerbaijan and beyond — whether you're building monoliths, microservices, or something in between. Find your next role at birjob.com.
