TL;DR
- Kubernetes won the orchestration argument years ago. The question is no longer “should we use Kubernetes.” It is “should this particular team, with this particular workload, with this particular budget, pay the operational tax.”
- For genuinely large, multi-tenant, multi-region platforms with dedicated infrastructure teams, the answer is still mostly yes. The ecosystem maturity is unmatched and the alternatives lose at scale.
- For mid-sized engineering organisations, the answer in 2026 is probably not, and increasingly not. Managed serverless, container platforms like Fly and Railway, and the new generation of platform-as-a-service offerings are competitive in ways they were not three years ago.
- For startups and small teams, the answer is almost always no, and it is time to stop pretending otherwise.
- The honest read in 2026: Kubernetes is the right answer to fewer questions than it used to be, and being honest about that is now a competitive advantage rather than a heresy.
How We Got Here
Kubernetes was the right idea at the right time. By the late 2010s, every serious engineering team needed an answer to “how do we run containers in production.” Kubernetes provided one, it was open, it was backed by a credible foundation, and the cloud providers all blessed it. Within five years it was the default. Within ten years it was the assumption.
But somewhere along the way, the conversation shifted. Kubernetes stopped being an answer and started being a question. Specifically, it started being a question every engineering org had to answer regardless of whether they actually had the workload that justified it. Five-engineer teams adopted Kubernetes because that was what serious teams used. Companies running three services migrated to Kubernetes because that was what the platform team was hired to build.
A lot of those decisions look worse in hindsight. The complexity got real. The on-call got real. The cloud bill got real. And the alternatives got better.
What The Complexity Tax Actually Costs
The cost of running Kubernetes in production is rarely the line item people think it is. In rough order of importance:
Cognitive overhead. Kubernetes has its own object model, its own networking abstractions, its own storage abstractions, and its own RBAC system. Every engineer who touches the platform has to learn enough of this to be effective. For a team running ten services, this is overhead that does not pay back.
Operational headcount. A “real” Kubernetes platform - one with proper observability, secret management, ingress, certificate rotation, autoscaling, and security policies - demands at minimum 0.5-1 FTE for a small environment, and rapidly more as the cluster grows. Most teams under-resource this and pay for it later.
Ecosystem churn. Tools you depend on break, deprecate, or get superseded. The CNCF landscape adds and retires projects faster than most teams can keep up. Pinning versions and standing still works for a while, then it does not.
Cloud cost surprise. Managed Kubernetes nodes are not cheap. Add load balancers, persistent volumes, log aggregation, and observability tooling, and the bill for a single environment routinely exceeds what the same workload would cost on a serverless or platform-as-a-service equivalent.
Security surface. A Kubernetes cluster is a sprawling system with a large default attack surface. Hardening it takes work. Most clusters are not hardened to the standard their security teams would write down on paper.
None of this is unique to Kubernetes. It is the cost of running infrastructure. The question is whether you get value back for that cost.
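The cost side of that question is partly arithmetic: the platform overhead is roughly fixed, so what matters is how many workloads it amortises across. A minimal sketch, using entirely hypothetical placeholder numbers rather than anything measured:

```python
# Back-of-envelope amortisation of fixed platform overhead per service.
# The $200k/year figure (roughly one platform engineer plus tooling) is a
# hypothetical placeholder, not a benchmark; substitute your own numbers.

def overhead_per_service(platform_cost_per_year: float, num_services: int) -> float:
    """Spread a fixed annual platform cost evenly across services."""
    return platform_cost_per_year / num_services

PLATFORM_COST = 200_000  # hypothetical: 1 FTE + observability/security tooling

for n in (3, 30, 300):
    print(f"{n:>3} services -> ${overhead_per_service(PLATFORM_COST, n):,.0f}/service/year")
```

The shape of the output is the whole argument: at three services the per-service overhead is enormous; at three hundred it fades into the noise.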
When Kubernetes Still Earns Its Keep
There are categories of work where Kubernetes is genuinely the best choice in 2026 and probably will be for years.
Multi-tenant platforms at scale. If you are building something where many internal or external customers share infrastructure, Kubernetes’ scheduling and isolation primitives are unmatched. The big SaaS platforms run on Kubernetes for a reason.
Workloads with strong locality requirements. Stateful workloads, data systems, GPU-bound workloads, and anything that benefits from explicit scheduling all do well on Kubernetes. The serverless platforms struggle here.
Hybrid and multi-cloud. If you genuinely need to run the same workload across AWS, GCP, on-prem, and the edge, Kubernetes is the closest thing to a portable substrate. Most teams who say they need this do not actually need it. Some do.
Heavily regulated environments. Banks, governments, and other organisations where the cloud account itself is a compliance boundary often end up on Kubernetes because the alternatives do not give them the control they need.
Teams with existing Kubernetes expertise and momentum. If you already have a working Kubernetes platform, a team that knows it, and a roadmap of value built on it, ripping it out is rarely justified. Sunk-cost bias is real, but so is the cost of switching.
When It Stops Earning Its Keep
The clearer picture in 2026 is when Kubernetes is the wrong choice.
Small product teams with small footprints. If you have under a dozen services, fewer than ten engineers, and a single cloud account, you almost certainly do not need Kubernetes. You need a managed runtime that boots containers on push.
Teams optimising for time to market. Every hour spent on cluster operations is an hour not spent on the product. Early-stage companies that adopt Kubernetes routinely lose 6-12 months of product velocity to platform work.
Workloads that fit cleanly into serverless. If your workload is request-response with predictable shapes, managed serverless on a major cloud is cheaper, simpler, and more reliable than the equivalent Kubernetes setup. The latency and cold-start arguments against serverless have been substantially mitigated by the providers in the last few years.
Internal tools and dashboards. A lot of internal infrastructure that ended up on Kubernetes belonged on a single VM or a managed PaaS the whole time.
The Alternatives In 2026
The alternatives have improved meaningfully, and that is the part of the conversation that has actually changed.
Managed serverless - AWS Lambda, Cloudflare Workers, Google Cloud Run - has continued to mature. Cold starts are largely a solved problem for most workloads. Pricing is competitive for variable traffic. The developer experience is clean.
Container platforms like Fly.io and Railway have carved out a real niche between serverless and Kubernetes. You write a Dockerfile, you push, you get a globally distributed runtime with sane defaults. Operations are not zero, but they are roughly an order of magnitude lower than Kubernetes.
Managed Kubernetes done right - GKE Autopilot, EKS Fargate, and similar - lets you keep the Kubernetes API surface without managing nodes, which removes a meaningful chunk of the operational burden. This is a credible middle path for teams that want Kubernetes’ ecosystem without all of its toil.
Modern platform-as-a-service offerings have come back into fashion. Some of them are simply Heroku reimplemented for the 2020s, and that turns out to be a great product for a lot of teams.
The point is that in 2024 a serious mid-sized company defaulting to “we run Kubernetes” was a reasonable choice. In 2026 it should be a deliberate one.
The AI Wrinkle
One thing that pulled some teams back toward Kubernetes in 2025-2026 is AI workloads. GPU scheduling, model serving, training orchestration, and the operational requirements of inference platforms tend to look more like classical infrastructure problems than like serverless workloads. Tools like KServe, Ray on Kubernetes, and the various LLM-serving platforms assume Kubernetes underneath.
If your roadmap includes hosting your own models or running serious AI infrastructure, the Kubernetes case gets stronger again. If your AI strategy is “call APIs,” it does not.
This is one of the reasons it is hard to give a single answer to the question. The same company can be on the wrong side of the Kubernetes cost-benefit curve for its application workload and the right side for its AI workload. Splitting the difference is allowed and increasingly common.
The Test I Would Apply
If you are deciding from scratch in 2026, the questions I would actually ask:
- Do you have at least one engineer whose job description includes operating Kubernetes? If no, do not adopt it.
- Do you have workloads that genuinely benefit from Kubernetes-specific features - stateful sets, custom resources, GPU scheduling, multi-tenancy? If no, look at managed runtimes first.
- Is your scale large enough that the per-workload overhead of Kubernetes is amortised? If you have three services, the overhead per service is enormous. If you have three hundred, it might be efficient.
- Are you paying for portability you actually use? If you are on a single cloud and likely to remain so, the multi-cloud argument is hypothetical.
- What is the second-order cost of not adopting Kubernetes? Sometimes the answer is “we lose access to a bunch of CNCF tooling we want.” That is a real cost. Sometimes the answer is “nothing.” Be honest.
If you cannot answer “yes” to at least two of those, the complexity tax is probably not worth it.
The Honest Verdict
Kubernetes is excellent infrastructure software. It is also more infrastructure than most teams need.
The cultural shift I have watched happen in 2025 and 2026 is healthy: it is becoming acceptable to say “we are not running Kubernetes and we are not going to.” A few years ago that statement would have read as either negligent or nostalgic. Today it reads as deliberate.
The teams I see making the best infrastructure decisions in 2026 are the ones who have stopped treating Kubernetes as the default and started treating it as one option among several. That posture lets them choose Kubernetes when it actually fits and skip it when it does not. Either outcome is fine. The only bad outcome is paying the complexity tax and not getting value back.
If your platform team can articulate, in plain English, which specific Kubernetes features they depend on and why those features do not exist on the alternative they are dismissing, keep doing what you are doing. If they cannot, the conversation is overdue.