
Kubernetes in 2026 - Is It Still Worth the Complexity Tax?

TL;DR: Kubernetes won the orchestration argument years ago. The question is no longer “should we use Kubernetes.” It is “should this particular team, with this particular workload and this particular budget, pay the operational tax.” For genuinely large, multi-tenant, multi-region platforms with dedicated infrastructure teams, the answer is still mostly yes: the ecosystem maturity is unmatched and the alternatives lose at scale. For mid-sized engineering organisations, the answer in 2026 is probably not, and increasingly not. Managed serverless, container platforms like Fly and Railway, and the new generation of platform-as-a-service offerings are competitive in ways they were not three years ago. For startups and small teams, the answer is almost always no, and stop pretending otherwise. The honest read in 2026: Kubernetes is the right answer to fewer questions than it used to be, and being honest about that is now a competitive advantage rather than a heresy.

How We Got Here: Kubernetes was the right idea at the right time. By the late 2010s, every serious engineering team needed an answer to “how do we run containers in production.” Kubernetes provided one: it was open, it was backed by a credible foundation, and the cloud providers all blessed it. Within five years it was the default. Within ten years it was the assumption. ...

May 3, 2026 · 8 min · James M

What Comes After Artemis: The Road to a Lunar Gateway

The Gateway Concept When most people think of returning to the Moon, they imagine Artemis astronauts landing, collecting samples, and returning home - just like Apollo. That’s the goal for Artemis III and IV. But NASA is building something different for what comes after: the Lunar Gateway. It’s not a destination in itself. It’s infrastructure - a way station in lunar orbit that changes how humans explore the Moon forever. ...

April 9, 2026 · 9 min · James M

Token Economics: Why the Cost of AI Isn't Going Down

TL;DR:

- Inference cost is architectural - generating each token requires loading massive models into GPU memory, and that fundamental constraint doesn’t disappear with scale or competition
- Despite Moore’s Law expectations, flagship model prices (Claude 3, GPT-4) have remained flat for 18+ months because demand growth absorbs any efficiency gains
- The true cost of using AI is 1.5 - 2.5x the raw token price once you factor in monitoring, retries, fine-tuning, and compliance overhead
- Providers convert efficiency gains into better features (longer context, faster inference, multimodal) rather than lower prices - you get more value per dollar, not fewer dollars
- Stop waiting for cheaper AI; treat token costs as fixed infrastructure spend and optimise usage with tools like prompt caching instead

There’s a persistent myth in tech: AI will get cheaper. The argument is straightforward - Moore’s Law, scale effects, competition, and raw compute efficiency improvements mean costs should plummet. Yet in April 2026, Claude costs roughly what it did in 2024. GPT-4 Turbo pricing hasn’t moved in eighteen months. Gemini’s cost structure remains sticky. Why? ...
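The 1.5 - 2.5x overhead multiplier above can be sketched as a simple budgeting calculation. This is a minimal illustration only: the per-million-token rate and the traffic volume below are hypothetical placeholders, not real provider prices; the multiplier range is the post's.

```python
# Sketch of the "true cost" claim: effective spend is the raw token bill
# scaled by an operational overhead factor (monitoring, retries, fine-tuning,
# compliance). Rates and volumes here are illustrative, not real pricing.

def effective_monthly_cost(tokens_per_month: int,
                           price_per_million: float,
                           overhead: float = 2.0) -> float:
    """Raw token spend times an operational overhead multiplier."""
    raw = tokens_per_month / 1_000_000 * price_per_million
    return raw * overhead

# 200M tokens/month at a hypothetical $10 per million tokens:
raw = effective_monthly_cost(200_000_000, 10.0, overhead=1.0)   # $2,000 raw
low = effective_monthly_cost(200_000_000, 10.0, overhead=1.5)   # $3,000
high = effective_monthly_cost(200_000_000, 10.0, overhead=2.5)  # $5,000
print(raw, low, high)
```

The point of the exercise: even if the raw rate never moves, the budget you should plan for is the top of that range, which is why the post treats token costs as fixed infrastructure spend.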

April 9, 2026 · 8 min · James M

Local AI vs Cloud AI: The Tradeoff Landscape in 2026

By early 2026, the “Local vs. Cloud” debate has moved past the experimental phase. We are no longer just “trying to see if Llama runs on a Mac.” Instead, professional engineers are building sophisticated Hybrid AI Stacks where local and cloud models work in tandem. The landscape has shifted because the hardware caught up to the software. With the prevalence of unified memory on Apple Silicon and the accessibility of 24GB+ VRAM cards like the RTX 50-series, the “local” ceiling has been smashed. ...
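The “work in tandem” idea can be sketched as a routing rule: keep small or sensitive requests on the local model and send the rest to a cloud API. The thresholds, backend names, and the chars-to-tokens heuristic below are illustrative assumptions, not details from the post.

```python
# Minimal sketch of a hybrid local/cloud routing rule. All thresholds and
# labels are hypothetical; a real stack would use a tokenizer and real
# model capabilities, not a character-count estimate.

def route(prompt: str, contains_pii: bool, local_ctx_limit: int = 8_000) -> str:
    """Return which backend ("local" or "cloud") should handle a request."""
    if contains_pii:
        return "local"  # privacy-sensitive data never leaves the machine
    est_tokens = len(prompt) // 4  # rough chars-to-tokens estimate
    if est_tokens <= local_ctx_limit:
        return "local"  # cheap path for small contexts
    return "cloud"      # long-context or heavy tasks go to the hosted model

print(route("summarise this paragraph", contains_pii=False))  # local
```

The design choice this encodes is the one the post describes: local models handle the private and the routine, while cloud models are reserved for work that exceeds local hardware.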

April 9, 2026 · 5 min · James M

Polkadot 2026: From Infrastructure to Applications

The Pivot Year: Polkadot’s Strategic Shift in 2026 Polkadot has undergone a fundamental transformation in 2025-2026. After years of building infrastructure layers, the ecosystem is making a decisive pivot toward user-facing applications. This isn’t just a narrative shift - it’s embedded in technical upgrades, tokenomics redesigns, and validator economics that reflect a maturing network ready to compete at the application layer. Timing: This transformation arrives as traditional finance begins acknowledging blockchain infrastructure, and as the broader crypto market cycle approaches a pivotal moment for adoption. ...

April 4, 2026 · 5 min · James M

Stargate

TL;DR:

- Stargate is a $500B AI infrastructure programme announced in January 2025 - a joint effort between Microsoft, OpenAI, SoftBank, and Oracle
- Construction has already started in Texas with more sites planned, aimed at training and serving the next generation of frontier AI models
- The scale signals where compute spend is heading - tens of billions per cluster is becoming the price of admission at the frontier
- Combines OpenAI’s models, Microsoft’s cloud, SoftBank’s capital, and Oracle’s enterprise infrastructure into one of the largest tech buildouts ever announced
- Worth tracking as a useful proxy for how seriously the industry takes the compute side of the AGI race

About: Stargate is a $500 billion AI infrastructure project announced in January 2025. The initiative is a collaboration between Microsoft, OpenAI, SoftBank, and Oracle to build a series of massive AI supercomputers and data centers. ...

March 29, 2024 · 2 min · James M