
The Free Intelligence Era: What Breaks When Thinking Costs Nothing

TL;DR
- The marginal cost of AI intelligence is halving roughly every two months and heading toward a level where rationing stops making sense, similar to how bandwidth and storage became effectively unconstrained.
- This will break pricing models built on scarce cognition: anything billed per word, per hour, or per consult faces a hard ceiling set by what machines charge for the same work.
- The Jevons paradox means total cognitive work in the economy likely goes up, not down: cheaper thinking means we apply thinking to far more problems, not the same problems more cheaply.
- Three categories of human work survive: accountability (being the named responsible party), taste (choosing well from infinite AI-generated options), and real-world coupling (a body in a place, a relationship that took years to build).
- The political question of who captures the surplus and who absorbs the transition cost is still open; it will be decided by institutions and policy, not by the technology itself.

This is a personal reflection, not a forecast dressed up as one. I am writing about a trend I think is real, but the second-order consequences are guesses, and I am sure some of them are wrong. ...

April 28, 2026 · 14 min · James M

The Year 3026: Thinking Seriously About a Thousand Years From Now

TL;DR
- Over a thousand years, the substrate of civilisation changes beyond recognition, but the human core (love, grief, storytelling, the search for meaning) almost certainly does not.
- Computation and energy will have hit their physical cost floors by 3026; intelligence is ambient, woven into the environment so thoroughly that “using AI” becomes as meaningless a phrase as “using oxygen”.
- The built environment is almost certainly at solar-system scale, with the Earth a protected biosphere and heavy industry, compute, and energy capture distributed across the inner solar system.
- No company, currency, or nation founded in 2026 is likely to survive in any meaningful continuity; the middle layer of institutions gets hollowed out, leaving fewer but far longer-lived structures.
- The decisions being made right now on AI safety, climate, and coordination have genuinely astronomical consequences, because they determine whether there is a 3026 worth having at all.

Most writing about the future of AI stops at ten years. A few brave pieces stretch to fifty. I wrote one of the ten-year ones myself in The Next Decade of AI, and the honest reason the horizon stays short is that the uncertainty gets unmanageable much past that. Forecasting even the shape of the economy in 2040 is already mostly vibes. ...

April 20, 2026 · 14 min · James M

The Year 2126: What the Next Hundred Years Actually Looks Like

TL;DR
- By 2126, clean energy, most infectious disease, and routine cognitive work are almost certainly solved; the AI transition will look as obvious in hindsight as the car replacing the horse.
- Climate is the hardest unsolved problem: the outcome depends on decisions made in the next thirty years, and 2126 inherits either a managed problem or a civilisation in partial retreat.
- The demographic inversion is one of the most structurally important facts: global population peaks around 2060-2080 then declines, leaving a world where a hundred-year-old is ordinary and a child is rare and socially valued.
- Human work shifts toward human-presence roles, stewardship of powerful systems, physical craft, meaning-making, and accountability: the categories that cannot be automated.
- The decade we are in now is one that 2126 will study closely; the decisions made about AI safety, climate, and institutional reform are visibly reflected in the outcome a century later.

A hundred years is a useful distance. Long enough that the current news cycle is ancient history, short enough that some people alive in 2126 will have living memory of people who were alive in 2026. The children being born this week have a non-trivial chance of being interviewed, in their late nineties, about what the early AI era was actually like. That matters. It makes the 100-year horizon a question about the world people we know will inherit, not an abstract one. ...

April 20, 2026 · 17 min · James M

Reading the Signals: Which of the Four Futures Is Actually Emerging?

TL;DR Scoring four future scenarios against real-world signals:
- Winner-take-most has the clearest corporate and capital logic behind it as of April 2026, driven by vertical integration across chips, data centres, models, and distribution.
- Broad abundance gets partial credit: inference costs have fallen two orders of magnitude and open-weight models are competitive, but institutional-level gains in healthcare and education haven’t materialised.
- Techno-feudalism is quietly accumulating through agentic platform lock-in (Claude Code, Cursor, Devin) and payment rail consolidation, with competition enforcement as the main counterweight.
- Managed transition is the weakest scenario: UBI pilots haven’t scaled nationally, compute taxation remains a proposal, and institutional response cycles are mismatched with AI deployment speed.
- Three signals will determine where this goes: whether the open-weight frontier gap widens or closes, whether agentic memory becomes portable or platform-owned, and whether any serious economy moves past pilot-scale on redistribution.

I recently mapped four plausible futures for the machine-speed economy and listed the signals to watch for each. The obvious next question is the one I deliberately held back from answering: which signals are actually firing right now, and what does the mix say about where we’re heading? ...

April 20, 2026 · 7 min · James M

Four Futures for the Machine-Speed Economy

TL;DR
- AI is collapsing build times across the entire software stack, meaning small teams can now ship in weeks what once required 50-person organisations working for a year.
- Four plausible futures are mapped: Broad Abundance (gains widely distributed), Winner-Take-Most (rents accrue to infrastructure owners), Techno-Feudalism (intelligence rented from platform landlords), and Managed Transition (governments respond with UBI and regulation).
- Signals to watch include open-source model performance, vertical integration of chips and data centres, platform lock-in of agentic workflows, and serious UBI pilots at national scale.
- Leading AI researchers, including Geoffrey Hinton and Yoshua Bengio, argue the critical variable is no longer how capable models become, but how gains are distributed and how fast institutions adapt.
- Across most scenarios, the things that hold their value are consistent: trust, relationships, physical presence, and creativity rooted in specific human experience.

The pace of AI development over the past three years is genuinely unlike anything in recent economic history. The Stanford AI Index has tracked frontier model capability roughly doubling on a yearly cadence, and private AI investment has reached levels that dwarf the dot-com peak in inflation-adjusted terms. What’s less widely understood is what that pace actually means for competition, investment, and the structure of the economy. ...

April 19, 2026 · 5 min · James M