In-depth exploration of AI in practice: building and deploying AI agents that work, designing developer workflows around Claude and other LLMs, critical analysis of AI safety and reliability, and the real shifts happening in careers, skills, and how we work. This section mixes tactical guides (how to actually build with AI), strategic analysis (what’s hype vs. what matters), and deeper dives into the tools and systems reshaping software development and knowledge work.

AI Law and Regulation

AI Law Is No Longer Theoretical: What's Here, What's Coming, and What It Means

TL;DR

- The EU AI Act is now in force, with full enforcement of high-risk AI requirements from August 2026 and fines of up to 7% of global turnover - this is no longer a distant deadline
- Over fifty copyright lawsuits against AI developers are working through US courts, and the EU Copyright Directive puts the burden of verifying training-data rights on the AI developer, not the rights holder
- Courts in multiple jurisdictions are consistently finding that deploying AI does not transfer liability to the vendor - "the AI did it" is not a defence that holds up
- The US has no comprehensive federal AI law; instead, businesses must navigate a patchwork of state statutes (California, Colorado, New York, Texas) alongside existing federal agency enforcement from the FTC, CFPB, and FDA
- The "move fast and figure out the legal stuff later" era is over - enough of the legal framework has arrived that the gaps are no longer a safe place to operate

For the past few years, AI law has been one of those topics that felt perpetually five minutes away. Governments would announce frameworks. Committees would publish white papers. Experts would debate what the rules should eventually look like. ...

April 22, 2026 · 9 min · James M

Giving Your Home AI Agent Memory That Lasts

TL;DR

- Problem: a home agent with tools but no memory is a very well-read goldfish. Every morning it re-meets you.
- Answer: split memory into three layers - working, episodic, and semantic - and give each layer its own store and its own rules for what gets written.
- Where it lives: SQLite for episodic and facts, a local vector store for semantic search, and a tiny policy file that decides what is worth remembering in the first place.
- How it plugs in: a memory MCP server that exposes recall, remember, and forget - nothing else.
- Result: the agent can say "last Tuesday we tried restarting the Postgres container and it worked" and mean it. It also knows what not to store.

The Goldfish Problem

The home agent I built over the last few weeks can do real things now. It can read my mail, move files around my workspace, turn lights off, and check my calendar. What it could not do, until this week, was remember any of it. ...
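The episodic layer described above can be sketched in a few lines. This is a minimal illustration, not the post's actual implementation: the table schema, the `ALLOWED_KINDS` policy set, and the function names are assumptions that mirror the recall / remember / forget tool surface the TL;DR mentions.

```python
import sqlite3
import time

# A reduced stand-in for the "tiny policy file": only these kinds of
# memory are worth writing down at all. (Illustrative, not the real policy.)
ALLOWED_KINDS = {"event", "fact"}

def connect(path=":memory:"):
    # Episodic memories live in a plain SQLite table, one row per memory.
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS memories ("
        " id INTEGER PRIMARY KEY, kind TEXT, text TEXT, ts REAL)"
    )
    return db

def remember(db, kind, text):
    # Policy gate first: anything outside the allowlist is never stored.
    if kind not in ALLOWED_KINDS:
        return False
    db.execute(
        "INSERT INTO memories (kind, text, ts) VALUES (?, ?, ?)",
        (kind, text, time.time()),
    )
    return True

def recall(db, term, limit=5):
    # Simple substring recall, newest first; the semantic layer would use
    # a vector store instead.
    rows = db.execute(
        "SELECT text FROM memories WHERE text LIKE ? ORDER BY ts DESC LIMIT ?",
        (f"%{term}%", limit),
    ).fetchall()
    return [r[0] for r in rows]

def forget(db, term):
    # Deletion is a first-class tool, not an afterthought.
    cur = db.execute("DELETE FROM memories WHERE text LIKE ?", (f"%{term}%",))
    return cur.rowcount
```

Keeping the policy check inside `remember` means the agent never has to decide at recall time whether something should have been stored; unstoreable material simply never hits disk.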

April 22, 2026 · 9 min · James M

SpaceX Buys the Right to Buy Cursor for $60 Billion

TL;DR

- SpaceX has signed an option to acquire Cursor (made by Anysphere) for $60 billion, or pay $10 billion for the joint work if it walks away
- Cursor's valuation has risen 24x in fifteen months - from $2.5 billion in January 2025 to a $60 billion option price in April 2026
- The deal sits under SpaceX rather than xAI directly, because SpaceX holds the balance sheet after the SpaceX - xAI merger valued at $1.25 trillion
- For xAI, buying Cursor is a faster route to developer relevance than out-marketing OpenAI's Codex or Anthropic's Claude Code
- If the acquisition closes, three of the main AI coding interfaces will sit inside three frontier labs - raising questions about model neutrality and pricing pressure on independent tools

It's rare to see an option contract make the front page, but that is what landed on 21 April 2026. SpaceX disclosed that it has signed a deal with Cursor - the AI coding tool made by Anysphere - giving it the right to buy the startup outright for $60 billion later this year, or to walk away with a $10 billion payment for the joint work the two teams are doing in the meantime. ...

April 22, 2026 · 6 min · James M

Giving Your Home AI Agent Real Tools: MCP Servers on a Mac Studio

TL;DR

- Problem: a local agent that can only chat is a toy. The value is in what it can do.
- Answer: Model Context Protocol servers, running locally on the Mac Studio, expose filesystem, calendar, mail, notes, and a handful of custom tools.
- Runtime: one supervisord config, a small router, and per-server allowlists so nothing escapes its box.
- Security posture: no tool runs without a policy, secrets live in the macOS Keychain, and every call is logged to a local SQLite file I can grep at 11pm.
- Result: I can phone the agent (see How to Phone Your Home AI Agent), ask "move the CI failure email to triage and put a 15 minute hold on my calendar at 4", and it actually does it.

Why MCP and Not "Just Functions"

Before MCP I had a directory of half-finished Python shims. Each one spoke a slightly different dialect: one took JSON arguments, one took positional args, one returned markdown and one returned a dict. Adding a new tool meant editing the agent prompt, the router, and the caller. ...
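The per-server allowlist idea above can be sketched as a small router gate. This is illustrative only: the server names, tool names, and audit shape are assumptions (the post logs to SQLite; a plain list stands in for it here), not the article's actual config.

```python
# Hypothetical per-server allowlists: a (server, tool) call only goes
# through if the tool is explicitly permitted for that server.
ALLOWLISTS = {
    "filesystem": {"read_file", "move_file"},
    "calendar": {"create_hold", "list_events"},
}

def route(server, tool, args, handlers, audit):
    """Dispatch a tool call, recording every attempt - allowed or denied."""
    if tool not in ALLOWLISTS.get(server, set()):
        audit.append((server, tool, "denied"))
        raise PermissionError(f"{server}.{tool} is not in the allowlist")
    audit.append((server, tool, "allowed"))
    return handlers[(server, tool)](**args)
```

The useful property is that denial is the default: a freshly added server can do nothing until its allowlist says otherwise, and the audit trail records refusals as well as successes.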

April 22, 2026 · 8 min · James M

How to Phone Your Home AI Agent Running on a Mac Studio

TL;DR

- Goal: Call a real phone number and have a proper back-and-forth with my Mac Studio agent while walking the dog.
- Hardware: Mac Studio (M2 Ultra, 128 GB) running a local model via Ollama or MLX.
- Voice pipeline: Twilio SIP in, LiveKit Agents orchestrating STT / LLM / TTS, Whisper for transcription, Piper or ElevenLabs for speech.
- Brain: A local 30B-class model for chat plus tool calls, with Claude API as a fallback for the harder reasoning.
- Reach: Tailscale between the Mac and a tiny VPS so I never punch a hole in my home router.
- Outcome: I can ring a UK landline number, ask "what's failing on the CI pipeline?" and get a spoken answer in ~2 seconds.

Why bother phoning your own agent?

Typing is great at a desk. Outside the desk, it's hopeless. I wanted the simplest possible interface to the box sat under my desk at home - dial a number, talk, hang up. No app, no login, no VPN dance on my phone. ...
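The STT → LLM → TTS turn loop and the local-model-with-API-fallback idea can be sketched as plain function composition. Everything here is illustrative plumbing under assumed names (`handle_turn`, `llm_with_fallback`), not the post's actual LiveKit wiring.

```python
def handle_turn(audio, stt, llm, tts):
    """One conversational turn: audio in, speech out.

    The three stages are injected so concrete engines (e.g. Whisper for
    stt, Piper for tts) can be swapped without touching the loop.
    """
    text = stt(audio)    # transcription
    reply = llm(text)    # chat + tool calls
    return tts(reply)    # synthesis

def llm_with_fallback(local, fallback):
    """Prefer the local 30B-class model; fall back to an API on failure."""
    def call(prompt):
        try:
            return local(prompt)
        except Exception:
            # Harder reasoning, or a busy local model, goes to the API.
            return fallback(prompt)
    return call
```

Injecting the stages keeps the latency-critical loop free of engine-specific code, which is what makes the "Piper or ElevenLabs" choice in the TL;DR a one-line swap rather than a rewrite.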

April 21, 2026 · 10 min · James M

An AI Tooling Learning Path: Logical Phases for 2026

TL;DR

- The order you learn AI tools matters as much as which tools you learn - most people start with terminal agents or editors before they understand how models actually fail
- The seven-phase path runs: fundamentals, chat interfaces, AI-native editors, terminal agents, local models, orchestration, and review and evaluation
- Terminal agents (Claude Code, Cline, Aider) represent the biggest mindset shift - you move from driving with suggestions to specifying and letting the model execute
- Local models via Ollama belong in phase five, once you have felt the pain of API costs and know which tasks actually need frontier capability
- Review, evaluation, and capture (phase seven) is the phase most developers skip - and the one that separates AI-curious from AI-competent

The hardest part of learning AI tooling in 2026 is not any single tool. It is the order you meet them in. ...

April 21, 2026 · 10 min · James M

Amazon Doubles Down: The $25 Billion Anthropic Bet

TL;DR

- Amazon announced up to $25 billion in additional investment in Anthropic on April 20, 2026, bringing total committed capital past $33 billion
- In return, Anthropic committed to spending over $100 billion on AWS over the next decade - effectively a closed loop where Amazon's capital funds Anthropic's compute bill
- The deal gives Amazon a flagship AI workload to prove out its Trainium custom silicon against Nvidia, while countering Microsoft's OpenAI advantage on Azure
- For developers building with Claude, expect more capacity, more aggressive pricing on Bedrock, and deeper AWS service integration as the compute comes online
- The arrangement signals that frontier AI has fully consolidated into a small number of hyperscaler-aligned labs - the era of independent AI startups is effectively over

On April 20, 2026, Amazon announced it would invest up to an additional $25 billion in Anthropic, stacking on top of the $8 billion it has already poured into the AI startup over recent years. In return, Anthropic committed to spending more than $100 billion on Amazon Web Services over the next ten years. ...

April 21, 2026 · 6 min · James M

The Year 3026: Thinking Seriously About a Thousand Years From Now

TL;DR

- Over a thousand years, the substrate of civilisation changes beyond recognition, but the human core - love, grief, storytelling, the search for meaning - almost certainly does not
- Computation and energy will have hit their physical cost floors by 3026; intelligence is ambient, woven into the environment so thoroughly that "using AI" becomes as meaningless a phrase as "using oxygen"
- The built environment is almost certainly at solar-system scale - with the Earth a protected biosphere and heavy industry, compute, and energy capture distributed across the inner solar system
- No company, currency, or nation founded in 2026 is likely to survive in any meaningful continuity; the middle layer of institutions gets hollowed out, leaving fewer but far longer-lived structures
- The decisions being made right now - on AI safety, climate, and coordination - have genuinely astronomical consequences, because they determine whether there is a 3026 worth having at all

Most writing about the future of AI stops at ten years. A few brave pieces stretch to fifty. I wrote one of the ten-year ones myself in The Next Decade of AI, and the honest reason the horizon stays short is that the uncertainty gets unmanageable much past that. Forecasting even the shape of the economy in 2040 is already mostly vibes. ...

April 20, 2026 · 14 min · James M

The Year 2126: What the Next Hundred Years Actually Looks Like

TL;DR

- By 2126, clean energy, most infectious disease, and routine cognitive work are almost certainly solved - the AI transition will look as obvious in hindsight as the car replacing the horse
- Climate is the hardest unsolved problem: the outcome depends on decisions made in the next thirty years, and 2126 inherits either a managed problem or a civilisation in partial retreat
- The demographic inversion is one of the most structurally important facts - global population peaks around 2060-2080 then declines, leaving a world where a hundred-year-old is ordinary and a child is rare and socially valued
- Human work shifts toward human-presence roles, stewardship of powerful systems, physical craft, meaning-making, and accountability - the categories that cannot be automated
- The decade we are in now is one that 2126 will study closely; the decisions made about AI safety, climate, and institutional reform are visibly reflected in the outcome a century later

A hundred years is a useful distance. Long enough that the current news cycle is ancient history, short enough that some people alive in 2126 will have living memory of people who were alive in 2026. The children being born this week have a non-trivial chance of being interviewed, in their late nineties, about what the early AI era was actually like. That matters. It makes the 100-year horizon a question about the world that people we know will inherit, not an abstract one. ...

April 20, 2026 · 17 min · James M

Reading the Signals: Which of the Four Futures Is Actually Emerging?

TL;DR

- Scoring four future scenarios against real-world signals: winner-take-most has the clearest corporate and capital logic behind it as of April 2026, driven by vertical integration across chips, data centres, models, and distribution
- Broad abundance gets partial credit - inference costs have fallen two orders of magnitude and open-weight models are competitive, but institutional-level gains in healthcare and education haven't materialised
- Techno-feudalism is quietly accumulating through agentic platform lock-in (Claude Code, Cursor, Devin) and payment-rail consolidation, with competition enforcement as the main counterweight
- Managed transition is the weakest scenario - UBI pilots haven't scaled nationally, compute taxation remains a proposal, and institutional response cycles are mismatched with AI deployment speed
- The three signals that will determine where this goes: whether the open-weight frontier gap widens or closes, whether agentic memory becomes portable or platform-owned, and whether any serious economy moves past pilot scale on redistribution

I recently mapped four plausible futures for the machine-speed economy and listed the signals to watch for each. The obvious next question is the one I deliberately held back from answering: which signals are actually firing right now, and what does the mix say about where we're heading? ...

April 20, 2026 · 7 min · James M