- Artificial Intelligence (LLMs, AI agents, and the future of human expertise)
- Blockchain (Decentralized infrastructure, networks, and ecosystem evolution)
- Data Engineering (Building data infrastructure that actually scales)
- DevOps (Infrastructure, automation, and operational philosophy)
- General (Culture, science, and the miscellaneous)
- Retro Computing (The machines and culture that shaped computing)
- Music Production (Gear, sound design, and creative workflow)
- Personal Development (Expertise, craft, and the engineering mindset)
- Space (Infrastructure and vision for human expansion beyond Earth)
Apache Iceberg in 2026: The Open Table Format That Won
In 2023, the question was “which open table format will survive - Iceberg, Delta, or Hudi?” In 2026, that debate is over. Apache Iceberg won, and it won for reasons that have almost nothing to do with its raw performance. It won because it is the only format that both Snowflake and Databricks now treat as a first-class citizen, because the vendors picked sides on catalogs rather than table formats, and because enterprise buyers decided that multi-engine portability was worth more than a small performance edge. ...
Giving Your Home AI Agent Real Tools: MCP Servers on a Mac Studio
TL;DR
- Problem: a local agent that can only chat is a toy. The value is in what it can do.
- Answer: Model Context Protocol servers, running locally on the Mac Studio, expose filesystem, calendar, mail, notes, and a handful of custom tools.
- Runtime: one supervisord config, a small router, and per-server allowlists so nothing escapes its box.
- Security posture: no tool runs without a policy, secrets live in the macOS Keychain, and every call is logged to a local SQLite file I can grep at 11pm.
- Result: I can phone the agent (see How to Phone Your Home AI Agent), ask “move the CI failure email to triage and put a 15-minute hold on my calendar at 4”, and it actually does it.

Why MCP and Not “Just Functions”

Before MCP I had a directory of half-finished Python shims. Each one spoke a slightly different dialect: one took JSON arguments, one took positional args, one returned markdown, and one returned a dict. Adding a new tool meant editing the agent prompt, the router, and the caller. ...
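The security posture in that excerpt comes down to two concrete pieces: a per-server allowlist and a grep-able SQLite audit log of every call. A minimal sketch of that idea in Python, not the post's actual router; `ToolGateway` and the `calendar.hold` tool name are made up for illustration:

```python
import json
import sqlite3
import time


class ToolGateway:
    """Hypothetical per-server gateway: enforce an allowlist, log every call."""

    def __init__(self, db_path, allowlist):
        self.allowlist = set(allowlist)
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS calls "
            "(ts REAL, tool TEXT, args TEXT, allowed INTEGER)"
        )

    def call(self, tool, args, handlers):
        # Log first, decide second: denied attempts land in the audit trail too.
        allowed = tool in self.allowlist
        self.db.execute(
            "INSERT INTO calls VALUES (?, ?, ?, ?)",
            (time.time(), tool, json.dumps(args), int(allowed)),
        )
        self.db.commit()
        if not allowed:
            raise PermissionError(f"tool {tool!r} is not on the allowlist")
        return handlers[tool](**args)


# Hypothetical tool; real MCP servers expose richer, schema-described tools.
handlers = {"calendar.hold": lambda minutes, at: f"held {minutes}m at {at}"}
gw = ToolGateway(":memory:", allowlist=["calendar.hold"])
result = gw.call("calendar.hold", {"minutes": 15, "at": "16:00"}, handlers)
```

The point of logging before checking the policy is that a denied call is exactly the kind of event you want to find when grepping the log at 11pm.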
How to Phone Your Home AI Agent Running on a Mac Studio
TL;DR
- Goal: Call a real phone number and have a proper back-and-forth with my Mac Studio agent while walking the dog.
- Hardware: Mac Studio (M2 Ultra, 128 GB) running a local model via Ollama or MLX.
- Voice pipeline: Twilio SIP in, LiveKit Agents orchestrating STT / LLM / TTS, Whisper for transcription, Piper or ElevenLabs for speech.
- Brain: A local 30B-class model for chat plus tool calls, with the Claude API as a fallback for the harder reasoning.
- Reach: Tailscale between the Mac and a tiny VPS so I never punch a hole in my home router.
- Outcome: I can ring a UK landline number, ask “what’s failing on the CI pipeline?” and get a spoken answer in ~2 seconds.

Why bother phoning your own agent?

Typing is great at a desk. Away from the desk, it’s hopeless. I wanted the simplest possible interface to the box sitting under my desk at home - dial a number, talk, hang up. No app, no login, no VPN dance on my phone. ...
An AI Tooling Learning Path: Logical Phases for 2026
The hardest part of learning AI tooling in 2026 is not any single tool. It is the order you meet them in. Most people start in the wrong place. They install a terminal agent before they have ever sat with a chat UI long enough to understand how models fail. They buy a Cursor subscription before they have written a single decent prompt. They wire up local models with Ollama before they know which tasks actually benefit from running offline. ...
Amazon Doubles Down: The $25 Billion Anthropic Bet
On April 20, 2026, Amazon announced it would invest up to an additional $25 billion in Anthropic, stacking on top of the $8 billion it has already poured into the AI startup over recent years. In return, Anthropic committed to spending more than $100 billion on Amazon Web Services over the next ten years. This isn’t just another funding round. It’s one of the largest compute-for-capital deals in the history of the technology industry. ...
The Year 3026: Thinking Seriously About a Thousand Years From Now
Most writing about the future of AI stops at ten years. A few brave pieces stretch to fifty. I wrote one of the ten-year ones myself in The Next Decade of AI, and the honest reason the horizon stays short is that the uncertainty gets unmanageable much past that. Forecasting even the shape of the economy in 2040 is already mostly vibes. A thousand years, then, is almost ridiculous as a frame. But almost is not quite the same as is. There is a specific kind of value in trying to think at this distance, precisely because it forces you to let go of the things that cannot survive the journey - companies, currencies, programming languages, probably nations - and look at what, if anything, does. ...
The Year 2126: What the Next Hundred Years Actually Looks Like
A hundred years is a useful distance. Long enough that the current news cycle is ancient history, short enough that some people alive in 2126 will have living memory of people who were alive in 2026. The children being born this week have a non-trivial chance of being interviewed, in their late nineties, about what the early AI era was actually like. That matters. It makes the 100-year horizon a question about the world that people we know will inherit, not an abstract one. ...
Reading the Signals: Which of the Four Futures Is Actually Emerging?
I recently mapped four plausible futures for the machine-speed economy and listed the signals to watch for each. The obvious next question is the one I deliberately held back from answering: which signals are actually firing right now, and what does the mix say about where we’re heading? The honest answer is that all four scenarios have real evidence in their favour, which is part of why this moment is so hard to read. But the weights aren’t equal. Some signals are strengthening; others have stalled or reversed. Here’s what the dashboard looks like this week. ...
Hermes Agent: Persistent Autonomy That Learns and Grows
Most AI agents are forgettable. You ask them to do something, they do it, you close the window. The next time you need help, they start from zero - no context, no learning, no continuity. Hermes Agent works differently. Nous Research built it as a persistent system that remembers what it learns and gets measurably more capable the longer it runs. This is a meaningful shift in how we think about autonomous systems. ...
MacWhisper vs Wispr Flow vs Superwhisper: The 2026 Dictation Stack Compared
Voice input on the Mac used to mean fighting with the built-in Dictation feature or paying Nuance a small fortune. In 2026, the landscape looks completely different. A handful of indie and venture-backed apps have turned Whisper-class models into genuinely fast, accurate tools that sit quietly in your menu bar until you hold a hotkey. The three names that come up in every Mac productivity thread are MacWhisper, Wispr Flow, and Superwhisper. They all transcribe speech. They are not the same product. ...