
AI as Analogy Engine: Synthesis, Invention, and the Combinatorial Frontier

A common dismissal of modern AI goes like this: “It is just a fancy autocomplete. It memorises text and stitches it back together. There is no real understanding, only retrieval.” It is a comforting story, and it has the shape of a critique that ought to be true. But spend enough time with frontier systems and a different picture starts to form. The thing that large models actually seem to be good at is not memorisation. It is something stranger and arguably more important: the formation of analogies, the combination of distant concepts, and the generation of conceptual relationships that were not explicitly present in any one place in the training data. ...

May 16, 2026 · 13 min · James M

Dario Amodei: The Anthropic CEO Betting on Safety as Strategy

Dario Amodei is one of the few frontier-lab CEOs whose public talking points have not changed materially in five years. The same message he gave to small audiences in 2021 - that powerful AI is coming faster than people think, that the safety problem is real, and that the companies building it have an obligation to do so carefully - is the message he is giving to Congress and Davos in 2026. The thing that has changed is that he now runs the company most aggressively turning that message into a commercial position. ...

May 14, 2026 · 13 min · James M

AI in Scientific Research: From AlphaFold to the Long Tail

AlphaFold’s release in 2021 was the AI-for-science moment that broke through to the general public. It was a computational solution to a 50-year-old problem in biology - predicting protein structure from sequence - and it produced a tool used by hundreds of thousands of researchers. The narrative around AI-for-science crystallised: deep learning would produce a series of similar breakthroughs across scientific domains. The 2026 reality is more interesting and less clean. AlphaFold-class breakthroughs have been rarer than the early narrative suggested. But AI has spread across scientific practice in subtler ways that, in aggregate, have done more to change how science is actually done than the few headline breakthroughs. ...

May 13, 2026 · 7 min · James M

The Causal Inference Comeback: Why Correlation-Era ML Hit a Wall

For most of the deep-learning era, the answer to “why is this happening in our data?” was “we do not actually care - we care that our model predicts well.” For a wide range of problems, that pragmatism worked. The models did predict well. The business outcomes followed. The causal questions were left to academics and economists. The mood has shifted in 2026. The cases where prediction-without-understanding fails are now visible enough, and expensive enough, that causal inference has moved back from the academic margins to something practitioners need to know about. It is not displacing predictive ML - it is filling in the gap that became unignorable. ...

May 12, 2026 · 5 min · James M

The AI Energy Crisis: Why Data Center Power Will Define the Next Decade

For most of the AI conversation in 2024 and 2025, the binding constraints on the build-out were chips and capital. By 2026 the conversation has shifted, and the constraint that gets discussed most seriously inside the hyperscalers is electricity. Not the cost of electricity. The actual physical availability of electrons - at gigawatt scale, in the places where the data centres need to be, on the schedule the model labs need them. The story does not have a single villain or a single number, but it has a shape, and the shape is becoming the story of the second half of the decade. ...

May 11, 2026 · 14 min · James M

Cerebras, Groq, SambaNova: The Inference Hardware Insurgents

For most of the last decade, talking about AI hardware meant talking about Nvidia. In 2026 that has stopped being true at the inference layer. Three companies - Cerebras, Groq, and SambaNova - have built genuinely different chips around the same insight: that the workload economics of running models in production are not the same as the workload economics of training them, and that the chip architecture should follow the workload. The bet has been right enough that Nvidia has now licensed pieces of it. ...

May 11, 2026 · 11 min · James M

The Open Weight Models Renaissance: Llama, Mistral, Qwen, DeepSeek

For most of the LLM era the open-weight story was framed as a trailing one. Open models were cheaper, smaller, and a generation behind. That framing has not survived 2026. The gap between the best open-weight model and the best closed model is now narrow enough on most workloads that the choice is no longer “settle for less” - it is “decide what you actually need.”

TL;DR

  • Open weights have closed the headline gap. Top open-weight models are within striking distance of closed frontier models on reasoning, coding, and general knowledge benchmarks.
  • The economics changed first. DeepSeek’s R1 made it credible that a frontier model could be trained for tens of millions, not billions - and that the weights could be released for free.
  • Llama, Mistral, Qwen, and DeepSeek lead on different axes: Llama for broad ecosystem support, Mistral for European deployment and tool use, Qwen for multilingual and long-context work, DeepSeek for raw reasoning.
  • Inference flexibility is the underrated win. Open weights mean you can run on your own hardware, fine-tune freely, and avoid surprises from a closed provider’s roadmap.
  • The remaining closed-model advantages are real but narrowing - agentic depth, multimodal performance, and the polished tool-use stacks around them.

Where the gap actually is in 2026

Benchmarks are imperfect, but the picture they sketch is consistent. On standard reasoning suites - MMLU, GPQA, MATH - open-weight models are within a few percentage points of the closed frontier. On coding - HumanEval, SWE-bench - the gap is similar. On long-context retrieval, the gap is mostly gone. ...

May 10, 2026 · 4 min · James M

Real-Time Data Processing: Stream Processing vs Batch Processing

If you spend enough time in data engineering, you will eventually encounter the conviction that batch processing is dying and streaming is the future. This is the third or fourth time the industry has had this conversation in my career, and the answer has been the same every time. Streaming is not the future. Batch is not the past. They are different tools with different operational profiles, and the systems that age well use both, with discipline about which is the right choice for which problem. ...

May 10, 2026 · 9 min · James M

Multimodal AI in 2026: Vision + Text + Audio - What's Actually Useful

TL;DR

  • Document understanding is the unglamorous killer application - invoices, contracts, and scanned PDFs that were painful to extract data from are now tractable without dedicated pipelines.
  • Vision models still under-deliver on precise spatial reasoning, object counting, and subtle medical or scientific imagery - these remain jobs for specialist models.
  • Audio is the modality with the most upside: beyond transcription, it carries tone, pace, and hesitation that text loses, enabling fault detection, emotional analysis, and richer inputs.
  • The teams getting real value treat multimodal as an invisible enabling capability within a workflow, not a feature to demo - and they verify high-stakes outputs just as they would text.
  • The right question when evaluating multimodal is not “can we use this” but “what specific user problem becomes tractable that previously was not”.

When the first multimodal frontier models shipped, the demos were genuinely impressive. A photo of a fridge interior with the model suggesting a recipe. A handwritten napkin sketch becoming working code. A short audio clip of a meeting being transcribed, summarised, and structured. It looked, briefly, like the boundary between modalities had collapsed and we were entering a new regime in which models could reason fluidly across text, images, and sound. ...

May 9, 2026 · 10 min · James M

Prompt Caching: The Quiet Performance Win for LLM Applications

TL;DR

  • Prompt caching saves the computed representation of a prompt’s static prefix so subsequent requests reuse it rather than recompute it - cached tokens cost roughly 10% of normal input token prices.
  • The savings are highest when prompts have a long, identical prefix across requests - system prompts, tool definitions, and few-shot examples can make up 80-90% of total input cost.
  • The most common mistake is interpolating variables into the system prompt, which breaks caching silently; fix it by moving all static content to the top and dynamic content to the end.
  • Cache lifetimes are bounded (minutes to a few hours per provider) and any change to the prefix - including whitespace - creates a new cache miss.
  • Track your cache hit rate explicitly on every LLM dashboard; a dropping hit rate usually signals unintended prompt construction changes, and fixing it is the highest-leverage cost optimisation available.

If you build LLM applications for any length of time, you eventually notice that you are paying to have the model read the same instructions over and over again. The system prompt, the tool definitions, the few-shot examples, the structured output schema - all of it goes back into the model on every single request, and you pay for the input tokens every single time. For a chatbot doing one or two thousand requests a day this is annoying. For an agent doing tens of thousands of requests with long contexts, it is the dominant cost line. ...
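The fix the TL;DR describes - static content at the top, dynamic content at the end - can be sketched in a few lines. This is a hedged illustration, not any provider's actual API: the helper names and prompt strings are hypothetical, and it assumes a provider that caches the longest byte-identical leading prefix of a request.

```python
# Sketch of cache-friendly vs cache-breaking prompt construction.
# Assumption (not from a specific provider's docs): the cache keys on a
# byte-identical leading prefix, so any per-request data in the system
# prompt defeats it. All names here are illustrative.

SYSTEM_PROMPT = "You are a support assistant. Follow the policies below."
TOOL_DEFINITIONS = "[tool schemas, identical on every request]"
FEW_SHOT_EXAMPLES = "[worked examples, identical on every request]"

def build_messages_cache_friendly(user_query: str, user_name: str) -> list[dict]:
    # Static prefix: byte-identical across requests, so it is cacheable.
    system = "\n\n".join([SYSTEM_PROMPT, TOOL_DEFINITIONS, FEW_SHOT_EXAMPLES])
    # Dynamic, per-request content goes at the END, after the prefix.
    user = f"Customer name: {user_name}\nQuestion: {user_query}"
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

def build_messages_cache_breaking(user_query: str, user_name: str) -> list[dict]:
    # Anti-pattern: interpolating a variable into the system prompt makes
    # the prefix differ on every call, so the cache silently never hits.
    system = f"You are helping {user_name}. " + SYSTEM_PROMPT
    return [{"role": "system", "content": system},
            {"role": "user", "content": user_query}]
```

The difference is easy to verify mechanically: the cache-friendly system message is identical for any two users, while the cache-breaking one differs on every request - which is exactly why a dropping cache hit rate is worth alerting on.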

May 9, 2026 · 10 min · James M