Claude connected to Ableton Live and Push

Connecting Claude to Ableton: Why the New Knowledge Connector Matters

On 28 April 2026 Anthropic shipped a batch of nine creative-tool connectors for Claude, and one of them is the Ableton Knowledge connector. It is a small thing on the surface and a big thing underneath. Here is what it does, what it does not do, and why it matters if you spend your evenings inside Live or staring at a Push.

What the Connector Actually Does

The official Ableton connector grounds Claude’s answers in Ableton’s own product documentation for Live and Push. That is the whole pitch, and it is more useful than it sounds. ...

April 30, 2026 · 4 min · James M
A year of AI agents

A Year of Agents, and What is Coming Next

A year ago, in April 2025, “AI in your workflow” mostly meant a chat window in a browser tab and an autocomplete plugin in your editor. You typed, it suggested, you accepted or rejected. The interaction model was small. The blast radius was small. The verb was “ask”. In April 2026, the verb is “delegate”. That is the headline, and it is not subtle once you go looking for it. The tools you use day to day no longer wait for prompts. They run for minutes at a time, open files, edit them, run shells, spin up sub-agents, browse the web, and come back with a result that is either roughly right or visibly wrong. You are no longer in the loop on every keystroke. You are in the loop on the outcome. ...

April 30, 2026 · 11 min · James M
AI Safety From First Principles Banner

AI Safety From First Principles: What Actually Matters vs What's Hype

The AI safety conversation has reached the point where the phrase has stopped meaning anything specific. In the same week, you will see “AI safety” used to describe content moderation on a chat product, the alignment of frontier models toward human values, the question of whether superintelligence ends civilisation, and a regulatory paper about copyright. These are not the same problem. Treating them as one conversation is the reason the conversation never resolves. ...

April 30, 2026 · 9 min · James M
AI Skills banner

AI Skills: One Folder, Any Model

Most of the tooling I have written about over the last year has been provider-specific. A particular model, a particular harness, a particular set of features. The thing I find interesting about agent skills is that they are not. A skill is a folder. Inside the folder is a SKILL.md file with some YAML frontmatter and some markdown instructions. That is the whole format. Anthropic shipped them in Claude Code, open-sourced the spec, and at this point you can drop the same folder into Cursor, the Gemini CLI, OpenAI Codex, OpenHands, Goose, VS Code, GitHub Copilot, and a couple of dozen other tools, and they all do roughly the same thing with it. ...
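The folder-plus-SKILL.md format described above can be sketched in a few lines. This is an illustrative example only: the skill name, description, and instructions are invented for the sketch, not taken from any shipped skill, though the overall shape (YAML frontmatter followed by markdown instructions) matches the open-sourced spec.

```markdown
---
name: changelog-writer
description: Drafts a changelog entry from a git diff. Use when the user asks for release notes.
---

# Changelog Writer

1. Diff the working tree against the last release tag.
2. Group the changes into Added / Changed / Fixed.
3. Write one line per change, in past tense, linking issues where mentioned.
```

Drop that folder into any tool that understands the format and the frontmatter tells the agent when to load it; the markdown body is the instructions it follows once loaded.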

April 30, 2026 · 8 min · James M
AI-Augmented Design Workflow Banner

My AI-Augmented Design Workflow: A 10-Minute Loop From Discussion to Documented Decision

Since making a combination of Cursor in the IDE and Claude Code and Codex in the terminal the centre of my working day - with ChatGPT for general questions and GitHub Spec Kit holding the design contract - the way I move from a question on Slack to a documented design decision has changed beyond recognition. What used to be a multi-day shuffle of meetings, follow-ups, written summaries, and “I will circle back on that” is now a tight loop that closes in five to ten minutes. The result is not just faster - it feels qualitatively different. There is no context switching, no postponed thinking, no half-finished docs. The decision is made, captured, validated against the rest of the design, and rendered into diagrams before the next meeting starts. ...

April 29, 2026 · 13 min · James M
When to Fine-Tune vs When to RAG Banner

When to Fine-Tune vs When to RAG: Choosing Your AI Architecture

The question I get asked most often by engineers starting to build with language models is some variation of: “should we fine-tune or should we do RAG?” It is almost always the wrong question, but it is the wrong question in an instructive way. The reason it gets asked so much is that the choice feels architectural, and architectural choices feel like the kind of thing you commit to once and live with. In practice, the choice is closer to “should I use a database or a cache” - the answer is usually some of both, applied to different problems, and the ratio shifts as the system matures. ...

April 29, 2026 · 10 min · James M
The Free Intelligence Era Banner

The Free Intelligence Era: What Breaks When Thinking Costs Nothing

This is a personal reflection, not a forecast dressed up as one. I am writing about a trend I think is real, but the second-order consequences are guesses, and I am sure some of them are wrong. The single most important economic fact about AI is not that the models are getting smarter. It is that intelligence is getting cheap. For all of human history, thinking has been expensive. Doctors, lawyers, engineers, accountants, researchers, designers, programmers, analysts - the entire knowledge economy was built on the premise that competent cognition is a scarce resource you have to pay for. Universities exist to credential it. Firms exist to ration it. Salaries exist to compensate it. Whole cities exist because the cognitive workers cluster in them. Strip away that scarcity and the post-industrial world stops making sense. ...

April 28, 2026 · 13 min · James M
Junior Developer Pipeline Problem Banner

The Junior Developer Pipeline Problem: Where Do Tomorrow's Seniors Come From?

The views in this post are my own personal reflections on the industry as a whole, written in my own time. They are not about any specific employer, team, or colleague, past or present. Almost every confident take on the future of software engineering assumes a particular kind of person at the centre of it. A senior. Someone who can read a generated diff and feel which line is wrong before they can articulate why. Someone with taste, judgement, and a working theory of the system in their head. Someone who can curate, review, and steer fleets of agents. ...

April 28, 2026 · 10 min · James M
AI Hallucinations Understanding and Mitigating False Outputs Banner

AI Hallucinations: Understanding and Mitigating False Outputs

The word “hallucination” is one of the most successful pieces of accidental marketing in our industry. It is a soft, almost endearing way to describe an LLM stating with full confidence that a function exists when it does not, that a court case was decided when it was not, that a paper was written by an author who has never published in that field. It makes the failure sound like a quirk rather than the central reliability problem of the entire technology. ...

April 28, 2026 · 13 min · James M
GPT-5.5 release illustration

GPT-5.5 Is Here: Real Step Forward or Quiet Iteration?

OpenAI released GPT-5.5 on April 23, 2026, weeks after GPT-5.4 and only months after GPT-5. The cadence is starting to feel relentless. Codenamed “Spud” internally, GPT-5.5 is the first fully retrained base model since GPT-4.5 - architecture, pretraining corpus, and agent-oriented objectives all reworked from scratch. The question worth asking is whether any of this is actually significant, or whether we have reached the part of the curve where every new release looks like a small step. ...

April 24, 2026 · 5 min · James M