
My AI-Augmented Design Workflow: A 10-Minute Loop From Discussion to Documented Decision

TL;DR

- A combination of Cursor in the IDE, Claude Code and Codex in the terminal, and GitHub Spec Kit as the living contract has collapsed the discuss-design-document loop from days to under ten minutes.
- Every meeting is transcribed and checked into GitHub alongside the design corpus, giving AI agents access to the full historical record - not just curated decisions but the debates that shaped them.
- Model selection matters: cheaper, faster models for throwaway sketches and small refactors; expensive models (Opus) for large cross-repo work where the cost of a wrong answer is high.
- The real transformation is cognitive flow - removing friction between thinking and recording means decisions get made and captured while the problem is still fresh, with almost no context switching.
- AI is now suggesting improvements faster than the author can implement them; the next bottleneck is compaction, not generation - asking the model to reduce documents to their load-bearing claims rather than produce more content.

Since making a combination of Cursor in the IDE and Claude Code and Codex in the terminal the centre of my working day - with ChatGPT for general questions and GitHub Spec Kit holding the design contract - the way I move from a question on Slack to a documented design decision has changed beyond recognition. ...

April 29, 2026 · 14 min · James M

An AI Tooling Learning Path: Logical Phases for 2026

TL;DR

- The order you learn AI tools in matters as much as which tools you learn - most people start with terminal agents or editors before they understand how models actually fail.
- The seven-phase path runs: fundamentals, chat interfaces, AI-native editors, terminal agents, local models, orchestration, and review and evaluation.
- Terminal agents (Claude Code, Cline, Aider) represent the biggest mindset shift - you move from driving with suggestions to specifying and letting the model execute.
- Local models via Ollama belong in phase five, once you have felt the pain of API costs and know which tasks actually need frontier capability.
- Review, evaluation, and capture (phase seven) is the phase most developers skip - and the one that separates AI-curious from AI-competent.

The hardest part of learning AI tooling in 2026 is not any single tool. It is the order you meet them in. ...

April 21, 2026 · 10 min · James M

What Actually Belongs in My AI Dev Stack in 2026

TL;DR

- A single AI tool cannot handle everything - a proper AI dev stack in 2026 needs distinct layers for spec writing, fast editing, heavy agentic work, cheap model tasks, review, research, and capture.
- Spec-driven development is the most underused part: writing requirements and acceptance criteria before generation dramatically improves AI output and reduces wasted iterations.
- Tools like Cursor AI handle fast, in-flow editing, while Claude Code or Cline are better suited to multi-file refactors and autonomous implementation from specs.
- Letting the same model that generated code also review it is a weak loop - a separate review pass with a different model or an explicitly critical prompt is essential.
- The real shift is treating AI not as a bolt-on assistant but as part of the workflow architecture itself, with each tool assigned a clear, specific responsibility.

There is a big difference between using AI for development and having an actual AI development stack. ...
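The "separate review pass" idea can be sketched in a few lines. This is a minimal illustration, not the post's actual setup: the two callables stand in for chat calls to different providers, and the reviewer prompt is an assumption showing how to force a critical stance rather than self-congratulation.

```python
# Sketch of a cross-model review loop: one model generates, a
# *different* model (or at least a deliberately critical prompt)
# reviews. generate_fn and review_fn are placeholders for real
# provider calls; their names are illustrative only.

REVIEW_PROMPT = (
    "You are a critical code reviewer. Do not praise the code. "
    "List concrete bugs, missing edge cases, and risky assumptions.\n\n"
    "Code under review:\n{code}"
)

def cross_model_review(task: str, generate_fn, review_fn) -> dict:
    """Generate code for `task` with one model, critique it with another."""
    code = generate_fn(task)
    critique = review_fn(REVIEW_PROMPT.format(code=code))
    return {"code": code, "critique": critique}
```

In practice `generate_fn` might wrap one vendor's SDK and `review_fn` another's; keeping them as injected callables makes the loop trivially testable and vendor-agnostic.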

April 6, 2026 · 9 min · James M

GitHub Spec Kit in 2026: SDD Goes Mainstream 🚀

TL;DR

- GitHub Spec Kit reached v0.5.0 in 2026, evolving from a documentation toolkit into a full extensibility platform for AI-assisted development.
- Claude Code CLI is now a native skill within Spec Kit, making spec-to-code pipelines seamless and built-in.
- The ecosystem has exploded with dedicated tools like AWS Kiro and Tessl, while multi-agent support covers Copilot, Cursor, Gemini CLI, and more.
- Spec-Driven Development prevents architectural drift by making the spec the single source of truth - versioned, reviewable, and respected by AI agents.
- Getting started is now low-effort: write a spec.md, pick any AI tool, and let the spec drive implementation.

Six months ago, we explored how GitHub Spec Kit was beginning to reshape software development. In early 2026, that promise isn’t just materializing - it’s accelerating. The project has hit version 0.5.0, the ecosystem has exploded, and Spec-Driven Development has transitioned from “interesting idea” to actual industry standard. ...
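The "write a spec.md" starting point can be a single page. The structure below is a hypothetical example, not a format Spec Kit prescribes - the point is that requirements and acceptance criteria exist before any code is generated:

```markdown
# Feature: CSV export

## Requirements
- Users can export the current report as CSV.
- The export respects the report's active filters.

## Acceptance criteria
- Clicking "Export" downloads a CSV whose column headers match the on-screen table.
- Filtered-out rows do not appear in the file.

## Out of scope
- Scheduled or emailed exports.
```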

April 4, 2026 · 5 min · James M

Claude Code Source Leak: Anthropic's 2,000-File Exposure and What It Means

TL;DR

- An internal debugging file was accidentally included in a public package update, exposing a compressed archive of roughly 500,000 lines of code across around 2,000 files - not a breach, but a packaging mistake.
- The leaked material revealed unreleased features including persistent memory, an always-on autonomous background assistant, and multi-device remote access.
- Competitors gained rare visibility into Anthropic’s development pipeline and longer-term product direction, which is the primary competitive damage.
- The incident undermines Anthropic’s safety-first positioning, particularly because it was the second such exposure in just over a year.
- The broader lesson for the AI industry: internal operational security is becoming as critical as defending against external threats, especially as AI tools target enterprise customers.

Anthropic’s Claude Code has been making waves as one of the most capable AI coding assistants available, but a significant internal leak has exposed the underlying technology behind the platform for the second time in just over a year. The incident has raised fresh concerns about how the company handles sensitive internal information and operational security. ...

April 1, 2026 · 4 min · James M

Hitting Claude Code Limits? Here’s the Setup I’m Moving Toward

TL;DR

- Hitting Claude Code Pro usage limits does not mean upgrading to the $200/month plan - a hybrid AI stack is a smarter and cheaper alternative.
- The tiering strategy: local models (free) for quick edits, cheap cloud APIs for general coding, and frontier models only for architecture or complex multi-file reasoning.
- Tools like Ollama or LM Studio with coding models such as DeepSeek Coder or Qwen2.5 handle the majority of everyday tasks locally at no cost.
- Cheap cloud inference providers (Groq, Together AI, DeepInfra) offer capable open models at fractions of a cent per session for heavier work.
- A realistic usage split of 80% local / 15% cheap APIs / 5% frontier models dramatically reduces limit burn while keeping Claude available when it genuinely matters.

I keep running into the same problem with Claude Code Pro ($20/month): I burn through the usage limits faster than I expect. The obvious solution is upgrading to the $200/month plan, but that feels excessive for how I actually use it. ...
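The 80/15/5 tiering idea reduces to a routing decision: send each task to the cheapest backend that can plausibly handle it. A minimal sketch, where the task categories and backend identifiers are assumptions for illustration, not the post's exact setup:

```python
# Toy model-tier router for the hybrid stack described above.
# Backend strings are illustrative labels, not real endpoint names.

LOCAL = "ollama/qwen2.5-coder"    # free, runs on your own machine
CHEAP = "groq/llama-3.3-70b"      # low-cost cloud inference
FRONTIER = "claude-opus"          # reserved for when it genuinely matters

def route(task_kind: str) -> str:
    """Return the cheapest tier believed adequate for a task category."""
    if task_kind in {"rename", "docstring", "format", "small-edit"}:
        return LOCAL                      # ~80% of everyday work
    if task_kind in {"general-coding", "unit-tests", "bugfix"}:
        return CHEAP                      # ~15%: heavier but routine
    return FRONTIER                       # ~5%: architecture, multi-file reasoning
```

The value is less in the code than in the habit: being forced to name a tier for each task makes you notice how rarely frontier capability is actually required.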

March 9, 2026 · 4 min · James M