
Giving Your Home AI Agent Real Tools: MCP Servers on a Mac Studio

TL;DR

Problem: a local agent that can only chat is a toy. The value is in what it can do.
Answer: Model Context Protocol servers, running locally on the Mac Studio, expose filesystem, calendar, mail, notes, and a handful of custom tools.
Runtime: one supervisord config, a small router, and per-server allowlists so nothing escapes its box.
Security posture: no tool runs without a policy, secrets live in the macOS Keychain, and every call is logged to a local SQLite file I can grep at 11pm.
Result: I can phone the agent (see How to Phone Your Home AI Agent), ask "move the CI failure email to triage and put a 15-minute hold on my calendar at 4", and it actually does it.

Why MCP and Not "Just Functions"

Before MCP I had a directory of half-finished Python shims. Each one spoke a slightly different dialect: one took JSON arguments, one took positional args, one returned markdown, and one returned a dict. Adding a new tool meant editing the agent prompt, the router, and the caller. ...
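The "no tool runs without a policy, every call logged to SQLite" posture can be sketched in a few lines of Python. This is a minimal illustration, not the post's actual code: the names (`ALLOWLISTS`, `init_audit`, `call_tool`) and the dispatch stub are assumptions.

```python
import json
import sqlite3
import time

# Hypothetical per-server allowlists: each MCP server may only expose these tools.
ALLOWLISTS = {
    "filesystem": {"read_file", "list_dir"},
    "mail": {"move_message", "search"},
}

def init_audit(path=":memory:"):
    # One local SQLite file; every call lands here, allowed or not.
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS calls "
        "(ts REAL, server TEXT, tool TEXT, args TEXT, allowed INTEGER)"
    )
    return db

def call_tool(db, server, tool, args):
    """Refuse any call not covered by a policy, and log everything."""
    allowed = tool in ALLOWLISTS.get(server, set())
    db.execute(
        "INSERT INTO calls VALUES (?, ?, ?, ?, ?)",
        (time.time(), server, tool, json.dumps(args), int(allowed)),
    )
    db.commit()
    if not allowed:
        raise PermissionError(f"{server}/{tool} is not allowlisted")
    # ... dispatch to the real MCP server here ...
    return {"ok": True}
```

Because the INSERT happens before the policy check raises, denied calls show up in the audit trail too, which is exactly what you want when grepping the log at 11pm.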

April 22, 2026 · 8 min · James M

An AI Tooling Learning Path: Logical Phases for 2026

The hardest part of learning AI tooling in 2026 is not any single tool. It is the order you meet them in. Most people start in the wrong place. They install a terminal agent before they have ever sat with a chat UI long enough to understand how models fail. They buy a Cursor subscription before they have written a single decent prompt. They wire up local models with Ollama before they know which tasks actually benefit from running offline. ...

April 21, 2026 · 9 min · James M

Running AI Models Locally with Ollama: From Setup to OpenClaw

Ollama has quietly become the go-to tool for developers who want to run large language models on their own machines without relying on APIs. No cloud costs, no rate limits, no sending your prompts to third-party servers. Just you, your hardware, and a surprisingly capable AI model running locally.

What is Ollama?

Ollama is a lightweight platform designed to make running open-source language models accessible. It handles the complexity of model management (downloading, optimization, memory management) so you can just run a command and start prompting. ...
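Assuming Ollama is already installed, the "just run a command" workflow looks like this; the model name is an example, and any model from the Ollama library can be substituted:

```shell
# Download a model once (names come from the Ollama model library)
ollama pull llama3.2

# Start an interactive prompt session in the terminal
ollama run llama3.2

# Or hit the local HTTP API Ollama serves on port 11434
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Why is the sky blue?"}'
```

The HTTP API is what makes Ollama useful beyond the terminal: any local tool can talk to the same models over localhost.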

April 8, 2026 · 4 min · James M