LLM-Powered Personal Productivity: Building a Private Automation Stack

TL;DR

The interesting question in 2026 is not “can a local model do this”; it is “which jobs should you give it”. My stack: Ollama for inference, Letta for persistent agent memory, Obsidian as the second brain, Home Assistant for the physical world, and a small router that decides where each thought goes.

Three jobs are the sweet spot for local: inbox triage, note enrichment, and routine automation. Each one is repetitive, private, and tolerant of a bit of latency. Two jobs are still worth handing to a frontier cloud model: anything novel-and-hard, and anything where you want the best draft on the first attempt.

The bit nobody talks about is the router. The model is not the product. The thing that decides which model gets which job is the product.

Why Local Got Interesting

For years the answer to “should I run an LLM locally” was “no, just use the API”. The API was cheaper, faster, smarter, and you did not have to think about VRAM. The only reason to go local was privacy, and most people did not actually care about privacy enough to give up the quality gap. ...
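The routing idea described above can be sketched in a few lines. This is a minimal illustration, not the author's actual implementation: the job categories mirror the three local jobs and two cloud jobs named in the TL;DR, while the `Task` fields and the `route` function are hypothetical names chosen for the example.

```python
from dataclasses import dataclass

# Job categories taken from the post's TL;DR.
LOCAL_JOBS = {"inbox_triage", "note_enrichment", "routine_automation"}
CLOUD_JOBS = {"novel_hard", "best_first_draft"}


@dataclass
class Task:
    kind: str              # one of the job categories above
    private: bool          # does the payload contain personal data?
    latency_tolerant: bool # can it wait a few seconds for a local model?


def route(task: Task) -> str:
    """Decide which backend handles a task.

    Private data never leaves the machine. Otherwise, repetitive
    latency-tolerant jobs go to the local model, and novel or
    one-shot-quality jobs go to a frontier cloud model.
    """
    if task.private:
        return "local"  # privacy trumps quality
    if task.kind in CLOUD_JOBS:
        return "cloud"
    if task.kind in LOCAL_JOBS and task.latency_tolerant:
        return "local"
    return "cloud"  # default to the stronger model


# Example: triaging personal email stays local; a high-stakes first
# draft with no private data goes to the cloud.
print(route(Task("inbox_triage", private=True, latency_tolerant=True)))
print(route(Task("best_first_draft", private=False, latency_tolerant=False)))
```

The point of keeping the router this dumb is that it is auditable: you can read every rule and know exactly which data can leave the machine.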

May 3, 2026 · 9 min · James M