Apache Iceberg in 2026: The Open Table Format That Won

In 2023, the question was “which open table format will survive - Iceberg, Delta, or Hudi?” In 2026, that debate is over. Apache Iceberg won, and it won for reasons that have almost nothing to do with its raw performance. It won because it is the only format that both Snowflake and Databricks now treat as a first-class citizen, because the vendors picked sides on catalogs rather than table formats, and because enterprise buyers decided that multi-engine portability was worth more than a small performance edge. ...

April 22, 2026 · 11 min · James M

Hermes Agent: Persistent Autonomy That Learns and Grows

TL;DR
- Hermes Agent by Nous Research is an open-source persistent autonomous system that builds memory across conversations, auto-generates reusable skills from repeated tasks, and compounds in capability over time
- Unlike stateless agents, Hermes accumulates project context - learning codebase quirks, team conventions, and recurring workflows so it stops asking questions it has already answered
- It works across Telegram, Discord, Slack, WhatsApp, Signal, Email, and CLI - meeting teams on the platforms they already use rather than requiring a dedicated app
- Running cost is roughly $20 to $60 per month for a solo developer (a $5-$10 VPS plus LLM API calls); it is MIT licensed with no seat fees or vendor lock-in
- The honest trade-off: Hermes beats alternatives on persistence and learning depth, but raises open questions about memory scaling, skill auditing, and what happens when an agent learns something wrong

Most AI agents are forgettable. You ask them to do something, they do it, you close the window. The next time you need help, they start from zero - no context, no learning, no continuity. Hermes Agent works differently. Nous Research built it as a persistent system that remembers what it learns and gets measurably more capable the longer it runs. ...

April 20, 2026 · 9 min · James M

Snowflake Storage for Apache Iceberg: Enterprise Open Data Comes to AWS and Azure

A New Era for Open Data Formats Snowflake has announced the general availability of Snowflake Storage for Apache Iceberg on both AWS and Azure, marking a significant shift in how enterprises can build open, interoperable data lakehouses. This development combines Snowflake’s enterprise reliability and governance capabilities with the flexibility and openness of Apache Iceberg, one of the most promising open table formats in the data ecosystem. For a deeper look at Iceberg itself, see Apache Iceberg in 2026, and for where this sits in the broader platform picture see The modern lakehouse stack. ...

April 18, 2026 · 4 min · James M

Cline: The Next Generation AI Coding Assistant

An exploration of Cline, the autonomous AI coding agent that lives in your IDE and handles complex, multi-step engineering tasks through tool-use and agency.

April 9, 2026 · 4 min · James Myddelton

The Rise of Small Language Models: Why Size Isn't Everything

TL;DR
- Small language models (typically under 15B parameters) trained on high-quality data can match or outperform much larger models on many real-world tasks, thanks to distillation, instruction tuning, and quantization
- The key advantages are speed (milliseconds vs seconds), cost (no per-token API charges), privacy (data stays on your hardware), and offline capability
- Standout models include Mistral 7B for speed, Phi-3 for edge devices, and OpenClaw for code and reasoning - all usable locally via Ollama
- The industry is moving toward a multi-tier approach: small models (7-13B) for 80% of workloads, medium models as a step-up, and large models reserved only for complex reasoning tasks where they genuinely outperform
- Large models still win on deep multi-step reasoning, breadth of knowledge, and few-shot generalization - the shift is about matching model size to task, not replacing large models entirely

For years, the narrative was simple: bigger is better. GPT-4 was massive, Claude was massive, and the race seemed to be about who could train the largest model on the most data. But that story is changing. Small language models - typically under 15 billion parameters - are proving that you don’t need 175 billion parameters to solve real problems. ...
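The multi-tier approach described above can be sketched as a simple routing heuristic. The model names and thresholds below are illustrative assumptions, not something the post prescribes:

```python
def pick_model(prompt: str, needs_deep_reasoning: bool = False) -> str:
    """Hypothetical model router for a multi-tier setup: small local models
    handle most work, and large hosted models are reserved for the tasks
    where they genuinely outperform. Names and cutoffs are assumptions."""
    if needs_deep_reasoning:
        return "large-hosted-model"   # deep multi-step reasoning tier
    if len(prompt) > 4000:
        return "medium-13b"           # step-up tier for long context
    return "mistral:7b"               # default tier: fast, cheap, private


# Roughly 80% of calls should land on the small tier.
print(pick_model("Summarise this ticket"))
```

In practice the routing signal would come from a classifier or from explicit task metadata rather than prompt length alone; the point is that the decision is cheap compared to always paying for the largest model.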

April 9, 2026 · 8 min · James M

Open WebUI: A Polished Interface for Local and Remote LLMs

TL;DR
- Open WebUI is an open-source, ChatGPT-style web interface that connects to local Ollama instances, OpenAI’s API, or any OpenAI-compatible backend
- It eliminates the friction of command-line LLM tools and supports features like RAG with document uploads, web search, custom prompts, model switching, and multi-user permissions
- Deployment is a single Docker command; maintenance is lightweight with persistent storage and optional PostgreSQL for multi-instance setups
- The primary appeal is full data ownership - queries never leave your infrastructure - making it well suited for privacy-conscious users and compliance-bound organizations
- Open WebUI adds minimal latency since the bottleneck is always the inference engine behind it, not the web interface itself

If you’ve spent time running language models locally through Ollama or another inference engine, you’ve probably discovered the same friction point: the command-line experience works, but it’s clunky. You’re juggling terminal windows, managing conversation context manually, and shuffling files around by hand. ...
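The "single Docker command" deployment mentioned above typically looks like the following. The image name, port mapping, and volume path follow the project's public quick-start and are assumptions here, not the post's exact command:

```shell
# Sketch of a single-command Open WebUI deployment (image, port, and volume
# names assumed from the project's public quick-start).
# -p 3000:8080                     -> UI served on http://localhost:3000
# -v open-webui:/app/backend/data  -> persistent storage for chats and settings
docker run -d \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

The named volume is what makes maintenance lightweight: upgrading is a matter of pulling a newer image and recreating the container, with conversations and settings surviving the swap.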

April 8, 2026 · 6 min · James M

Running AI Models Locally with Ollama: From Setup to OpenClaw

TL;DR
- Ollama is a lightweight tool for running open-source language models locally with no cloud costs, rate limits, or data leaving your machine
- Models are managed with simple commands (ollama pull, ollama run) and can be queried via a local HTTP API on localhost:11434
- Popular models include Mistral 7B for speed, Llama 2 for all-around performance, and OpenClaw for code and reasoning tasks
- Running models locally delivers privacy, zero per-token cost, lower latency, and full offline capability
- You don’t need a GPU to start - a 7B model runs on 8GB of RAM, and Ollama automatically uses 4-bit quantization for larger models

Ollama has quietly become the go-to tool for developers who want to run large language models on their own machines without relying on APIs. No cloud costs, no rate limits, no sending your prompts to third-party servers. Just you, your hardware, and a surprisingly capable AI model running locally. ...
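The local HTTP API mentioned above accepts plain JSON. As a minimal sketch, this builds a request body for Ollama's documented /api/generate route without sending it, so it runs offline; swap in any model you have pulled:

```python
import json

# Minimal sketch of a request to Ollama's local HTTP API.
# The endpoint and field names follow Ollama's documented /api/generate route.
url = "http://localhost:11434/api/generate"
payload = {
    "model": "mistral",                  # any model fetched via `ollama pull`
    "prompt": "Why run models locally?",
    "stream": False,                     # one JSON response instead of a stream
}
body = json.dumps(payload)

# To actually send it (requires a running Ollama daemon), for example:
#   curl http://localhost:11434/api/generate -d "$body"
print(body)
```

Because the daemon listens only on localhost by default, nothing in this exchange leaves the machine, which is the core of the privacy argument the post makes.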

April 8, 2026 · 4 min · James M

GitHub Is Now Officially Backing OpenClaw

TL;DR
- GitHub became an official sponsor of OpenClaw, the fastest-growing open source project in history, breaking React’s 10-year GitHub milestone in just 60 days
- The sponsorship is concrete, not symbolic - it includes Copilot Pro+ access, dedicated security funding, and scalability support for the project team
- GitHub sponsors projects that matter for the future of software development, and this backing signals OpenClaw has crossed from “interesting experiment” into infrastructure-level significance
- The move is a bet that open source AI agents will be central to how software is built in 2026 and beyond, and that GitHub wants to be the home where that class of technology lives and scales
- OpenClaw’s growth trajectory and now its platform backing make it a clear signal about the direction of agentic, AI-operated software development

Two weeks ago, GitHub made a quiet but significant announcement: they are now an official sponsor of OpenClaw. ...

April 8, 2026 · 3 min · James M

Following the Money: Databricks vs Snowflake vs the Open-Source Alternative

The views in this post are my own personal reflections on the data industry, written in my own time. They are not about any specific employer, team, or colleague, past or present, and do not draw on any non-public information. In 2026, the technical gap between Databricks and Snowflake has narrowed to a sliver. Both offer world-class serverless compute, both support Iceberg/Delta as first-class citizens, and both have integrated AI agents that can write SQL better than your average intern. ...

April 8, 2026 · 4 min · James M

DevOps GitHub Projects

Most of what makes a productive DevOps engineer is not hidden inside vendor portals - it lives in open source, on GitHub, and it is free. The projects below are the ones I return to most often, whether for learning, daily tooling, or reference implementations of patterns that would otherwise take weeks to work out alone.

DevOps and Site Reliability Engineering (SRE)

Resources to calibrate what good looks like in the discipline. ...

May 29, 2023 · 3 min · James M