
AI-Native Pipelines - What Changes When Your Consumer Is an LLM, Not a Dashboard

TL;DR Data pipelines were optimised for human consumers - dashboards, BI tools, analysts. In 2026 a growing share of pipeline output flows directly to language models, agents, and retrieval systems. That changes the design constraints in ways that catch teams off guard. Aggregation matters less. Context fidelity matters more. Freshness behaves differently. Schema moves from rigid to negotiated. Cost shifts from compute to tokens. The biggest mistake is treating an LLM consumer as if it were just another dashboard. It is not. It does not skim, it does not interpret charts, it does not have working memory across rows. It needs to be fed. The new patterns - retrieval-aware partitioning, embedding pipelines, structured-document outputs, prompt-shaped views, evaluation harnesses for data quality - are the actual subject of “AI-native data engineering” in 2026.

The Underlying Shift

For thirty years the implicit consumer of every data pipeline was a human looking at a screen. Even when the pipeline ended in an API or a CSV, the conceptual end-user was someone who would interpret the output with judgement, context, and skim-reading. ...
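The "prompt-shaped views" pattern above can be sketched in a few lines. This is an illustrative toy, not from any specific framework: the function name and the character budget are assumptions. The point is that each record is rendered as a self-contained block (an LLM has no working memory across rows) and trimmed to an explicit budget (cost is now tokens, not compute).

```python
# Hypothetical sketch of a "prompt-shaped view": serialise pipeline rows
# into a compact structured document for an LLM consumer. Names and the
# budget are illustrative assumptions.

def rows_to_context(rows, max_chars=1_000):
    """Render rows as key: value blocks, trimmed to a character budget."""
    blocks = []
    used = 0
    for row in rows:
        # Each row becomes a self-contained block, not a cell in a wide table.
        block = "\n".join(f"{k}: {v}" for k, v in row.items())
        if used + len(block) > max_chars:
            break  # tokens cost money: truncate explicitly, never silently
        blocks.append(block)
        used += len(block)
    return "\n---\n".join(blocks)

orders = [
    {"order_id": 1, "status": "shipped", "total": 42.50},
    {"order_id": 2, "status": "pending", "total": 17.00},
]
print(rows_to_context(orders))
```

A real implementation would budget in tokens rather than characters and rank rows by retrieval relevance before trimming, but the shape of the output is the same.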

May 3, 2026 · 9 min · James M

The Catalog Layer Is the New Battleground - Unity, Polaris, Gravitino, Nessie

TL;DR With the open table format wars largely settled, the strategic fight in 2026 has moved up to the catalog layer - the system that manages tables, namespaces, governance, and access. Four credible open or open-ish catalogs are now in serious play: Unity Catalog (Databricks), Polaris (Snowflake), Apache Gravitino (Datastrato/community), and Project Nessie (Dremio/community). All four implement the Iceberg REST catalog spec to varying degrees, which means clients can talk to them through a common protocol. The differentiation has moved to governance, multi-tenancy, lineage, federation, and developer experience. Unity is the most production-mature and the most coupled to Databricks. Polaris is the cleanest open implementation of the REST spec. Gravitino is the most ambitious in scope - aiming to catalog non-table assets too. Nessie is the most opinionated about Git-style branching for data. The winning catalog will probably not be a single project. It will be the protocol (Iceberg REST) plus multiple compliant implementations plus federation between them. That is the picture 2026 ends with.

Why The Catalog Layer Matters Now

A table format defines how data is laid out on disk. A catalog defines: ...
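The "common protocol" point is concrete: a catalog that implements the Iceberg REST spec answers the same routes regardless of vendor, so switching between implementations is mostly a base-URL change. A minimal sketch, with auth and the spec's optional route prefix omitted for brevity (the host names are made up):

```python
# Sketch of the Iceberg REST catalog routes shared by Unity, Polaris,
# Gravitino, and Nessie. Simplified: no auth, no optional route prefix.

def table_url(base, namespace, table):
    """Build the loadTable route for a (possibly multi-level) namespace.

    The REST spec joins multi-level namespace parts with the 0x1F unit
    separator, URL-encoded as %1F.
    """
    ns = "\x1f".join(namespace).replace("\x1f", "%1F")
    return f"{base}/v1/namespaces/{ns}/tables/{table}"

# The same client code targets any compliant catalog.
for catalog in ("https://unity.example", "https://polaris.example"):
    print(table_url(catalog, ["analytics"], "orders"))
```

In practice you would use a client such as PyIceberg rather than building URLs by hand; the sketch just shows why one protocol across four catalogs changes the competitive dynamics.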

May 3, 2026 · 8 min · James M

Iceberg vs Delta vs Hudi in 2026 - The Format Wars Are Over

TL;DR The open table format war between Apache Iceberg, Delta Lake, and Apache Hudi is effectively over in 2026 - and the outcome is not a single winner but a clear settlement. Iceberg has won the role of the neutral standard that engines and platforms expect to read and write. It is the format you choose when you do not want to be coupled to a single vendor. Delta has won the role of the incumbent default inside the Databricks ecosystem and remains a strong choice if Databricks is your primary engine. Delta UniForm has narrowed the gap by letting Delta tables expose Iceberg metadata. Hudi has not won a category outright. It retains a smaller but loyal user base for streaming-heavy and CDC-heavy workloads, where its design choices still genuinely fit. The interesting battle has moved up the stack to the catalog layer. The format question is mostly settled. The catalog question is the new fight.

The Format Wars - A Brief History

For most of the early 2020s the lakehouse story was a three-way argument about how to put ACID transactions on top of object storage. ...
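The Delta UniForm mechanism mentioned above is driven by a table property: setting `delta.universalFormat.enabledFormats` to `iceberg` asks Delta to also write Iceberg metadata alongside its own, so Iceberg readers can see the table. A sketch that builds the statement as a string (in practice you would run it via `spark.sql(...)` on a Databricks or Spark cluster, and the exact set of required properties depends on your Delta version - check the Delta docs):

```python
# Sketch: the table property behind Delta UniForm. The table name is
# illustrative; the property key is from the Delta Lake documentation.

def enable_uniform_sql(table):
    return (
        f"ALTER TABLE {table} SET TBLPROPERTIES ("
        "'delta.universalFormat.enabledFormats' = 'iceberg')"
    )

print(enable_uniform_sql("sales.orders"))
```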

May 3, 2026 · 8 min · James M

Apache Iceberg in 2026: The Open Table Format That Won

In 2023, the question was “which open table format will survive - Iceberg, Delta, or Hudi?” In 2026, that debate is over. Apache Iceberg won, and it won for reasons that have almost nothing to do with its raw performance. It won because it is the only format that both Snowflake and Databricks now treat as a first-class citizen, because the vendors picked sides on catalogs rather than table formats, and because enterprise buyers decided that multi-engine portability was worth more than a small performance edge. ...

April 22, 2026 · 11 min · James M

Snowflake Storage for Apache Iceberg: Enterprise Open Data Comes to AWS and Azure

A New Era for Open Data Formats

Snowflake has announced the general availability of Snowflake Storage for Apache Iceberg on both AWS and Azure, marking a significant shift in how enterprises can build open, interoperable data lakehouses. This development combines Snowflake’s enterprise reliability and governance capabilities with the flexibility and openness of Apache Iceberg, one of the most promising open table formats in the data ecosystem. For a deeper look at Iceberg itself, see Apache Iceberg in 2026, and for where this sits in the broader platform picture see The modern lakehouse stack. ...

April 18, 2026 · 4 min · James M

Unity Catalog in Practice: Lessons From the Field

The views in this post are my own personal reflections on industry patterns, written in my own time. They are not about any specific employer, team, or colleague, past or present, and do not draw on any non-public information. Unity Catalog sounds straightforward: “one governance layer for all your data and AI assets.” In theory, it’s elegant. In practice, you’ll run into gotchas that docs don’t prepare you for. This post collects generic patterns that come up repeatedly in public talks, vendor docs, community write-ups, and open discussions of UC adoption in 2026. For where Unity sits in the broader picture of catalogs, table formats, and engines, see The modern lakehouse stack. ...

April 5, 2026 · 10 min · James M

Databricks vs Snowflake in 2026: An Honest Comparison

The views in this post are my own personal reflections on the data industry, written in my own time. They are not about any specific employer, team, or colleague, past or present, and do not draw on any non-public information. The question “Databricks or Snowflake?” has dominated data engineering conversations for the past five years. In 2026, it’s still the wrong question. But let me answer it anyway, because sometimes you have to pick one. For the wider stack this choice sits inside, see The modern lakehouse stack. ...

April 5, 2026 · 11 min · James M

Databricks Training & Certification

Overview

Databricks offers certification tracks aligned to common roles: Data Engineer, Data Analyst, Apache Spark Developer, Machine Learning Engineer, and Generative AI Engineer.

All certifications:
- Validity: 2 years from pass date
- Cost: $200 per exam attempt
- Format: Multiple choice, proctored online

Recent Updates (2026): Emphasis on Lakeflow Declarative Pipelines (the evolution of DLT), Unity Catalog, liquid clustering, predictive optimization, AUTO CDC, Lakehouse Federation, and serverless compute

Choose a certification based on your: ...

April 4, 2026 · 4 min · James M