- Artificial Intelligence (LLMs, AI agents, and the future of human expertise)
- Blockchain (Decentralized infrastructure, networks, and ecosystem evolution)
- Data Engineering (Building data infrastructure that actually scales)
- DevOps (Infrastructure, automation, and operational philosophy)
- General (Culture, science, and the miscellaneous)
- Retro Computing (The machines and culture that shaped computing)
- Music Production (Gear, sound design, and creative workflow)
- Personal Development (Expertise, craft, and the engineering mindset)
- Space (Infrastructure and vision for human expansion beyond Earth)
Roman Yampolskiy: The Researcher Who Thinks AI Cannot Be Controlled
Most people writing about AI risk in 2026 are recent arrivals. Roman Yampolskiy is not. He has been making the same argument - that advanced AI systems may be fundamentally uncontrollable - since before the field of AI safety had a settled name, which is partly because he is the one who gave it that name. Whether you find his conclusions alarmist, prescient, or somewhere in between depends mostly on how you read the gap between current systems and the ones he writes about. This post is an attempt to lay out the man, the argument, and the reasons it deserves more than a dismissal. ...
Human Spaceflight Rockets in 2026: A New Era Takes Off
A few weeks ago, four astronauts came home from the Moon for the first time since 1972. Artemis II splashed down on April 11, 2026, after a nine-day flight that took its crew further from Earth than any human has ever travelled - 252,756 miles, a new record set by Reid Wiseman, Victor Glover, Christina Koch, and Jeremy Hansen. It is the clearest signal yet that human spaceflight has stopped being a thing of the past and started being a thing of the near future again. But the headline mission is only one piece of a much larger picture. The decade we are living in is shaping up to be the most consequential one for crewed space travel since Apollo - and unlike the 1960s, this time it is not a single government driving it. ...
Humanoid Robotics in 2026: From Prototypes to Production
For most of the last decade, humanoid robotics looked like a category that would always be three years away. Demos were impressive, factory floors stayed empty, and serious analysts pointed to bipedal locomotion, dexterous manipulation, and the price of high-torque-density actuators as reasons the form factor would lose to wheeled and fixed-arm systems for any real industrial work. In 2026 that argument no longer holds cleanly. Multiple companies are running paid pilots inside the warehouses and assembly lines of named customers - GXO, Mercedes-Benz, BMW, Amazon, Toyota - and one (1X) is taking deposits on a home robot. Production is still measured in thousands per year rather than tens of thousands, but the curve is unmistakable. This is the year humanoids stopped being a research bet and started being a procurement decision. ...
Physical Modeling Synthesis: The Underrated Future of Sound Design
If you’ve spent any time with Pianoteq or the Audio Modeling SWAM instruments, you’ve felt something different. Not the crisp accuracy of a sampled library, not the flexibility of wavetable synthesis - but something that responds like an instrument. Strings that vibrate with sympathetic resonance. Piano keys with wooden resistance. A cello that sings differently when you bow it hard versus soft. This is physical modeling: mathematics as an instrument, not just a sampler or synth engine. ...
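To make "mathematics as an instrument" concrete, here is a minimal sketch of Karplus-Strong plucked-string synthesis, one of the simplest physical models: a burst of noise (the pluck) circulates through a delay line whose length sets the pitch, and a gentle averaging filter drains energy the way a real string does. This is an illustrative toy, not how Pianoteq or SWAM work internally; the parameter names are my own.

```python
import random

def karplus_strong(frequency: float, sample_rate: int = 44100,
                   duration: float = 1.0, damping: float = 0.996) -> list[float]:
    """Pluck a virtual string.

    A noise burst fills a delay line whose length determines the pitch
    (sample_rate / frequency samples). Each pass, neighbouring samples
    are averaged and slightly attenuated - the mathematical analogue of
    a vibrating string losing energy, which rounds the bright attack
    into a warm, decaying tone.
    """
    period = int(sample_rate / frequency)                       # delay-line length
    line = [random.uniform(-1.0, 1.0) for _ in range(period)]   # the "pluck"
    out = []
    for i in range(int(sample_rate * duration)):
        sample = line[i % period]
        nxt = line[(i + 1) % period]
        line[i % period] = damping * 0.5 * (sample + nxt)       # lowpass + decay
        out.append(sample)
    return out

# Half a second of a virtual string tuned to A3 (220 Hz).
samples = karplus_strong(220.0, duration=0.5)
```

The interesting part is that nothing here is a recording: pitch, brightness, and decay all fall out of the delay length and the damping coefficient, which is why physically modeled instruments respond continuously to how you play them rather than crossfading between samples.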
MPE Deep Dive: Why Expressive MIDI Changes Everything
If you have spent any time around electronic music in the last decade, you have probably seen the letters MPE written on the side of a controller and not thought too much about them. The acronym sounds like a feature bullet. It is not. It is a quiet but fundamental reframing of what an electronic instrument can do, and once you have spent serious time playing one, going back to a fixed-velocity keyboard feels like trading a touch screen for a number pad. ...
Hardware Sequencers in 2026: When Physical Beats Software
By mid-2026, the “in-the-box” vs “out-of-the-box” debate has fundamentally shifted. We no longer argue about analog warmth or filter aliasing - neural synthesis has made those distinctions almost invisible to the ear. The new battleground is cognitive load, and that is where dedicated hardware sequencers are quietly winning ground back. As I argued in The Automation Paradox, once AI can generate a passable 16-bar loop in seconds, the human’s job shifts to curation and intent. A hardware sequencer is the most direct tool we have for enforcing that intent. ...
Music Production: Mobile Apps for iPad and iPhone
iOS has quietly become one of the most interesting platforms for music making. The combination of multi-touch, low-latency audio, AUv3 plug-in hosting, and instant-on hardware means an iPad can sit somewhere between a notebook for ideas and a fully credible studio instrument. The list below is what I keep installed and reach for - synths I actually open, grooveboxes that have made it into finished tracks, and a couple of learning tools that are genuinely useful. Apps are grouped by developer so you can find related instruments quickly. Where an app is iPhone-only or particularly suited to one device, I’ve called that out. ...
The Best MPE Controllers in 2026
MIDI Polyphonic Expression (MPE) is an extension of MIDI that gives every note its own continuous control over pitch, pressure, and timbre. Where standard MIDI shares pitch bend and aftertouch across the whole channel, MPE spreads notes across multiple channels so each voice in a chord can be bent, swelled, or shaped independently. In practice this means you can hold a chord and bend a single note up, slide between voicings without re-triggering, add vibrato to the top line of a phrase while the rest sustains cleanly, and play electronic instruments with the kind of per-note nuance that string and wind players have always taken for granted. ...
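The channel-per-note trick is easy to see in raw MIDI bytes. Here is a minimal sketch (my own illustrative code, not from the post) of an MPE-style voice allocator: channel 0 acts as the master channel, each new note is parked on its own member channel (1-15), and pitch bend messages are then addressed to just that note's channel. The 48-semitone default bend range follows the MPE convention for member channels.

```python
def make_note_on(channel: int, note: int, velocity: int) -> bytes:
    """Raw MIDI note-on: status byte 0x90 | channel, then note and velocity."""
    return bytes([0x90 | channel, note, velocity])

def make_pitch_bend(channel: int, semitones: float, bend_range: float = 48.0) -> bytes:
    """Raw MIDI pitch bend as a 14-bit value (8192 = centre / no bend),
    split into 7-bit LSB and MSB data bytes."""
    value = 8192 + round(semitones / bend_range * 8192)
    value = max(0, min(16383, value))
    return bytes([0xE0 | channel, value & 0x7F, value >> 7])

class MPEVoiceAllocator:
    """Toy allocator: each sounding note gets its own member channel,
    so per-note pitch bend never disturbs its neighbours.
    (No note-off or channel-recycling logic - just the core idea.)"""

    def __init__(self, member_channels=range(1, 16)):
        self.free = list(member_channels)
        self.active = {}  # note number -> member channel

    def note_on(self, note: int, velocity: int) -> bytes:
        channel = self.free.pop(0)
        self.active[note] = channel
        return make_note_on(channel, note, velocity)

    def bend_note(self, note: int, semitones: float) -> bytes:
        return make_pitch_bend(self.active[note], semitones)

alloc = MPEVoiceAllocator()
chord = [alloc.note_on(n, 100) for n in (60, 64, 67)]  # C major, 3 channels
bend = alloc.bend_note(64, 1.0)  # bend only the E up a semitone
```

Under plain MIDI, that final pitch bend would drag the whole chord; because the E lives on its own channel here, the C and G sustain untouched - which is exactly the "bend a single note inside a held chord" behaviour described above.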
Suno in May 2026: where the platform actually is
TL;DR - Suno v5.5 (March 2026) is the most expressive model yet, and three personalisation features finally make the platform usable as a real workflow: Voices (clone your own verified singing voice), Custom Models (fine-tune v5.5 on your own catalogue), and My Taste (lightweight preference learning for everyone). The Warner Music deal is now visible in the product - older models are being deprecated, free accounts have lost commercial download rights, and the ownership language has softened from “you own this” to “you have commercial rights.” Best used for demos, stem libraries, and personal sound signatures; still risky for releases that need clean copyright provenance. ...
AI Agents That Actually Work: Patterns From Real Projects
I have spent the last eighteen months building, reviewing, or operating systems that some marketing department somewhere has called “agents”. The definition has been so thoroughly stretched that it now means anything from a chatbot with a calculator tool to a long-running autonomous workflow that touches production infrastructure. Underneath the noise there is a real engineering discipline emerging, and the patterns that separate the systems that survive contact with real users from the ones that demo well and fall over are becoming legible. ...