AlphaFold’s release in 2021 was the AI-for-science moment that broke through to the general public. It offered a computational solution to a 50-year-old problem in biology - predicting protein structure from sequence - and produced a tool used by hundreds of thousands of researchers. The narrative around AI-for-science crystallised: deep learning would produce a series of similar breakthroughs across scientific domains.

The 2026 reality is more interesting and less clean. AlphaFold-class breakthroughs have been rarer than the early narrative suggested. But AI has spread across scientific practice in subtler ways that, in aggregate, have done more to change how science is actually done than the few headline breakthroughs.

TL;DR

  • The big breakthroughs have been real but rare - AlphaFold, AlphaMissense, and a handful of others. The pace of similar headline results has been slower than expected.
  • The long tail of routine AI usage has been larger than expected. Most scientific labs now use AI tools daily for routine work that is invisible from outside the field.
  • Code, writing, and analysis assistance are the categories where AI has spread fastest in research practice.
  • Specialised scientific models are emerging in many fields - climate modelling, materials science, drug discovery, neuroscience - with real but domain-specific impact.
  • The cultural change inside science is significant: AI is now routine infrastructure for many researchers, not a specialised tool reserved for AI-savvy labs.

What AlphaFold actually did

Worth being precise about what the AlphaFold story actually shows.

It solved a specific, well-defined, decades-old problem. It produced a publicly available tool that genuinely changed practice in a field. It did this through a combination of careful problem framing, large amounts of relevant training data (the Protein Data Bank), substantial computational resources, and methodological innovation.

The conditions that made AlphaFold possible are not common across science:

  • Most scientific problems are not as well-defined as protein structure prediction.
  • Most scientific fields do not have a public dataset the size of the PDB.
  • Most research questions are not amenable to the kind of pattern recognition that deep learning excels at.
  • Most research budgets cannot support the compute that AlphaFold required.

The expectation that AlphaFold would be followed quickly by a string of similar breakthroughs underestimated how specific the conditions were. There have been similar breakthroughs - AlphaMissense for genetic variants, AlphaFold-Multimer for protein interactions, several materials-science successes - but they remain notable events rather than monthly occurrences.

The long tail of routine AI usage

What has happened instead is more interesting. Across scientific disciplines, researchers now routinely use AI tools for a wide range of practical tasks:

Code writing and debugging. Scientific computing involves a lot of code, and AI coding assistants have taken on a meaningful share of that work. Researchers in biology, physics, chemistry, climate science, and neuroscience use Copilot, Claude, Cursor, and similar tools daily for the computational side of their work.

Writing assistance. Drafting papers, summarising literature, producing grant proposals. The quality of AI-assisted scientific writing has reached the point where most researchers use it for at least the draft stage, with their expertise applied to revision and verification.

Literature review. Tools like Elicit, Consensus, and the various semantic-search-over-papers systems have made literature review meaningfully faster. The 2026 researcher can survey a field in days rather than weeks.
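The core move behind these semantic-search-over-papers systems is ranking documents by similarity to a free-text query. Production tools use learned dense embeddings; the sketch below substitutes a simple bag-of-words cosine similarity as a minimal stand-in, with illustrative paper titles and abstracts invented for the example.

```python
import math
from collections import Counter

def tokenise(text: str) -> Counter:
    """Lowercase bag-of-words counts; a crude stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_papers(query: str, abstracts: dict) -> list:
    """Rank paper abstracts by similarity to a free-text query, best first."""
    q = tokenise(query)
    scored = [(title, cosine(q, tokenise(text))) for title, text in abstracts.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical corpus for illustration only.
papers = {
    "Protein folding with deep learning": "deep learning predicts protein structure from sequence",
    "Weather forecasting with graph networks": "graph neural networks forecast global weather fields",
}
ranking = rank_papers("protein structure prediction", papers)
```

Swapping `tokenise` for a neural embedding (and cosine over those vectors) gives the basic architecture the real tools use; the ranking loop is the same.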

Data analysis at the routine level. Standard data cleaning, exploratory analysis, statistical testing - much of this is now done with AI assistance. The researcher still designs the analysis and interprets the results; the AI handles the mechanics.
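The "mechanics" in question are usually small, repetitive transformations like the following - a minimal sketch of the kind of cleaning-and-summary code an assistant typically drafts, using only the standard library and invented example values.

```python
import math
import statistics

def clean(values):
    """Drop missing entries and non-finite numbers - the routine cleaning step."""
    return [v for v in values if v is not None and math.isfinite(v)]

def summarise(values):
    """Sample size, mean, and standard deviation - the exploratory step."""
    cleaned = clean(values)
    return {
        "n": len(cleaned),
        "mean": statistics.fmean(cleaned),
        "sd": statistics.stdev(cleaned),
    }

# Hypothetical measurements with a missing value and a failed reading.
raw = [4.1, 3.9, None, 4.4, float("nan"), 4.0]
summary = summarise(raw)
```

The researcher's real contribution sits outside this code: deciding that dropping the bad readings is appropriate, and interpreting the summary once it exists.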

Image and signal analysis. Specialised AI models for routine image classification - cell counting, microscopy analysis, satellite imagery, medical imaging - are widely deployed. The applications are domain-specific, but they are common enough that they have changed how labs work.
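The deployed tools are trained segmentation and classification models, but the shape of the routine counting task they automate can be shown with a classical baseline: count connected regions of foreground pixels in a binary image. This sketch uses a flood fill over a tiny hand-made grid standing in for a thresholded micrograph.

```python
from collections import deque

def count_blobs(grid):
    """Count 4-connected regions of 1s in a binary image -
    a classical baseline for a routine cell-counting task."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                count += 1  # new, unvisited blob
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:  # flood-fill the whole blob
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

# Toy binary image: three separate foreground regions.
image = [
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
]
blob_count = count_blobs(image)
```

The AI models earn their keep on the step this sketch skips: producing a reliable foreground mask from messy real imagery, where simple thresholding fails.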

None of these individually is dramatic. Together they have shifted how a meaningful share of working scientists spend their time.

Where breakthroughs are happening

A few specific domains where AI has produced or is producing breakthrough results, beyond AlphaFold:

Materials science. GNoME (Graph Networks for Materials Exploration) and similar systems have identified large numbers of candidate stable crystals, many of which have since been synthesised. The pipeline from computational prediction to laboratory verification has tightened significantly.

Drug discovery. AI-designed molecules are now in clinical trials. The pace is still constrained by biology and regulation rather than by computational candidates, but the candidate side is meaningfully better than it was.

Climate modelling. AI weather forecasting models (GraphCast, Pangu-Weather, others) are now competitive with traditional numerical weather prediction at substantially lower compute cost. The climate-modelling community is incorporating these methods seriously.

Mathematics. AI tools now assist with proof search and formal verification. The recent results from AlphaProof and similar systems have shown that AI can contribute to formal mathematics. Whether this scales to generally useful mathematical discovery remains uncertain.

Particle physics and astronomy. Pattern recognition over enormous data volumes - LHC collision events, telescope imagery - is heavily AI-assisted at this point. The discoveries are not directly produced by AI, but the data processing that enables them is.

What has not changed

Worth being honest about the parts of science that AI has not transformed:

The slow parts of science remain slow. Setting up experiments, growing samples, running clinical trials, doing field work. The bottleneck for most discovery is not analysis; it is physical experimentation.

Causal claims and explanation. AI tools are good at pattern recognition but limited at producing the kinds of explanations that scientific knowledge consists of. The work of building a mechanistic understanding of a phenomenon remains human work.

Theory formation. No AI has yet produced a novel theoretical framework that changed how a field thinks. The pattern-recognition models can find regularities; they do not yet produce the conceptual shifts that mark major scientific progress.

Peer review and the institutional process. Science remains a human-institutional enterprise. The publication, review, and credentialing systems work largely as they did before AI.

What this is doing to the institution

The interesting longer-term question is what widespread AI use is doing to the institution of science itself. A few patterns are emerging:

The pace of publication is accelerating. AI-assisted writing and analysis let researchers produce papers faster. Whether this is a good thing depends on whether quality is keeping up - the evidence is mixed.

Replication challenges are getting harder. When part of the analysis was AI-generated, reproducing the results may require the same AI model, which may no longer exist in its original form. The reproducibility crisis has acquired a new dimension.

The skill profile of a successful researcher is shifting. Computational and AI fluency matter more than they did. Domain depth still matters. The combination is harder to develop than either alone.

The economics of scientific careers are unsettled. AI tools raise individual productivity, which could compress employment in research staff roles. Whether this happens depends on whether the institution chooses to do more science with the same money or the same science with less money.

The honest summary

AI in scientific research in 2026 is a real, large, broadly distributed phenomenon - not the few-big-breakthroughs picture the AlphaFold story suggested, but a wide diffusion of AI tools into routine scientific practice. Most working scientists use AI daily. Most of that usage is invisible from outside their fields. In aggregate it has changed how science is done more than any specific breakthrough has.

The next decade will probably continue this pattern. A few headline AlphaFold-class results, alongside continuous improvement in the routine tools that researchers use. The cumulative effect over years is likely to be larger than the individual events suggest, in the same way that the cumulative effect of the personal computer on science was larger than any specific application.