Scott Galloway is the kind of commentator the AI conversation rarely produces: not an AI researcher, not a lab founder, not a doomer, not a booster. He is a marketing professor and a serial entrepreneur with a record of correctly reading the corporate stories of the last two decades, and he has spent the last two years pointing at the AI story with increasing concern. The headline of his pitch - that AI was not built for ordinary people and that the rich no longer need them - is provocative on purpose. The argument underneath is more careful, and worth pulling apart on its own terms.

TL;DR

  • Scott Galloway is a clinical professor of marketing at NYU Stern School of Business, host of The Prof G Pod and Prof G Markets, and co-host of Pivot with Kara Swisher.
  • He is best known as the founder of L2 Inc, a digital intelligence firm acquired by Gartner in 2017, and as the founder of the executive-education startup Section.
  • His central economic claim about AI is that the United States in 2026 is “a giant bet on AI” and that the current valuations look like a late-stage bubble. He has been particularly pointed about OpenAI’s circular financing arrangements with Nvidia and Microsoft.
  • His central political claim is that the AI story is being marketed in a way that flatters owners of capital and disempowers workers - the title of his most-watched 2026 interview, AI Wasn’t Built For You. The Rich Don’t Need You Anymore, captures the thesis cleanly.
  • His operational advice is more measured than the rhetoric. He uses AI daily, treats it as a thought partner rather than a replacement, and tells students to do the same.
  • He thinks the durable skills of the next decade are storytelling, relationships, and resilience - things AI is bad at and that compound over a career.
  • He is the New York Times bestselling author of The Four, The Algebra of Happiness, Post Corona, Adrift, and The Algebra of Wealth.

The man behind the argument

Galloway is, in his own framing, a creature of business school rather than computer science. He has an undergraduate degree from UCLA and an MBA from UC Berkeley’s Haas School of Business, and he teaches brand strategy and digital marketing at NYU Stern, where he has been on the faculty since 2002. The biographical detail that matters for the AI conversation is that he has spent his career studying how large companies tell stories about themselves, and the shape of the story they tell is usually the shape of the trade he is most interested in.

Outside the classroom he has a long entrepreneurial record. He founded the e-commerce firm Red Envelope in 1997, the digital intelligence firm L2 Inc in 2010 - acquired by Gartner in 2017 for around $155 million - and the executive-education startup Section (formerly Section4) in 2019. He has served on the boards of Eddie Bauer, Urban Outfitters, Gateway, The New York Times Company, and the Haas School of Business. He is a TED speaker with one of the most-viewed business talks of the last decade.

The reason this matters for his AI commentary is that he is not an outsider to the systems he analyses. He has built and sold companies into the same advertising-and-attention economy that has now financed most of the AI build-out, and he has been on the boards of public companies whose valuations have ridden the same Magnificent Seven wave he now thinks is overextended. He writes from inside that experience rather than from a position of distance, and he is open about it.

What he actually says about AI

Galloway is not a technologist and does not pretend to be one. His commentary is mostly about three things: the economics of the AI build-out, the political economy of who benefits, and the personal-strategy question of how to live and work in the world that is now being built. Each strand reinforces the others, and he moves between them fluently across his books, columns, and podcasts.

The economics. Galloway’s framing of the current AI cycle is that it is a late-stage bubble dressed up as inevitability. He has pointed at the same set of facts a lot of analysts have - frontier model spend, datacenter capex, the concentration of US equity returns in a handful of names - but his particular contribution is to read the financing structure as a tell. Nvidia investing tens of billions in OpenAI, OpenAI committing to spend that money on Nvidia chips, hyperscalers booking AI compute as both customer and supplier: he calls this circular financing and points out that the dot-com era ended when a structurally similar set of arrangements unwound. He has been explicit, in his Bubble.ai column and elsewhere, that he is not predicting timing - just that the geometry is the geometry of bubbles.

The political economy. This is where the rhetoric gets sharpest. Galloway’s claim is that the dominant marketing story about AI - that it is a tide that lifts all boats, that displaced workers will retrain into more interesting jobs, that the productivity gains will be widely shared - is the same story that was told about offshoring, automation, and platform consolidation, and that it was wrong each of those previous times. The empirical record, in his reading, is that the gains from each of those waves accrued narrowly to the owners of capital and the top decile of cognitive workers, and that the broad middle either stagnated or fell. He sees nothing in the AI deployment so far that would break the pattern, and several things - the cost structure, the concentration of compute, the lobbying intensity - that suggest it will be worse.

The line that has travelled the furthest in 2026 - “AI wasn’t built for you, the rich don’t need you anymore” - is doing two things at once. The first is rhetorical compression of the political-economy argument. The second is a specific claim about labour: that one of the things capital previously needed from ordinary workers - their cognitive labour, their judgement, their willingness to show up - is the thing AI is being optimised to substitute for. If that substitution succeeds even partially, the bargaining position of most workers gets worse, not because of any single layoff but because the implicit threat of replacement becomes credible.

The personal strategy. Galloway is sharper than most cultural commentators on what to actually do with this. His advice to his MBA students has been consistent across the last two years: use AI every day, treat it as a thought partner, do not delegate the parts of your work that are the work, and invest disproportionately in the things AI cannot do. The things he names are storytelling, presence, and relationships. He is not unique in saying this, but he is unusual in saying it from inside the building - to a roomful of students paying NYU prices for an MBA in 2026.

How he uses AI himself

A small but useful detail of Galloway’s commentary is that he is not a refusenik. He has been public about using AI tools daily for research, brainstorming, and pitch-deck production, and equally public about pulling the plug on an AI version of himself when it did not meet the bar he wanted his name attached to. The combined picture is of someone who treats the technology as real, takes it seriously enough to integrate it into his workflow, and is willing to walk away from a specific deployment when the quality is not there.

This is worth flagging because it disarms the obvious objection to his bubble talk. He is not arguing that the technology is fake. He is arguing that the financing is overheated, the marketing story is misleading, and the distribution of the gains is going to be narrower than the boosters claim. Those are separable from the question of whether the underlying tools work, and his use of those tools is the evidence that he can hold both ideas at once.

The bubble thesis, in detail

If you read Galloway’s writing on AI as one long argument, the bubble thesis is the spine. The shape is roughly this.

A small number of companies have absorbed an enormous fraction of US equity returns over the last three years, and a large fraction of that performance is now contingent on AI revenue assumptions that have not yet been validated at scale. The revenue growth at OpenAI is real, but the unit economics are weak and the gross margins are still being subsidised by hyperscaler partnerships whose long-run terms are unclear. The financing structure - hyperscalers buying chips, GPU vendors investing in model labs, model labs renting back the chips - means that a stress event in one part of the chain transmits faster than it would in a less circular market.
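To make the transmission claim concrete, here is a toy sketch of the circular geometry he describes - the figures are entirely hypothetical and not drawn from any filing: a chip vendor invests in a model lab, the lab commits the money back to chip purchases, and the vendor’s reported revenue (and the valuation priced off it) moves one-for-one with spending the vendor itself helped fund.

```python
# Toy model (hypothetical numbers throughout) of the circular-financing
# geometry described above: a chip vendor invests in a model lab, the lab
# commits to spend that money on the vendor's chips, and the vendor books
# the spend as revenue.

def vendor_revenue(external_demand: float, lab_chip_spend: float) -> float:
    """Revenue the vendor reports: outside customers plus the lab's committed spend."""
    return external_demand + lab_chip_spend

# Illustrative figures, in billions of dollars.
vendor_investment_in_lab = 10.0   # equity the vendor puts into the lab
lab_chip_spend = 10.0             # chip purchases the lab commits to in return
external_demand = 30.0            # demand from everyone else
revenue_multiple = 20.0           # what the market pays per dollar of revenue

reported = vendor_revenue(external_demand, lab_chip_spend)
circular_share = vendor_investment_in_lab / reported

print(f"Reported revenue: ${reported:.0f}B "
      f"({circular_share:.0%} traceable to the vendor's own investment)")
print(f"Implied market value: ${reported * revenue_multiple:.0f}B")

# Transmission: if the lab halves its chip spend, the vendor loses revenue and
# the market re-rates it at the same multiple, so a $5B revenue cut shows up
# as a $100B valuation swing - the fast transmission a circular chain implies.
stressed = vendor_revenue(external_demand, lab_chip_spend * 0.5)
print(f"Valuation hit from a 50% cut in lab spend: "
      f"${(reported - stressed) * revenue_multiple:.0f}B")
```

The numbers are made up; the point is only that when demand and financing run through the same loop, a single spending decision moves revenue and valuation at the same time, which is the mechanism behind the fast-unwind claim.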

The historical analogy he keeps reaching for is not 2008 but 2000. The Nasdaq lost 77% of its value between March 2000 and October 2002, and the trigger was not a single event but a sequence of capital-spending guidance cuts and bankruptcies that revealed how much of the network-buildout demand had been one company buying from another. He thinks the 2026 AI economy has the same shape and that the unwind, when it comes, will be measured in weeks rather than months.

The part of this thesis that is hardest to argue with is the descriptive part. The concentration of returns is real. The financing arrangements are real. The dependence of the broader market on a small number of names is real. The part that is genuinely contested is the timing and the depth of any correction, and on those Galloway is careful to say he does not know.

The controversial parts

The position has serious critics, and the critiques deserve engagement on their own terms.

The first objection is that the AI economy is structurally different from the dot-com economy because the capex is buying real, productive infrastructure. Datacenters and GPUs are durable goods that get used by other workloads even if specific model labs underperform. Current infrastructure investment has no equivalent of the dark fibre left stranded after 2000, the argument goes, because the substrate is general-purpose compute and the demand for it is broader than any single AI bet. Galloway’s response is that this is a difference of degree rather than of kind - that what matters is the financing being circular, not what the underlying asset can theoretically be repurposed to do.

The second objection is that the labour displacement story is being prosecuted too early. Frontier models in 2026 are uneven, agentic systems are still brittle in production, and the empirical record on AI replacing knowledge workers is mixed at best. Many of the headlines about AI-driven layoffs turn out, on inspection, to be cost cuts justified with AI language. Galloway’s response is that the marketing story is doing real work even before the substitution is real, because it changes the bargaining position of the worker now, in advance of the technology actually arriving.

The third objection is that his policy implications, where he draws them, are politically contested in ways the analysis itself is not. He has argued for stronger labour protections, more aggressive antitrust action against the largest platforms, and a tax structure that taxes capital more like labour. Whether you agree with those moves is largely independent of whether you agree with the bubble call. Critics fairly point out that he sometimes blurs the line between describing what is happening and prescribing what should be done about it.

The fourth objection, and perhaps the most useful one, is that some of his headline phrasing is sharper than the underlying claim. The rich don’t need you anymore is rhetorical compression rather than a literal empirical statement, and Galloway himself draws that distinction in his longer-form writing and interviews. Readers who only encounter the clipped version on social media can miss the more careful argument behind it, which is a fair note about the format rather than about him.

What he predicts

Galloway is willing to put numbers and timelines on his views in a way most commentators are not, which is one reason his clips travel so far.

In the near term he expects more disruption around the AI majors than the consensus is pricing in. He has predicted that OpenAI could pull or postpone its IPO, that the gap between frontier and open-weight models will compress further than is comfortable for the closed labs, and that one or more high-profile model labs will have a meaningful recapitalisation event before the end of the year. He is not predicting collapse; he is predicting that the story of inevitability gets a hard edit.

In the medium term he expects continued dislocation in cognitive labour markets. His framing is that the threat is not that AI takes everyone’s job at once, but that a workforce on a knife edge gets pushed off it by a technology that arrives at the wrong moment. He has been particularly worried about graduate hiring - the entry-level white-collar pipeline that was already weakening before AI - and has written that 2026 is the worst graduate market in a generation.

In the long term his thesis is more political than technical. He thinks the question of who owns and controls the largest AI systems will dominate the next decade of US politics in roughly the way platform regulation dominated the last one, and that the outcome of that fight will determine whether the productivity gains are widely shared or narrowly captured. He is not optimistic about the default trajectory, but he is also unusually clear that he thinks the default is not destiny.

Where he thinks we are heading

The summary version of Galloway’s worldview, stripped of the slogans, is that AI is a real technology embedded in an unreal financing structure and a misleading marketing story, and that the unreal parts are doing more economic and political damage than the real parts have yet earned. He thinks the financing will correct, the marketing will be exposed, and the technology will end up integrated into work and life in ways that are useful but more incremental than the current narrative suggests. The harm in the meantime - the careers shaped around assumptions that turn out to be wrong, the policy concessions made in the heat of an inevitability frame - is the part he wants people to take seriously now.

The thread that connects all of this is unusually consistent. Across his books, his columns, his podcasts, and his lectures, the underlying frame is the same: every wave of business technology in the last thirty years has been described as broadly empowering, every wave has in fact narrowed the distribution of gains, and the smart move for an individual is to build the kinds of capital - human, social, narrative - that the next wave does not commoditise.

How to read him

The honest summary is that Galloway is best understood as a marketing professor who has applied his discipline to a story that most of the rest of the AI conversation has accepted at face value. He is not a contrarian for the sake of it. He is a careful reader of corporate narratives who thinks the corporate narrative around AI in 2026 is unusually self-serving. The strongest version of his position - that AI was built for the rich and that the rest of the workforce should plan accordingly - is rhetorical compression. The weaker version - that the financing is overheated, the marketing is misleading, and the distribution of gains will be narrower than promised - is harder to dismiss.

For a working engineer, the practical implication is the same one career-aware writers have been drawing for the last year. Use AI daily. Treat it as a thought partner, not a replacement. Invest in the things that compound over a career and that AI is genuinely bad at - judgement, taste, narrative, trusted relationships. Treat the labour-market story as something you have to navigate, not something that will navigate itself in your favour. Galloway’s slogans are designed to get attention. The advice underneath them is unglamorous, conservative in the small-c sense, and probably right.

Further Watching

Scott Galloway: AI Wasn’t Built For You. The Rich Don’t Need You Anymore!

Provocative Predictions for the Future of Tech with NYU Marketing Professor Scott Galloway

Scott Galloway’s Predictions for 2026 | Prof G Markets

Why Scott Galloway Pulled the Plug on AI Bot | Pivot

Red Flags at OpenAI - How One Company Could Burst the AI Bubble | Prof G Markets