For the past few years, AI law has been one of those topics that felt perpetually five minutes away. Governments would announce frameworks. Committees would publish white papers. Experts would debate what the rules should eventually look like.
That phase is over.
In 2026, the legal infrastructure around AI is arriving, piece by piece, and not always in a coherent sequence. The EU has its Act. The US has fifty state governments moving in slightly different directions while Washington stalls on federal legislation. Courts in multiple jurisdictions are working through copyright cases with implications that will reshape the entire industry. And businesses deploying AI tools are discovering, sometimes the hard way, that “the AI did it” is not a defence that holds up in court.
This is where things actually stand - and where they are heading.
The EU AI Act: Now Enforced, Not Just Law
The EU AI Act entered into force in August 2024, but the rules that matter most for businesses kick in fully on 2 August 2026. That date is not a distant deadline anymore.
The framework works on a risk tiering system. Systems that pose unacceptable risk - social scoring by governments, real-time biometric surveillance in public spaces, AI that manipulates people psychologically - are banned outright. High-risk AI (think hiring tools, credit scoring, medical devices, critical infrastructure) faces a demanding compliance burden: mandatory risk assessments, human oversight, detailed documentation, and registration in a public EU database.
The penalties for getting this wrong are not theoretical. Violations of the prohibited-practices rules carry fines of up to €35 million or 7% of global annual turnover, whichever is higher. Breaches of the high-risk obligations are capped at €15 million or 3% of turnover. For large AI companies, these are numbers that require board-level attention.
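For a sense of how the “whichever is higher” mechanic plays out, here is a minimal sketch of the cap calculation in Python; the €10 billion turnover is an invented example, not a real company’s figure.

```python
def fine_ceiling(fixed_cap_eur: float, turnover_share: float, global_turnover_eur: float) -> float:
    """Maximum possible fine: the fixed cap or a share of worldwide
    annual turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

# Prohibited-practices tier (EUR 35m or 7%) for a hypothetical EUR 10bn turnover:
print(fine_ceiling(35_000_000, 0.07, 10_000_000_000))  # 700000000.0 - the 7% figure dominates
```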
For most UK and European businesses, the most immediate requirements are around general-purpose AI models and the training data they use - which brings us to the copyright problem.
The Copyright War Is Now a Case Law Factory
More than fifty copyright lawsuits against AI companies are currently working their way through US federal courts. That number has grown steadily, and the cases span nearly every creative category - books, music, art, code, news articles. What started as individual creators filing claims has become a coordinated effort across industries.
The core legal argument in most of these cases is straightforward: AI developers used copyrighted works to train their models without permission or payment. The counter-argument, that training constitutes “fair use” under US law, has not been resolved decisively. Courts are splitting, and the appeals process will take years.
In the EU, the picture is more structured. Under the Copyright Directive’s text-and-data-mining exception, rights holders can formally reserve their rights and opt out of having their work used for AI training. The AI Act adds a transparency layer on top: every provider of a general-purpose AI model - a category that covers large language models and similar foundation models - must publish a sufficiently detailed public summary of the content used for training. Providers must check whether data sources carry copyright reservations and, if they do, either license the content or exclude it.
This is a significant shift. It puts the burden of verification on the AI developer, not the rights holder. And it means the casual “scrape the internet and worry later” approach that characterised early LLM training is no longer viable in Europe.
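As a rough illustration of what that verification burden looks like in practice, here is a minimal Python sketch of the license-or-exclude decision. The TrainingSource record and its has_rights_reservation and licensed fields are hypothetical stand-ins for whatever metadata a provider actually tracks - the Act prescribes the obligation, not the implementation.

```python
from dataclasses import dataclass

@dataclass
class TrainingSource:
    """Hypothetical record for a candidate training-data source."""
    url: str
    has_rights_reservation: bool  # e.g. a machine-readable opt-out attached to the source
    licensed: bool                # provider holds a licence covering this content

def select_usable_sources(sources: list[TrainingSource]) -> list[TrainingSource]:
    """Keep sources that are either unreserved or reserved-but-licensed.

    Anything carrying a rights reservation without a licence is dropped,
    mirroring the license-it-or-exclude-it obligation described above.
    """
    usable = []
    for source in sources:
        if source.has_rights_reservation and not source.licensed:
            continue  # reserved and unlicensed: must be excluded from training
        usable.append(source)
    return usable

# Example: only the first and third sources survive the filter.
corpus = [
    TrainingSource("https://example.org/public-domain-text", False, False),
    TrainingSource("https://example.org/reserved-novel", True, False),
    TrainingSource("https://example.org/reserved-but-licensed-archive", True, True),
]
print([s.url for s in select_usable_sources(corpus)])
```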
The European Parliament is also pushing for legal liability for GenAI providers that fail to comply with these transparency requirements - not just regulatory fines but civil liability to creators whose rights were infringed.
Who Is Responsible When AI Gets It Wrong
This is the question that is landing on businesses right now, and the answer is almost always: you are.
Courts are increasingly clear that deploying AI does not transfer liability to the AI vendor. If an AI system causes harm - a wrong medical diagnosis, a biased hiring decision, a financial recommendation that loses a client’s savings - the liability assessment looks at the humans and companies who built it, sold it, deployed it, and supervised its use. The organisation deploying the AI, and the professionals using it, remain fully liable for outcomes.
The EEOC in the US filed an amicus brief in 2025 supporting plaintiffs in an AI hiring discrimination case, arguing that AI vendors exercising control over hiring decisions can be directly liable under anti-discrimination laws. That argument is finding traction. It means the vendor - not just the employer - can be held responsible if an AI tool they designed produces unlawfully discriminatory outcomes.
For businesses using AI in customer-facing contexts, the practical implication is that the law expects you to understand what your tools are doing and to have controls in place. “We trusted the algorithm” is not an adequate response.
The US Federal Gap - and How States Are Filling It
The United States does not have a comprehensive federal AI law. That is unlikely to change quickly. Washington’s current posture, shaped by an executive order from late 2025, leans toward deregulation at the federal level and toward preempting the most restrictive state laws, on the grounds that inconsistent rules fragment the market.
But the states are not waiting. California, Colorado, New York, and Texas have all enacted AI-specific statutes. These focus on specific, high-stakes uses - algorithmic hiring, healthcare decisions, financial services, and consumer-facing chatbots, which must now disclose that they are AI. The result is a patchwork that companies operating nationally have to navigate, state by state.
Federal agencies, meanwhile, are applying existing laws to AI contexts more aggressively. The FTC is using consumer protection powers against deceptive AI claims. The CFPB is scrutinising AI-driven credit decisions under fair lending laws. The FDA is working through its regulatory framework for AI in medical devices. None of this required new legislation - it is existing authority applied to a new context.
The practical impact for businesses is that they face AI-specific legal exposure from multiple directions at once, without a single framework that resolves all of it. That is a harder compliance problem than a single comprehensive law would create.
What Is Coming in the Next Few Years
Mandatory AI content labelling - The EU AI Act already requires providers and deployers of generative AI systems to clearly mark AI-generated text, audio, images, or video as artificial. This is not optional and not vague. Similar requirements are appearing in US state laws. Within a few years, content authenticity will likely become an auditable compliance requirement in many jurisdictions, not just a best practice.
The autonomous agent question - AI agents that can take actions in the world - browse, transact, communicate, write code, sign contracts - are raising a legal question that no existing framework cleanly answers: when an autonomous agent causes harm in the course of doing what it was instructed to do, how does liability flow? Right now the answer is “back to the human who deployed it,” but that answer gets harder to sustain as agents become more capable and their behaviour becomes less predictable from their instructions. Legal scholars are beginning to discuss the concept of AI as “legal actors” - entities that owe duties to persons without having full legal personhood - as a way to assign responsibility without the philosophical problems that come with treating AI as persons. This debate is theoretical today, but the pressure from real agent deployments will push it into courtrooms within a few years.
Liability insurance for AI - The risk management industry is beginning to price AI liability exposure properly. Expect AI-specific clauses in professional indemnity and product liability policies to become standard, and expect premiums to reflect the actual risk profile of the AI systems an organisation uses.
Training data markets - If AI developers must license training data rather than scrape it, a market has to exist. Several platforms are already positioning themselves as licensed training data providers. This may resolve the copyright conflict more efficiently than litigation - rights holders get paid, developers get legal certainty. How quickly that market matures will partly determine how the remaining court cases settle.
The Uncomfortable Reality
The legal scaffolding around AI is being built largely after the technology has already been deployed at scale. That is not unusual in technology history - the law tends to lag. But it creates a period of genuine uncertainty where businesses are exposed to liability frameworks that are still being worked out in real time, by courts and regulators who are themselves learning as they go.
The EU’s approach - comprehensive, risk-based, with real penalties - gives businesses at least a known framework to comply with, even if compliance is demanding. The US approach - fragmented, agency-led, state-patchwork - is harder to navigate but currently more permissive for development.
Neither system is finished. Both will change as more cases produce precedent and as more real-world AI deployments produce the incidents that force harder questions.
What is certain is that the “move fast and figure out the legal stuff later” era is over. Not because the law has caught up with AI - it has not - but because enough of the law has arrived that the gaps are no longer a safe place to operate.
If you are building with AI, deploying AI, or your business depends on AI outputs, the legal landscape is now one of the real constraints on what you can do and how you do it. That is probably healthy. But it is also genuinely new, and anyone who tells you they have it fully figured out is guessing.
Further reading:
- EU AI Act compliance overview - European Commission
- EU AI Act: Practical Compliance Guide for 2026 - Legiscope
- AI Enforcement Accelerates as Federal Policy Stalls - Morgan Lewis
- How 2026 Will Reshape Technology and AI Law - Founders Legal
- Generative AI Copyright: Law, Litigation & Best Practices in 2026 - AIMultiple