How the UK Government Is Shaping the Future of Copyright in the Age of AI


A Deepfake Scam That Shows What’s at Stake

In early 2024, a widely reported incident involving British engineering firm Arup revealed just how advanced — and how dangerous — AI‑generated content has become. An employee in the company’s Hong Kong office joined what seemed like a normal video meeting with senior colleagues, including the CFO. The faces looked right. The voices sounded right. The conversation felt routine.

Trusting what he saw, the employee authorised transfers totalling $25 million. Only later did it emerge that every person on that call was an AI‑generated deepfake: a fully synthetic imitation created using publicly available footage and cloned voices.

Incidents like this aren’t just cautionary tales about cybersecurity. They demonstrate how powerful today’s AI systems are, how convincingly they can mimic human expression, and how urgently society needs rules governing how AI systems are trained and deployed.

This is the backdrop against which the UK Government has been working to modernise its copyright framework for the AI era.

Why Copyright and AI Are Now Deeply Intertwined

The UK’s creative industries and its AI sector are both central to the country’s economic strategy — but rapid advancements in AI, particularly large language models and generative systems, have thrown long‑standing copyright rules into question.

AI systems are typically trained on vast amounts of material, including copyright-protected works. This has created tension on both sides:

  • Creators are struggling to maintain control over how their work is used — and to seek payment when it’s used to train commercial AI models [gov.uk].
  • AI developers say unclear rules make it difficult to innovate, invest, or operate legally within the UK [gov.uk].

The Government acknowledges this uncertainty “is undermining investment in and adoption of AI technology,” making reform both urgent and unavoidable [gov.uk].

The Government’s Three Big Goals

Across multiple consultations and official papers, the UK Government has outlined three core aims for its AI‑era copyright framework:

1. Protect creators’ control and ability to be paid

Creators should have meaningful control over whether their work is used to train AI systems — and a route to remuneration when they choose to permit it [gov.uk].

2. Ensure AI developers can legally access data

High‑quality training data is essential for developing competitive AI models. The Government wants a system that gives developers clear legal access when rights holders haven’t opted out [gov.uk].

3. Build trust and transparency

The Government sees transparency as foundational — developers should disclose what data they use, how they obtained it, and when content is machine‑generated [gov.uk].

The Hybrid Approach the UK Is Proposing

The UK Government is increasingly coalescing around a two‑part framework designed to balance copyright protection with AI innovation:

A. Creators can “reserve” their rights

This mechanism would let rights holders opt out of having their works used for AI training, enabling licensing and remuneration where desired [gov.uk].

B. A broad exception applies when rights aren’t reserved

If creators don’t opt out, AI developers would be allowed to use their content under a new copyright exception — creating legal clarity for training at scale [gov.uk].

This model aims to strike a balance between creative rights and technological progress.

But Public Opinion Is Strong — and Split

The Government’s major consultation (December 2024–February 2025) drew more than 11,500 responses. Early findings reveal deep divisions:

  • 88% supported mandatory licensing for all AI training, a view largely held by the creative industries.
  • Only 0.5% supported a broad exception allowing unrestricted data use.
  • Just 3% supported the Government’s preferred “opt-out” approach [blogs.ucl.ac.uk].

This highlights a fundamental tension: creators want stronger protections and guaranteed remuneration, while developers argue that requiring licences for everything would make training modern AI models nearly impossible.

Transparency: The Government’s Hard Line

If there’s one area where the UK Government is absolutely firm, it’s transparency across the AI lifecycle.

Training data transparency

Developers may be required to disclose what data they used to train their models [assets.pub...ice.gov.uk].

Output transparency

AI‑generated content may need clear labelling to distinguish it from human-created material [assets.pub...ice.gov.uk].

Web‑crawler transparency

Policy makers are examining how training data is collected, and what standards web crawlers must meet [assets.pub...ice.gov.uk].
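As a concrete illustration of the kind of crawler standard under discussion, sites can already publish a machine-readable opt-out through robots.txt, which a compliant crawler is expected to check before fetching anything. The sketch below uses Python’s standard `urllib.robotparser`; the crawler names and the robots.txt content are illustrative assumptions, not a settled UK standard.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a publisher might serve to reserve its rights:
# an AI training crawler is disallowed, ordinary crawlers are allowed.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler checks its own user-agent before fetching a page.
print(parser.can_fetch("GPTBot", "https://example.com/article"))       # False
print(parser.can_fetch("NewsIndexer", "https://example.com/article"))  # True
```

The open policy question is precisely how binding such signals should be: robots.txt is a convention, not law, which is why the Government is examining formal standards for web crawlers.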

These changes could reshape how AI firms operate in the UK — and how accountable they are to creators and consumers.

Computer‑Generated Works and Digital Replicas

The Government is taking a cautious approach to the broader implications of AI‑created content:

  • It plans no immediate changes to the law on computer‑generated works without a human author, citing limited evidence and early‑stage technology [gov.uk].
  • It will continue to reassess this area as AI evolves [gov.uk].
  • It is actively examining issues like digital replicas, where a person’s likeness or voice is used synthetically, an area of growing concern [assets.pub...ice.gov.uk].

Licensing, Enforcement, and What Comes Next

The UK Government is still weighing further options on licensing and enforcement. At the same time, it is developing a voluntary Code of Practice in partnership with creators, researchers, and AI firms. If voluntary adoption fails, legislation is on the table [gov.uk].

Final Thoughts: A Critical Moment for AI and Creativity

The Arup deepfake incident is more than a story about fraud: it’s a stark example of what happens when AI capabilities collide with outdated rules and unprepared systems.

The UK Government’s emerging copyright framework reflects this reality.  It aims to:

  • Protect creative work
  • Enable responsible AI innovation
  • Bring transparency to how AI systems are trained and what they produce

The final shape of the rules will depend on the outcomes of the 2026 consultation cycle, but the direction is clear: a more balanced, transparent, and future‑ready copyright regime designed to keep pace with the rapid evolution of AI.

Bottom Line

Plans to require labelling of AI-generated content point to a world of increased accountability for content origin, with potential compliance requirements for platforms and creators and a greater emphasis on audit trails. In short, creators will need to prove that their work is real, original, and human-created.

This is exactly what Etched® has been built to do: 

  • Blockchain time stamping to show when a piece of work was created.
  • The ability to document the creative process (drafts, iterations, metadata).
  • A tamper-evident record of authorship.

Etched® is a platform for creators, media companies, marketplaces and platforms, and lawyers needing verified legal evidence: in fact, anyone needing to show trust, transparency and authenticity.
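The tamper-evident record idea can be made concrete with a hash chain: each draft record includes the hash of the previous record, so altering any earlier entry invalidates everything after it. The sketch below is a minimal illustration in standard-library Python, written as an assumption for explanatory purposes; it is not Etched’s actual implementation.

```python
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    """Content fingerprint: any change to the bytes changes the digest."""
    return hashlib.sha256(data).hexdigest()

def add_draft(chain: list, content: bytes, note: str) -> dict:
    """Append a draft record linked to the previous record by hash."""
    prev_hash = chain[-1]["record_hash"] if chain else "0" * 64
    record = {
        "note": note,
        "content_hash": sha256_hex(content),
        "prev_record_hash": prev_hash,
        "timestamp": time.time(),  # in practice, anchored on a blockchain
    }
    # Hash the record itself so any later edit to it is detectable.
    record["record_hash"] = sha256_hex(
        json.dumps(record, sort_keys=True).encode()
    )
    chain.append(record)
    return record

chain = []
add_draft(chain, b"First draft of the article ...", "initial draft")
add_draft(chain, b"Second draft, revised wording ...", "revision")

# Each record commits to its predecessor, forming a tamper-evident trail.
assert chain[1]["prev_record_hash"] == chain[0]["record_hash"]
```

Publishing each `record_hash` to a public ledger is what turns this local chain into an independently verifiable timestamp: the ledger proves the record, and therefore the draft, existed no later than the time it was anchored.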

Victor Caddy

I think, therefore IAM

Stay ahead in the legal world of Intellectual Asset Management and Intellectual Property with The Victor.
