February 20, 2026
The Smash-and-Grab Era Is Over
When five Hollywood studios sent cease-and-desist letters to ByteDance in a single week, it wasn't just a legal skirmish. It was a signal that the AI industry's default playbook — scrape everything, settle later — has finally hit a structural wall. Europe saw this coming. And Siloett was built for what comes next.
The Moment Everything Shifted
On 13 February 2026, Disney fired off a cease-and-desist letter to ByteDance describing Seedance 2.0 — the Chinese AI video generator that went viral with hyper-realistic clips of Brad Pitt and Tom Cruise — as treating its copyrighted library like "public-domain clip art." Paramount followed within 48 hours. The Motion Picture Association condemned it. The broader entertainment industry followed. ByteDance pledged "safeguards" — without specifying what any of them were.
That wasn't enough. On 17 February, Warner Bros. and Netflix both sent their own letters — and Netflix went further than anyone else had dared. Its litigation director called Seedance "a high-speed piracy engine" and gave ByteDance three days to respond before facing immediate litigation. The letter cited specific shows — Stranger Things, Bridgerton, Squid Game, KPop Demon Hunters — and named the infringing scenes in forensic detail, including a Bridgerton masquerade sequence that mirrored a specific character's gown. ByteDance had even been promoting the infringing content through its own official social media channels. Then on 18 February, Sony Pictures filed the fifth cease-and-desist in a week, dismissing ByteDance's safeguard pledges as "delayed or half-hearted" and citing what it called "willful" infringement across its entire catalogue.
Five major studios. One week. The pattern is no longer a re-run of the Sora episode — it is something structurally different. ByteDance had no Disney partnership to retreat into, no pre-existing relationship with Hollywood, no settlement runway. It was simply caught in the act, at scale, with the evidence publicly documented.
Disney, Netflix, and Sony have armies of lawyers. European filmmakers, independent producers, and creative archives do not — and their content is being scraped just the same.
European Creators Are Equally Exposed — With Far Less Recourse
It would be easy to read the Seedance story as a clash between Chinese tech and American entertainment conglomerates — a dispute between entities with the resources to absorb the fight. But that framing misses where the real damage is being done.
The BBC, ITV, Canal+, Banijay, RAI, ARD — these are not small or defenceless organisations. They are among the most significant content producers and distributors in the world, with substantial legal resources and valuable IP catalogues built over decades. And they are already fighting back. The BBC formally threatened Perplexity with legal action in June 2025, demanding the AI company stop scraping its content, delete existing copies, and propose financial compensation. ITV, Channel 4, and Channel 5 are all currently investigating Minimax after its Hailuo.ai video generator was found generating content bearing their logos — without any relationship or permission. These are organisations that take IP seriously and have the means to pursue it.
But here is the problem: litigation is slow, expensive, and profoundly uncertain. Even well-resourced European broadcasters are finding that the legal route produces years of discovery, massive legal bills, and outcomes that remain anybody's guess — the US fair-use question alone will not be definitively settled for years. The only consistent winner in this process is lawyers. Meanwhile, the scraping continues. Content keeps flowing into training pipelines. And the window to establish the right framework — rather than simply accumulate grievances — is closing.
This is where the EU AI Act matters most. The GPAI obligations that came into force in August 2025 — requiring every general-purpose AI provider to publish a structured training-data summary and demonstrate active copyright compliance — exist precisely because European legislators understood that litigation is not a sustainable answer: it protects no one efficiently. What the law creates instead is a transparency and licensing obligation — and what the market now needs is the infrastructure to fulfil it.
The EU Saw This Coming
While US courts grind through copyright doctrine and fair-use arguments — with no definitive rulings yet — Europe has moved decisively. The EU AI Act's GPAI provisions have been in force since August 2025. Every provider placing a general-purpose AI model on the European market must maintain a documented copyright policy, publish a public training-data summary using the European Commission's mandatory template, and demonstrate active compliance with the EU Copyright Directive's opt-out mechanisms.
From August 2026, the AI Office gains full enforcement powers — fines of up to €15 million or 3% of global annual turnover, whichever is higher. For a frontier lab with €1 billion in revenue, that is a €30 million exposure per violation. This is not theoretical. It is a countdown that every GPAI provider operating in Europe is now running against. And it carves out a clear commercial position: AI companies that can demonstrate provenance and licensing of their training data face dramatically reduced legal risk, faster regulatory clearance, and — critically — more investable balance sheets. The liability sitting inside an unlicensed model is now a disclosed risk that investors are beginning to price. Siloett turns that liability into an asset.
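The fine ceiling described above is a simple maximum of two terms. A minimal sketch (the function name is ours, not from any official source):

```python
def max_gpai_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound on an EU AI Act GPAI penalty: the higher of a
    EUR 15 million floor or 3% of global annual turnover."""
    return max(15_000_000.0, 0.03 * annual_turnover_eur)

# For a lab with EUR 1 billion in revenue, the 3% term dominates:
print(max_gpai_fine_eur(1_000_000_000))  # 30000000.0

# For smaller providers, the EUR 15 million floor applies instead:
print(max_gpai_fine_eur(100_000_000))  # 15000000.0
```

The "whichever is higher" structure means the exposure scales with revenue only once turnover passes €500 million; below that, the flat floor binds.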
Why Professional AV Content Is the Prize
Not all training data is equal. The AI models shaping the next decade — world models for physical AI, robotics foundation models, cinematic video generators — require something scraped social-media clips cannot provide: high-fidelity, professionally captured, motion-rich content. Motion-capture performances. Multi-angle cinematic footage with precise depth and lighting metadata. Choreography with frame-level anatomical ground truth. These are the datasets that distinguish a model that generates plausible motion from one that understands the physics of a human body in space — and they matter enormously to teams building the next generation of humanoid robots, surgical systems, and physical simulators.
This content sits, almost entirely, in private vaults — unlicensed, untracked, and unmonetised for AI purposes. Consider the signal in the one exception: Disney's $1 billion, three-year deal with OpenAI in December 2025, giving Sora access to a narrow slice of its character catalogue. One studio. One lab. One laboriously negotiated bilateral agreement. The rest of the industry — thousands of rights holders, dozens of frontier labs — has nothing analogous. The professional AV licensing market for AI training is not small. It is structurally absent. The Disney deal is the benchmark that proves what it is worth.
Why IP Holders Aren't Signing Up — and What Changes That
Licensed data platforms already exist; the problem is not a lack of marketplaces. The problem is that professional media rights holders are not using them. Studios, production companies, motion-capture facilities, and archive owners are asked to deposit content into a centralised repository, surrender meaningful oversight of downstream usage, and trust that licensing terms will be enforced on their behalf — without real-time visibility. For a studio with a catalogue worth hundreds of millions, or a production house whose competitive advantage is the uniqueness of its archive, that trade-off is not acceptable. For a European independent filmmaker with no legal resources at all, it is simply incomprehensible.
The result is market paralysis: AI companies cannot access the professional-grade libraries they need; rights holders refuse to engage with platforms that give them no real agency; and the gap gets papered over with bilateral deals that take months to negotiate and serve only the largest parties on both sides, leaving the entire long tail of the market — which is most of the market — entirely unserved.
The problem isn't that no licensing market exists. It's that professional IP holders won't participate in the ones that do — because they offer control theatre, not real control.
Not a Compromise. A System Design.
The reason the Seedance row — like Sora before it — produces only threats, pledges, and uneasy détente is that no infrastructure exists to make licensing both feasible and verifiable at scale. What the market needs is a protocol — one that makes provenance native to the asset, not an afterthought bolted on after a crisis. Siloett is designed to work for all three parties simultaneously, not to balance competing interests but to remove the structural conflict entirely.
For IP Holders & Creators
Track, Control & Monetise
Professional libraries — AV catalogues, motion-capture archives, cinematic vaults — are registered with full provenance documentation. Licensing terms, usage rights, and permitted AI applications are defined by the rights holder and recorded on the platform. Every training use is logged, auditable, and tied to documented consent. Rights holders gain real-time visibility and revenue from an asset class that currently earns them nothing — whether they are a Hollywood studio or an independent European filmmaker with no legal department.
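Siloett's internals are not public, but the idea of a training use that is "logged, auditable, and tied to documented consent" can be illustrated with a hypothetical record structure. Every field name here is an assumption for illustration only, not the platform's actual schema:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class TrainingUseRecord:
    """Hypothetical audit-log entry tying one training use of an
    asset to the rights holder's recorded licensing terms."""
    asset_id: str         # rights holder's catalogue identifier
    licensee: str         # AI lab consuming the asset
    permitted_uses: list  # uses the rights holder has authorised
    consent_ref: str      # pointer to the documented licensing terms
    used_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        # A content hash over the full record makes each entry
        # tamper-evident when stored in an append-only log.
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

record = TrainingUseRecord(
    asset_id="mocap-archive-0042",
    licensee="example-frontier-lab",
    permitted_uses=["robotics-foundation-model-training"],
    consent_ref="license/2026-02/0042",
)
print(record.fingerprint()[:12])  # stable for a given record's contents
```

The point of the sketch is the linkage: a use event is not merely counted, it carries a reference back to the consent that authorised it, and the hash makes after-the-fact alteration detectable.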
For AI Companies & Labs
Build Without Liability Overhang
Access curated, licensed, documented datasets. Know precisely what your model was trained on. Produce the training-data summaries the EU AI Act mandates — not as a compliance burden but as a competitive differentiator signalling investment-grade governance. No litigation exposure. No reputational damage. No hidden balance-sheet risk. A provable, auditable data stack that withstands regulatory scrutiny and investor due diligence alike.
For Regulators & Policymakers
Fairness Without Friction
The GPAI Code of Practice demands copyright compliance and training-data transparency. Siloett provides exactly what the EU AI Office's mandatory template requires — structured provenance data, rights-reservation audit trails, and verifiable opt-in records. Regulators get a platform that protects creators — large and small, American and European — without forcing AI labs into litigation standoffs or creative stagnation. Innovation and IP protection become complementary, not adversarial.
European Wedge. Global Scale.
The EU is not simply a regulatory constraint to navigate. It is a forcing function that creates a global standard. When the EU mandates training-data transparency, every frontier model — regardless of where it is built — that wants access to European markets must comply. The playbook of scraping and settling doesn't scale to a world where regulators can audit your data stack and levy eight-figure fines. This is the pattern that shaped GDPR into a global privacy standard. The EU AI Act is doing the same for data provenance.
For robotics and world models specifically, the urgency is acute. Teams at the frontier are actively seeking high-quality motion and physical-interaction datasets. They cannot use scraped footage at scale without liability. They have no scalable alternative. Siloett is building the supply chain that doesn't yet exist.
If anyone can generate a Spider-Man scene on their laptop, who truly owns Spider-Man? The answer isn't litigation. The answer is infrastructure.
The Window Is Now
The Seedance episode will resolve the same way Sora did: pledges, quiet pressure, and eventually a licensing deal of some kind. But the underlying structural problem — that there is no scalable, trustworthy mechanism for professional AV rights licensing in AI training — remains entirely unsolved. Every resolution to a public crisis is a temporary patch over a systemic gap.
The EU AI Office's enforcement clock starts ticking in August 2026. GPAI providers have a shrinking window to build compliant data pipelines. The industry arms race between Hollywood and AI will not produce a workable licensing system on its own — it will produce more cease-and-desist letters, more pledges, more uneasy détente, and more years of expensive litigation that benefits no one except the firms billing by the hour. The BBC, ITV, Canal+, Banijay, Cineflix and their counterparts across Europe deserve a better option than a courtroom. So do the AI companies that want to build with their content legitimately. What the market needs is infrastructure that makes fair licensing the path of least resistance, not the path of most friction.
That is the company Siloett is building. The timing is not incidental. It is the point.