Introduction
TL;DR
On 2025-10-31, Japanese manga publisher Shueisha issued an official statement accusing OpenAI of copyright theft, calling generative AI “stealing with extra steps."[1][4] The publisher demanded enforcement measures beyond opt-out systems and national-level legal reforms, directly challenging OpenAI’s current approach to training data sourcing.[1][5] This incident reignites a fundamental debate about AI training ethics, copyright frameworks across jurisdictions, and the viability of balancing technological innovation with creator protection in the AI era.[2][10]
Context: The Scope and Stakes
Shueisha stands as one of Japan’s largest publishers, controlling iconic intellectual properties including *One Piece*, *Dragon Ball*, *Jujutsu Kaisen*, and *Chainsaw Man*.[1][4][7] The global anime and manga markets represent billions of dollars in revenue and employ tens of thousands of creators. In 2025, Japan’s content industries rank among the world’s most valuable cultural exports, competing globally with Western entertainment conglomerates.[6]
When OpenAI released Sora 2 (announced late September 2025), social media was flooded within days with AI-generated videos exhibiting visual styles and character designs nearly identical to registered copyrighted works.[3][5][13] Shueisha’s response represents not merely a business dispute but a watershed moment in defining AI development boundaries within copyright law.
Why it matters: The outcome will likely set precedent for how national copyright systems address generative AI’s use of training data globally.
The Incident: Sora 2 and Unauthorized Content Replication
Timeline and Evidence
2025-09-30 (or 2025-10-01): OpenAI releases Sora 2, a video generation model capable of producing short films from text prompts.[3][5][13]
2025-10-01–2025-10-07: Massive proliferation of AI-generated videos on social media featuring character designs and visual styles from anime franchises including *Dragon Ball*, *Bleach*, *Spirited Away*, and Mario.[3][5][13][16] Some outputs were reported as “almost identical” to originals.[13]
2025-10-10: Japan’s Cabinet Office (via the Intellectual Property Strategy Promotion Secretariat) formally requests OpenAI to cease copyright infringement.[6][13][16]
2025-10-28: Content Overseas Distribution Association (CODA), representing 36 Japanese companies including Shueisha, submits written request to OpenAI.[3][8]
2025-10-31: Shueisha releases independent, strongly-worded statement condemning OpenAI and calling for legal action.[1][4][23]
The Core Technical Problem
Shueisha’s statement alleges that Sora 2 generates outputs resembling Japanese anime and manga because the model was trained on copyrighted Japanese content without authorization.[1][5][8] CODA’s technical assessment confirms that a substantial portion of Sora 2 outputs “closely resemble Japanese content or images, which is the result of using Japanese content as machine learning data."[5][8]
Why it matters: This moves the dispute beyond stylistic similarity into evidence of unauthorized dataset inclusion—a more serious legal claim than mere output similarity.
Shueisha’s Official Statement: Moral Clarity and Legal Demands
The Core Message
In its October 31, 2025 statement titled “Responses to Copyright Infringement Using Generative AI,” Shueisha articulated three distinct positions:
1. Moral Principle:
“While the evolution of generative AI should be welcomed for enabling more people to share the joy of creation and enjoy creative works, it must not be built upon trampling the dignity of creators who poured their heart and soul into their work or infringing on the rights of many."[1][4]
This language deliberately rejects technological determinism. The publisher refuses the framing that copyright restrictions must yield to AI advancement. Instead, it posits that sustainable AI systems require ethical foundations.
2. Procedural Critique:
Shueisha explicitly rejected the opt-out model currently employed by Sora 2:
“Unless providers of generative AI services, under their responsibility, promptly implement effective countermeasures against infringement—going beyond an ‘opt-out system’—and provide remedies for rights holders, the spiral of infringement using generative AI services will continue unabated."[1][5]
The critique is precise: by the time rights holders discover their work in training data and file opt-out requests, the model has already been trained. The opt-out mechanism cannot undo the past infringement.
3. Legal Pledge:
“Regardless of whether generative AI is used, Shueisha will take appropriate and strict measures against any use we determine infringes upon rights related to our works. We will also actively engage in activities to build and maintain a sustainable creative environment through collaboration and cooperation with copyright holders and relevant organizations."[1][4]
This is not rhetoric. Shueisha’s announcement signals intent to litigate.
Why it matters: The language oscillates between moral principle, legal analysis, and practical threat. It is simultaneously a principled stance on creator dignity and a calculated business decision to protect asset value.
The Broader Coalition: 17 Organizations
Shueisha’s statement followed—but exceeded in strength—a joint statement from 17 publishers and creative associations, including the Japan Cartoonists Association and Kodansha.[1] The fact that Shueisha issued an independent, stronger statement suggests disagreement over the joint statement’s adequacy.
The Legal Framework: Opt-Out vs. Prior Consent
The Jurisdictional Divide
The Shueisha-OpenAI dispute crystallizes a fundamental conflict between two copyright models:
| Dimension | Japan (Prior Consent / Opt-In) | OpenAI’s Model (Opt-Out) |
|---|---|---|
| Default Rule | Copyright owners must authorize use; AI companies must obtain permission before training. | AI systems may train on any data unless owners affirmatively object. |
| Burden of Action | Rests on AI companies to source licenses or consent. | Rests on copyright holders to discover infringement and file opt-out requests. |
| Timing of Enforcement | Enforcement occurs before training occurs. | Enforcement occurs (if at all) after training is complete. |
| Data Removal | If consent withheld, data is not used; if revoked, model retraining may occur. | Even after opt-out request, already-trained parameters cannot be cleanly removed.[22] |
| Legal Basis | Copyright law views authorial rights as inherent property requiring prior permission. | Fair use doctrine in US law permits transformative uses of copyrighted material. |
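The “Data Removal” row can be made concrete with a toy sketch. This is plain Python and deliberately not any real training pipeline: the “model” below has a single parameter (the mean of its training data), but the point generalizes — once a work has shaped a learned parameter, deleting the raw file afterwards changes nothing, and the only clean remedy is retraining without it.

```python
# Toy illustration of why post-hoc opt-out cannot undo training: the
# "model" here is just the mean of its training data, but the lesson
# generalizes -- removing the source file does not change the parameter.
from statistics import mean

def fit(data):
    """Trivial one-parameter 'model': the learned parameter is the mean."""
    return mean(data)

licensed = [1.0, 2.0, 3.0, 4.0]
opted_out = [100.0]  # a work whose owner later files an opt-out request

theta_trained = fit(licensed + opted_out)  # trained before the opt-out arrived
theta_clean = fit(licensed)                # full retrain without the work

print(theta_trained)  # 22.0 -- the opted-out point's influence persists
print(theta_clean)    # 2.5  -- only retraining from scratch removes it
```

This is why Shueisha’s procedural critique targets timing: an opt-out filed after training affects future crawls, not the parameters already learned.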
Why This Matters Philosophically
Barbara Rasin (J.D. Candidate, 2027) argues that opt-out schemes fundamentally contradict property law principles:
“The opt-out model is antithetical to the notion of property. Copyright owners own exclusive rights in their creations. These rights are inherent; owners do not forfeit rights by failing to affirmatively reserve them. Copyright operates like ownership of a house or car—property rights are inherently protected and cannot be lost due to the owner’s passivity."[25]
Under this logic, OpenAI’s position reverses centuries of copyright jurisprudence. Rather than requiring AI companies to secure permission, opt-out imposes on creators the burden of continuous vigilance.
Why it matters: This is not merely a regulatory technicality. It reflects competing visions of intellectual property in the AI era.
International Government Intervention
Japan’s October 2025 Actions
On 2025-10-10, Minoru Jouchi, Japan’s Minister of State for Special Missions (IP & AI Strategy), delivered an official government statement:
“Anime and manga are irreplaceable treasures that the world can be proud of. The Japanese government expects AI firms to uphold cultural and legal boundaries."[6][13][16]
The language is diplomatic but the underlying message is forceful: the Japanese state is now a party to this dispute. Historically, such government intervention has often preceded legislative action.
Additionally, Akihisa Shiozaki, LDP Deputy Secretary-General and a lawyer, warned that the Sora 2 case raises “serious legal and political problems” and suggested invoking Article 16 of Japan’s 2025 AI Promotion Act, which grants the government authority to demand explanations of AI systems and filtering measures.[16]
By October 7, Digital Minister Masaaki Taira stated that OpenAI must ensure Sora 2 complies with Japanese AI standards or face potential government action.[16]
Why it matters: Government intervention typically precedes either negotiated settlements or statutory reforms. Japan has positioned itself as unwilling to accept OpenAI’s current practices.
CODA’s Formal Role
The Content Overseas Distribution Association (CODA), established in 2002 with support from Japan’s Ministry of Economy, Trade and Industry and Agency for Cultural Affairs, formally represents 36 member companies.[21][24][27] On 2025-10-28, CODA submitted a written request to OpenAI on behalf of members including Sony’s Aniplex, Bandai Namco, Studio Ghibli, Square Enix, Toei Animation, Kadokawa, and Shueisha.[3][8]
CODA’s position: “The act of replication during the machine learning process may constitute copyright infringement. Under Japanese copyright law, prior permission is in principle required before use of copyrighted works."[8]
Why it matters: CODA’s involvement signals coordination among Japan’s entire content industry, not isolated publisher concerns.
The Wider Copyright Dispute: Legal and Scholarly Divide
Thomson Reuters v. Ross Intelligence (February 2025): A Turning Point
The Delaware District Court ruled on 2025-02-11 in Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc. that AI companies cannot defend unauthorized use of copyrighted training data under the “fair use” doctrine.[15][18]
The Court’s Reasoning:
Non-Transformative Use: Ross used Thomson’s headnotes to create a competing legal research tool. The use was commercial and competed directly with Thomson’s own product, Westlaw. Thus, it was not “transformative” in character.[15]
Market Harm: The AI tool directly substituted for Thomson’s commercial offering, harming the original copyright holder’s market.[15][18]
Scope Limitation: Crucially, Judge Stephanos Bibas noted: “Only non-generative AI is before me today. How fair use factors will apply in generative AI cases (such as New York Times v. OpenAI) remains unresolved."[15]
Why it matters: While the ruling does not directly address generative AI, it signals that at least some U.S. courts recognize limits to fair use in AI training contexts. The outcome of higher-stakes cases (e.g., New York Times v. OpenAI, Google Books litigation) will likely extend this reasoning.
The “Fair Learning” Argument
In contrast, Stanford Law Professor Mark A. Lemley argues in the Columbia Science and Technology Law Review that AI training should be protected as “fair learning”:
“AI does not compete with authors. Instead, it uses their work fundamentally differently. ML systems copy works not to access their creative expression (what copyright protects) but to obtain uncopyrightable components—facts, ideas, linguistic structures. The concept of ‘fair learning’ suggests that even if traditional fair use factors argue against it, AI training on copyrighted materials should be considered fair, given that ML is extracting patterns and facts rather than replicating creative expression."[12][20]
Lemley further contends that purely machine-generated outputs are generally not eligible for copyright protection, undermining claims that AI unfairly displaces human creators.[12][20]
The Stalemate: The American legal establishment remains divided. While the Copyright Alliance warns against overly expansive “transformativeness” findings that favor AI companies,[14] other scholars defend fair use applicability to AI training. No Supreme Court precedent exists.
Why it matters: The unresolved U.S. debate contrasts sharply with Japan’s clearer position: no opt-out schemes; prior consent required.
European Regulatory Framework (August 2025 Implementation)
The EU AI Act, effective 2025-08-01 for new general-purpose AI (GPAI) models, imposes copyright compliance obligations:[11][17]
- Transparency: GPAI developers must publish summaries of training data sources.
- Copyright Respect: Developers must comply with EU copyright directives.
- Opt-Out Mechanisms: Copyright holders may reserve their works from text and data mining (TDM) via machine-readable formats.
However, implementation remains inconsistent across EU member states. Poland explicitly requires machine-readable opt-out signals; Italy does not. No standardized protocol exists.[28]
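One concrete example of a machine-readable reservation in current practice: OpenAI’s published crawler identifies itself as GPTBot, and a site can disallow it in robots.txt. Whether such a signal legally effects a TDM reservation under EU law is exactly the unsettled question above; the sketch below shows only the mechanical check, using Python’s standard-library robots.txt parser on an inline example (no network access).

```python
# Checking a machine-readable opt-out signal with the stdlib robots.txt parser.
# GPTBot is OpenAI's published crawler user-agent; whether a robots.txt rule
# constitutes a valid TDM reservation under EU law remains unsettled.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The publisher's pages are reserved against GPTBot but open to browsers.
print(parser.can_fetch("GPTBot", "https://example.com/manga/chapter1"))      # False
print(parser.can_fetch("Mozilla/5.0", "https://example.com/manga/chapter1")) # True
```

Note that this only governs future crawling; it says nothing about content already ingested, which is the gap Shueisha’s statement attacks.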
Why it matters: The EU’s approach is intermediate—stronger than the U.S. fair use position but weaker than Japan’s prior consent requirement. Yet even the EU’s opt-out mechanisms face technical and practical challenges.
OpenAI’s Response and Sam Altman’s Positioning
October 2025 Statement
OpenAI CEO Sam Altman responded to Japanese government concerns on 2025-10-04 by stating the company would “improve copyright filtering and safeguards."[16] However, no specific timeline, technical specifications, or enforcement mechanisms were disclosed.
Altman’s Longer-Term Position
More revealingly, Altman has publicly advocated for “balanced copyright” approaches in the “Intelligence Age,” cautioning that overly restrictive laws could hinder technological progress and advantage international competitors, particularly China.[10] He has also announced plans to introduce features allowing copyright owners to impose detailed restrictions on AI-generated depictions of their characters.[10]
The Tension: Altman simultaneously acknowledges copyright holders’ legitimate interests while resisting strong regulatory constraints. This mirrors Silicon Valley’s traditional position: innovation first; regulation later.
Why it matters: The gap between Altman’s vision (permissive frameworks with optional safeguards) and Shueisha’s demand (mandatory prior consent and legal reform) is unbridgeable absent external pressure.
Corporate Copyright Indemnity Programs
Microsoft, Google, and OpenAI now offer “Customer Copyright Commitment” programs providing indemnification to enterprise users sued for copyright infringement through their platforms.[11] However, these protections:
- Exclude General Users: Individual and SME users receive no indemnification.
- Exclude Copyright Holders: The original creators receive no compensation; only commercial users of AI outputs are protected.
- Do Not Address Training Liability: Indemnity covers output usage, not the legality of the training process itself.
Why it matters: These programs are corporate risk management, not content creator protection.
Structural Issues: The Core Tensions
Innovation vs. Creator Dignity: The False Binary
Shueisha’s statement rejects the framing that creator protection and technological progress are mutually exclusive. The statement reads:
“The evolution of generative AI should be welcomed, but not built upon trampling the dignity of creators."[1]
This is neither Luddite anti-technology sentiment nor naive utopianism. It asserts that responsible AI development includes:
- Fair Data Sourcing: Commercial AI systems should compensate creators or obtain explicit consent for training data.
- Transparent Attribution: The sources of training data should be auditable and traceable.
- Output Filtering: Systems should limit outputs that excessively mimic identifiable copyrighted characters.
These are not anti-innovation positions. They are pro-accountability positions. A company willing to implement such measures can remain profitable; it simply cannot treat creative work as free raw material.
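Of the three measures listed above, output filtering is the most directly implementable. The sketch below is a deliberately naive version — a string blocklist over hypothetical character names — whereas a production system would need trained visual and style classifiers rather than text matching.

```python
# Deliberately naive sketch of the "output filtering" idea: block generations
# whose text names a protected character. The blocklist entries are
# illustrative placeholders, not any real rights-holder registry.
PROTECTED_CHARACTERS = {"luffy", "goku", "gojo"}  # hypothetical blocklist

def violates_filter(text: str) -> bool:
    """Return True if the text mentions a protected character name."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return bool(tokens & PROTECTED_CHARACTERS)

print(violates_filter("A pirate named Luffy stretches his arm"))   # True
print(violates_filter("An original rubber-limbed pirate captain")) # False
```

Even this toy version shows the design tension: name-matching is trivially evaded by paraphrase, which is why rights holders push for filtering at the model and dataset level rather than at the prompt level.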
Why it matters: The binary “innovation vs. regulation” frame obscures the real issue: who bears the costs of innovation? If training costs are externalized onto creators (via unpaid data use), then AI costs appear artificially low. Fair pricing of training data would reveal the true development costs while funding creator economies.
Hallucination and Authenticity Erosion
Some Sora 2-generated videos were recognized as “almost identical” to originals.[13][16] This poses unique risks:
- Consumer Deception: Viewers may mistake AI outputs for authentic creator work.
- Trust Collapse: If AI-generated fakes become indistinguishable from originals, creators lose control over their public reputation.
- Individual Defense Impossibility: Manga creator Hirohiko Araki stated: “The extreme scenario is that AI-generated fakes might be accepted as the real thing, and even if we try to protect our work, these counterfeits are becoming so sophisticated that it’s no longer a fight an individual can win. Laws are probably the only way to regulate this."[23]
Why it matters: This moves beyond copyright economics into questions of epistemological integrity. If AI can reliably generate indistinguishable copies, copyright law itself becomes inadequate; authentication technology and legal presumptions must evolve.
Precedents and Future Scenarios
Potential Legal Pathways
1. Japanese Domestic Litigation Shueisha may file suit in Japanese courts under the Copyright Law (著作権法, 2025 amendments). Japanese courts have few precedents on AI training but likely lean toward copyright holder protection given statutory language and judicial conservatism. Success probability: moderate to high.
2. U.S. Litigation Shueisha could pursue claims in U.S. federal court under the Digital Millennium Copyright Act (DMCA) or Copyright Act. However, the fair use doctrine’s broader application in U.S. law creates uncertainty. Recent decisions (Thomson Reuters v. Ross) suggest some judicial skepticism toward fair use defenses in AI contexts, but SCOTUS precedent remains absent. Success probability: uncertain.
3. Regulatory Pressure Through CODA, Shueisha and allies may lobby the Japanese government for AI-specific copyright amendments or enforcement mechanisms. Given October 2025 government involvement, legislative action appears possible within 12–24 months.
Global Policy Convergence Scenarios
| Region | Current State | Likely Direction |
|---|---|---|
| Japan | Prior consent required; strong creator advocacy | Statutory amendments strengthening protections; possible AI-specific copyright crimes |
| EU | Opt-out via machine-readable signals (2025-08 onwards) | Standardization of opt-out protocols; potential shift toward opt-in for audiovisual works |
| US | Fair use debate ongoing; no SCOTUS precedent | Fragmentation likely: some circuit courts favor fair use; others adopt narrow view; Congress may intervene |
| UK | Post-Data Bill strike-down, opt-out approach stalled | Legislative reboot possible; alignment with EU or Japan remains uncertain |
Why it matters: Jurisdictions with strongest copyright protection (Japan) may become the de facto global standard if major tech companies refuse to maintain region-specific models.
Conclusion: Redrawing the Boundaries of Intellectual Property
The Shueisha-OpenAI dispute represents a civilizational moment for AI governance. Four conclusions merit emphasis:
1. Copyright as Default Norm, Not Permission Carve-Out
Shueisha’s statement reclaims copyright’s foundational principle: creators own their work and control its use by default. This stands opposed to OpenAI’s implicit framing that data is “ambient” and trainable absent explicit objection. If Shueisha prevails legally or through policy, the “opt-in” model (requiring AI companies to secure permissions) becomes the global norm, not opt-out.
2. Technology Innovation Does Not Require Creator Exploitation
Sam Altman’s framing—that strong copyright protections undermine AI progress—is empirically contestable. Well-compensated training data sources create markets for high-quality content, which incentivizes creation. Conversely, uncompensated use may temporarily accelerate AI development while undermining the source industries (anime, manga, literature) that generate training value in the first place. Shueisha’s position is: sustainable AI ecosystems require fair data economics.
3. National Regulation Will Shape Global Standards
Japan’s stringent position, backed by cultural significance and regulatory power, will likely influence global AI governance. If OpenAI cannot maintain Sora 2 under Japan’s copyright regime, it faces three options:
- Accommodate Japan: Implement opt-in, prior-consent systems (costly but feasible).
- Exit Japan: Withdraw services (costly to market access and brand).
- Litigate: Defend fair use in Japanese courts (uncertain outcome).
Given Japan’s content industry’s global influence, accommodating Japan’s standards likely becomes the path of least resistance, effectively globalizing those standards.
4. Moral Clarity Over Technological Murkiness
Shueisha’s “stealing with extra steps” phrasing rejects the euphemistic language of AI discourse—“training,” “fair use,” “transformative.” Instead, it reasserts moral clarity: unpaid, unconsented use of creative work is theft, regardless of the machine learning architecture. This moral repositioning may prove more politically potent than technical copyright arguments.
Summary
- Shueisha’s October 31, 2025 statement marks a watershed in AI regulation discourse, shifting from technological accommodation to moral principle.
- The opt-out vs. opt-in debate reflects competing visions of intellectual property: Silicon Valley’s permissive framework vs. Japan’s strict prior-consent model.
- Legal precedent (Thomson Reuters v. Ross) suggests U.S. courts are narrowing fair use defenses for AI training, though SCOTUS has not ruled.
- Japanese government involvement signals legislative action is likely; Japan may become the de facto global standard-setter for AI copyright governance.
- The core tension is unsustainable. OpenAI cannot simultaneously defend unlimited training rights and respect Japanese copyright norms. Policy convergence—favoring creator protection—appears inevitable.
Recommended Hashtags
#AIcopyright #GenerativeAI #Shueisha #IntellectualProperty #OpenAI #Sora2 #AnimeIndustry #AIethics #DigitalRights #TechRegulation
References
“Shueisha Official Statement” | Essential Japan | 2025-11-02
https://essential-japan.com/news/one-piece-and-dragon-ball-publisher-shueisha-accuses-ai-users-of-trampling-the-dignity-of-autho
“Japanese Publishers Take Stand Against OpenAI’s Sora 2” | Karmatic AI | 2025-11-02
https://karmatic.ai/japanese-publishers-take-stand-against-openais-sora-2-over-unauthorized-training/
“Studio Ghibli, Square Enix and Shueisha Also Ask OpenAI Sora” | News Imperium | 2025-10-02
https://news.imperium.plus
“AI is Just Stealing with Extra Steps” | Yahoo News | 2025-11-29
https://www.yahoo.com/news/articles/ai-just-stealing-extra-steps-160000029.html
“Sony’s Aniplex, Bandai Namco and Other Japanese Publishers Demand End to Unauthorized Training” | Automaton Media | 2025-10-30
https://automaton-media.com/en/news/
“Japan Asks OpenAI to Stop Using Anime Characters” | IGN / LinkedIn | 2025-10-18
https://www.linkedin.com/posts/rafaelbrown_japan-copyringhtinfringement-ai-activity-7385519144013766656-Au58
“One Piece and Dragon Ball Publisher Shueisha Threatens Legal Action” | IMDB | 2025-11-29
https://www.imdb.com/news/ni65554692/
“CODA Written Request” | GameSpot / Voice LaPass | 2025-11-07
https://voice.lapaas.com/openai-anime-manga-training-japan-studios/
“Sam Altman’s Perspective on Copyright Protection” | The New Publishing Standard | 2025-11-14
https://thenewpublishingstandard.com
“Generative AI – Addressing Copyright” | RPC Legal | 2025-09-21
https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/generative-ai-addressing-copyright/
“Japan Warns OpenAI Over Sora 2 Video Model Amid AI-made Anime” | AI Daily Post | 2025-11-28
https://aidailypost.com
“Mid-Year Review: AI Copyright Case Developments in 2025” | Copyright Alliance | 2025-08-27
https://copyrightalliance.org
“Court Rules AI Training on Copyrighted Works Is Not Fair Use” | DG Law | 2025-02-26
https://www.dglaw.com
“Japan Warns OpenAI Over Anime & Manga-style Sora 2 Videos” | Windows Report | 2025-10-15
https://windowsreport.com
“The AI and Copyright Law Policy Dilemma” | Pennington’s Law | 2025-11-23
https://www.penningtonslaw.com
“Thomson Reuters v. Ross Intelligence Fair Use Ruling” | Trademark Lawyer Magazine | 2025-03-19
https://trademarklawyermagazine.com
“Opt-Out Approaches to AI Training: A False Compromise” | Berkeley Technology Law Journal | 2025-04-17
https://btlj.org
“One Piece Publisher Shueisha Vows Appropriate And Strict Measures” | Bounding Into Comics | 2025-11-02
https://boundingintocomics.com
“The Question of the Opt-out Model for AI Training” | GALA Law Blog | 2025-04-13
https://blog.galalaw.com
“Content Overseas Distribution Association” | Wikipedia | 2022-11-22
https://en.wikipedia.org/wiki/Content_Overseas_Distribution_Association
“AI Training and Opt-out Mechanisms for EU Copyright Holders” | Traple | 2025-06-12
https://www.traple.pl