The Reemergence of the “Generative AI × Copyright” Wall
In September, U.S.-based OpenAI released its new video-generation app “Sora.”
With just a text prompt, users can generate high-quality ten-second videos within minutes, and the app took the internet by storm within days. Yet problems emerged just as quickly: social media was soon flooded with videos resembling characters from beloved Japanese franchises such as Pokémon and Dragon Ball, sparking accusations of copyright infringement.
OpenAI CEO Sam Altman promptly acknowledged the issue and announced plans to introduce a system that allows copyright holders to control whether their works can be used for video generation.
However, this episode laid bare the blurred and culturally contingent boundary between “AI freedom” and “the protection of creators’ rights.”
Why the “Opt-Out” Model Became Controversial
At the root of the Sora controversy lies the opt-out model: a system under which an AI may use works for training or generation unless the copyright holder explicitly opts out.
While this model has gained some acceptance in the West, the situation in Japan is different.
In Japan, anime and game characters are not merely creative works—they embody a company’s brand identity and function as cultural assets that span merchandise, events, and media.
If such assets are used for AI training or generation without prior explanation or consent, it is hard to avoid the impression that they have simply been used without permission.
Disney Was Protected—But Japan Wasn’t?
Interestingly, some users noted that Disney characters such as Mickey Mouse could not be generated from the outset.
If true, this suggests that OpenAI may have coordinated with Disney in advance, fueling the perception that Japan was being treated lightly.
The gap between companies like Disney, which strictly manage their intellectual property, and Japan’s anime industry is wide—both legally and culturally.
Here emerges a new challenge: the need for international IP negotiation power in the AI era.
The Significance of Nintendo’s Firm Statement
On October 5, Nintendo declared on X (formerly Twitter):
“Regardless of whether generative AI is used, we will take appropriate action against anything we deem to infringe upon our IP.”
This statement sent a clear signal to AI developers and users alike:
Nintendo will judge solely on whether infringement exists, not on who or what created the work.
At a time when authorship in AI-generated works is increasingly ambiguous, this unwavering stance represents perhaps the final bulwark for rights protection.
Conditions for Coexistence Between AI Developers and IP Holders
CEO Altman has expressed an intention to introduce revenue sharing for rights holders and to shift toward an opt-in model, under which works may be used only with the holder's explicit consent.
While these are steps forward, mere licensing and payment mechanisms are not enough.
What is essential is dialogue grounded in an understanding of both AI systems and cultural context.
AI is no longer just a tool—it is reshaping the very landscape of cultural expression.
Therefore, what we need is a new ethical framework that balances freedom of use with respect for culture—a kind of “AI Cultural Compact.”
Redefining “Respect” in the Age of AI
The Sora controversy is not merely a technical mishap.
It is an event that compels us to ask: What does it mean to show respect for creative works in the age of AI?
Altman said, “I want to express my gratitude for Japan’s incredible creativity.”
But true gratitude goes beyond words—it lies in building systems that treat both rights and culture with equal respect.
The future of co-creation between humans and AI will not emerge from mere convenience, but from the reconstruction of respect itself.