In light of recent news about AI companies’ mass ingestion of existing works, and the lawsuits and settlements that have followed, I will outline the personal and societal dilemma between “ownership of creative works” and “the rapid pace of AI development.” If you are an artist, you want to “protect your soul.” If you are an entrepreneur, you want to “move fast with the best tools.” This article explores what it realistically takes to honor both at the same time.
The Current Landscape (Legal and Practical Snapshot)
In recent years, legal backlash has grown against the practice of training AI on vast amounts of data ingested from the web. Multiple large-scale lawsuits are underway, and some have already ended in settlements. Newspapers, image agencies, studios, and individual creators have raised concerns that their styles and works have been used without permission. The court battles revolve around whether ingesting works as training data constitutes copyright infringement or qualifies as “fair use,” and whether companies have an obligation to compensate rights holders.
Why Emotions Run High (The Artist’s Perspective)
- Loss of ownership
A work is not just a product but contains the creator’s expression and personal value. Having it used as training material without permission is equivalent to losing the right to control its distribution and use.
- Economic and opportunity loss
If AI can mimic a style, it risks reducing commissions and revenue. This is why lawsuits are emerging across publishing, photography, and music.
- Cultural degradation
If anyone can quickly generate works “in that style,” individuality and scarcity erode, raising fears of long-term harm to creative culture.
Why Entrepreneurs See It as “Inevitable” (The Entrepreneur’s Perspective)
- Competition and speed
The quality of AI products depends heavily on training data. Companies aiming to build the best AI quickly need large and diverse datasets, and strict regulation or prior-permission requirements slow development and increase the risk of losing out to competitors.
- Precedent from past innovation
Past internet-driven advances, such as search and social media, often followed a “build first, regulate later” trajectory. Entrepreneurs want to replicate this.
- Practical difficulties
Obtaining prior permission from every rights holder is unrealistic, due to the sheer volume of works and the difficulty of identification and negotiation. Many companies therefore ingest data first and settle later.
Middle Ground (Practical Guardrails and Viable Options)
Here are several realistic compromises to protect both sides, blending policy proposals with operational practices:
- Opt-in/opt-out creator data pools
A formal platform where creators voluntarily contribute their works (and can withdraw them) under revenue-sharing terms.
Pros: Transparency, and revenue flowing back to creators.
Cons: Not all high-quality data will be included.
- Compensation funds or rights-credit systems
AI companies contribute funds proportional to the works used (or suspected to be used) in training.
Pros: Easier to compensate small creators.
Cons: Distribution rules are difficult to design; one toy pro-rata rule is sketched after this list.
- Dataset provenance and transparency
Companies disclose, even in broad terms, what data was used for training, so that rights holders can audit it. Transparency reduces disputes; a minimal provenance-record sketch also follows this list.
- Style protection and labeling rules
While direct copying and style imitation differ legally, trademark-like protections or origin labeling (e.g., warnings for “inspired by X style”) could help avoid consumer confusion.
- Low-friction licensing tools for small creators
Platforms could offer one-click interfaces for granting usage rights and setting fees, making it easy for individual creators to license their work.
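To make the distribution problem concrete, here is a minimal sketch in Python of one naive rule for such a fund (it would apply equally to a revenue-sharing data pool): each rights holder is paid in proportion to how many of their works appear in the training corpus. The function name, the fund size, and the usage counts are all hypothetical; a real scheme would also have to weigh work length, market value, and how confident anyone can be that a given work was actually used.

```python
def pro_rata_payouts(fund_total: float, usage_counts: dict[str, int]) -> dict[str, float]:
    """Split a compensation fund in proportion to per-creator usage counts.

    Toy rule for illustration only: it ignores work length, market value,
    and uncertainty about whether a work was really in the training set.
    """
    total_uses = sum(usage_counts.values())
    if total_uses == 0:
        return {creator: 0.0 for creator in usage_counts}
    return {
        creator: fund_total * count / total_uses
        for creator, count in usage_counts.items()
    }


# Hypothetical numbers, purely for illustration.
payouts = pro_rata_payouts(
    fund_total=1_000_000.0,
    usage_counts={"photographer_a": 1_200, "illustrator_b": 300, "novelist_c": 500},
)
for creator, amount in payouts.items():
    print(f"{creator}: {amount:,.2f}")
```

Even this trivial rule surfaces the hard questions: should a novel count the same as a single photograph, and who verifies the counts?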
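Dataset provenance and low-friction licensing also come down to the same operational habit: keeping a machine-readable record for every item that enters a training corpus. The sketch below is a hypothetical schema, not an existing standard; every field name (license, opted_out, usage_purpose, and so on) is an assumption chosen to show the kind of information an auditable log would need.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json


@dataclass
class TrainingItemRecord:
    """One entry in a hypothetical provenance log for a training corpus."""
    item_id: str        # internal identifier for the ingested work
    source_url: str     # where the work was obtained
    rights_holder: str  # creator or agency, if known
    license: str        # e.g. "opt-in-pool-v1", "CC-BY-4.0", "unlicensed"
    opted_out: bool     # has the rights holder requested removal?
    ingested_on: date   # when the item entered the corpus
    usage_purpose: str  # e.g. "image-model training", "evaluation only"


# A record like this could be exported in bulk for audits or rights-holder queries.
record = TrainingItemRecord(
    item_id="img-000042",
    source_url="https://example.com/portfolio/sunset.jpg",
    rights_holder="photographer_a",
    license="opt-in-pool-v1",
    opted_out=False,
    ingested_on=date(2024, 3, 1),
    usage_purpose="image-model training",
)
print(json.dumps(asdict(record), default=str, indent=2))
```

The same kind of record is what a one-click licensing interface would write on a creator’s behalf, which is why the provenance and licensing proposals reinforce each other.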
Practical Advice for Entrepreneurs and AI Companies
- Establish governance for data procurement, including sources, logging, and usage purposes. Transparency directly reduces litigation risk.
- Prioritize licensed or opt-in datasets. They cost more in the short term but reduce long-term risk.
- Study lawsuits and settlements to estimate future compensation costs.
Checklist for Policymakers and Legal Teams
- Consider legal frameworks mandating attribution and provenance tracking of training data.
- Design compensation mechanisms accessible to small creators.
- Establish clear guidelines distinguishing academic/public-interest use from commercial use.
- Require labeling of AI-generated works (e.g., “AI-generated” or “inspired by X style”).
None of these steps can be put in place overnight, but each one directly reduces the “cost of confusion” in both practice and law.
Conclusion — Change Is Needed, Not Destruction
Whether you are an artist or an entrepreneur, the goal is the same: to create good things and build systems that reward them fairly. Court battles may not provide the ultimate answer, but the current wave of disputes is a strong signal that “rules must be made.”
Entrepreneurs want to move fast, and artists want to protect their souls; these goals can coexist. The keys are transparency, fair compensation, and easy-to-use licensing infrastructure. The friction between practical implementation and legal enforcement will continue, but through dialogue and compromise, a future in which neither side has to harm the other is possible. What we can do today is build small but meaningful mechanisms of agreement, step by step.