Unpacking the IPR Dilemma in the Age of Generative AI
Generative AI has taken the world by storm. From writing essays and generating images to coding software and mimicking human voices, these tools are reshaping how we work, create, and communicate. But as we lean further into this AI-driven future, one critical question remains unresolved: who owns the content created by AI?
This isn’t just a philosophical musing—it’s a legal, economic, and ethical puzzle. And at its heart lies the issue of intellectual property rights (IPR).
In this article, we break down the key challenges and real-world implications surrounding copyright, fair use, trade secrets, and the global legal response to AI-generated content—drawing insights from a detailed 2024 report by the World Intellectual Property Organization (WIPO).
What’s Generative AI, Anyway?
Generative AI tools like ChatGPT, Midjourney, and GitHub Copilot are trained on massive datasets and produce new content—text, images, code, music, and more—based on human prompts. But here’s the twist: the data used to train these tools often includes copyrighted material, personal data, and proprietary content scraped from across the web.
This raises two immediate concerns:
Does training AI on copyrighted works infringe on existing IP rights?
Can the outputs of AI themselves be copyrighted—and if so, by whom?
The Great Copyright Dilemma
Copyright laws were written in an era when only humans created art, literature, and other original works. AI complicates that. In most countries, for a work to be protected by copyright, it must be created by a human.
According to WIPO, some countries like the U.S. reject copyright for AI-generated works unless there’s clear human authorship. But others—like the UK, India, New Zealand, and South Africa—grant copyright to “computer-generated works” even without human creators. Recently, a Chinese court upheld copyright for an AI-generated image because the human user made detailed choices in crafting the prompt.
So we’re in a global gray zone. If you’re a business using GenAI to create content, don’t assume you automatically own the rights.
Are You Infringing Without Knowing It?
Many GenAI tools are trained on content that may be under copyright. If the tool spits out something that closely resembles a copyrighted work—or includes brand names, logos, or visual styles—you could be on the hook for infringement.
And liability isn’t limited to developers. Users of these tools can also face legal risk, especially in jurisdictions where copyright infringement is treated as a strict-liability matter: using infringing content, even unknowingly, can lead to penalties.
WIPO advises businesses to:
Choose tools trained on licensed or public domain datasets
Avoid prompts that reference third-party IP
Run plagiarism and image checks before publishing content
What About Fair Use and Parody?
In some jurisdictions (like the U.S.), “fair use” might apply to AI training—especially for research or parody. But this is far from universal.
There’s also legal debate on whether fair use can justify the scraping of billions of copyrighted works to train AI models. So far, courts have offered mixed signals. With lawsuits pending in multiple countries, the fair use argument remains risky for commercial use.
The Invisible Risks: Trade Secrets and Confidential Info
Ever typed sensitive information into an AI tool? You may have unknowingly compromised a trade secret.
Some GenAI tools store user prompts for quality control or even model improvement. This means that confidential information—like proprietary strategies, financial data, or legal drafts—might be saved or reused.
WIPO recommends:
Using private-cloud or enterprise versions of AI tools
Training staff to avoid entering confidential info into AI prompts
Implementing strict access controls and prompt monitoring
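That last recommendation, prompt monitoring, can be made concrete. The sketch below is a minimal illustration (not WIPO guidance): it redacts a few hypothetical sensitive patterns from a prompt before it leaves the organization. The pattern names and regexes are assumptions for this example; a real deployment would use a dedicated data-loss-prevention tool with patterns tuned to the business.

```python
import re

# Illustrative patterns only -- a real deployment would use a proper
# DLP (data loss prevention) tool with patterns tuned to the business.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace sensitive matches with placeholders before the prompt
    is sent to an external AI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

print(redact_prompt("Contact jane@acme.com, key sk-abc123def456ghi789"))
# → Contact [REDACTED-EMAIL], key [REDACTED-API_KEY]
```

A filter like this sits naturally in an enterprise gateway in front of the AI tool, where it can also log which prompts were redacted for audit purposes.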
The Open Source Trap
AI-generated code may reproduce material from open-source projects—many of which carry licenses requiring attribution or limiting commercial use.
If you’re integrating AI-generated code into your proprietary product, you might unintentionally violate these licenses. Copyleft licenses such as the GPL could even force you to release your own source code under the same open terms—something most businesses would want to avoid.
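As an illustration of the kind of check a code-review pipeline might run, the sketch below searches a generated snippet for a few well-known copyleft markers. The marker list is an assumption made for this example; real workflows rely on dedicated license scanners (for instance SPDX tooling) plus legal review.

```python
# A few well-known copyleft markers. This short list is illustrative
# only; real license scanners recognize hundreds of licenses.
COPYLEFT_MARKERS = [
    "GNU General Public License",
    "SPDX-License-Identifier: GPL",
    "SPDX-License-Identifier: AGPL",
    "Mozilla Public License",
]

def flag_license_risk(code: str) -> list[str]:
    """Return any known copyleft markers found in a generated snippet,
    so a human can review it before it enters a proprietary codebase."""
    return [marker for marker in COPYLEFT_MARKERS if marker in code]

snippet = "// SPDX-License-Identifier: GPL-3.0-only\nint main() { return 0; }"
print(flag_license_risk(snippet))  # → ['SPDX-License-Identifier: GPL']
```

A hit from a check like this doesn’t prove infringement—it simply flags the snippet for human and legal review before it ships.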
Deepfakes and Voice Rights
GenAI can now clone voices, faces, and mannerisms—often with scary accuracy. While this opens creative doors, it also risks violating privacy and publicity rights.
Countries like China have already enacted laws mandating clear labels for “deep synthesis” content. WIPO urges organizations to get explicit consent before synthesizing someone’s likeness or voice.
The Global Legal Patchwork
From the EU’s upcoming AI Act to China’s rapidly issued AI rules, legal systems are racing to catch up. WIPO highlights the need for international harmonization on key IP questions—but for now, businesses must navigate a fragmented and evolving regulatory landscape.
A Practical Checklist for Businesses
To protect your brand, reputation, and legal standing, here’s what WIPO suggests:
Train your teams on IP risks and AI limitations
Avoid using third-party brands or copyrighted works in prompts
Use tools trained on licensed datasets
Label and record AI-generated content
Get clarity on ownership via contracts and terms of service
Final Thoughts
GenAI offers incredible potential—but with it comes profound legal uncertainty. For creators, businesses, and lawyers alike, the IPR conversation is just beginning. Until clearer global laws emerge, the safest path is one of transparency, due diligence, and informed use.
As WIPO rightly puts it, navigating intellectual property in the age of AI is not just about compliance—it’s about future-proofing your innovation.
Source: WIPO (2024). Navigating Intellectual Property and Generative AI. World Intellectual Property Organization. https://www.wipo.int/ai

Mazharul Islam,
Corporate Legal Practitioner,
Member of Harvard Business Review Advisory Council.
He can be reached at mazhar@insightez.com
