
February 12, 2024

Gen AI and content: Balancing risks and rewards

4 steps for media companies seeking to lead the way in a brave new world.

Never has the media industry encountered as nuanced a technology as generative AI.

On the one hand, the technology offers limitless opportunities to create new products and thrilling real-time experiences. On the other, it raises thorny legal issues for content creators and owners. 

Perhaps most intriguing, generative AI is poised to play a key role in solving the very challenges it has helped create.

For media companies, the rush of questions about revenue opportunities is matched by an equal number of unknowns regarding content rights. The industry is set for big changes, yet intellectual property (IP) owners and the content supply chain are unsure how, where, or when they will occur. 

As the industry buckles up, we’ve listened to clients and can offer guidance on how to balance the risks and rewards by proactively protecting content rights, exploring global rights management, fostering healthy creator ecosystems, and emphasizing policy and governance.

Can generative AI be a content creator?

Content rights have always been complex for the programmers who manage and safeguard content. Now, generative AI tools that scrape text, images, videos, and music to train large language models (LLMs) are throwing a wrench into rights management processes already strained by distribution across streaming and digital channels in addition to linear TV and the box office.

If 2023 was a year of shock and awe at AI’s rapid adoption, 2024 will be about deploying it in production and monetizing it.

Our own recent research indicates adoption will follow an S-curve similar to those traced by other tech advances: a gradual rise, then a dramatic spike, then a plateau in which the technology becomes refined and pervasive.

Not surprisingly, generative AI’s rise is causing concern among creators and owners. So far, they have few options for controlling LLMs’ use of their content. Google-Extended remains a notable exception—it empowers website publishers to opt out of having their content and data used to train AI models and still appear in search results.

AI was at the heart of last year’s strikes by entertainment writers’ and actors’ guilds, and Hollywood executives continue to debate the technology’s impact on studios and artists. A diverse roster of creators is turning to copyright law for protection; The New York Times, Getty Images, and comedian Sarah Silverman have all raised similar legal objections: that LLMs are trained on content they’ve created without their consent. Musicians are contemplating hitting the picket line over issues that include AI, and legal actions extend to code as well. A class-action lawsuit targets Microsoft, GitHub, and OpenAI, alleging that the AI-powered coding assistant GitHub Copilot was trained on public repositories in violation of open-source license terms.

These developments signal a simmering tension over generative AI’s intersection with copyright law. In the U.S., copyright protection is denied to AI-generated output that lacks human authorship, but the law is considered ambiguous and subject to interpretation. A fair use defense depends on whether the AI output is deemed transformative, that is, whether it uses copyrighted work in ways that are distinctly different from the original. A closely watched 2023 decision by the U.S. Supreme Court found that one of pop artist Andy Warhol’s silkscreened portraits wasn’t transformative and didn’t constitute fair use.

How that outcome will impact digital creators is still unknown. Meanwhile, copyright of AI-generated works remains under scrutiny. A federal judge ruled recently that AI-created artwork was ineligible for copyright protection, and the U.S. Copyright Office canceled the copyright registration for a partially AI-generated graphic novel.

Navigating the new content landscape

Given rising industry concerns and potential legal complications, there’s a pressing need for media companies to prioritize mitigation strategies and protective measures—and a role for generative AI to play in the process. Here are four recommendations on ways content owners can use the technology to navigate this new landscape.

  1. Proactively protect content rights in new ways. Content owners, producers, and distributors provide the first line of defense when it comes to safeguarding content and IP. Digital rights management (DRM) software is becoming increasingly pivotal for identifying risks and developing protection strategies using advanced techniques such as machine learning (ML).

    Generative AI, however, makes protecting content easier and more systematic in several ways:

    • Content categorization: Generative AI tools can automatically categorize new content based on metadata, genre, and other factors.

    • Dynamic DRM application: Based on content categorization and license parsing, gen AI automatically applies appropriate DRM settings to each piece of content.

    • Geographic customization: DRM settings adapt based on the geographic location of the user, in compliance with local laws and licensing agreements.
  2. Implement a global rights management system. Because media companies typically maintain separate rights management systems for each geographic region, a single global system has always seemed distant at best. No more. Generative AI-driven automation can, for example, scan licensing agreements to extract key terms such as geographic restrictions or expiration dates, bringing a comprehensive global rights management system within reach. It’s a key advance for these systems, which feed media supply chains and yet are often underused. While no small undertaking, a global rights system holds the potential for operational efficiency, improved IP control, and better content monetization.

  3. Foster transparency, respect, and partnership within the content ecosystem. AI systems aren’t black boxes immune to basic copyright protections. Building a sustainable business model that incorporates AI requires partnerships that bring content creators and producers together with AI companies. Failure to create inclusive ecosystems typically increases resistance to change. Inclusivity matters all the more because one of generative AI’s most intriguing qualities is its potential to serve as an equalizer.

    For the successful adoption of disruptive technologies, content creators must work closely with publishers and other users to develop a win-win proposition. That could come in the form of use cases that build more efficient content operations by automating mundane tasks, recreating IP for multiple distribution channels, and incentivizing creators to adopt new tools to develop more creative content.

  4. Double down on policy and governance. Content owners should partner with guilds, government, and individual contributors to prepare governance and acceptable usage policies. These policies will provide a path forward for both legal consumption of content and fair compensation for it. A governance framework also addresses ethical, societal, and economic concerns surrounding generative AI. Governance is top of mind for most media companies, and many are participating in guilds and government organizations. They’re critical players in the evolving legal and technology landscapes.
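To make the first two recommendations concrete, here is a minimal sketch of how extracted license terms and automated categorization could drive dynamic, geography-aware DRM settings. Every name, type, and rule below is hypothetical: in a real system, `categorize` and the license terms would come from LLM calls against actual metadata and licensing agreements, not the stubs shown.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LicenseTerms:
    """Key terms a gen AI parser might extract from a licensing agreement."""
    allowed_regions: set  # territories where distribution is licensed
    expires: str          # ISO date on which the rights lapse

@dataclass
class DrmSettings:
    encryption: str
    offline_download: bool

def categorize(metadata: dict) -> str:
    """Stub standing in for an LLM call that categorizes content
    from its metadata (genre, format, and other factors)."""
    genre = metadata.get("genre", "").lower()
    return "premium" if genre in {"film", "series"} else "standard"

def drm_for(metadata: dict, terms: LicenseTerms,
            user_region: str) -> Optional[DrmSettings]:
    """Apply DRM settings per item, honoring the user's geography."""
    if user_region not in terms.allowed_regions:
        return None  # outside licensed territory: block playback
    if categorize(metadata) == "premium":
        return DrmSettings(encryption="hardware-backed", offline_download=False)
    return DrmSettings(encryption="software", offline_download=True)
```

A call such as `drm_for({"genre": "Film"}, LicenseTerms({"US", "CA"}, "2026-01-01"), "DE")` returns `None` because the request falls outside the licensed territories, while the same title requested from the US would get hardware-backed encryption. The design point is simply that parsed license terms, not hand-maintained rules, gate each decision.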

As the media industry navigates generative AI’s complex terrain, it’s critical to strike a balance between leveraging its potential for innovation and addressing the legal challenges.

For content owners, the goal is to proactively protect IP through advanced digital rights management strategies, global rights management systems, and collaborative ecosystems with an emphasis on policy and governance. The key is integrating these strategies to harness gen AI's benefits while ensuring protection for creators' rights and contributions, thereby fostering a sustainable and legally compliant content landscape.

In a follow-up post, we will explore steps companies can take to position themselves in the content supply chain to leverage safe, responsible AI.

For more information on the market for generative AI, read our report, New work, new world.

Catherine Karas

Director – Consulting, Comms, Media & Technology


Catherine has 20+ years of executive and digital transformation consulting experience, specializing in disruptive technologies such as gen AI and their impact on the content value chain within CMT. She has led key initiatives to drive revenue growth, define advertising tech strategy, and improve operational efficiency.

Kevin Ghorm

Senior Consultant, Comms, Media & Technology


Kevin Ghorm is a senior consultant with experience in strategic planning and business development. He has 10+ years of experience within media & entertainment, with a particular focus on royalties & rights management.

Praneeth Reddy

Manager – Consulting, Comms, Media & Technology


Praneeth has 10+ years of strategy consulting and delivery experience for leading media and entertainment clients, specializing in automating, optimizing, and enhancing enterprise applications and business processes with a focus on artist payments, studio ticketing, and advertising.
