<p><span class="small">November 26, 2025</span></p>
<h3>Overlooking responsible AI is a risk that cannot be ignored</h3>
<p><b>Ensuring responsible AI is often considered late in the game. But to avoid legal and financial risk, it needs to be addressed early—before the first prompt is even typed into the AI system.</b></p>
<p><a href="https://www.weforum.org/stories/2025/11/why-the-risk-of-overlooking-responsible-ai-can-no-longer-be-ignored/" target="_blank" rel="noopener noreferrer"><i>Previously published</i></a><i> by the World Economic Forum in November 2025.</i></p>
<p>Your product team just delivered a breakthrough feature, developed in record time. They proudly describe their use of an AI model to accelerate research and design. Everyone’s celebrating the win.</p>
<p>Meanwhile, behind the scenes, that same model may have been trained on proprietary data your company doesn’t own. Worse, the team might not know what data the model used. And even if they do, they might not fully understand, or even be aware of, the terms and conditions governing its use.</p>
<p>That lack of clarity could quickly turn into legal exposure. What looked like innovation could trigger an intellectual property lawsuit that costs the company millions and erodes customer trust.</p>
<p>This scenario is not far-fetched. With the rapid uptake of AI across the business and government landscape, it’s a cautionary tale about the growing importance of responsible AI.</p>
<h4>Responsible AI and data lineage</h4>
<p>Plenty of businesses are aware of the need for responsible AI. But many treat it as an afterthought or a separate workstream—something the legal team or compliance office will address after a system is built.</p>
<p>However, responsible AI is much more than a side project or a footnote in a governance policy. Especially when it comes to understanding and explaining AI data lineage, responsible AI is a frontline defense against serious legal, financial and reputational risk.</p>
<p>Here’s an example of why. Most commercially available and open-source large language models in common use today are trained on vast amounts of data, including data that is proprietary or restricted to a particular use. 
The data might have been pulled from a corporate website, an academic journal, an open-source repository with a restrictive license, a government dataset or a social media platform containing personal data.</p>
<p>The fact that these models are so widely available from major vendors leads many companies to assume their use carries no legal risk. They rarely stop to ask—or even think about—where the data inside the models comes from or whether they’re legally allowed to use it in the ways they intend.</p>
<p>However, while the model itself may be legal to use, the data it was trained on often is not. And when businesses use that data to design a new product, generate marketing content or build a customer-facing application, they may unknowingly expose themselves to legal action, even long after their AI-powered innovation has been deployed.</p>
<p>It’s not as though model vendors fail to include legal disclaimers—many do, and even open-source licenses often spell out terms and conditions on the proper use of their data and models. The issue is that businesses often aren’t aware of these disclaimers or underestimate the consequences of failing to take them seriously.</p>
<p>The fact that these disclosures exist puts the responsibility squarely in the hands of the businesses using these models. Unfortunately, very few people actually read them. And as any legal scholar will tell you, ignorance of the law is no excuse.</p>
<h4>A ticking legal time bomb</h4>
<p>I have little doubt that legal firms around the world are already working with AI experts to uncover weaknesses in AI data use. These weaknesses could then be exploited in litigation or class-action lawsuits. Any organization that can’t clearly explain its data lineage or demonstrate responsible use of its data could be vulnerable.</p>
<p>Once the first lawsuit is launched, it will mark the beginning of an unstoppable trend. 
Now that AI is so widely used, the opportunities for legal action are endless.</p>
<p>It’s also just a matter of time before we see governments levying fines and penalties to enforce legal data use. Already, the EU AI Act mandates explainability, data lineage and ethical use, and the NIST AI Risk Management Framework recommends the same. Just as sustainability audits are standard practice today, responsible AI audits will become a matter of course tomorrow.</p>
<h4>Avoiding AI data hazards</h4>
<p>But there are ways to avoid these costly mistakes. The ideal scenario is to embed trusted data practices and master data management from the start. Any AI framework should be built on a solid foundation of responsible AI that accounts for IP ownership, data lineage and the provenance not just of the data but of the AI models themselves. When these principles are treated as core design requirements rather than an afterthought, organizations can innovate confidently while minimizing legal and financial risk.</p>
<p>In many cases, though, businesses will need to retroactively assess the data used in their AI systems. This is where we’ll see the rise of new roles to mitigate risk. Data engineers, for instance, will become data pruners—people specifically skilled in identifying and removing unauthorized or high-risk data from models. We’ll also see quality assurance reengineers, capable of validating AI outputs, ensuring compliance with responsible AI standards and reengineering models to meet legal and functional requirements.</p>
<p>Once non-compliant or unauthorized data is removed, many companies will turn to synthetic data as a safer alternative, allowing them to retrain models without compromising IP integrity or regulatory compliance.</p>
<p>Ultimately, we may see companies shift from general-purpose models to tailored AI systems built on clean, owned data. This transition will significantly reduce dependency on generic models. 
By investing in custom models, organizations will gain greater control, transparency and legal confidence in how their AI operates.</p>
<h4>Moving forward with confidence in AI</h4>
<p>As AI evolves, respecting data lineage and IP will become a critical test of responsible AI leadership. But beyond being a good corporate citizen, businesses will need to think of responsible AI as a firewall between innovation and costly legal and financial risk.</p>
<p>Organizations that build with responsible AI principles from the start will not only stay protected; they’ll be positioned to move forward with confidence in unlocking long-term value from AI.</p>
<p>Dr. Kathrin Kind-Trueller, Director for AI and Advanced Analytics at Cognizant, began her career in 1999 at Siemens. She has developed ADAS, engine management, and autonomous driving functions at Bosch, ZF-TRW, BMW, and more. She holds a doctorate in AI and multiple advanced degrees.</p>