Gen AI comes with great promises – and even greater responsibilities. As a privacy expert, I meet many companies concerned about how to safeguard data privacy while exploring Gen AI. This is because AI systems often rely on large amounts of personal data to learn and make predictions, which raises questions about how such data is collected, processed and stored. Fortunately, for most businesses trustworthy AI is a prerequisite for adopting Gen AI at all.
The question is how you safeguard data in the Gen AI context without slowing down innovation and risking missed business opportunities. Based on my experience, these are the necessary elements to consider when initiating a Gen AI project in a business context – all to avoid unpleasant surprises along the road.
- Learn about the risks. The main privacy concerns surrounding AI are the potential for unlawful use of personal data and unauthorized access to personal information. Personal data in the wrong hands can have severe consequences. There is also a concern that AI systems may reinforce existing biases. Industries with access to large amounts of personal data – such as banking, insurance, health care and life sciences – are keen on Gen AI projects but risk using their data in ways they are not allowed to.
- Take regulations very seriously. The EU AI Act is likely to take effect in 2026 (EU negotiators just reached a provisional agreement) – and then everyone will need to adhere to it; companies that don't will face fines. Why is it needed when we already have GDPR? Because GDPR protects individuals in relation to the processing of personal data, not in relation to the outcome of that processing. Nor does GDPR cover the processing of non-personal data and its outcome. The proposed legal framework focuses on the specific uses of AI systems and their associated risks, not on the technology itself, and covers AI used by companies, the public sector and law enforcement.
- Think governance before getting started. The first question a client normally asks me is "How do we practically get started with Gen AI?" My recommendation is to consider governance first – before you even begin a tangible project. How do you control your data and your AI system? You need processes, responsibilities, instructions, transparency and explainability. Set up a governance structure around AI before you even start testing.
- Ensure privacy in training data. Gen AI is very different from traditional AI. The magic happens in the "black box", the space where Gen AI merges information and produces output – and no one knows in detail what happens in there. How do you explain that process to a customer who initially consented to the use of their personal information for purpose X but not for purpose Y? And what happens if a person wishes to withdraw their data? In practice, it is nearly impossible. In my opinion, the safest way is to train AI on anonymized or synthetic data; Cognizant works with Anonos to create "data without drama" for training Gen AI models. Another risk arises when employees feed business-related information into a Gen AI tool like ChatGPT, because that potentially confidential information may be used to train the model further. Here, you can add a filtering layer that strips confidential data from prompts before they leave the organization.
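To make the filtering idea concrete, here is a minimal sketch of such a prompt-filtering layer. Everything in it – the `redact` function, the regex patterns and the placeholder labels – is a hypothetical illustration, not a description of any specific product; a real deployment would rely on a vetted DLP or NER-based tool rather than regexes alone.

```python
import re

# Hypothetical PII patterns to scrub before a prompt leaves the organization.
# Real systems need far more robust detection (names, IDs, NER models, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace matched PII with a labeled placeholder before sending the
    prompt to an external Gen AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact Jane at jane.doe@example.com or +46 70 123 45 67."))
```

The design point is that the filter sits between employees and the external tool, so confidential details never reach the model provider in the first place.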
- Let the experts in early. Last but not least: bring privacy experts into the loop early so that you design for privacy from the ideation phase onward. Why? It's all about trust; a customer, partner or investor needs to be confident that your Gen AI models are ethically and morally sound. Cognizant has the world's largest unit exclusively dedicated to data privacy – I call it Cognizant's best-kept secret. With 20+ people holding the highest level of IAPP (International Association of Privacy Professionals) certification and a pool of 200+ skilled professionals, we bring unparalleled expertise in protecting personal and person-related information while empowering businesses to harness the innovative power of Gen AI.
Gen AI holds the potential to become a game-changer in many areas, but it also poses risks to society as we know it. Let's use the technology responsibly and for the common good; safeguarding data privacy is an important part of that responsibility.