July 25, 2025
Building trustworthy AI together: Reflections from the AI for Good Global Summit 2025
What we learned at the AI for Good Summit 2025: advancing responsible AI through collaboration, implementation, and impact.
I recently returned from the AI for Good Global Summit 2025 in Geneva, and as always, I left inspired by the people, the ideas, and the sense of shared responsibility to ensure AI continues to serve all of humanity. What began as a bold experiment nearly a decade ago has grown into a global gathering of over 12,000 people, where leaders from government, academia, industry, and civil society come together to shape the future of AI.
As a founder of this journey from the beginning, I’ve watched the summit evolve from intimate discussions into a movement. What has remained constant is the spirit of shared purpose: to build AI systems that are safe, inclusive, and beneficial to all. It’s heartening to see that early vision now embodied in the standards being developed, the policies being debated, and the innovations showcased, whether by large AI companies or by contributors from the 140+ countries now shaping the agenda.
This year’s summit felt especially significant. The conversations were less conceptual and more focused on practical implementations grounded in real-world needs.
That shift from principles to practice came through clearly in several of the sessions I had the opportunity to join and contribute to.
Strengthening trust through testing and evaluation
One of the most meaningful discussions I took part in was the Open Dialogue for Trustworthy AI Testing, which brought together experts from the OECD, Oxford Martin School, UC Berkeley, and others to explore how we build more consistent and transparent approaches to evaluating AI in real-world settings.
A few challenges stood out. Many regions still lack access to the tools and frameworks needed to test and monitor AI safely. Standards vary widely. And across the board, there’s a tendency to treat testing as a milestone rather than a continuous process.
This is something I think about often in my work at Cognizant: how we ensure that AI systems, especially those used in high-stakes environments, are not only accurate but also reliable, fair, trustworthy, and resilient over time. Evaluation has to extend beyond performance metrics to include transparency, long-term reliability, and alignment with real-world conditions.
My key takeaway from the session was that trust in AI starts well before deployment. It begins with clear, shared evaluation frameworks that are tested in context and designed to evolve.
Securing agentic AI
Another recurring thread was the growing security challenges posed by agentic and multi-agent systems. These models introduce a new layer of complexity, especially when it comes to safeguarding data and coordinating behavior across distributed environments. As these technologies mature, the question isn’t just how powerful they are, but how we ensure they remain aligned with human values and governance structures that can keep pace.
Rethinking the future of work and learning
In another session, AI Futures: Reimagining Learning and Work in 2035 and Beyond, I joined a diverse group of educators, technologists, and policy leaders to explore how AI might reshape education and employment over the next decade.
Through scenario planning and open discussion, we surfaced a shared recognition: AI should be used to augment human capability, not replace it. Whether through personalized learning pathways, flexible credentialing, or support for lifelong education, the goal is to create environments where AI expands access and supports meaningful participation in the future economy.
What resonated most was the understanding that foresight is no longer a theoretical exercise. If we want AI to support a more equitable future, we have to design for that outcome now with both intention and collaboration.
Applying AI to global challenges
I was also proud to see our team from Cognizant AI Lab contribute to the summit’s applied mission during the session on AI for Agriculture: Shaping Standards for Smart Food Systems. Our team shared Project Resilience, an open platform built under the Global Initiative on AI and Data Commons. It brings together technical experts and decision-makers to develop AI tools that support sustainable agriculture, climate adaptation, and public health planning.
As part of the platform, we demoed the Irrigation Strategy Optimization project, a tool built on the AquaCrop simulator to help optimize irrigation timing and field strategies. Designed to improve crop yield, reduce water use, and lower management costs, it offers farmers and policymakers practical, data-informed support for more sustainable decisions in the field.
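To make the idea concrete, here is a minimal sketch of simulation-driven irrigation optimization of the kind described above. It is not the Project Resilience code: the yield model below is a toy stand-in for a real crop simulator such as AquaCrop, and the stage weights, water price, and candidate depths are all illustrative assumptions.

```python
from itertools import product

def simulated_yield(schedule):
    """Toy stand-in for a crop simulator such as AquaCrop.

    `schedule` is a tuple of irrigation depths (mm) applied at three
    growth stages. Yield responds with diminishing returns to water,
    and the middle (flowering) stage is weighted most heavily.
    All numbers here are illustrative, not agronomic fact.
    """
    stage_weights = (0.2, 0.5, 0.3)
    return sum(w * (1 - 2 ** (-depth / 25))
               for w, depth in zip(stage_weights, schedule))

def best_schedule(candidate_depths, water_price=0.002):
    """Grid-search candidate schedules, trading yield against water cost.

    Returns the first schedule achieving the best net score, where
    score = simulated yield minus a (hypothetical) per-mm water cost.
    """
    best, best_score = None, float("-inf")
    for schedule in product(candidate_depths, repeat=3):
        score = simulated_yield(schedule) - water_price * sum(schedule)
        if score > best_score:
            best, best_score = schedule, score
    return best, best_score

schedule, score = best_schedule((0, 25, 50))
print(schedule, round(score, 3))
```

The design point is that once a trusted simulator stands in for the field, strategies can be searched, scored, and compared cheaply before any water is committed, which is what makes this kind of tool practical for farmers and policymakers.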
One stat shared during the session stuck with me: globally, up to 40% of crop yield is lost before harvest, and another 40% after the harvest. That means just 20% of what’s grown actually reaches people. The scale of that inefficiency is staggering, and it makes clear how even small, targeted improvements through AI could have meaningful global impact. Tools like Project Resilience aren’t just technical demonstrations. They’re early examples of how AI can help reduce waste, strengthen food systems, and meet real human needs.
What’s next
Leaving Geneva this year, I felt something crystallize. The summit, and the broader movement it helped catalyze, is no longer just about AI for Good; it is about imagining better futures and building them, with clarity and care.
At Cognizant, we’re committed to building AI that earns trust. That means designing with context, investing in transparency, and embedding responsibility into every layer of development and deployment. This is how we start building AI that can help shape better futures.
The insight I carry forward is this: no single organization, platform, or government can do this alone. We need a shared infrastructure. We need patient collaboration. And we need to keep inviting new voices in, especially those not traditionally at the table.
Building trustworthy AI is not a destination but a discipline. A commitment to collective stewardship. And after eight years of nurturing and watching this movement take shape, I remain convinced: the most important work is still ahead.
Amir Banifatemi leads the company's efforts to ensure its AI technologies, services, and capabilities meet the highest standards of safety, reliability, and responsible innovation.