

August 25, 2025

How Neuro AI Helped Make History at the Cognizant Vibe Coding Event

How 53,000 associates came together for the world’s largest generative AI hackathon


On August 21, 2025, Cognizant’s Vibe Coding event made history by setting a new Guinness World Record for the most participants in an online generative AI hackathon. The event brought together 53,199 Cognizant associates across 40 countries in a global celebration of creativity, collaboration, and code. But behind the excitement of this achievement lay an immense challenge: how to fairly and transparently evaluate thousands of submissions in record time. The solution came from our Neuro AI Multi-Agent Accelerator.

Why We Turned to AI

Manually judging the 30,601 submissions would have been unmanageable. Our calculations showed it would require 8 employees working full time (7 hours a day) for more than a year—at 30 minutes per submission—to complete the task. Waiting that long would have drained the momentum from an event built on speed and energy. Instead, by harnessing AI agents, we completed the evaluations in less than a day of compute time—a dramatic leap in efficiency without sacrificing fairness.
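The estimate above is easy to sanity-check. A quick back-of-the-envelope calculation (the working-days-per-year figure is our assumption, not from the event):

```python
# Check the manual-review estimate: 30,601 submissions at 30 minutes each,
# reviewed by 8 employees working 7-hour days.
SUBMISSIONS = 30_601
MINUTES_PER_REVIEW = 30
REVIEWERS = 8
HOURS_PER_DAY = 7
WORKDAYS_PER_YEAR = 250  # assumed, roughly one full-time working year

total_hours = SUBMISSIONS * MINUTES_PER_REVIEW / 60        # ~15,300 hours
team_hours_per_day = REVIEWERS * HOURS_PER_DAY             # 56 hours/day
days_needed = total_hours / team_hours_per_day             # ~273 working days
years_needed = days_needed / WORKDAYS_PER_YEAR

print(f"{total_hours:,.1f} review hours, {days_needed:.0f} working days, ~{years_needed:.1f} years")
```

At roughly 273 working days for the whole team, the task would indeed stretch past a calendar year.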

How the Multi-Agent Accelerator Made It Possible

At the core of the first stage of evaluations was the Neuro AI Multi-Agent Accelerator, a platform purpose-built for orchestrating large-scale, collaborative AI systems. We designed a judging pipeline that reflected the depth of human evaluation while amplifying it with AI’s speed and consistency.

A parent orchestration agent managed the process, assigning submissions to the right evaluator agent and consolidating the results.

Specialized sub-agents focused on key rubric dimensions: Innovativeness, User Experience, Scalability, Market Potential, Ease of Implementation, Financial Feasibility, and Complexity.

Each agent produced not just a score, but also an explanation of its decision—ensuring transparency, accountability, and trust in the process.
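The orchestration pattern described above can be sketched in a few lines. This is an illustrative model only; the class and method names (`RubricAgent`, `Orchestrator`, and the placeholder scoring logic) are our assumptions, not the actual Neuro AI Multi-Agent Accelerator API:

```python
from dataclasses import dataclass

# Rubric dimensions named in the article; one specialized sub-agent per dimension.
RUBRIC = ["Innovativeness", "User Experience", "Scalability", "Market Potential",
          "Ease of Implementation", "Financial Feasibility", "Complexity"]

@dataclass
class Evaluation:
    dimension: str
    score: float       # e.g. on a 0-10 scale (assumed)
    explanation: str   # every agent must justify its score

class RubricAgent:
    """Hypothetical sub-agent focused on a single rubric dimension."""
    def __init__(self, dimension: str):
        self.dimension = dimension

    def evaluate(self, submission: dict) -> Evaluation:
        # A real agent would call an LLM here; we return a deterministic placeholder.
        score = float(len(submission.get("description", "")) % 11)
        return Evaluation(self.dimension, score,
                          f"Assessed {self.dimension} from the submission content.")

class Orchestrator:
    """Parent agent: routes each submission to every evaluator and consolidates results."""
    def __init__(self):
        self.agents = [RubricAgent(d) for d in RUBRIC]

    def judge(self, submission: dict) -> dict:
        evals = [agent.evaluate(submission) for agent in self.agents]
        return {
            "id": submission["id"],
            "scores": {e.dimension: e.score for e in evals},
            "explanations": {e.dimension: e.explanation for e in evals},
            "total": sum(e.score for e in evals),
        }

result = Orchestrator().judge({"id": "team-42", "description": "An AI tool that ..."})
print(sorted(result["scores"]))
```

Keeping the score and the explanation together in one record is what makes the consolidated output auditable by the human judges downstream.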

To capture the full essence of each project, our system evaluated three types of submission inputs:

· Text descriptions, analyzed for clarity, problem framing, and originality.

· Video pitches, with audio transcribed and analyzed for context.

· Code repositories, parsed for architecture, modularity, complexity, and engineering maturity.

This holistic, multi-modal approach mirrored how human judges would evaluate—only at enterprise scale and speed by parallelizing and distributing tasks over multiple machines.
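The fan-out over modalities and submissions might look roughly like the sketch below. The real system distributes work across multiple machines; this is a single-machine approximation using a thread pool, and the handler names are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-modality handlers; real ones would call transcription,
# repository parsers, and LLM evaluators.
def analyze_text(sub):  return {"id": sub["id"], "modality": "text"}
def analyze_video(sub): return {"id": sub["id"], "modality": "video"}  # transcribe audio first
def analyze_code(sub):  return {"id": sub["id"], "modality": "code"}   # parse the repository

HANDLERS = {"text": analyze_text, "video": analyze_video, "code": analyze_code}

def evaluate_all(submissions):
    # Fan out every (submission, modality) pair so inputs are scored in parallel.
    tasks = [(sub, kind) for sub in submissions for kind in sub["inputs"]]
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(lambda t: HANDLERS[t[1]](t[0]), tasks))

results = evaluate_all([
    {"id": "team-1", "inputs": ["text", "code"]},
    {"id": "team-2", "inputs": ["text", "video", "code"]},
])
print(len(results))  # one result per submitted input
```

Because each (submission, modality) evaluation is independent, the same pattern scales out horizontally: swap the thread pool for a distributed task queue and the logic is unchanged.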

When It All Came Together

In less than a day of compute time, the Neuro AI-powered system had reviewed more than 30,000 submissions at a cost of $7,000 and delivered a comprehensive, structured evaluation of each one. Human judges were then able to filter by criteria and keywords to further rank, distill, and evaluate the submissions. This hybrid model, AI for scale and humans for final judgment, ensured fairness, efficiency, and quality.

The Vibe Coding event was more than a record-setting moment. It was a proof point for the future of human + AI collaboration. By leveraging the Neuro AI Multi-Agent Accelerator, we showed how challenges once thought insurmountable can be solved with creativity, engineering excellence, and the right technology.

This was not just a milestone for Cognizant or the participants—it was a glimpse of how multi-agent systems can scale to enterprise settings and help us achieve the extraordinary.


