August 28, 2025
A Record in Record Time
More than a hackathon: Cognizant's Vibe Coding event gave tens of thousands a first-hand look at how AI is transforming the way we create and build
We had just a few weeks to run the world’s largest online GenAI hackathon, and we pulled it off. With more than 50,000 participants across 40 countries, the event officially set a Guinness World Record for the most participants in an online generative AI hackathon. And we did it thanks to an incredible team, including our AI agent teammates.
The hackathon wasn’t just about breaking records. It was a chance for all of Cognizant, coders or non-coders, to experience and learn firsthand how vibe-coding can accelerate creativity, remove barriers, and change how we work.
Overcoming resistance to AI coding
It was only natural for us to vibe-code the registration portal itself. In just a few days, the portal was live, complete with monitoring and analytic dashboards. That experience alone set the tone: fast, efficient, and AI-enabled.
But with any disruptive innovation, resistance is to be expected.
Coders tend to be reluctant to vibe-code: they prefer to have full control over their code and to know it inside out. Every coder has their own unique coding ‘handwriting’, so relying on AI to write the code is, to say the least, uncomfortable for them.
Non-coders also need a push to try out vibe-coding. For starters, the term vibe-coding has ‘coding’ right in there, and for many, it evokes the trauma of trying their hand at C or Java in college and deciding it was not for them.
So, of course, getting tens of thousands of registrations for the event took some convincing. If you have never coded in your life but have ideas for apps or software that you can express in your own words, in a few sentences, then, given the right tool, the vibe-coding experience is life-changing. You simply type in your idea, however vague, and the AI will interact with you to make it real: clarifying your intent, giving you available options, and iteratively building and showing you its work, allowing you to modify it to your heart’s content.
If you are a seasoned coder, on the other hand, this is well worth a try, if only to put your skepticism to the test. I know I did, and I learned a lot. I recently came up with a new ML algorithm and wrote it down in my own cryptic pseudo-code handwriting, which I thought only I would be able to follow. When I pasted it into one of the vibe-coding platforms, to my utter disbelief, it implemented the algorithm and had it running within a few short minutes. It took care of all the mundane details I did not care about, like setting up a Docker container, installing the dependencies, and building the test harness: all things that end up taking too much time and distracting from the actual algorithmic work. At long last, as a coder, I feel liberated to focus on the real meat of the matter and iterate on it in a hyper-agile way. The experience blew me away.
Breaking records – and facing the scale
Well, our efforts paid off. We had more than 50,000 participants, and impressively, more than 40% were non-coders, with 20% never having written any code in their lives.
But these numbers, while impressive, also posed a major challenge: how do you assess that many submissions? Even with many participants forming teams, which helped reduce the total entries, we still received more than 30,000 final submissions. That’s tens of thousands of unique projects, and there was no way we could give every one of them a fair assessment in a reasonable time.
Enter the multi-agent evaluation system. Using our Neuro AI Multi-Agent Accelerator, we rapidly created a multi-agent system to review the submissions. Each submission consisted of a filled-out form with open-text fields describing various aspects of the project, the code itself, and a 2-minute video recorded by the participants. The multi-agent system evaluated and scored these submission artifacts on multiple dimensions, including innovation, UX, scalability, market potential, ease of implementation, financial feasibility, and complexity.
We created the multi-agent evaluator in an agile and iterative manner: tuning the scoring, comparing it against manual scores for a sample of the submissions, and manually auditing the agentic scoring itself, including the logged reasoning behind each rating.
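To make the setup concrete, here is a minimal sketch of how such a per-dimension evaluator and its calibration loop might be structured. This is an illustrative stand-in, not the actual Neuro AI Multi-Agent Accelerator: each "agent" below is a stubbed heuristic where a real system would wrap an LLM prompt, and the dimension names, 0–10 scale, and helper functions are assumptions for the example.

```python
# Hypothetical sketch of a multi-agent submission scorer. Each "agent" here
# is a stand-in callable; in a real system it would prompt an LLM and log
# its reasoning for later manual audit.
from dataclasses import dataclass
from statistics import mean

# Scoring dimensions named in the post; the 0-10 scale is an assumption.
DIMENSIONS = ["innovation", "ux", "scalability", "market_potential",
              "ease_of_implementation", "financial_feasibility", "complexity"]

@dataclass
class Submission:
    form_text: str         # open-text project description from the form
    code: str              # the submitted code
    video_transcript: str  # transcript of the 2-minute video

def make_agent(dimension):
    """Return a scoring agent for one dimension (stubbed heuristic here)."""
    def agent(sub: Submission) -> tuple[float, str]:
        # Stub heuristic: count mentions of the dimension in the form text.
        hits = sub.form_text.lower().count(dimension.split("_")[0])
        score = min(10.0, 5.0 + hits)
        reasoning = f"{dimension}: matched {hits} mention(s) in form text"
        return score, reasoning
    return agent

def evaluate(sub: Submission) -> dict:
    """Run every dimension agent and aggregate into an overall score,
    keeping the per-agent reasoning log for manual review."""
    results = {d: make_agent(d)(sub) for d in DIMENSIONS}
    overall = mean(score for score, _ in results.values())
    return {"overall": overall,
            "per_dimension": {d: s for d, (s, _) in results.items()},
            "reasoning_log": [r for _, r in results.values()]}

def calibration_gap(agent_scores, manual_scores):
    """Mean absolute difference between agentic and manual overall scores,
    computed on a manually scored sample to tune the system."""
    return mean(abs(a - m) for a, m in zip(agent_scores, manual_scores))
```

In this shape, iterating on the evaluator means adjusting the per-dimension agents, re-running them on the manually scored sample, and checking that the calibration gap shrinks before letting the system loose on the full 30,000 submissions.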
The evaluation of the more than 30,000 final submissions took around 15 hours on ten servers and cost around $7,000. In comparison, if we had evaluated the submissions manually, even at an average of 30 minutes per project, it would have taken over 15,000 hours of work – or a full year for a team of 8 people working full-time. And that assumes the team included experts in the various coding languages, as well as subject matter experts for the various submission domains.
The vibe-coding event was an out-and-out success. We achieved unprecedented scale at unprecedented speed for a project of unprecedented magnitude and complexity, and in the process, I feel we instigated a cultural shift in our company towards the adoption and effective use of AI in all aspects of what we do. Monumental.
Babak Hodjat is CTO of AI at Cognizant and former co-founder & CEO of Sentient. He is responsible for the technology behind the world’s largest distributed AI system.