February 02, 2026
Behind the Research with Hormoz Shahrzad
An inside look at the work and expertise of Senior Director Hormoz Shahrzad.
Welcome to the third edition of our Behind the Research blog series, featuring Cognizant AI Lab researchers. This series offers an inside look at some of their work: what inspires them, what they’re interested in, and how it shapes the technology that is becoming part of our daily lives. Today, we are interviewing Hormoz Shahrzad, Senior Director at our AI Lab.
Firstly, how did you first get interested in AI and research more generally?
I’ve always been fascinated by how things work at a fundamental level. As a child, I used to think that if I ever ended up alone on an island, I should at least understand enough about the world to make a living from first principles. That mindset naturally led me to take things apart—first mechanical toys and devices, and later, during my teenage years, electronics and digital circuits in high school.
When I was introduced to computer programming and algorithms, that curiosity accelerated. Programming felt like a way to build complex behavior from simple rules, and I became increasingly interested not just in how systems worked, but why they behaved the way they did. This gradually expanded into learning about algorithms, the computational principles underlying the brain, and eventually AI more broadly.
A pivotal moment came in my second year of college, when I encountered Conway’s Game of Life and Holland’s evolutionary algorithms. Seeing how rich, intelligent-looking behavior could emerge from simple local interactions had a profound impact on me. That experience pulled my interest beyond traditional AI toward Artificial Life—toward understanding how intelligence itself can arise from basic processes rather than being explicitly designed.
What are the main problems you are currently working on, and what makes them meaningful or exciting to you?
I’m working on a set of closely related problems centered on how complex, reliable intelligence can emerge from structured constraints rather than unrestricted search. My dissertation improves evolutionary computation through masking and distribution mechanisms, guiding optimization by selectively exposing subsets of parameters. This approach enhances scalability and produces more stable, interpretable dynamics when modeling large-scale brain activity.
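The masking idea described here can be illustrated with a minimal sketch: a toy (1+1) evolutionary loop in which mutation is applied only to a randomly selected subset of parameters at each generation. All names, the toy objective, and the parameter choices below are illustrative assumptions, not details from the dissertation.

```python
import random

def fitness(genome):
    # Toy objective used only for illustration: maximize the parameter sum.
    return sum(genome)

def mutate_masked(genome, mask_size, sigma=0.1):
    # Expose only `mask_size` randomly chosen genes to Gaussian mutation;
    # all other parameters are "masked out" and left untouched.
    child = list(genome)
    for i in random.sample(range(len(genome)), mask_size):
        child[i] += random.gauss(0.0, sigma)
    return child

def evolve(dim=20, mask_size=4, generations=200, seed=0):
    # Simple (1+1) loop: keep the child whenever it is at least as fit.
    random.seed(seed)
    parent = [0.0] * dim
    for _ in range(generations):
        child = mutate_masked(parent, mask_size)
        if fitness(child) >= fitness(parent):
            parent = child
    return parent
```

The point of the sketch is only the structure of the search: by restricting variation to a small mask per generation, each accepted change is easier to attribute to a specific subset of parameters, which is one way such dynamics can become more interpretable.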
In parallel, I study creativity in humans and machines, influenced by Edward de Bono’s lateral thinking. I see creativity as structured exploration, where constraints and shifting perspectives play a productive role—an idea that connects naturally to masking strategies in optimization.
I’m also exploring how masking can improve evolutionary fine-tuning of large language models, and developing a multi-agent problem-solving framework that decomposes complex tasks into smaller, verifiable steps to increase reliability and trust.
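The decomposition-with-verification idea can be sketched generically: each step in a pipeline carries its own check, and execution halts as soon as a check fails. The names (`run_pipeline`, `steps`) and the toy example are assumptions for illustration, not the actual multi-agent framework.

```python
def run_pipeline(value, steps):
    # steps: list of (apply, verify) pairs. After each apply, the
    # corresponding verify must hold, otherwise the pipeline stops.
    for apply, verify in steps:
        value = apply(value)
        if not verify(value):
            raise ValueError("verification failed at an intermediate step")
    return value

# Toy example: take the absolute value, then square it,
# verifying non-negativity after each step.
steps = [
    (lambda x: abs(x), lambda v: v >= 0),
    (lambda x: x * x, lambda v: v >= 0),
]
```

Checking each small step, rather than only the final answer, is what makes the intermediate results verifiable and the overall result easier to trust.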
Overall, my work lies at the intersection of optimization, cognition, and emergence, aiming to understand what makes intelligence more scalable, creative, and dependable.
How do you decide which research questions are worth pursuing and exploring in depth?
I’m drawn to questions that get at underlying mechanisms, not just performance gains. I care about understanding why a system works, especially when intelligent or complex behavior emerges from simple rules. I also prioritize questions that transfer across domains—if the same idea connects evolutionary algorithms, brain dynamics, creativity, and modern AI systems, that’s a strong signal it’s worth deeper exploration. In general, if a question still feels compelling after I try to formalize it, simplify it, and break it, that’s usually when I commit to it.
I try to ask: what would this question ultimately lead us to? If the answer is only a minor or incremental improvement, it’s usually not worth pursuing. I prefer problems that open up new perspectives across many domains, with ideas that can generalize and meaningfully improve understanding beyond a single setting.
Can you share a project you’re especially proud of and what you learned from it?
One project I’m particularly proud of combined multi-objective optimization with novelty search (Enhanced Optimization with Composite Objectives and Novelty Pulsation), resulting in a method I called composite objective novelty pulsation. Although the original motivation came from stock trading, those details couldn’t be published, so I used sorting network design as a clean benchmark to demonstrate the idea. The approach ended up breaking the world record at the time for 20-line sorting networks, which was unexpected and, more importantly, showed that the method transferred beyond its original domain. That experience reinforced for me that cross-domain transfer is one of the strongest indicators that a research idea captures something fundamental.
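For readers unfamiliar with novelty search, here is a minimal, hypothetical sketch of the general idea of blending an objective score with a novelty score (distance to an archive of previously seen behaviors). It illustrates only the family of methods; it is not the published composite objective novelty pulsation algorithm, and all names and weights are assumptions.

```python
def novelty(behavior, archive, k=3):
    # Novelty = average distance to the k nearest behaviors in the archive.
    # For simplicity, behaviors here are scalars compared by absolute distance.
    if not archive:
        return 0.0
    dists = sorted(abs(behavior - b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def composite_score(objective, behavior, archive, w=0.5):
    # Weighted blend of raw objective quality and behavioral novelty:
    # w = 0 recovers pure objective search, w = 1 pure novelty search.
    return (1 - w) * objective + w * novelty(behavior, archive)
```

The blend rewards candidates that are both good and behaviorally different from what the search has already seen, which is what lets such methods escape deceptive local optima that a pure objective would get stuck in.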
How do you stay up to date with such a fast-moving field? And are there any areas of AI that you think are underrated or not getting enough attention right now?
In a field that’s moving as fast as AI, I don’t try to keep up with everything. Instead, I anchor myself around a small set of fundamental questions and follow work that meaningfully contributes to them. Being in close contact with like-minded peers who have diverse perspectives has been just as important—those conversations often surface important ideas long before they appear in papers. In terms of underrated areas, I think evolutionary computation remains underemphasized, especially as a tool for exploring structure, robustness, and emergence rather than just optimization. I also believe explainable and structured AI systems deserve more attention, particularly as models become larger and are increasingly deployed in high-stakes settings. Ultimately, the most sustainable way forward is to focus on the path that genuinely interests you and stay in touch with it over time, letting curiosity and consistent engagement guide your growth as the field evolves.
Looking back, is there anything you would have done differently in your path toward AI research? What advice would you give to someone who is interested in getting into AI research?
I tend to think that everyone’s path into research is unique and tightly interwoven, so I wouldn’t change much about mine; altering any part of it would likely have led me somewhere else entirely.
For someone interested in AI research, my main advice is to resist being distracted by hype. Instead, look inward, identify what genuinely holds your curiosity, and pursue that consistently. Depth tends to matter more than timing, and sustained interest is usually a better guide than trends. Trying to catch a train you have no interest in rarely leads anywhere great. And even if your passion leads nowhere, at least you were doing something you truly care about.
Your AI agent has a dashboard of “human stats” about you. What three silly metrics is it tracking?
- The number of mathematical abstractions I’ve built from everyday life (most of which may never actually pan out).
- How many times I’ve rewatched Dr. Strangelove.
- The number of hybrid phrases I’ve invented by blending English, Farsi, and Arabic in random moments.
Hormoz specializes in evolutionary AI and explainable decision-making systems, focusing on scalable optimization techniques and multi-agent problem-solving frameworks.