

December 9, 2025

Behind the Research with Elliot Meyerson

An inside look at the work and expertise of Principal Research Scientist Elliot Meyerson


Research interview

Welcome to the first edition of our Behind the Research blog series, featuring Cognizant AI Lab researchers. This series offers an inside look at their work: what inspires them, what they’re interested in, and how it shapes the technology that is becoming part of our daily lives. Today, we are interviewing Elliot Meyerson, Principal Research Scientist at our AI Lab.

Firstly, how did you first get interested in AI and research more generally?

  • My interest started with games, especially chess. I was fascinated by how you “solve” a game by understanding it deeply, which naturally led me toward AI and using AI to solve problems. In late-night discussions in undergrad, I avoided philosophical debates by saying “We’ll just build AI, and it will tell us the answer.” Later, computer science classes showed me a careful, structured way to think about processes, and research internships in undergrad—along with supportive mentors—introduced me to the excitement of forming new questions and pushing the boundaries of what we know.

 

What are the main problems you are currently working on, and what makes them meaningful or exciting to you?

  • I’m working on how to create processes that can generate unbounded value over long periods of time, especially systems that stay efficient, diverse, and safe as they grow. Many AI processes saturate quickly, so I look to open-ended processes like natural evolution for inspiration, since it has been running for billions of years and continues producing remarkable novelty. Designing similarly long-running, generative processes in AI systems is challenging, but can result in systems that are both beautiful and impactful.

 

How do you decide which research questions are worth pursuing and exploring in depth?

  • I focus on questions that feel important and have the potential for extraordinary returns. I try to avoid getting swept into mainstream trends and instead target problems that seem overlooked, especially problems people (consciously or unconsciously) assume can’t be done. Discussions with peers and reading papers often transform abstract perspectives into focused projects. For example, the Apple “Illusion of Thinking” paper inspired the MAKER project, which became a very concrete and compelling challenge.

 

Can you share a project you’re especially proud of and what you learned from it?

  • The Expressive Encodings paper resolved a conjecture about the fundamental power of evolution. Its central message is that evolution, even with the simplest genetic operators, can implement any generative process. This is akin to the universal representation theorems for neural networks. Many people still think of evolution as uninformed random search, but this work showed that evolution can morph to mirror the behavior of any intelligent process (a toy sketch of such simple operators follows below). The result should inspire more work on evolutionary computation, and it helps explain recent breakthroughs, like our ES fine-tuning paper, DeepMind’s AlphaEvolve, and our Language-Model Crossover (LMX) work. This work won a best paper award at GECCO, but many people have yet to appreciate its significance; it may still be some years ahead of its time. A personal takeaway is that the research you find the most meaningful or conceptually elegant may not be what others care about today, but it’s still worth pursuing. Also: maybe don’t make paper titles overly mysterious.
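
For readers less familiar with evolutionary computation, here is a minimal, illustrative sketch of an evolutionary loop built from the kind of “simplest genetic operators” mentioned above: uniform bit-flip mutation and one-point crossover. The OneMax objective, tournament selection, and all parameters are assumptions for demonstration only; this is not the setup from the Expressive Encodings paper.

```python
# Minimal sketch of an evolutionary loop with simple genetic operators.
# All choices here (OneMax fitness, tournament selection, parameters) are
# illustrative assumptions, not the Expressive Encodings construction.
import random

GENOME_LEN = 32
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 1.0 / GENOME_LEN

def fitness(genome):
    # Toy objective (OneMax): count the 1-bits in the genome.
    return sum(genome)

def mutate(genome):
    # Uniform mutation: flip each bit independently with small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(parent_a, parent_b):
    # One-point crossover: splice a prefix of one parent onto a suffix of the other.
    point = random.randrange(1, GENOME_LEN)
    return parent_a[:point] + parent_b[point:]

def evolve():
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        def select():
            # Tournament selection: keep the fitter of two random individuals.
            a, b = random.sample(population, 2)
            return a if fitness(a) >= fitness(b) else b
        population = [mutate(crossover(select(), select()))
                      for _ in range(POP_SIZE)]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(f"best fitness: {fitness(best)} / {GENOME_LEN}")
```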

 

How do you stay up to date with such a fast-moving field? And are there any areas of AI that you think are underrated or not getting enough attention right now?

  • Keeping up has become much harder than it was five years ago. I rely a lot on my research community, as we share findings and help filter the flood of new work. Having very specific questions in mind also helps me search more effectively for the most relevant research, and AI tools now play a big role in navigating and consolidating the information. As for areas that are underrated, I think evolution as a computational principle still does not get enough attention, even as interest grows through work like AlphaEvolve, which continues to lead to new mathematical discoveries.

 

Looking back, is there anything you would have done differently in your path toward AI research? What advice would you give to someone who is interested in getting into AI research?

  • The field has changed so much since I first started in it that it’s hard to compare, but my advice is to find problems you genuinely care about—even if they’re outside the mainstream—and find an advisor who supports exploration. Really push yourself to find questions that challenge and inspire you. Try to build connections across different focus areas, because surprising insights can come from unexpected intersections. But, importantly, actually build things. With modern AI coding tools, it’s easier than ever to experiment and put your ideas to the test. Don’t spend all your time just thinking about the problem; work to solve it. Research is like evolution: you can’t figure everything out from the beginning; you have to try things out and see what sticks.

 

Your AI agent has a dashboard of “human stats” about you. What three silly metrics is it tracking?

  • Number of silly songs invented per day

  • Coffee satisfaction index (how good the day’s coffee was)

  • Chess distraction index (how often I’m thinking about chess-world news)



Elliot Meyerson

Principal Research Scientist


Elliot is a research scientist whose work focuses on improving creative AI through open-endedness, evolutionary computation, neural networks, and multi-agent systems.




