

Aug 6, 2025

An Accelerating Synergy: Reflections from ICML and GECCO 2025

Reflections from ICML and GECCO 2025 on building AI systems that evolve, adapt, and remain accountable to the people they serve 


What does it mean to build AI systems that are not just performant, but capable of evolving, aligning, and staying intelligible to the people they serve?

This summer, our team at Cognizant AI Lab set out to explore how those questions are being asked (and answered) across the machine learning community. We attended two of the field’s most dynamic research conferences: ICML, focused on core machine learning advances, and GECCO, the flagship event for evolutionary computation.

While each conference brought its own perspective, what stood out most was how closely their themes echoed each other. Researchers across domains are shifting focus: not just improving isolated models, but designing systems that can adapt, collaborate, and evolve in context. The excitement wasn’t only about technical breakthroughs. It was about rethinking the goals and structures of AI itself.

ICML: Stepping Beyond the Model

At ICML 2025, one question kept surfacing: Are the good questions still machine learning questions?

The conference delivered its usual share of theoretical depth, from tighter bounds to clever algorithmic refinements. But the most compelling discussions centered on what comes next. Now that core machine learning capabilities are well established, how do we integrate them into larger systems that can reason, interact, align, and grow over time?

The emerging focus wasn’t just about improving models. It was about stepping back to ask how models fit into larger systems with other agents, with tools, with humans. This idea, which we call the "agentic frame," shifted focus away from isolated models and toward multi-agent systems and dynamic objectives.

This theme came through clearly in the keynote talks:

  • Jon Kleinberg explored the “handoff” between AI and humans: how intelligent systems and people should defer, lead, or collaborate depending on context.

  • Frauke Kreuter emphasized the need for deeper engagement between ML researchers and those who gather the data these systems are trained on, surfacing critical questions about whose perspectives get encoded into models.

  • Anca Dragan asked what happens when we optimize for the wrong thing, challenging the field to rethink not just how we optimize, but what we choose to care about in the first place.

  • Andreas Krause closed the loop by discussing optimization and discovery as an iterative process, one where models don’t just solve problems, but help generate better questions.

The best invited talks always aim to spark new directions, but what stood out this year was how many speakers encouraged the field to step outside the boundaries of ML itself. 

We saw these ideas echoed in the paper sessions. EvoControl applies an evolutionary algorithm to policy learning in high-frequency control settings where traditional RL methods struggle, showing improved performance in cases where safety-critical fast reactions are required. Several standout contributions placed LLMs inside evolutionary loops, such as G-Sim, which uses generative simulations and gradient-free calibration, and Autoformulation, which enables LLMs to construct and refine mathematical optimization models as code. These works showed how language models can not only generate but also critique and iterate on their own outputs, a direction closely aligned with how we at the Lab approach multi-agent and evolutionary systems.
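To make the pattern concrete, here is a minimal, hypothetical sketch of an LLM-in-the-loop evolutionary cycle. The function names and the toy objective are our own illustrations, not the setup of any of the papers above; a real system would replace the stand-ins with actual model calls and a domain evaluator.

```python
import random

def llm_generate(parent: str, feedback: str) -> str:
    # Stand-in "LLM": perturb the parent string; a real system would prompt a model with the feedback.
    return parent + random.choice("abcdef")

def llm_critique(candidate: str) -> str:
    # Stand-in critique; a real system would ask the model to analyze the candidate.
    return f"Candidate has length {len(candidate)}; consider whether it can be simplified."

def evaluate_fitness(candidate: str) -> float:
    # Stand-in domain evaluation (a simulator or benchmark score in a real setting).
    return -abs(len(candidate) - 10)  # toy objective: prefer candidates of length 10

def evolve(population: list[str], generations: int = 20, keep: int = 4) -> str:
    for _ in range(generations):
        parents = sorted(population, key=evaluate_fitness, reverse=True)[:keep]  # selection
        children = [llm_generate(p, llm_critique(p)) for p in parents]           # LLM-driven variation
        population = parents + children                                          # next generation
    return max(population, key=evaluate_fitness)

print(evolve(["seed"]))
```

Whatever the domain, the essential structure is the same: an external evaluator supplies selection pressure, while the language model supplies variation by critiquing and revising its own outputs.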

Our team contributed directly to these conversations through a position paper on Asymptotic Analysis with LLM Primitives. The work introduces a theoretical framework for reasoning about how large numbers of agents should be organized to improve efficiency, scalability, and general problem-solving capacity. In conversations at our poster session, other researchers noted how little attention has been paid to developing such theoretical tools for analyzing multi-agent systems, and they appreciated our effort to push this direction forward.

A compelling paper on self-improvement in transformers showed that maintaining and refining a diverse group of models led to accelerated self-improvement, a promising divergence from the convergence assumptions of traditional ML. ResearchTown simulates a research community with LLM agents and automatically generates interdisciplinary research ideas, potentially inspiring pioneering research directions. Meanwhile, recent work in reinforcement learning on both diversity and lifetime tuning challenged foundational assumptions, pointing to the need for more open-ended training paradigms — directions that resonate with our recent work on the Knightian Blindspot of Machine Learning.

It’s clear that the field is beginning to move beyond the single-model mindset. If there was one takeaway, it was this: the future of machine learning may not look like machine learning as we know it. And that’s exactly the point.

 

GECCO: Scaling Interpretability Through Symbolic Evolution

At GECCO 2025, the focus turned toward the evolutionary foundations of intelligent systems — and how principles from nature can help us build AI that is both more transparent and more adaptable.

We were proud to present several contributions exploring this balance of evolution and interpretability. Two of our works, EVOTER and AERL, explored how symbolic evolutionary approaches can scale interpretable AI.

  • EVOTER (presented in the Hot Off the Press session) introduced a symbolic learning framework that evolves simple, human-readable rule sets with temporal logic, relational features, and nonlinear transformations — making it both interpretable and expressive. It’s been used across applications from healthcare to control systems, offering decisions that can be easily audited and explained.

  • AERL built on this foundation, translating symbolic rules into tensor operations so they can be evaluated at speed on GPUs and fine-tuned with gradient-based updates. This makes interpretable AI not only transparent, but fast enough for large-scale, real-world deployment; a rough sketch of the tensor-based idea appears below.
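As a rough sketch of the tensor-based idea (under our own simplifying assumptions, not AERL's actual representation or code), a batch of simple threshold rules can be evaluated against a batch of inputs with a couple of array operations:

```python
import numpy as np

# Each rule is "feature f > threshold t"; stacking many rules lets an entire rule set be
# checked against a whole batch of inputs at once. Making the thresholds a learnable
# tensor (e.g., with a sigmoid relaxation of the comparison) is what opens the door to
# gradient-based fine-tuning of the rules.
features = np.array([0, 2, 1])           # which input feature each rule tests
thresholds = np.array([0.5, 0.1, 0.9])   # the threshold each rule compares against

X = np.random.rand(1000, 4)              # batch of 1000 inputs with 4 features
fired = X[:, features] > thresholds      # (1000, 3) boolean matrix of rule activations
predictions = fired.all(axis=1)          # e.g., a conjunctive rule: all conditions must hold
print(predictions.mean())                # fraction of inputs satisfying the full rule set
```

The same layout maps directly to GPU tensors, which is what turns rule evaluation from a per-example loop into a single batched operation.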

We were also delighted to receive the GECCO 2025 Humies Gold Award for RHEA, a symbolic regression system that uses evolutionary search to produce human-competitive results. Watch the presentation video here.

Beyond papers, our team contributed to major speaking sessions. Our VP of AI Research, Risto Miikkulainen, delivered his annual tutorial on Neuroevolution — a tradition since 2005. However, this year’s edition was significantly updated to follow the structure of his upcoming book Neuroevolution (co-authored with Sebastian Risi, David Ha, and Yujin Tang). The tutorial traced the field’s growth from the 1990s to today’s AI, emphasizing its role in intelligent agent behavior, decision-making, and multi-agent cooperation, as well as its synergies with deep learning, reinforcement learning, and generative AI. It also examined how neuroevolution can shed light on biological neural circuits and behavior. Resources, tutorials, and slides are available here, and the Neuroevolution book will be available soon in HTML form, with a print edition from MIT Press to follow.

Our team also played a leading role in the newly launched EvoSelf Workshop on Evolving Self-Organization. This workshop brought together researchers studying how complex systems form from autonomous components, a key concept in both natural and artificial intelligence. We gave a spotlight talk titled “Self-Organizing Models of Brain Wiring: Developmental Programs for Evolving Intelligence,” exploring how biologically inspired wiring patterns might serve as templates for multi-agent coordination.

As at ICML, the message at GECCO was clear: we’re moving from isolated performance toward systems that evolve in structure, strategy, and understanding — shaped by the environments and goals they serve.

Looking ahead

Together, our time at ICML and GECCO revealed a fundamental shift in how we think about intelligence. From evolving agent populations to self-organizing architectures and transparent rule-based systems, the field is moving toward AI that is not only smarter, but more accountable, adaptive, and aligned.

At our AI Lab, these ideas resonate with our mission. Whether advancing neuro-symbolic systems, building reflective multi-agent tools like Neuro SAN, or contributing to the theoretical foundations of decision AI, our goal is to shape AI that grows with the complexity of its context and the people it serves.

There's still much to explore. But as the field evolves, so will we — continuing to ask harder questions, contribute to the conversation, and build systems that reflect the future we want to create.



Elliot Meyerson

Principal Research Scientist


Elliot is a research scientist whose work focuses on improving creative AI through open-endedness, evolutionary computation, neural networks, and multi-agent systems.



Risto Miikkulainen

VP of AI Research


Risto Miikkulainen is VP of AI Research at Cognizant AI Lab and a Professor of Computer Science at the University of Texas at Austin.


