DISCOVER WHY EVOLUTIONARY COMPUTING IS THE NEW FRONTIER IN AI TECHNOLOGY
Deep learning (DL) has transformed much of AI and demonstrated how machine learning can make a difference in the real world. Its core technology is gradient descent, which has been used in neural networks since the 1980s. However, the massive expansion of available training data and computing has given it a new instantiation that significantly increased its power.
Evolutionary computation (EC) is on the verge of a similar breakthrough.
However, EC addresses a different but equally far-reaching problem. While DL is focused on modeling what we already know, EC is focused on creating solutions that do not yet exist. For example, DL makes it possible to recognize new instances of objects and speech within familiar categories, whereas EC makes it possible to discover entirely new objects and behaviors – those that maximize a given objective. EC achieves this not by following a gradient, as most DL and reinforcement learning approaches do, but through massive exploration – using a population of candidates to search the space of solutions in parallel, emphasizing novel and surprising solutions.
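To make the contrast with gradient descent concrete, here is a minimal sketch of population-based evolutionary search in plain Python. The objective function, population size, and mutation scheme are all illustrative assumptions, not part of any system described above; the point is that a population samples the space in parallel, so candidates near the global peak survive even when a gradient follower starting elsewhere would stall on a local one.

```python
import math
import random

def fitness(x):
    # Toy multimodal objective on [0, 5] (an illustrative assumption):
    # several local peaks, with the global peak near x ~ 4.77.
    return math.sin(3 * x) + 0.5 * x

def evolve(pop_size=50, generations=100, seed=0):
    rng = random.Random(seed)
    # A population of candidate solutions searched in parallel.
    pop = [rng.uniform(0.0, 5.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the top fifth as parents.
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 5]
        # Offspring are mutated copies of parents, clipped to the domain.
        children = [min(5.0, max(0.0, rng.choice(parents) + rng.gauss(0, 0.2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children  # elitism: the best survive unchanged
    return max(pop, key=fitness)

best = evolve()
```

Because the initial population covers the whole interval, some candidates land in the global peak's basin from the start, and selection plus mutation climbs the rest of the way – no gradient required.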
As a result, EC makes a host of new AI applications possible, such as designing more effective and economical physical devices and software interfaces. Examples include discovering more effective and efficient behaviors for robots and virtual agents; creating more effective and cheaper health interventions; and developing growth recipes for agriculture as well as better mechanical and biological processes.
Similar to neural networks, the basic ideas of EC have existed for decades. Once instantiated to take advantage of the increased data and computing, they stand to gain in much the same way DL did. Recent progress in novelty search, multi-objective optimization, and parallelization provides the essential ingredients. Building on this momentum, we highlight below how evolution can be scaled up.
Neuroevolution: Improving Deep Learning With Evolutionary Computation
Much of the power of DL comes from the size and complexity of the networks. Their architecture, network topology, modules, and hyperparameters can be optimized through evolution in ways that go far beyond human ability. We've demonstrated this idea by producing new state-of-the-art results in two multitask learning domains, Omniglot character recognition and CelebA face attribute recognition, as well as the standard sequence processing benchmark of language modeling. The evolved networks are complex, embodying solutions that would be difficult to design by hand.
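As a small, self-contained illustration of the neuroevolution idea – not the method behind the results above – the sketch below evolves the weights of a tiny 2-2-1 network to solve XOR, a task that requires a hidden layer. The genome layout, population size, and mutation scale are all illustrative assumptions.

```python
import math
import random

# XOR truth table: not linearly separable, so a hidden layer is needed.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    """A 2-2-1 network; w is a flat genome of 9 weights (this layout
    is an illustrative assumption, not a standard encoding)."""
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[4])
    h1 = math.tanh(w[2] * x[0] + w[3] * x[1] + w[5])
    return 1.0 / (1.0 + math.exp(-(w[6] * h0 + w[7] * h1 + w[8])))

def fitness(w):
    # Negated squared error over the four XOR cases (higher is better).
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def neuroevolve(pop_size=150, generations=300, sigma=0.4, seed=1):
    rng = random.Random(seed)
    # Genomes are initialized from a standard normal distribution.
    pop = [[rng.gauss(0, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 5]  # elitist truncation selection
        # Offspring are elite genomes with Gaussian weight mutations.
        pop = elite + [[g + rng.gauss(0, sigma) for g in rng.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)
```

Real neuroevolution systems evolve architectures and hyperparameters as well as weights, but the loop is the same: evaluate a population, select, mutate, repeat.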
Solving Hard Problems With Evolutionary Computation
Much of the progress on EC has focused on challenging optimization problems that cannot be solved using traditional nonlinear optimization techniques. Most recent EC techniques are designed to take advantage of the large amount of computational power that is now available. One such technique uses composite objectives to define a useful search space, and novelty selection to explore that space effectively. Other (earlier) papers suggest how such evolutionary processes can be run on massively parallel computing. These techniques can discover good solutions in deceptive search spaces, matching state-of-the-art results on minimal sorting networks.
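A hedged sketch of how novelty selection can work in practice: each candidate is scored not only on objective fitness but also on how different its behavior is from the rest of the population, measured as mean distance to its nearest neighbors in behavior space. The linear blend with weight `alpha` below is a simplifying assumption; real systems often treat fitness and novelty as separate objectives or anneal the weighting.

```python
def novelty(i, behaviors, k=3):
    """Mean distance to the k nearest neighbors in behavior space."""
    dists = sorted(abs(behaviors[i] - b)
                   for j, b in enumerate(behaviors) if j != i)
    return sum(dists[:k]) / min(k, len(dists))

def select(pop, fitness_fn, behavior_fn, n_parents, alpha=0.5):
    """Rank candidates by a composite of objective fitness and novelty,
    so that unusual behaviors survive even when their fitness is lower."""
    behaviors = [behavior_fn(ind) for ind in pop]
    scored = [(alpha * fitness_fn(ind)
               + (1 - alpha) * novelty(i, behaviors), ind)
              for i, ind in enumerate(pop)]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [ind for _, ind in scored[:n_parents]]

# Toy usage: candidates are numbers, fitness favors 0, behavior is the
# number itself. Pure fitness selection would keep {0.0, 0.1}; the
# composite score also preserves the behavioral outlier at 3.0.
pop = [0.0, 0.1, 0.2, 0.3, 3.0]
parents = select(pop, lambda x: -abs(x), lambda x: x, 2, alpha=0.2)
```

Keeping such outliers alive is what lets the search escape deceptive regions where greedy fitness climbing gets stuck.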
These explorations of ours contribute to the momentum building around EC, including recent results from research groups at OpenAI, Uber.ai, DeepMind, Google, and BEACON.