“*We cannot solve our problems with the same thinking we used when we created them*” Albert Einstein

Rapid increases in CPU and GPU processing power have appreciably expanded the range of solvable problems, but interestingly, the key to solving many of our most complex problems today is creativity. In problem-solving, we are always dealing with conflicting measures of success and trade-offs in our approach, and these vary based on the angle we take.

How can we enable machines to have “aha” moments similar to human brainstorming? For example, given a general approach and specific goals, we want the system to recommend the best path to achieve them. And if the system hits a roadblock, it shifts its approach to get past the block (this is novelty pulsation!) and then refocuses on the objective to arrive at the optimal solution. Aha!

In general, linear and non-linear optimization are considered the standard, mechanical methods for satisfying constraints and finding trade-offs. Parameters and goals must be defined upfront, along with what is known (or seems to be known) about the constraints and dependencies among the parameters, to reach cut-and-dried solutions.

**Unleashing Algorithmic Creativity**

Evolutionary computation, on the other hand, as a population-based search, is useful not only for finding a set of solutions for domain experts to choose from, but also for discovering inter-relations among criteria when those relations are not concretely known.

An important benefit of multi-objective search is that it maintains a diverse population of candidates, which helps in deceptive problems. Not all diversity is useful, however: candidates that optimize only one objective while ignoring the others are rarely helpful. A recent solution is to replace the original objectives with linear combinations of them, thus focusing the search on the most useful trade-offs between objectives. To compensate for the resulting loss of diversity, this transformation is accompanied by a selection mechanism that favors novelty. Moreover, novelty pulsation, i.e. a systematic method of alternating between novelty selection and local optimization, further balances the exploration and exploitation aspects of the method.
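As a rough illustration of these two ingredients, the sketch below forms composite objectives as linear combinations of the raw objectives and scores novelty as the mean distance to a candidate's k nearest neighbors in behavior space. All function names and weights here are illustrative assumptions, not the actual implementation.

```python
import numpy as np

def composite_objectives(objs, weight_vectors):
    """Replace raw objectives with linear combinations of them.

    objs: (n_candidates, n_objectives) array of raw objective values.
    weight_vectors: (n_composites, n_objectives), one row per trade-off
    of interest (the weights here are illustrative, not the paper's).
    """
    return objs @ weight_vectors.T  # (n_candidates, n_composites)

def novelty_scores(behaviors, k=3):
    """Novelty = mean distance to the k nearest neighbors in behavior space."""
    d = np.linalg.norm(behaviors[:, None, :] - behaviors[None, :, :], axis=-1)
    d_sorted = np.sort(d, axis=1)        # column 0 is the self-distance (0)
    return d_sorted[:, 1:k + 1].mean(axis=1)
```

A candidate far from the rest of the population in behavior space receives a high novelty score, even if its composite objective values are unremarkable.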

The effectiveness of this method has been showcased in the highly deceptive problem of discovering minimal sorting networks, where it finds state-of-the-art solutions significantly faster than previous approaches. In fact, the method has so far established a new world record for the 20-line sorting network, with 91 comparators. In the real-world problem of stock trading, it also discovers solutions that generalize significantly better to unseen data than previous methods.

**Exploration versus Exploitation**

Every search algorithm needs to both explore the search space and exploit the known good solutions in it. Exploration is the process of visiting entirely new regions of a search space, whilst exploitation is the process of visiting regions within the neighborhood of previously visited points. In order to be successful, a search algorithm needs to establish a productive synergy between exploration and exploitation.

A common problem in evolutionary search is that it gets stuck in local minima, i.e. in unproductive exploitation. A standard remedy is to kick-start the search in such cases by temporarily increasing the mutation rate. This remedy can be applied more systematically by making such kick-starts periodic, resulting in methods such as delta coding and burst mutation. Here, the kick-start idea works by turning novelty selection on and off periodically, which allows local search (i.e. exploitation) and novelty search (i.e. exploration) to alternate. This is very much like how humans approach a project: brainstorming ideas occasionally in order to conquer one milestone after another.
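The alternating scheme can be sketched as a simple loop. This is an illustrative Python sketch only: the truncation selection, the behavior-distance novelty score, and the pulse period are assumptions, not the actual algorithm's details.

```python
import random

def select(pop, score, keep):
    """Keep the top `keep` candidates by the given score."""
    return sorted(pop, key=score, reverse=True)[:keep]

def novelty_pulsation(pop, generations, fitness, behavior, mutate,
                      pulse_period=5):
    """Alternate between exploitation (fitness selection) and exploration
    (novelty selection) every `pulse_period` generations."""
    n = len(pop)
    for gen in range(generations):
        explore = (gen // pulse_period) % 2 == 1  # pulse novelty on and off
        if explore:
            current = list(pop)
            # novelty: mean behavioral distance to the rest of the population
            score = lambda c: sum(abs(behavior(c) - behavior(o))
                                  for o in current) / len(current)
        else:
            score = fitness
        parents = select(pop, score, n // 2)
        pop = parents + [mutate(p) for p in parents]  # refill with mutants
    return pop
```

During exploitation phases the best candidates are refined locally; during exploration phases behaviorally unusual candidates are kept, letting the search jump out of deceptive local optima.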

**Multi-Objective Versus Novelty**

Figure 1 visualizes the difference between the diversity that multi-objective methods (e.g. NSGA-II) create (left) and the diversity that novelty search creates (right). Notably, if you compare them in the objective space (top), novelty search looks more focused and less diverse, but in the behavior space (bottom) it is much more diverse, and this type of diversity leads to breakthrough solutions (like beating world records!).

Figure 1: Novelty selection creates better coverage of the behavior space despite being more focused in the objective space. A dynamic visualization shows this difference in the sorting networks domain, which will be discussed later.

**How to Pick the Novel Solutions**

Figures 2 and 3 show the two phases of picking novel solutions. The red points are the solutions with the best fitness, but projecting them into the behavior space shows that they are bundled together.

Figure 2: Novelty Selection Phase-I

The first phase (Figure 2) is to select the solutions (marked green) with the largest aggregate distance from everything else. Phase two (Figure 3) eliminates the closest pairs among the green ones to achieve better overall coverage (blue solutions), so we end up with a healthy mixture of good candidates and creative ones.

Figure 3: Novelty Selection Phase-II
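The two phases can be sketched as follows. This is an illustrative Python sketch: the Euclidean behavior distance, parameter names, and the greedy pair-elimination loop are assumptions about one reasonable way to realize the idea, not the actual implementation.

```python
import numpy as np

def novelty_select(behaviors, n_select, n_pool):
    """Two-phase novelty selection sketch.

    Phase I keeps the n_pool candidates with the largest aggregate distance
    to the whole population; Phase II repeatedly drops one member of the
    closest remaining pair until n_select candidates are left, which
    spreads the survivors across the behavior space.
    """
    d = np.linalg.norm(behaviors[:, None, :] - behaviors[None, :, :], axis=-1)
    # Phase I: aggregate distance of each candidate to everything else
    pool = [int(i) for i in np.argsort(d.sum(axis=1))[-n_pool:]]
    # Phase II: eliminate closest pairs to improve overall coverage
    while len(pool) > n_select:
        sub = d[np.ix_(pool, pool)].copy()
        np.fill_diagonal(sub, np.inf)          # ignore self-distances
        i, j = np.unravel_index(sub.argmin(), sub.shape)
        pool.pop(int(max(i, j)))               # drop one of the closest pair
    return pool
```

Candidates that survive both phases are spread out in behavior space rather than clustered around a single promising region.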

**Novelty Pulsation in Action**

Now that we understand how novelty pulsation works, let’s see its application and results in two completely different domains, stock trading and sorting networks, each with its own complex and deceptive search landscape.

**Stock Trading**

Stock trading is a natural multi-objective domain where return and risk must be balanced. Candidate solutions, i.e. trading agents, can be represented in several ways: rule-based strategies, sequence modeling with neural networks such as LSTMs, and symbolic regression using Genetic Programming or Grammatical Evolution are all common approaches. Trade frequency, fundamental versus technical indicators, the choice of trading instruments, transaction costs, and the vocabulary of order types are crucial design decisions in building such agents.

The goal is to extract patterns from historical time-series data and utilize those patterns to make optimal trading decisions, i.e. whether to buy, hold, or sell particular stocks (Figure 4). The main challenge is to trade in a manner that generalizes to unseen situations in live trading. The data is extremely noisy and prone to overfitting, and methods that discover robust decisions are needed.

Figure 4: Stock Trading Agent. The agent observes the time series of stock prices and several other indicators and makes live decisions about whether to buy, hold, or sell a particular stock. The signal is noisy and prone to overfitting; generalization to unseen data is the main challenge in this domain.
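To make the conflicting objectives concrete, here is a toy rule-based agent, a simple moving-average crossover, evaluated on two objectives: total return and maximum drawdown (a common risk measure). This is purely illustrative and unrelated to the actual evolved strategies; the rule, window sizes, and starting capital are all assumptions.

```python
def evaluate_agent(prices, fast=3, slow=6):
    """Evaluate a toy moving-average-crossover agent on a price series:
    buy when the fast average crosses above the slow one, sell on the
    reverse. Returns (total_return, max_drawdown), the two objectives
    a multi-objective search would need to balance."""
    cash, shares, equity = 100.0, 0.0, []
    for t in range(slow, len(prices)):
        fast_ma = sum(prices[t - fast:t]) / fast
        slow_ma = sum(prices[t - slow:t]) / slow
        if fast_ma > slow_ma and shares == 0.0:      # buy signal
            shares, cash = cash / prices[t], 0.0
        elif fast_ma < slow_ma and shares > 0.0:     # sell signal
            cash, shares = shares * prices[t], 0.0
        equity.append(cash + shares * prices[t])     # mark to market
    peak, drawdown = equity[0], 0.0
    for e in equity:                                 # worst peak-to-trough dip
        peak = max(peak, e)
        drawdown = max(drawdown, (peak - e) / peak)
    return equity[-1] / 100.0 - 1.0, drawdown
```

An agent with slightly lower return but far smaller drawdown may be the preferable trade-off, which is exactly the kind of decision composite objectives encode.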

**Stock Trading Results**

Figures 5 and 6 compare solutions of the composite novelty selection and novelty pulsation methods, respectively. Each point corresponds to a trading solution, and ideally we want all the points to the right of the vertical axis (profitable solutions on training data) to also fall above the horizontal axis (profitable on unseen data). The points in Figure 6 are noticeably closer to a diagonal line, which means that better training fitness resulted in better testing fitness, i.e. higher correlation and better generalization. Numerically, the seen-to-unseen correlation for the composite novelty method is 0.69, while for composite novelty pulsation it is 0.86. In practice, this difference is significant, translating to much-improved profitability in live trading.

Figure 5: Generalization from seen (x) to unseen (y) data with the Composite Novelty method. The correlation is 0.69, which is enough to trade but could be improved.

Figure 6: Generalization from seen (x) to unseen (y) data with the Composite Novelty Pulsation method. The correlation is 0.86, which results in significantly improved profitability in live trading. It is also notable that the majority of genes that are profitable on training data are also profitable on unseen data.

**Sorting Networks**

A sorting network of *n* inputs is a fixed layout of comparison-exchange operations (comparators) that sorts all inputs of size *n* (Figure 7).

Figure 7: A Four-Input Sorting Network. This network takes four numbers as its input (left) and produces output (right) where those numbers are sorted (small to large, top to bottom). Each comparator (connection between the lines) swaps the numbers on its two lines if they are out of order; otherwise, it does nothing. This network has three layers and five comparators, and it is the minimal four-input sorting network. Minimal networks are generally not known for large input sizes. Their design space is deceptive, which makes network minimization a challenging optimization problem.

Since the same layout can sort any input, it represents an oblivious, or data-independent, sorting algorithm: the layout of comparisons does not depend on the input data. The resulting fixed communication pattern makes sorting networks desirable in parallel implementations of sorting, such as those in graphics processing units, multi-processor computers, and switching networks. Beyond validity, the main goal in designing sorting networks is to minimize the number of layers, because it determines how many steps are required in a parallel implementation. A secondary goal is to minimize the total number of comparators in the network. Designing such minimal sorting networks is a challenging optimization problem that has been the subject of active research since the 1950s. Although the space of possible networks is infinite, it is relatively easy to test whether a particular network is correct: if it sorts all combinations of zeros and ones correctly, it will sort all inputs correctly (the zero-one principle).
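This correctness test is straightforward to implement. The sketch below checks a candidate comparator network against all binary inputs, and encodes the minimal four-input network from Figure 7 as a list of comparator pairs:

```python
from itertools import product

def sorts_all(network, n):
    """Check a comparator network on n lines via the zero-one principle:
    it sorts every input iff it sorts all 2^n binary inputs.
    `network` is a list of (i, j) comparators with i < j."""
    for bits in product((0, 1), repeat=n):
        v = list(bits)
        for i, j in network:
            if v[i] > v[j]:
                v[i], v[j] = v[j], v[i]  # comparator swaps out-of-order pair
        if v != sorted(v):
            return False
    return True

# The minimal four-input network of Figure 7: five comparators, three layers
MINIMAL_4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
```

Exhaustive checking of 2^n binary inputs stays tractable for the network sizes discussed here; it is the candidate evaluation step a search over networks would call.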

Many of the recent advances in sorting network design are due to evolutionary methods. However, it is still a challenging problem, even for the most powerful evolutionary methods, because it is highly deceptive: Improving upon a current design may require temporarily growing the network, or sorting fewer inputs correctly. Sorting networks are therefore a good domain for testing the power of evolutionary algorithms.

Figure 8: The average runtime needed to converge to the state of the art over different size networks. Composite Novelty with novelty pulsation converges significantly faster at all sizes than composite novelty without pulsation, demonstrating improved balance of exploration and exploitation.

**Sorting Networks Results**

Convergence times of the two methods to minimal solutions for different network sizes are shown in Figure 8. Composite novelty with novelty pulsation shows an order of magnitude faster convergence than composite novelty without it across the board. All runs resulted in state-of-the-art sorting networks. An interesting observation is that it takes proportionally less time to find state-of-the-art solutions for networks with an even number of lines than for networks with an odd number of lines. This result is likely due to the symmetric structure of even-sized problems. The latest state-of-the-art method exploits this very symmetry, but such domain-specific information was not used in our implementation.

*The fact that we have been able to achieve the state of the art, and even break the world record for 20-line networks (Figure 9), without exploiting domain-specific characteristics of the problem is itself a significant result.*

Figure 9: Novelty Pulsation Discovered This 20-Line Sorting Network with 91 Comparators, improving on the previous known minimal network by one comparator.

**Conclusion**

The composite novelty pulsation method is a promising approach to deceptive problems, utilizing three key elements: composite objectives focus local search on the most useful trade-offs; novelty selection injects meaningful diversity and allows the search to escape deceptive areas; and novelty pulsation balances exploration and exploitation by alternating between the two, finding solutions that generalize better, faster. Composite novelty pulsation is a general method that can be combined with other advances in population-based search, thus increasing the power and applicability of the Cognizant LEAF platform, an evolutionary computation platform built for exactly this purpose, with strong support for multi-objectivity alongside novelty search to promote creativity. Please comment below with any questions or feel free to email me directly.

Hormoz is Senior Director of Evolutionary AI at Cognizant. He’s a practicing evolutionary computation scientist with a passion for developing theories and techniques to be applied to real-world problem domains.

Prior to Cognizant, he was the Principal Scientist at Sentient Technologies, where he had a major role in the design and implementation of the powerful LEAF and EC-Star distributed evolutionary platforms. Hormoz is an inventor and well-published computer scientist with more than twenty-five years of experience in artificial intelligence, evolutionary computation, machine learning, time-series predictive modeling, and artificial-life problem solving.

He holds a Bachelor of Science in Computer Science and more than a dozen granted U.S. and international patents in different aspects of evolutionary computation and artificial life. Hormoz has presented many papers at major evolutionary computation and artificial life conferences worldwide.