The State of Evolutionary Computation, Part Two


In our previous post, we introduced the world of evolutionary computation, a family of optimization algorithms inspired by biological evolution. Though evolutionary algorithms have yielded successes in a wide variety of domains, they typically must explore and evaluate a large number of candidate solutions, which comes at a high computational cost.

In this post, we will take a look at two different ways in which researchers at GECCO 2018 are addressing these limitations. The first is the surrogate model, an auxiliary model that approximates the quality of a solution without computing it directly. The second is visualization: mapping the progress of an evolutionary algorithm through its search space in order to understand, and eventually correct, poor behavior.

Surrogate Models

As we previously discussed, evolutionary computation can be expensive. Evolutionary algorithms must maintain and evaluate many candidate solutions, and in practice, evaluating solutions may require a significant amount of time and/or compute power.

Surrogate models have emerged as one technique for reducing this cost. The idea is simple: since evaluating solutions is expensive, do it less often. This is achieved by introducing a “surrogate model” that approximates the fitness of a candidate solution instead of computing it directly.
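To make the idea concrete, here is a minimal sketch in Python (the names and the kernel choice are illustrative, not taken from any particular paper): the surrogate estimates a candidate's fitness as a similarity-weighted average over an archive of solutions whose true fitness has already been computed.

```python
import math

def surrogate_fitness(candidate, archive, distance, bandwidth=1.0):
    """Estimate fitness as a similarity-weighted average over an
    archive of (solution, true_fitness) pairs we already paid to evaluate."""
    total_weight, weighted_sum = 0.0, 0.0
    for solution, fitness in archive:
        # Gaussian kernel: similar solutions contribute more to the estimate.
        w = math.exp(-(distance(candidate, solution) / bandwidth) ** 2)
        total_weight += w
        weighted_sum += w * fitness
    return weighted_sum / total_weight if total_weight > 0 else 0.0

# Toy usage with real-valued solutions and Euclidean distance.
archive = [((0.0, 0.0), 1.0), ((1.0, 1.0), 3.0)]
estimate = surrogate_fitness((0.5, 0.5), archive, math.dist)
print(estimate)  # 2.0: the candidate sits midway between the two known points
```

No expensive evaluation is run for the new candidate; its fitness is inferred entirely from its neighbors in the archive.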

For example, in their GECCO 2018 paper, Gaier, Asteroth, and Mouret propose kernel-based methods as a surrogate model for predicting the fitness of solutions within the context of the NEAT evolutionary algorithm1. Other surrogate techniques require fixed feature descriptions of solutions, which makes them difficult to apply to the free-form, variable-size solutions generated by an algorithm like NEAT. Kernel methods, in contrast, rely purely on a notion of distance, or similarity, between candidate solutions. Intuitively, if two solutions share a similar structure, then they probably share a similar fitness, and their kernel method exploits exactly this intuition.

Conveniently, the NEAT algorithm already defines just such a notion of similarity. The so-called “compatibility distance” measures the similarity between evolved neural network topologies2; NEAT uses it to track innovation and maintain diversity. Gaier, Asteroth, and Mouret simply observe that the same measure can be reused to predict fitness.
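Here is a hedged sketch of the compatibility distance, using our own simplified genome representation (a dict mapping innovation numbers to connection weights) and the coefficient defaults commonly used with NEAT. It plugs directly into the `distance` argument of the kernel estimator above.

```python
def compatibility_distance(a, b, c1=1.0, c2=1.0, c3=0.4):
    """NEAT compatibility distance between two non-empty genomes, each
    represented here (our simplification) as {innovation number: weight}.

    delta = (c1 * E + c2 * D) / N + c3 * W_bar, where E counts excess
    genes, D counts disjoint genes, W_bar is the mean weight difference
    of matching genes, and N is the size of the larger genome."""
    matching = set(a) & set(b)
    unmatched = set(a) ^ set(b)
    cutoff = min(max(a), max(b))  # innovations past this point are "excess"
    excess = sum(1 for i in unmatched if i > cutoff)
    disjoint = len(unmatched) - excess
    n = max(len(a), len(b))
    w_bar = (sum(abs(a[i] - b[i]) for i in matching) / len(matching)
             if matching else 0.0)
    return (c1 * excess + c2 * disjoint) / n + c3 * w_bar
```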

The full algorithm outlined by Gaier, Asteroth, and Mouret proceeds in three phases: an initial population is evaluated directly on the task to seed the surrogate model; subsequent candidates are then scored primarily through the surrogate's inexpensive predictions; and, periodically, a small number of individuals are evaluated directly, with the results folded back into the model to keep its predictions accurate.
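A schematic version of that loop, reusing the two helpers sketched above (an illustration of the general recipe, not the authors' actual code):

```python
def surrogate_assisted_search(population, evolve, true_fitness,
                              generations=100, eval_interval=5, n_direct=4):
    """Schematic surrogate-assisted evolutionary loop.
    `evolve` turns a fitness-scored population into the next generation;
    `true_fitness` is the expensive, exact evaluation."""
    # Phase 1: seed the archive with direct evaluations.
    archive = [(ind, true_fitness(ind)) for ind in population]
    for gen in range(1, generations + 1):
        # Phase 2: score candidates cheaply with the surrogate.
        scored = [(ind, surrogate_fitness(ind, archive, compatibility_distance))
                  for ind in population]
        population = evolve(scored)
        # Phase 3: periodically pay for a few exact evaluations and fold
        # the results back into the archive to keep the model honest.
        if gen % eval_interval == 0:
            archive.extend((ind, true_fitness(ind))
                           for ind in population[:n_direct])
    return max(archive, key=lambda pair: pair[1])[0]
```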

The authors find that their surrogate-assisted variant of NEAT (SA-NEAT) achieves a 5.7x speedup with no decrease in performance on the popular cart-pole swing-up task from reinforcement learning. Though the ability of evolutionary algorithms to perform massively parallel search was already known to be a valuable tool for quickly solving reinforcement learning problems, this work represents an important step toward making evolutionary algorithms produce solutions not only more quickly, but at a lower computational cost.

In fact, surrogate models have grown so popular that they are also being used in related fields such as neural architecture search, which leverages machine learning, though not exclusively evolutionary algorithms, to automate the design of neural networks3. Moving forward, we expect surrogate models, and the broader effort to reduce the computational cost of evolutionary algorithms, to play a central role in the widespread adoption and application of these algorithms.

Complex Search Spaces

While the evaluation of candidate solutions is expensive, we can also view the computational cost of evolutionary algorithms through a different lens: search space complexity. For many problems the number of possible solutions is enormous, and an evolutionary algorithm must navigate this space. If it cannot identify an adequate solution efficiently, then it is of little use in practice. For instance, the NEAT algorithm, which simultaneously optimizes the parameters and the architecture of a neural network, searches over the space of all possible neural networks, an intractable task in general. However, its philosophy of “start simple and complexify” often results in the early identification of viable, if still suboptimal, solutions, as sketched below.
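To give a flavor of what “complexify” means in practice, here is a rough sketch of NEAT's two structural mutations, using our own simplified genome encoding rather than NEAT's actual implementation:

```python
import random

def add_connection(genome, n_nodes, innovation):
    """Structural mutation: connect two nodes with a new weighted link."""
    src, dst = random.sample(range(n_nodes), 2)
    genome.append({"in": src, "out": dst, "weight": random.gauss(0.0, 1.0),
                   "enabled": True, "innov": innovation})

def add_node(genome, n_nodes, innovation):
    """Structural mutation: split an existing connection with a new hidden
    node. NEAT disables the old connection and bridges it with two new
    ones, so the network grows one small step at a time."""
    old = random.choice([g for g in genome if g["enabled"]])
    old["enabled"] = False
    new_node = n_nodes  # id of the freshly added hidden node
    genome.append({"in": old["in"], "out": new_node, "weight": 1.0,
                   "enabled": True, "innov": innovation})
    genome.append({"in": new_node, "out": old["out"], "weight": old["weight"],
                   "enabled": True, "innov": innovation + 1})
    return n_nodes + 1
```

Starting from a minimal, fully connected input-output network, repeated application of these two mutations is what lets NEAT find simple viable solutions early and add complexity only as needed.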

A visualization of a typical search space for a neural architecture search problem contains an enormous number of possible solutions. An algorithm such as NEAT will often explore only a very small subset of them before arriving at a local optimum: a solution that is better than its neighboring, similar solutions, but not the best possible solution. For simple problems, a local optimum may be an acceptable answer. For more complex problems, however, local optima can offer very poor performance. And because assessing new solutions is expensive, even random search may fail to explore the space adequately. A better approach is needed.
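The notion of a local optimum is easy to demonstrate with a toy example, unrelated to any specific GECCO work: a greedy hill climber on a two-peak function stops at the lower peak whenever it starts on the wrong side.

```python
import math

def fitness(x):
    # Two peaks: a local optimum near x = -1, a taller global one near x = 2.
    return math.exp(-(x + 1) ** 2) + 2 * math.exp(-(x - 2) ** 2)

def hill_climb(x, step=0.05, iterations=500):
    """Greedy local search: move only while a neighbor improves fitness."""
    for _ in range(iterations):
        best = max((x - step, x + step), key=fitness)
        if fitness(best) <= fitness(x):
            break  # no neighbor is better: an optimum, possibly only local
        x = best
    return x

print(round(hill_climb(-3.0), 2))  # stalls near -1.0, the local optimum
print(round(hill_climb(1.0), 2))   # reaches the global optimum near 2.0
```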

In their tutorial at GECCO 2018, Gabriela Ochoa and Nadarajen Veerapen introduce their vision for combating complex search spaces: search maps4. They have developed powerful tools for visualizing fitness landscapes, the relative goodness of the candidate solutions an evolutionary algorithm explores at runtime. Their goal is to create a library of these visualizations for different types of problems, which would allow researchers to identify and exploit characteristics of these landscapes to improve the performance of evolutionary algorithms.

Local Optima Networks (LON): Above are two visualizations of fitness landscapes explored by an evolutionary algorithm on the Traffic Collision Avoidance System (TCAS) optimization problem, left, and the triangle classification problem, right. TCAS involves controlling the cruising altitude of an aircraft given different sensory inputs, while the triangle problem classifies triangles as scalene, isosceles, equilateral, or not a triangle. Grey lines represent evolutionary changes that produce a new solution as good as the existing one. Black lines represent changes that produce a superior solution. Red lines represent changes that move from one optimal solution to another. The evolutionary algorithm starts at the “top” of these 3D visualizations and gradually works its way down toward the optimal solutions.

Interestingly, these two landscapes are very different. In both cases, it is far more common for the evolutionary algorithm to explore new solutions that are no better than the current ones. In the triangle landscape on the right, however, we notice a distinctive phenomenon: as the algorithm moves toward an optimal solution, black and grey lines become scarce, meaning that it becomes very difficult to produce solutions that improve on the current solution set.

Ochoa and Veerapen did not offer any specific guidance on how to exploit such trends in their LON visualizations, though they recognize that this is the long-term goal of their project. For example, many evolutionary algorithms operate by introducing random, incremental changes, or mutations, to the current set of candidate solutions. If we were to observe a slowdown in the progress of an evolutionary algorithm, as illustrated by the increasing scarcity of black and grey lines above, then it might be beneficial to introduce radical changes, instead of incremental ones, in order to escape the current local optimum and stimulate new progress. Alternatively, we might consider restarting with a new set of candidate solutions.
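As a hypothetical illustration of how such runtime feedback could be wired in (not something Ochoa and Veerapen prescribe), one could track how long the best fitness has gone without improving and ramp up the mutation strength once progress stalls:

```python
def mutation_scale(best_fitness_history, patience=10, base=0.1, boost=10.0):
    """Choose a mutation strength from recent progress: if the best fitness
    has not improved in `patience` generations, switch from incremental to
    radical mutations to try to escape the current local optimum."""
    if len(best_fitness_history) <= patience:
        return base
    recent = max(best_fitness_history[-patience:])
    earlier = max(best_fitness_history[:-patience])
    return base * boost if recent <= earlier else base
```

A restart policy could be triggered the same way, by swapping the boosted mutation scale for a freshly initialized population.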

At any rate, the ability to inform the behavior of an evolutionary algorithm at runtime to encourage progress and avoid local optima would be another important step towards more efficient and practical evolutionary search.

Where does SparkCognition fit in?

In our next post, we will take a look at the latest research presented by the SparkCognition team at GECCO 2018. Our work tackles computational cost from both directions, reducing the evaluation time of individual solutions while also avoiding local optima by simplifying the search spaces our evolutionary algorithms traverse. The result is a new approach for applying evolutionary algorithms to real-world, multiclass classification problems.

References

1 Gaier, Adam, Alexander Asteroth, and Jean-Baptiste Mouret. “Data-efficient Neuroevolution with Kernel-Based Surrogate Models.” GECCO 2018.
2 Stanley, Kenneth O., and Risto Miikkulainen. “Evolving Neural Networks through Augmenting Topologies.” Evolutionary Computation 10.2 (2002): 99-127.
3 Liu, Chenxi, et al. “Progressive Neural Architecture Search.” arXiv preprint arXiv:1712.00559 (2017).
4 Ochoa, Gabriela, and Nadarajen Veerapen. “Search Maps: Visualising and Exploiting the Global Structure of Computational Search Spaces.” GECCO 2018 Tutorial.
