
Introduction to Evolutionary Algorithms: Genetic Algorithm, Neuro-Evolution


Overview:

I discussed Artificial Neural Networks and machine learning paradigms in the previous article. In this article, I briefly discuss Evolutionary Algorithm techniques, in particular the Genetic Algorithm, and the Evolutionary Artificial Neural Network techniques that provide effective solutions by combining Neural Networks and Evolutionary Algorithms. Examples of the latter include the Neuro-Evolution of Augmenting Topologies (NEAT) and real-time NEAT algorithms.

What are Evolutionary Algorithms?

An Evolutionary Algorithm is a collection of techniques inspired by the way biological life evolves through the processes of reproduction, mutation, recombination (crossover), natural selection, and survival of the fittest. Evolutionary Algorithms comprise a variety of techniques, but the principle behind them is the same, and they are often used for search and optimisation. The general sub-areas of Evolutionary Algorithms include:

  • Genetic Algorithm [1]
  • Genetic Programming [2]
  • Evolutionary Programming [3]
  • Evolutionary Strategies [4, 5, 6]
  • Learning Classifier Systems [7]
  • Neuro-Evolution Algorithms [8]

Genetic Algorithm

The Genetic Algorithm is the most popular type of Evolutionary Algorithm. It is a search heuristic inspired by Charles Darwin’s theory of natural selection, and it generates new solutions to a problem by mimicking the natural selection process. This process involves four main steps:

  1. Selection: Individuals with better fitness are selected for reproduction.
  2. Crossover: The selected individuals mate and produce offspring.
  3. Mutation: The offspring undergo random mutations.
  4. Repeat: The process is repeated until a satisfactory solution is found.

When using a Genetic Algorithm, it is necessary to find a way of encoding the problem so that any potential solution can be represented as a ‘digital’ chromosome. Traditionally, chromosomes were represented as strings of numbers in binary format [9].

Once the potential solution is encoded as a chromosome, a population of chromosomes is randomly created and “evolved over time by breeding the fittest individuals and adding a little mutation here and there” [10]. The main forces that form the basis of Evolutionary Algorithms include the crossover, mutation, and selection processes. The crossover and mutation processes are responsible for adding novelty to the individuals as they evolve.
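As a minimal sketch of this setup (assuming a toy problem, often called OneMax, where fitness is simply the number of 1-bits in the chromosome; the genome length and population size are arbitrary choices):

```python
import random

CHROMOSOME_LENGTH = 16   # arbitrary genome size for this toy problem
POPULATION_SIZE = 20     # arbitrary population size

def random_chromosome():
    """A chromosome is a fixed-length string of binary digits."""
    return [random.randint(0, 1) for _ in range(CHROMOSOME_LENGTH)]

def fitness(chromosome):
    """Toy fitness function (OneMax): count the 1-bits."""
    return sum(chromosome)

# A population of chromosomes is created at random.
population = [random_chromosome() for _ in range(POPULATION_SIZE)]
```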

The crossover process creates new offspring by recombining two individuals at a random point or points, while the mutation process creates variation by randomly altering genes within a single individual.
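Continuing the sketch above, single-point crossover and bit-flip mutation can be written as follows (the single cut point and the 1% mutation rate are illustrative choices, not prescriptions):

```python
def crossover(parent_a, parent_b):
    """Single-point crossover: recombine two parents at a random cut point."""
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(chromosome, rate=0.01):
    """Point mutation: flip each gene with a small probability."""
    return [1 - gene if random.random() < rate else gene
            for gene in chromosome]
```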

The selection process is responsible for checking the quality of evolved individuals. It is based on the idea of choosing the fittest individuals, those that survive and behave better than the rest of the population under environmental pressure. The fitness of individuals is calculated and measured using a fitness function. Many aspects of Evolutionary Algorithms are stochastic, including the selection of individuals: fitter individuals have a higher chance of being selected to seed the next generation.
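This kind of fitness-proportionate selection is often implemented as roulette-wheel selection, where the probability of being chosen is proportional to fitness. Tying the sketch together (the generation limit is an arbitrary choice):

```python
def select(population):
    """Roulette-wheel selection: fitter individuals are more likely to be picked."""
    weights = [fitness(c) for c in population]
    return random.choices(population, weights=weights, k=1)[0]

for generation in range(100):
    # Breed a new generation from selected parents, with crossover and mutation.
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POPULATION_SIZE)]
    best = max(population, key=fitness)
    if fitness(best) == CHROMOSOME_LENGTH:  # a perfect solution was found
        break
```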

A Genetic Algorithm breeds chromosomes from the fittest individuals, but it is not guaranteed to find the best solution, or indeed any solution at all. Generally, if it is utilised correctly, evolved over enough generations, and given a bit of luck, a Genetic Algorithm will converge upon a good solution. Often, a Genetic Algorithm can be used to evolve agents’ behaviour off-line, or in the intervals when there is a break in the process, but it cannot easily be run in real-time. The next section discusses a number of encoding techniques for representing potential solutions as chromosomes.

Encoding Techniques

In order to find a potential solution using Evolutionary Algorithms, it is necessary to find a way to encode the networks, assign fitness scores, and perform the processes of mutation, crossover, and selection. Crossover operators can sometimes be problematic. One of the main problems caused by this type of operator in Neuro-Evolution is the Competing Conventions Problem [10, 11]. It refers to the existence of more than one way to encode an Artificial Neural Network that exhibits identical functionality; in other words, different, incompatible encodings may exist for the same solution.

When the crossover operator is applied to genomes that represent the same solution but do not share the same encoding, the resulting genome is likely to be damaged. Much research has been conducted on designing encoding schemes that avoid this problem. The encoding techniques for Evolutionary Artificial Neural Networks are generally divided into two groups: Direct Encoding and Indirect Encoding.

Direct Encoding

In direct encoding techniques, the exact structure of the Artificial Neural Network is encoded directly into the genotype [11]. A genotype is a representation of an ANN. In the context of Evolutionary Artificial Neural Networks, the structure of the ANN itself is referred to as the phenotype. In direct encoding techniques, the genotype corresponds one-to-one to the phenotype. The encoded data contains information about the number of neurons, the weights, and the types of connections. Categories of direct encoding include Binary Encoding and Graph Encoding.

Binary Encoding

In Binary Encoding, the structure of networks is encoded using a string of binary digits. The GENITOR algorithm is an example of Binary Encoding that encodes each connection gene as a 9-bit string [13]: the first bit indicates whether the connection is enabled, and the remaining bits represent the weight of the connection. The drawback of this algorithm is that an encoding must be specified for every potential connection, which results in a maximal network topology.
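A rough illustration of decoding such a gene (the signed-weight mapping below is an assumption for illustration, not the exact GENITOR format):

```python
def decode_connection_gene(bits):
    """Decode a 9-bit connection gene: one enable bit plus eight weight bits.
    Mapping the 8 weight bits onto a signed integer is an illustrative choice."""
    assert len(bits) == 9
    enabled = bits[0] == '1'
    weight = int(bits[1:], 2) - 128  # map 0..255 onto -128..127
    return enabled, weight

print(decode_connection_gene('110000001'))  # -> (True, 1)
```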

Another example of Binary Encoding is the Binary Matrix algorithm, which represents neural network connectivity using a matrix of binary digits [14]. The rows or columns of the matrix can be used as the encoding of genomes. In Matrix Encoding, performance declines as networks grow, because the size of the matrix increases in proportion to the square of the number of neurons. The competing conventions problem is not addressed in this encoding scheme, and the crossover operator is therefore likely to produce less fit genomes.
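A minimal sketch of the idea for a hypothetical three-neuron network (flattening the rows into a linear genome is one illustrative way to expose the matrix to crossover and mutation):

```python
# Connectivity matrix: entry [i][j] == 1 means a connection from neuron i to j.
matrix = [
    [0, 1, 1],
    [0, 0, 1],
    [0, 0, 0],
]

# Flatten the rows into a linear binary genome.
genome = [bit for row in matrix for bit in row]
print(genome)  # [0, 1, 1, 0, 0, 1, 0, 0, 0]
# Note: the genome length grows with the square of the number of neurons.
```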

Graph Encoding

Graph Encoding algorithms represent the structure of an Artificial Neural Network using graph nodes and connections [11]. These algorithms include Path-based Encoding and Node-based Encoding. The Path-based Encoding algorithm encodes the paths from each input neuron to each output neuron. The Node-based Encoding approach, also referred to as Genetic Encoding [15], uses genes as data structures that hold the required information about each neuron and its connections. Node-based Encoding is used in Neuro-Evolution of Augmenting Topologies (NEAT) to describe the neural network’s topology and connection weights. This encoding provides a way to avoid the competing conventions problem by keeping a record of newly created nodes and links, which is discussed further in NEAT.
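In rough outline, a node-based genome might look like the following (the field names are illustrative, loosely modelled on the NEAT genome described by Stanley [11], not a complete implementation):

```python
from dataclasses import dataclass

@dataclass
class NodeGene:
    node_id: int
    node_type: str    # 'input', 'hidden', or 'output'

@dataclass
class ConnectionGene:
    in_node: int
    out_node: int
    weight: float
    enabled: bool
    innovation: int   # historical marker used to line up genes during crossover

# A genome is a list of node genes plus a list of connection genes.
genome = (
    [NodeGene(1, 'input'), NodeGene(2, 'output')],
    [ConnectionGene(in_node=1, out_node=2, weight=0.7, enabled=True, innovation=1)],
)
```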

Indirect Encoding

In contrast to direct encodings, which specify every neuron and connection of the phenotype directly in the genotype, indirect encodings construct the structure of the phenotype indirectly through a set of rules and instructions. “These rules often specify many connections simultaneously and may even be applied recursively” [11]. Examples of indirect encoding include Bi-Dimensional Growth and Grammar-Based Encoding.
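As a toy sketch of the idea, loosely in the spirit of grammar-based encoding (the rewrite rules and symbols below are invented for illustration), a small genotype of rules expands recursively into a much larger connectivity pattern:

```python
# Each symbol rewrites into a 2x2 block of symbols; '0' and '1' are terminals.
rules = {
    'S': [['A', 'B'], ['B', 'A']],
    'A': [['1', '0'], ['0', '1']],
    'B': [['0', '0'], ['1', '1']],
}

def expand(symbol):
    """Recursively apply rewrite rules until only 0/1 terminals remain."""
    if symbol in '01':
        return [[int(symbol)]]
    block = [[expand(s) for s in row] for row in rules[symbol]]
    # Stitch the 2x2 grid of sub-matrices into one matrix.
    result = []
    for row_of_blocks in block:
        for r in range(len(row_of_blocks[0])):
            result.append([v for sub in row_of_blocks for v in sub[r]])
    return result

for row in expand('S'):  # a 4x4 connectivity matrix from a tiny genotype
    print(row)
```

Here a genotype of three rules generates a 4x4 connectivity matrix; applying the rules over more levels would generate exponentially larger structures from the same compact genotype.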

Genetic Programming

Genetic Programming is a branch of artificial intelligence that uses methods inspired by evolutionary biology to generate computer programs that perform well on a particular task. Genetic Programming is based on the idea that solutions to problems can be found through natural selection: the best solutions are those that survive and reproduce, while the worst solutions are those that die off. In Genetic Programming, solutions are represented as computer programs, and the process of natural selection is simulated by making small changes to these programs and then testing how well they perform on the task at hand. The best-performing solutions are then used to create the next generation of solutions, and the process is repeated until a satisfactory solution is found.
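Most Genetic Programming systems represent programs as expression trees rather than flat strings. A minimal sketch of such a tree-based representation (the operator set and the example program are illustrative):

```python
import operator

# A program is an expression tree: ('op', left, right) or a terminal.
# Terminals are either the input variable 'x' or numeric constants.
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def evaluate(tree, x):
    """Recursively evaluate an expression tree for a given input x."""
    if tree == 'x':
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

# The program x * (x + 1), encoded as a tree:
program = ('*', 'x', ('+', 'x', 1))
print(evaluate(program, 3))  # -> 12
```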

Differences between Genetic Programming and Genetic Algorithm

The main difference between genetic algorithms and genetic programming is that genetic algorithms are used to optimise a given function by searching for the best values of a set of parameters, while genetic programming is used to generate a computer program that performs well on a given task.

There are several other key differences between genetic programming and genetic algorithms:

  1. Genetic programming generally uses a higher-level representation than genetic algorithms. This higher-level representation is usually a tree-based structure, which can represent complex relationships between variables.
  2. Genetic programming often uses a more sophisticated selection process than genetic algorithms. This selection process is designed to preserve the diversity of the population, which is essential for the successful evolution of solutions.
  3. Genetic programming typically employs a more sophisticated crossover operator than genetic algorithms. This crossover operator is designed to create new solutions that are more likely to be viable than solutions created by randomly combining existing solutions.
  4. Genetic programming often uses a mutation operator that is specifically designed to maintain the diversity of the population. This operator is essential for the long-term success of the evolutionary process.
  5. Genetic programming typically uses a fitness function that is more complex than the fitness function used in genetic algorithms. This fitness function is designed to evaluate the solutions in the population in terms of their ability to solve the problem at hand.

Evolutionary Programming and Evolutionary Strategies

Evolutionary programming is a method of creating computer programs using a process of natural selection inspired by biological evolution. Evolutionary strategies are a family of optimisation algorithms that are also used to solve problems by simulating natural selection. These algorithms are based on the idea of maintaining a population of potential solutions (called “organisms”) and subjecting them to a process of natural selection, in which the fittest solutions survive and reproduce while the less fit solutions are culled. Over time, this process leads to the emergence of increasingly fit solutions.

Differences between Evolutionary Strategies and Evolutionary Programming

There are several key differences between evolutionary strategies and evolutionary programming. Perhaps the most important is that evolutionary programming typically uses a population of computer programs, while evolutionary strategies typically use a population of numbers (or “vectors”). This difference arises because the problem being solved by evolutionary programming is typically one that can be expressed as a set of rules, while the problem solved by evolutionary strategies is typically one that can be expressed as a set of numbers (or “parameters”).

Another important difference concerns the variation operators. Classical evolutionary programming relies solely on mutation: each parent produces offspring asexually, with random changes introducing new characteristics, and no recombination takes place between individuals.

Evolutionary strategies, by contrast, also treat mutation (typically Gaussian perturbation of the vector components) as the primary operator, but commonly add recombination, in which components of several parent vectors are combined to create new solutions.
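A minimal sketch of the vector-based approach, here a simple (1+1)-style evolution strategy with Gaussian mutation minimising a toy objective (the step size and iteration budget are arbitrary choices):

```python
import random

def objective(v):
    """Toy objective: the sphere function, minimised at the origin."""
    return sum(x * x for x in v)

parent = [random.uniform(-5, 5) for _ in range(3)]

for _ in range(1000):
    # Asexual reproduction: mutate the parent vector with Gaussian noise.
    child = [x + random.gauss(0, 0.1) for x in parent]
    # (1+1) selection: the child replaces the parent only if it is fitter.
    if objective(child) < objective(parent):
        parent = child

print(parent, objective(parent))
```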

Evolutionary Artificial Neural Networks

The Evolutionary Artificial Neural Network, also referred to as Neuro-Evolution, is a category of machine learning techniques that uses Evolutionary Algorithms to train Artificial Neural Networks [17]. More effective solutions can be obtained when Neural Network and Evolutionary Algorithm techniques are combined: an Artificial Neural Network provides an architecture for learning, whereas Evolutionary Algorithms are good at search and optimisation tasks. Utilising a Neural Network effectively requires designing the optimal architecture or topology for the network. Training a Neural Network involves obtaining the network’s parameters that minimise the error value. These parameters include the weights, topology, thresholds, transfer functions, number of hidden layers, and number of neurons in each layer.

The process of tweaking these parameters is time-consuming and often comes down to trial and error. The aim of such a process is to find a network that is simple enough to learn quickly and able to generalise. Evolutionary Algorithms can be used to tune these parameters and find the best solutions. In the Neural Network classification scheme, Evolutionary Artificial Neural Networks usually fall within the category of reward-based learning approaches. They are powerful techniques, particularly in situations where the state of the world is not fully known and a good theory of the domain does not exist. Good examples of such techniques include Neuro-Evolution of Augmenting Topologies (NEAT) [16] and its extended versions, which are introduced briefly in the following sections and discussed in further detail in the next article: NEAT.
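As a rough sketch of the simplest form of Neuro-Evolution, evolving only the weights of a small fixed-topology network (the XOR task, the 2-2-1 architecture, and all hyperparameters here are illustrative choices):

```python
import math
import random

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    """Fixed 2-2-1 network; the genome w holds its 9 weights and biases."""
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    """Negative squared error over the XOR cases (higher is better)."""
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

population = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(50)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # truncation selection of the fittest genomes
    # Breed the next generation by mutating copies of selected parents.
    population = [[g + random.gauss(0, 0.2) for g in random.choice(parents)]
                  for _ in range(50)]

print(fitness(max(population, key=fitness)))
```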

Neuro-Evolution of Augmenting Topologies (The NEAT Algorithm)

Neuro-Evolution of Augmenting Topologies (NEAT) is a novel Neuro-Evolution algorithm, developed by Stanley and Miikkulainen [16], which automatically “evolves the topology for a Neural Network by adding links and hidden nodes through mutation to fit the complexity of a problem”. Stanley et al. also developed the real-time Neuro-Evolution of Augmenting Topologies (rtNEAT) algorithm as an extension of the NEAT technique to enable the modification of evolving agents in real-time. They presented real-time NEAT in a machine learning shooting game called NERO.

NERO stands for Neuro-Evolving Robotic Operatives. It is a machine learning combat game developed at the Digital Media Collaboratory at the University of Texas (Austin) [18]. In the game, teams of agents are trained to improve their combat skills including shooting targets, avoiding enemies, and navigating around obstacles. After training, the forces are pitted against each other in a hands-off, AI-only competition until one team has the highest number of agents left standing. The training of agents is performed via a set of exercises that users define and run prior to the competition.

The figure below displays snapshots of the NERO environment where agents have been trained to adapt their behaviour to two different situations [15]. In the left figure below, agents have learnt to avoid the enemy fire by running safely around it. The arrow indicates the direction of enemy fire. The right figure below shows how agents have learnt to navigate a maze, starting from the top of the maze and reaching the enemy at the bottom.

[Figure: (a) agents evolving in the NERO environment to avoid enemy fire; (b) agents evolving in the NERO environment to navigate the maze.]

NERO allows players to interact with the game by designing exercises and creating customised maps or scenarios in order to train agents [20]. Players can place different items in the field including walls, flags, and enemy agents. The skills and behaviours of agents can be saved and reloaded later to continue the same training in different scenarios. The OpenNERO project is an initiative to create an open-source game platform based on NERO for Artificial Intelligence research and education.

Each agent in NERO is controlled by a Neural Network that can be trained using different techniques. NEAT is used to train Neural Network-based agents within NERO. Stanley et al. [18, 19] demonstrated that by using real-time NEAT the behaviour of Neural Network-controlled agents in NERO can be evolved efficiently and robustly in real-time.

D’Silva et al. [21] introduced a technique called Milestoning to further improve the real-time NEAT method. They showed that this technique effectively retained and preserved learned knowledge, in this case the previously generated and useful behaviours of an agent, while new behaviours were being created and evolved. Yong et al. [22] further enhanced the real-time NEAT method by incorporating advice from users to guide the evolutionary process effectively. This method improved the learning algorithm and the performance of the agents.

Whiteson et al. [23] presented a novel Neuro-Evolution technique called Feature Selective Neuro-Evolution of Augmenting Topologies (FS-NEAT), an extension of the NEAT method. FS-NEAT evolves the topology and weights of the ANN and automates the feature selection process by determining an appropriate set of inputs for the Neural Network. The authors presented experiments comparing FS-NEAT to the regular NEAT method in the Robot Auto Racing Simulator (RARS).

Robot Auto Racing Simulator

The Robot Auto Racing Simulator (RARS) was developed by M. E. Timin in 1995 and has become a test-bed and competition platform for programmers and practitioners of Artificial Intelligence. The results demonstrated that FS-NEAT performed better than regular NEAT when some of the available inputs were irrelevant or redundant; the number of inputs and the size of the Neural Network were also smaller when using FS-NEAT.

A special version of NEAT called Content-Generating Neuro-Evolution of Augmenting Topologies (cgNEAT) is used to generate game content automatically in Galactic Arms Race, a multiplayer online shooting game designed as a research experiment [24]. Players pilot spaceships and fight enemies while the game content evolves constantly based on players’ preferences, creating unique content including new variants of weapons. This research on the NEAT evolutionary algorithm and its extended versions has shown that it is possible to evolve game content and agents’ behaviour automatically at run-time, which reduces the cost of content creation and results in a more interesting experience for players.

Regular NEAT and real-time NEAT are used in this research to endow explorer agents with learning abilities. Further information on these algorithms and how they were used in this research is provided in the next article on Evolutionary Artificial Neural Networks.

Conclusion

This article (and the previous one) described some machine learning techniques, including Artificial Neural Networks, Evolutionary Algorithms, and the combination of both in Evolutionary Artificial Neural Networks. These techniques attempt to mimic how biological systems learn and evolve.

Evolutionary Algorithms are a collection of techniques inspired by the way evolution occurs in biological life, including the Genetic Algorithm, Genetic Programming, Evolutionary Programming, Evolutionary Strategies, and Neuro-Evolution Algorithms. These techniques are often used for search and optimisation. The principle behind them is similar: potential solutions to a problem are encoded as ‘digital’ chromosomes, and a population of chromosomes is evolved through the processes of mutation, crossover, and selection of the fittest individuals.

The Evolutionary Artificial Neural Network techniques combine both Neural Network and Evolutionary Algorithm principles to create more effective solutions. Neural Networks are good techniques when learning capabilities are required. Designing a Neural Network is a process of tweaking a number of parameters, including the number of hidden layers, the weights of the connections between neurons, and the number of neurons in each layer. An Evolutionary Algorithm such as the Genetic Algorithm can be used to derive the best Neural Network structure for a particular situation: a structure that is simple enough to learn quickly and also capable of generalising.

Examples of Evolutionary Artificial Neural Network techniques include NEAT and real-time NEAT. These techniques are used in this research to bring learning capability to agents and are discussed further in the next article: Evolutionary Artificial Neural Network.

References

  1. J. H. Holland. Outline for a logical theory of adaptive systems. Journal of the ACM, 9:297–314, 1962.
  2. J. R. Koza. Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge, MA, 1992.
  3. L. J. Fogel, A. J. Owens, and M. J. Walsh. Artificial Intelligence through Simulated Evolution. John Wiley, New York, 1966.
  4. H.-G. Beyer. The Theory of Evolution Strategies. Springer, Berlin, 2001.
  5. I. Rechenberg. Evolutionary strategy. In J. M. Zurada, R. Marks II, and C. Robinson, editors, Computational Intelligence: Imitating Life, pages 147–159. IEEE Press, Los Alamitos, 1994.
  6. H.-P. Schwefel. Numerical Optimization of Computer Models. John Wiley, Chichester, UK, 1981.
  7. J. H. Holland. Escaping brittleness: The possibility of general-purpose learning algorithms applied to parallel rule-based systems. In R. S. Michalski, J. G. Carbonell, and T. M. Mitchell, editors, Machine Learning, pages 593–624. Morgan Kaufmann, Los Altos, 1986.
  8. S. Nolfi, J. L. Elman, and D. Parisi. Learning and evolution in neural networks. Technical report, Technical Report 9019, Center for Research in Language, University of California, San Diego, 1990.
  9. J. H. Holland. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control and Artificial Intelligence. MIT Press, Cambridge, 1975.
  10. M. Buckland. AI Techniques for Game Playing. Premier Press, Cincinnati, OH, USA, 2002.
  11. K. O. Stanley. Efficient Evolution of Neural Networks through Complexification. PhD thesis, Artificial Intelligence Laboratory, The University of Texas at Austin, 2004.
  12. D. Whitley. Genetic algorithms and neural networks. In J. Periaux, M. Galan, and P. Cuesta, editors, Genetic Algorithms in Engineering and Computer Science, pages 203–216. John Wiley and Sons, 1995.
  13. L. D. Whitley. The GENITOR algorithm and selection pressure: Why rank-based allocation of reproductive trials is best. In Proceedings of the 3rd International Conference on Genetic Algorithms, pages 116–123, San Francisco, CA, USA, 1989. Morgan Kaufmann Publishers Inc.
  14. A. Siddiqi and S. M. Lucas. A Comparison of Matrix Rewriting versus Direct Encoding for Evolving Neural Networks. In 1998 IEEE International Conference on Evolutionary Computation, pages 392–397. IEEE Press, 1998.
  15. K. O. Stanley, B. D. Bryant, and R. Miikkulainen. Real-time neuroevolution in the NERO video game. IEEE Transactions on Evolutionary Computation, 9:653–668, 2005.
  16. K. O. Stanley and R. Miikkulainen. Evolving neural networks through augmenting topologies. Technical Report AI2001-290, Department of Computer Sciences, The University of Texas at Austin, 2002.
  17. X. Yao. Evolving artificial neural networks. Proceedings of the IEEE, 87:1423–1447, 1999.
  18. K. O. Stanley, B. D. Bryant, I. Karpov, and R. Miikkulainen. Real-time evolution of neural networks in the NERO video game. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI-2006, Boston, MA), pages 1671–1674. Menlo Park, CA: AAAI Press, 2006.
  19. K. O. Stanley, B. D. Bryant, and R. Miikkulainen. Evolving neural network agents in the NERO video game. In Proceedings of the IEEE 2005 Symposium on Computational Intelligence and Games (CIG’05). Piscataway, NJ: IEEE, 2005.
  20. R. Miikkulainen, B. D. Bryant, R. Cornelius, I. V. Karpov, K. O. Stanley, and C. H. Yong. Computational intelligence in games. In G. Y. Yen and D. B. Fogel, editors, Computational Intelligence: Principles and Practice. IEEE Computational Intelligence Society, Piscataway, NJ, 2006.
  21. T. D’Silva, R. Janik, M. Chrien, K. O. Stanley, and R. Miikkulainen. Retaining learned behavior during real-time neuroevolution. Artificial Intelligence and Interactive Digital Entertainment, 2005.
  22. C. H. Yong, K. O. Stanley, R. Miikkulainen, and I. V. Karpov. Incorporating advice into evolution of neural networks. In J. Laird and J. Schaeffer, editors, Proceedings of the Second Artificial Intelligence and Interactive Digital Entertainment Conference, pages 98–104. AAAI Press, Menlo Park, 2006.
  23. S. Whiteson, P. Stone, K. O. Stanley, R. Miikkulainen, and N. Kohl. Automatic feature selection in neuroevolution. In Proceedings of the 2005 Genetic and Evolutionary Computation Conference (GECCO ’05), pages 1225–1232. ACM Press, Washington, D.C., 2005.
  24. E. J. Hastings, R. Guha, and K. O. Stanley. Evolving content in the Galactic Arms Race video game. In Proceedings of the IEEE Symposium on Computational Intelligence and Games (CIG’09). IEEE, Piscataway, NJ, 2009.
