NeuroEvolution of Augmenting Topologies
From Wikipedia, the free encyclopedia
NeuroEvolution of Augmenting Topologies (NEAT) is a genetic algorithm for evolving artificial neural networks. It was developed by Ken Stanley at the University of Texas at Austin and published under the GPL; the original implementation integrates with Guile, a GNU Scheme interpreter, and is considered the base reference for implementations of the NEAT algorithm. The algorithm is notable in that it evolves both network weights and network structure, and it efficiently balances the fitness and diversity of evolved solutions. NEAT rests on three key ideas: tracking genes with historical markings to allow easy crossover between different topologies, protecting innovation through speciation, and starting from minimal structure and complexifying as generations pass. NEAT has been shown to perform faster than other neuroevolutionary techniques, and it also outperforms many reinforcement learning (RL) methods.
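The first of these ideas, historical markings, can be sketched in a few lines: each connection gene carries a globally assigned innovation number, and crossover aligns two genomes of different topologies by matching those numbers. The genome representation below (a dict mapping innovation number to weight) is an illustrative simplification, not Stanley's actual data structure.

```python
import random

def crossover(fitter, other):
    """Sketch of NEAT crossover using historical markings.

    Matching genes (same innovation number in both parents) are inherited
    randomly from either parent; disjoint and excess genes are taken from
    the fitter parent. Genomes here are dicts: innovation number -> weight.
    """
    child = {}
    for innov, gene in fitter.items():
        if innov in other:
            child[innov] = random.choice([gene, other[innov]])
        else:
            child[innov] = gene  # disjoint/excess gene kept from fitter parent
    return child

# Two parents with different topologies, aligned by innovation number:
parent_a = {1: 0.5, 2: -0.3, 4: 0.9}  # assumed to be the fitter parent
parent_b = {1: 0.1, 3: 0.7}
child = crossover(parent_a, parent_b)
```

Because the alignment is done by innovation number rather than by position, no topological analysis is needed to recombine two structurally different networks.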
Complexification
In conventional neuroevolution, the network topology is fixed by the developer, and the genetic algorithm is used only to modify the weights in the network. The complexity of such a network stays constant throughout the evolution process, as the number of nodes and the connections between them remain fixed. The NEAT approach instead begins with a perceptron-like structure with no hidden neurons: a simple feed-forward network of input neurons connected directly to output neurons, representing the input and output signals. As evolution progresses, the topology of the network may be augmented either by adding a neuron along an existing connection or by adding a new connection between previously unconnected neurons.
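The two structural mutations described above can be sketched as follows. The genome representation, field names, and global innovation counter are illustrative assumptions, not the reference implementation's data structures.

```python
import itertools
import random

# Global counter handing out historical markings (innovation numbers).
innovation = itertools.count(1)

def add_node(genome, conn):
    """Split an existing connection by inserting a new neuron along it.

    The old connection is disabled; the incoming half gets weight 1.0 and
    the outgoing half keeps the old weight, so the network's behavior is
    initially almost unchanged.
    """
    conn["enabled"] = False
    new_node = max(genome["nodes"]) + 1
    genome["nodes"].add(new_node)
    genome["conns"].append({"in": conn["in"], "out": new_node,
                            "weight": 1.0, "enabled": True,
                            "innov": next(innovation)})
    genome["conns"].append({"in": new_node, "out": conn["out"],
                            "weight": conn["weight"], "enabled": True,
                            "innov": next(innovation)})

def add_connection(genome, src, dst):
    """Connect two previously unconnected neurons with a random weight."""
    genome["conns"].append({"in": src, "out": dst,
                            "weight": random.uniform(-1.0, 1.0),
                            "enabled": True, "innov": next(innovation)})

# Start minimal: a single input (node 0) wired directly to an output (node 1).
genome = {"nodes": {0, 1},
          "conns": [{"in": 0, "out": 1, "weight": 0.8, "enabled": True,
                     "innov": next(innovation)}]}
add_node(genome, genome["conns"][0])  # topology is now 0 -> 2 -> 1
```

Starting from this minimal topology and applying only these two mutations means every piece of structure in an evolved network had to earn its place through selection.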
Phased Pruning
An extension of Ken Stanley's NEAT, developed by Colin Green, adds periodic pruning of the network topologies of candidate solutions during the evolution process. This addition addresses the concern that unbounded automated growth would generate unnecessary structure.
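One way such a phased scheme can be sketched is as a toggle between a complexifying phase and a pruning phase, driven by the population's mean genome complexity. The switching rule below is an illustrative assumption, not Green's exact method.

```python
def choose_phase(mean_complexity, threshold, current_phase):
    """Illustrative phase toggle for phased pruning.

    While complexifying, additive mutations are allowed; once the mean
    genome complexity rises above the threshold, the search switches to a
    pruning phase in which only simplifying mutations are applied, until
    complexity falls back under the threshold.
    """
    if current_phase == "complexify" and mean_complexity > threshold:
        return "prune"
    if current_phase == "prune" and mean_complexity <= threshold:
        return "complexify"
    return current_phase
```

Alternating the two phases lets the search keep the benefits of complexification while periodically discarding structure that does not pay for itself in fitness.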
rtNEAT
In 2003 Stanley devised an extension to NEAT that allows evolution to occur in real time rather than through the iteration of generations used by most genetic algorithms. The basic idea is to keep the population under constant evaluation, with a "lifetime" timer on each individual. When a network's timer expires, its current fitness is examined; if it falls near the bottom of the population, the network is discarded and replaced by a new one bred from two high-fitness parents. A timer is set for the new network, and it is placed into the population to participate in the ongoing evaluations.
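The replacement step might be sketched as below. The bottom-quartile cutoff, the field names, and the breed function are illustrative assumptions, not rtNEAT's exact parameters.

```python
def rtneat_tick(population, lifetime, breed):
    """One evaluation tick of an rtNEAT-style loop (illustrative sketch).

    Every individual ages by one tick. When an individual's timer expires,
    it is replaced by offspring of two high-fitness parents if its fitness
    sits near the bottom of the population; otherwise its timer is simply
    reset and it stays in the ongoing evaluation. Mutates the list in place.
    """
    cutoff = sorted(p["fitness"] for p in population)[len(population) // 4]
    best = sorted(population, key=lambda p: p["fitness"])[-2:]
    for i, ind in enumerate(population):
        ind["age"] += 1
        if ind["age"] >= lifetime:
            if ind["fitness"] <= cutoff:  # near the bottom: discard and replace
                population[i] = breed(best[0], best[1])
                population[i]["age"] = 0  # fresh timer for the new network
            else:
                ind["age"] = 0  # good enough to keep; restart its timer

# Toy usage: a population of four, with a placeholder breed function.
population = [{"fitness": f, "age": a}
              for f, a in [(1.0, 9), (2.0, 5), (3.0, 5), (4.0, 5)]]
breed = lambda a, b: {"fitness": (a["fitness"] + b["fitness"]) / 2, "age": 0}
rtneat_tick(population, lifetime=10, breed=breed)
```

Because only one individual is replaced at a time, the population changes smoothly instead of in generational jumps, which is what makes the scheme usable in interactive, real-time settings.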
External links
- Ken Stanley's website
- NEAT Homepage
- Project NERO Website, description of an example application of rtNEAT (a real-time version of NEAT).
- "Evolving Adaptive Neural Networks with and without Adaptive Synapses".