A NEAT Way for Evolving Echo State Networks
1. A NEAT Way for Evolving Echo State Networks
Kyriakos C. Chatzidimitriou, Pericles A. Mitkas
Intelligent Systems and Software Engineering Labgroup, Electrical and Computer Eng. Dept., Aristotle University of Thessaloniki, Thessaloniki, Greece
Informatics and Telematics Institute, Centre for Research and Technology-Hellas
6. Basic Echo State Network
If the output units are linear: y(t) = w u(t) + w' x(t)
A linear function with (a) linear and (b) non-linear, temporal features
Reservoir: large number of features, sparse, mean around 0, spectral radius less than 1
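A minimal sketch of this setup, assuming standard ESN conventions (the names n_reservoir, sparsity and spectral_radius are illustrative, not taken from the slides): the reservoir is a large, sparse, zero-mean random matrix rescaled to a spectral radius below 1, and the output is a linear function of both the input u(t) and the reservoir state x(t).

import numpy as np

def make_reservoir(n_reservoir=200, sparsity=0.1, spectral_radius=0.9, seed=0):
    rng = np.random.default_rng(seed)
    # Large number of features, sparse connectivity, weights with mean around 0
    W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    W *= rng.random((n_reservoir, n_reservoir)) < sparsity
    # Rescale so the largest absolute eigenvalue (spectral radius) is below 1
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W

def reservoir_step(W, W_in, x, u):
    # Non-linear, temporal features: the state keeps a fading memory of past inputs
    return np.tanh(W @ x + W_in @ u)

def output(w, w_prime, u, x):
    # Linear output units: y(t) = w u(t) + w' x(t)
    return w @ u + w_prime @ x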
12. Crossover: parent genomes with genes 1 3 2 4 5 and 1 3 2. Let's assume the smaller genome is also the fittest.
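A rough sketch of how such a crossover can work, assuming a NEAT-style representation in which each gene carries an innovation number (the dictionary encoding below is illustrative): matching genes are inherited from either parent at random, while disjoint and excess genes come from the parent chosen as the base of the offspring, here the smaller genome.

import random

def crossover(base, other):
    # base and other map innovation number -> gene; disjoint and excess genes
    # are inherited from the base parent (the smaller genome in this example)
    child = {}
    for innovation, gene in base.items():
        matching = other.get(innovation)
        # Matching genes (same innovation number) are picked from either parent at random
        child[innovation] = random.choice([gene, matching]) if matching is not None else gene
    return child

# Gene numbers from the slide: one parent holds genes 1 3 2, the other 1 3 2 4 5
small = {1: "a1", 3: "a3", 2: "a2"}
large = {1: "b1", 3: "b3", 2: "b2", 4: "b4", 5: "b5"}
offspring = crossover(small, large)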
19. Basic Flow: Init Pop → Simulation → Learning → Fitness → Speciation → Selection → Mutation → Crossover → Next Gen; Champion → Generalization Performance
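In outline, this flow corresponds to a generational loop along the following lines; the stand-in operators below are toy placeholders rather than the actual implementation, and exist only so the skeleton runs end-to-end.

import random

def init_population(n):
    return [{"genome": random.random(), "fitness": 0.0} for _ in range(n)]

def evaluate(ind):
    # Simulation + Learning + Fitness (toy objective in place of the real testbed)
    return -abs(ind["genome"] - 0.5)

def select(population):
    # Speciation + Selection (a single species here for simplicity)
    return sorted(population, key=lambda i: i["fitness"], reverse=True)[:3]

def reproduce(parents, n):
    # Mutation + Crossover -> Next Gen
    return [{"genome": random.choice(parents)["genome"] + random.gauss(0, 0.1),
             "fitness": 0.0} for _ in range(n)]

def evolve(generations=20, pop_size=10):
    population = init_population(pop_size)                 # Init Pop
    for _ in range(generations):
        for ind in population:
            ind["fitness"] = evaluate(ind)
        parents = select(population)
        population = parents + reproduce(parents, pop_size - len(parents))
    champion = max(population, key=evaluate)               # Champion
    return champion                                        # measured further on unseen episodes -> Generalization

print(evolve())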
27. Thank you for your attention Questions? Kyriakos Chatzidimitriou [email_address] http://issel.ee.auth.gr
Editor's notes
Modeling the mechanisms of learning and decision making of autonomous agents as Reinforcement Learning (RL) problems is an appropriate match. An autonomous agent following the RL paradigm works towards maximizing the total amount of reward it receives over time by making changes to its policy (i.e. the mapping from states to actions) based on the feedback returned by interacting with its environment. For complex real-world tasks with large, continuous state and action spaces we need to build good function approximators (FAs): we need generalization, since table entries are no good for difficult problems. We also need to handle other properties, such as non-linear and non-Markovian state signals.
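As a toy illustration of the table-versus-FA point, assuming a linear FA over hand-crafted features (the feature function phi below is made up for the example): a table needs one entry per state-action pair and cannot cover a continuous state space, whereas a parametric Q-function shares its parameters across all states and therefore generalizes.

import numpy as np

def phi(state, action):
    # Hypothetical hand-crafted features of a continuous state and a discrete action
    return np.array([1.0, state, state * state, float(action)])

def q_value(theta, state, action):
    # Parametric FA: Q(s, a) = theta . phi(s, a); nearby states share features, so values generalize
    return theta @ phi(state, action)

def td_update(theta, s, a, reward, s_next, alpha=0.1, gamma=0.99, actions=(0, 1)):
    # Q-learning-style update of the FA parameters instead of a single table entry
    target = reward + gamma * max(q_value(theta, s_next, b) for b in actions)
    return theta + alpha * (target - q_value(theta, s, a)) * phi(s, a)

theta = np.zeros(4)
theta = td_update(theta, s=0.3, a=1, reward=1.0, s_next=0.35)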
Having accepted that we need FAs, we have to choose what kind and how to adjust the parameters of that parametric FA. We will justify our choice of FA later in the presentation. What are adaptive FAs (AFAs)? Models built ad hoc, adapted to the problem at hand. How? Nature shows us a way through adaptive techniques like learning and evolution. As an idea, AFAs are good for the autonomy of the agent and good for the user, as no expert is needed to fix things beforehand.
Read the first bullet; with the second bullet we want to believe it is developing into something complete. Cover as many aspects as possible (non-linearity, non-Markovian signals, learning, evolution, simple models, finding solutions quicker, good performance, etc.).
Input, output, linear features, reservoir, properties, trainable and untrainable weights; put up an equation about linear and non-linear features; recurrences for non-Markovian signals.
State-of-the-art neuroevolution (NE) method based on three principles. Through its crossover it practically avoids the competing conventions problem.
The mean of the reservoir weights is maintained at 0 in order to follow best practices for ESNs.
Connections may be omitted, except for the ones in the reservoir; otherwise the network is fully connected.
Even though choosing the fittest makes more sense and is the NEAT approach, we found that choosing the largest helped us escape local optima where the network was stuck and needed more nodes to reach better parts of the fitness landscape. We plan to fold the fittest-versus-largest dilemma into the automation of the algorithm.
The reservoir is supposed to be sparse, even though our research and that of others showed that non-sparse networks are quite efficient.
NEAT+Q converges at the 100 time-step line; we passed the same testbed with the same performance, much better than a simple NN (very difficult to solve), and with a very steep learning curve due to learning.