Contributed talk in Neural Networks 1, July 31, 2019, 12:30 p.m., room NUBS 2.10
Evolving Recurrent Neural Network Controllers by Incremental Fitness Shaping
Kaan Akinci, Andy Philippides
Time-variant artificial neural networks are commonly used in reinforcement learning problems such as producing controllers for specific tasks. Genetic algorithms are one method for training such networks, and recurrent neural networks (RNNs) in particular have been evolved with genetic algorithms to produce controllers. However, long short-term memory (LSTM) networks are rarely used in this way. Since LSTM networks have greater memory capacity, they are particularly interesting candidates for evolution with genetic algorithms. This paper evolves an LSTM network with several variants of evolutionary algorithm (steady-state genetic algorithms, evolution strategies (ES), and NEAT) to produce a controller for a reinforcement learning task: landing a spacecraft at a target location. After assessing this task, we compare the behaviour and evolutionary process of the LSTM with those of an RNN, both evolved with the same fitness functions. The LSTM proved evolvable using ES and outperformed both NEAT with an RNN and the steady-state algorithm with an LSTM.
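To make the approach concrete, the following is a minimal sketch of evolving an LSTM controller's weights with a simple rank-based evolution strategy. The network sizes, toy fitness function, and hyperparameters are illustrative assumptions, not the paper's actual setup (which uses a spacecraft-landing task and incremental fitness shaping); a real run would replace `fitness` with episode returns from the environment.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID = 2, 4  # toy sizes, chosen for illustration only

def lstm_step(params, x, h, c):
    """One LSTM step with all gate weights unpacked from a flat vector."""
    W = params[:4 * N_HID * N_IN].reshape(4 * N_HID, N_IN)
    U = params[4 * N_HID * N_IN:4 * N_HID * (N_IN + N_HID)].reshape(4 * N_HID, N_HID)
    b = params[4 * N_HID * (N_IN + N_HID):]
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)          # input, forget, output, candidate
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    c = sig(f) * c + sig(i) * np.tanh(g)  # update cell state
    h = sig(o) * np.tanh(c)               # update hidden state
    return h, c

N_PARAMS = 4 * N_HID * (N_IN + N_HID) + 4 * N_HID

def fitness(params):
    # Toy episode: reward the first hidden unit for tracking 0.5.
    # A real controller would instead act in a lander environment.
    h, c = np.zeros(N_HID), np.zeros(N_HID)
    total = 0.0
    for t in range(20):
        x = np.array([np.sin(0.3 * t), np.cos(0.3 * t)])
        h, c = lstm_step(params, x, h, c)
        total -= (h[0] - 0.5) ** 2
    return total

def evolve(generations=100, pop=40, sigma=0.1, lr=0.05):
    """Rank-based ES: perturb, evaluate, and step along the estimated gradient."""
    theta = rng.normal(0, 0.1, N_PARAMS)
    for _ in range(generations):
        eps = rng.normal(0, 1, (pop, N_PARAMS))
        f = np.array([fitness(theta + sigma * e) for e in eps])
        # Rank-normalise fitnesses to [-0.5, 0.5] for robustness to scale.
        ranks = f.argsort().argsort() / (pop - 1) - 0.5
        theta = theta + lr / (pop * sigma) * eps.T @ ranks
    return theta

best = evolve()
```

Because ES only needs fitness evaluations, the LSTM's gates are treated as an opaque parameter vector; no backpropagation through time is required, which is what makes this family of methods attractive for evolving recurrent controllers.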