
Neural Networks

Volume 123, March 2020, Pages 153-162

Evolving artificial neural networks with feedback

https://doi.org/10.1016/j.neunet.2019.12.004
Open access under a Creative Commons license

Abstract

Neural networks in the brain are dominated by feedback connections, which can make up more than 60% of all connections and most often have small synaptic weights. In contrast, little is known about how to introduce feedback into artificial neural networks. Here we use transfer entropy in the feed-forward paths of deep networks to identify feedback candidates between the convolutional layers and determine their final synaptic weights using genetic programming. This adds about 70% more connections to these layers, all with very small weights. Nonetheless, performance improves substantially on different standard benchmark tasks and in different networks. To verify that this effect is generic, we use 36,000 configurations of small (2–10 hidden layer) conventional neural networks in a non-linear classification task and select the best-performing feed-forward nets. We then show that feedback reduces total entropy in these networks, always leading to a performance increase. This method may thus supplement standard techniques (e.g., error backpropagation), adding a new quality to network learning.
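The selection step described above, estimating transfer entropy along feed-forward paths to rank candidate feedback connections, can be illustrated with a short sketch. The following Python code is a minimal, hypothetical illustration, not the authors' implementation: the variable names, the binary histogram discretization, and the history length of 1 are assumptions. It estimates Schreiber's transfer entropy T(X→Y) = Σ p(y_{t+1}, y_t, x_t) log₂[ p(y_{t+1}|y_t, x_t) / p(y_{t+1}|y_t) ] from two recorded unit-activation traces.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, n_bins=2):
    """Estimate transfer entropy T(X -> Y) in bits between two
    activation traces, using simple histogram binning and a
    history length of 1. A minimal sketch, not the paper's
    exact estimator."""
    # Discretize the continuous activations into n_bins states.
    x = np.digitize(x, np.histogram_bin_edges(x, bins=n_bins)[1:-1])
    y = np.digitize(y, np.histogram_bin_edges(y, bins=n_bins)[1:-1])
    x_t, y_t, y_next = x[:-1], y[:-1], y[1:]
    n = len(y_next)

    # Count the joint states needed for the conditional probabilities.
    c_xyz = Counter(zip(y_next, y_t, x_t))
    c_yz = Counter(zip(y_t, x_t))
    c_yy = Counter(zip(y_next, y_t))
    c_y = Counter(y_t)

    # T(X -> Y) = sum over states of
    #   p(y+, y, x) * log2( p(y+ | y, x) / p(y+ | y) )
    te = 0.0
    for (yn, yt, xt), cnt in c_xyz.items():
        p_joint = cnt / n
        p_cond_full = cnt / c_yz[(yt, xt)]       # p(y_{t+1} | y_t, x_t)
        p_cond_self = c_yy[(yn, yt)] / c_y[yt]   # p(y_{t+1} | y_t)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

# Toy usage: a "post" unit driven with one step of delay by a "pre"
# unit should show clearly positive transfer entropy pre -> post.
rng = np.random.default_rng(0)
pre = rng.normal(size=5000)
post = np.roll(pre, 1) + 0.5 * rng.normal(size=5000)
print(f"TE(pre -> post) = {transfer_entropy(pre, post):.3f} bits")
```

In the spirit of the abstract, unit pairs with high feed-forward transfer entropy would be kept as feedback candidates, and the final weights of the resulting feedback connections would then be set by genetic programming; those later steps are not shown here.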

Keywords

Deep learning
Feedback
Transfer entropy
Convolutional neural network


1. These authors contributed equally.