CS seminar series presents
New trends in Recurrent Neural Networks
by Sebastian Basterrech
Thursday, November 24 at 14:00 in room 205
Neural Networks (NNs) are among the most useful and popular computational models in Machine Learning (ML) and Artificial Intelligence (AI). Applying NNs to ML tasks gives rise to a numerical optimization problem: minimizing a non-convex function over a multidimensional space. Several algorithms based on the first-order derivative of this function (gradient-type algorithms) have been applied successfully to train Feedforward NNs (FNNs). Although these methods work well for FNNs, they can fail for networks with recurrences (Recurrent Neural Networks, RNNs), as well as for networks with many layers (Deep Networks).
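The first-order optimization mentioned above can be sketched in a few lines. This is an illustrative example only, not material from the talk: the loss function, learning rate, and iteration count are all assumptions chosen to show why non-convexity matters for gradient-type algorithms.

```python
# Illustrative sketch: gradient descent on a simple non-convex function,
# the kind of first-order optimization used to train FNNs.
# The function and hyperparameters are assumptions, not from the talk.

def f(w):
    # A non-convex surrogate loss with two distinct local minima.
    return w**4 - 3 * w**2 + w

def grad_f(w):
    # First-order derivative of f.
    return 4 * w**3 - 6 * w + 1

def gradient_descent(w0, lr=0.01, steps=1000):
    # Repeatedly step against the gradient.
    w = w0
    for _ in range(steps):
        w -= lr * grad_f(w)
    return w

# Different starting points converge to different local minima:
# this sensitivity is one reason gradient methods need care on
# non-convex losses, and it worsens for RNNs and deep networks.
w_left = gradient_descent(-2.0)   # converges near the deeper minimum
w_right = gradient_descent(2.0)   # converges near the shallower one
```

The two runs end at stationary points with different loss values, which is exactly the pitfall the abstract alludes to: a gradient method offers no guarantee of reaching the global minimum of a non-convex objective.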
During the last 10 years much effort has been devoted to developing efficient learning procedures for RNNs. The Reservoir Computing (RC) paradigm has emerged as one alternative approach to training RNNs. It has drawn great interest from the community due to its success in solving a wide range of ML problems. Another advance has been the development of the Hessian-free (HF) optimization algorithm, which has gained popularity through its use in Deep Learning. In addition, several approaches based on metaheuristic techniques have also been proposed in the community.
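The core Reservoir Computing idea, as in an Echo State Network, is that the recurrent weights are generated randomly and left fixed, and only a linear readout is trained. The sketch below is a toy illustration under stated assumptions (reservoir size, weight scaling, the sine-prediction task, and LMS readout training are all choices made here, not details from the talk):

```python
import math
import random

# Toy sketch of the Reservoir Computing idea (an Echo State Network):
# the recurrent weights are random and FIXED; only the linear readout
# is trained. Sizes, scalings, and the task are illustrative assumptions.

random.seed(0)
N = 30        # reservoir size (assumption)
SCALE = 0.1   # small recurrent scaling, keeps dynamics contractive

W_in = [random.uniform(-0.5, 0.5) for _ in range(N)]
W = [[random.uniform(-SCALE, SCALE) for _ in range(N)] for _ in range(N)]

def step(x, u):
    # One reservoir update: x_next = tanh(W x + W_in * u)
    return [math.tanh(sum(W[i][j] * x[j] for j in range(N)) + W_in[i] * u)
            for i in range(N)]

# Toy task: predict the next value of a sine wave from the current one.
us = [math.sin(0.3 * t) for t in range(300)]
targets = us[1:]

# Drive the reservoir and collect states (first 20 steps as washout).
x = [0.0] * N
states = []
for u in us[:-1]:
    x = step(x, u)
    states.append(x)

# Train ONLY the readout weights, here with the simple LMS rule.
w_out = [0.0] * N
lr = 0.05
for _ in range(200):
    for s, y in zip(states[20:], targets[20:]):
        pred = sum(w * si for w, si in zip(w_out, s))
        err = y - pred
        for i in range(N):
            w_out[i] += lr * err * s[i]

# Mean squared error of the trained readout on the toy task.
trained = list(zip(states[20:], targets[20:]))
mse = sum((y - sum(w * si for w, si in zip(w_out, s))) ** 2
          for s, y in trained) / len(trained)
```

The appeal of RC, which this sketch tries to convey, is that training reduces to fitting a linear readout, sidestepping the difficult non-convex optimization through the recurrent connections. (In practice the readout is usually fit by ridge regression rather than LMS.)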
In this talk, we revisit some of these popular learning approaches and present our latest contributions in this area.