Title: Foundations of Deep Learning
Speaker: David Balduzzi (Victoria University of Wellington)
Time: 15:00, June 8th, 2016
Venue: Seminar Room (334), Level 3, Building 5, Institute of Software, Chinese Academy of Sciences.
Abstract: Recent years have seen spectacular progress in machine learning, with algorithms matching the performance of expert humans in object and voice recognition, video games, and the board game Go. These successes are all the result of training huge neural networks on massive datasets. Unfortunately, our theoretical understanding of neural networks is currently lagging far behind their empirical performance.
In this talk, I will give an overview of my work on the foundations of deep learning, focusing on three core topics. The first is a game-theoretic analysis of feedforward neural nets, with applications to convergence rates. The second is gradient estimation, with applications to reinforcement learning. Finally, I will discuss work on streamlining the structure of recurrent neural nets using ideas from functional programming.