Charles Earl
A couple of weeks ago I attended the Theoretical Foundations of Deep Learning workshop at Georgia Tech. Some of the talk videos are available through the linked workshop page.
Now that deep learning is used in everything from Siri to Snapchat, scientists want to know exactly what deep nets can learn and how fast they can learn it. Two talks stood out: one by Aleksander Madry on the (in)security of deep neural networks, and one by my friend Santosh Vempala on the broader effort to build a theoretical foundation for deep learning.