
How does gradient descent work?
Speaker(s): Jeremy Cohen (Flatiron Institute)
Time: 2025-04-10, 10:00–11:00 AM
Venue: Online (Zoom)
Abstract:
Optimization is the engine of deep learning, yet the theory of optimization has had little impact on the practice of deep learning. Why? In this talk, we will first show that traditional theories of optimization fail to explain the convergence of the simplest optimization algorithm — deterministic gradient descent — in deep learning. Whereas traditional theories assert that gradient descent converges because the curvature of the loss landscape is “a priori” small, in reality gradient descent converges because it *dynamically avoids* high-curvature regions of the loss landscape. Understanding this behavior requires Taylor expanding to third order, which is one order higher than normally used in optimization theory. While the “fine-grained” dynamics of gradient descent involve chaotic oscillations that are difficult to analyze, we will demonstrate that the “time-averaged” dynamics are, fortunately, much more tractable. We will show that our time-averaged analysis yields highly accurate quantitative predictions in a variety of deep learning settings. Since gradient descent is the simplest optimization algorithm, we hope this analysis can help point the way towards a mathematical theory of optimization in deep learning.
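To make the quantities in the abstract concrete, here is a minimal sketch (not from the talk) of deterministic gradient descent on a toy two-parameter loss f(a, b) = ½(ab − 1)², tracking the "curvature" as the largest eigenvalue of the loss Hessian (the sharpness) at every step. The toy function, initialization, and step size are illustrative assumptions; the classical stability threshold 2/η is printed only for reference.

```python
import numpy as np

# Illustrative toy, not from the talk: a "two-layer linear network" loss
#   f(a, b) = 0.5 * (a*b - 1)^2
# trained with deterministic (full-batch) gradient descent.

def loss(theta):
    a, b = theta
    return 0.5 * (a * b - 1.0) ** 2

def grad(theta):
    a, b = theta
    r = a * b - 1.0
    return np.array([r * b, r * a])  # [df/da, df/db]

def sharpness(theta):
    """Largest eigenvalue of the Hessian of f at theta (the local curvature)."""
    a, b = theta
    H = np.array([[b * b,           2 * a * b - 1.0],
                  [2 * a * b - 1.0, a * a          ]])
    return float(np.linalg.eigvalsh(H)[-1])

eta = 0.05                     # step size; classical stability requires sharpness < 2/eta
theta = np.array([2.0, 0.05])  # imbalanced initialization (illustrative)

print(f"stability threshold 2/eta = {2 / eta:.1f}")
for step in range(301):
    if step % 50 == 0:
        print(f"step {step:3d}  loss = {loss(theta):.6f}  "
              f"sharpness = {sharpness(theta):.3f}")
    theta = theta - eta * grad(theta)
```

With this small step size the sharpness simply drifts upward as training converges; with a larger step size (so that 2/η falls below the curvature the trajectory would otherwise reach), one would expect the oscillatory, curvature-avoiding behavior the abstract describes, which is the regime the talk's time-averaged analysis addresses.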
Bio:
Jeremy Cohen is a research fellow at the Flatiron Institute, where his recent work focuses on understanding optimization in deep learning. He obtained his PhD in 2024 from Carnegie Mellon University, advised by Zico Kolter and Ameet Talwalkar.
Join Zoom Meeting
https://us02web.zoom.us/j/82057441819?pwd=ZFCHoGRk17bX2azXvZaeB16Ai3RWCj.1
Meeting ID: 820 5744 1819
Passcode: 153978