Machine Learning and Data Science PhD Student Forum (Session 83) — Analyzing the Behavior of Neural Networks with Feature Learning Theory

Posted: 2025-01-09

Speaker(s): 王迩东 (Peking University)

Time: 2025-01-09 16:00-17:00

Venue: Tencent Meeting 568-7810-5726

Abstract:
Following the emergence of the Neural Tangent Kernel (NTK), feature learning theory has become a significant branch of deep learning theory. Unlike the NTK regime, this theory holds that neural networks can learn features, or signals, from data during gradient descent. It typically assumes a specific data-generating model, such as a Gaussian mixture or a sparse-coding model, and analyzes how a neural network, often a two-layer one, learns both the signal and the noise. By reducing the dynamics of complex networks to “signal learning” and “noise memorization,” feature learning theory effectively characterizes optimization performance during training and generalization after convergence. This approach has greatly improved the interpretability of deep learning, revealing the intrinsic interaction between data and neural network dynamics.
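
To make the terminology concrete, here is a minimal sketch of a signal-plus-noise data model commonly used in this literature; the notation below is illustrative and not necessarily the exact model the talk will use. Each sample pairs a label with an input built from a fixed signal direction \mu plus per-sample Gaussian noise:

\[ y \sim \mathrm{Unif}\{\pm 1\}, \qquad x = (y\,\mu,\ \xi), \qquad \xi \sim \mathcal{N}(0, \sigma_p^2 I_d). \]

A two-layer network with filters w_1, \dots, w_m is trained by gradient descent, and the analysis tracks two families of inner products over training time t:

\[ \underbrace{\langle w_j^{(t)}, \mu \rangle}_{\text{signal learning}} \qquad \text{and} \qquad \underbrace{\langle w_j^{(t)}, \xi_i \rangle}_{\text{noise memorization}}. \]

Which family dominates at convergence then determines whether the trained network generalizes from the signal or merely memorizes its training noise.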
 
About the forum: This online forum is organized by Professor Zhihua Zhang's machine learning lab and is held once every two weeks (except during public holidays). Each session invites a PhD student to give a systematic, in-depth introduction to a frontier topic; topics include, but are not limited to, machine learning, high-dimensional statistics, operations research and optimization, and theoretical computer science.