Machine Learning and Data Science PhD Student Forum (Session 94): Beyond Pre-training: Shaping LLMs with Advanced Post-Training

Posted: 2025-11-13

Speaker: 王迩东 (Peking University)

Time: 2025-11-13, 16:00-17:00

Venue: Tencent Meeting 331-2528-5257

Abstract: While pre-training lays the groundwork for Large Language Models (LLMs), the key to their refinement and alignment lies in post-training. This lecture delves into the critical techniques, such as fine-tuning, reinforcement learning, and test-time scaling, that transform a base LLM into a capable, reliable, and safe AI system. We will systematically explore how these methods enhance reasoning, factual accuracy, and ethical alignment. The discussion will also address pivotal challenges such as catastrophic forgetting and reward hacking, providing a comprehensive overview of the current landscape and future directions in evolving LLMs beyond their initial training.

About the forum: This online forum is organized by Professor Zhihua Zhang's machine learning lab and is held biweekly (except during public holidays). Each session invites a PhD student to give a systematic, in-depth introduction to a frontier topic; themes include, but are not limited to, machine learning, high-dimensional statistics, operations research and optimization, and theoretical computer science.