Forward and Backward Analysis for Neural Networks

Posted: 2024-05-08

Speaker(s): Xiyue Zhang (张喜悦), University of Oxford

Time: 2024-05-08, 10:00–12:00

Venue: 智华楼-知无涯-313

Abstract:

Over the past decade, artificial intelligence (AI), and deep learning (DL) in particular, has achieved remarkable advances. Despite the wide deployment and enthusiastic embrace of AI technologies, the instability and black-box nature of DL systems raise concerns about the readiness and maturity of AI. As with any automation technology, certification is an essential step before AI can be deployed in real-world safety- and security-critical applications. In this talk, I will present recent research on forward and backward analysis of neural networks (NNs), which provides provable guarantees on the critical decisions made by NN-based systems. For forward analysis, we propose an automated convex bounding algorithm for neural networks with general activation functions. For backward analysis, we present an efficient anytime algorithm for deriving preimage approximations, which enables sound and complete quantitative verification of piecewise-linear neural networks.
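To give a flavour of what forward analysis computes, here is a minimal sketch of interval bound propagation, one of the simplest convex bounding schemes: elementwise input bounds are pushed through affine and ReLU layers to over-approximate the network's output set. This toy example is for illustration only and is not the algorithm presented in the talk; the function name and network are hypothetical.

```python
import numpy as np

def interval_bound_forward(layers, lb, ub):
    """Propagate elementwise bounds [lb, ub] through affine + ReLU layers.

    `layers` is a list of (W, b) weight/bias pairs. Returns sound lower
    and upper bounds on every output neuron for inputs in the box.
    """
    for W, b in layers:
        W_pos = np.maximum(W, 0.0)  # positive part of the weights
        W_neg = np.minimum(W, 0.0)  # negative part of the weights
        # Interval arithmetic for the affine map: the lower bound pairs
        # lb with positive weights and ub with negative weights; the
        # upper bound does the opposite.
        new_lb = W_pos @ lb + W_neg @ ub + b
        new_ub = W_pos @ ub + W_neg @ lb + b
        # ReLU is monotone, so applying it to the bounds stays sound.
        lb, ub = np.maximum(new_lb, 0.0), np.maximum(new_ub, 0.0)
    return lb, ub
```

For example, a single layer with weights [[1, -1], [0.5, 2]] and bias [0, -1] over the input box [-1, 1]² yields output bounds [0, 2] and [0, 1.5]. Tighter convex relaxations (as in the talk) shrink such boxes considerably for deep networks.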

Bio:

Xiyue Zhang (张喜悦) is a Research Associate in the Department of Computer Science at the University of Oxford. She received her PhD in Applied Mathematics from the School of Mathematical Sciences, Peking University, in 2022, and her BSc in Information and Computing Science in 2017. Her research focuses on safety and trustworthiness assurance for AI systems, including automated verification of deep learning models, safety verification of DL-enabled systems, and related applications such as AI security. Her recent work centres on abstraction and verification techniques for deep learning systems.