Machine Learning and Data Science PhD Student Forum (Session 39) — Low Rank Approximation with 1/\epsilon^{1/3} Matrix-Vector Products

Posted: 2022-11-07

Speaker(s): Xiaobo Yang (PKU)

Time: 2022-11-07 16:00-17:00

Venue: Tencent Meeting 723 1564 5542

Abstract:
Low-rank approximation of matrices plays an important role in areas such as numerical linear algebra, optimization, and applied statistics. The Eckart–Young–Mirsky theorem shows that, under any Schatten norm, the best rank-k approximation of a matrix A is its SVD truncated to the top k singular values. However, for large matrices, directly computing this SVD truncation is expensive. Fortunately, many randomized algorithms have been developed that approximate the best low-rank approximation with high precision and efficiency.
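As a minimal illustration of the Eckart–Young–Mirsky theorem, the sketch below (my own example, not part of the talk) computes the rank-k SVD truncation with NumPy and checks that its spectral-norm error equals the (k+1)-st singular value, which is what the theorem guarantees to be optimal:

```python
import numpy as np

def best_rank_k(A, k):
    """Best rank-k approximation of A (Eckart-Young-Mirsky):
    keep only the top k singular triplets of the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
k = 5
A_k = best_rank_k(A, k)

# The spectral-norm error of the optimal rank-k approximation
# is exactly sigma_{k+1} (0-indexed: s[k]).
s = np.linalg.svd(A, compute_uv=False)
err = np.linalg.norm(A - A_k, ord=2)
print(np.isclose(err, s[k]))  # → True
```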

In this talk, we will introduce the latest work [Bakshi et al., 2022] on the complexity of randomized low-rank approximation, which shows that, for any accuracy \epsilon, only O(1/\epsilon^{1/3}) matrix-vector products are needed. This work greatly improves the O(1/\epsilon^{1/2}) bound of [Musco and Musco, 2015] and gives the first nontrivial lower bound depending on \epsilon. Some remaining open problems will also be discussed.
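To make the matrix-vector-product cost model concrete, here is a rough sketch (my own illustration, not the Bakshi et al. algorithm) of a block Krylov iteration in the style of [Musco and Musco, 2015]: the algorithm touches A only through products with A and A^T, and the helper name and product-count bookkeeping are assumptions for exposition:

```python
import numpy as np

def block_krylov_lowrank(A, k, q, rng):
    """Rank-k approximation via block Krylov iteration (sketch).
    Builds the subspace [A G, (A A^T) A G, ..., (A A^T)^q A G],
    orthonormalizes it, and extracts the best rank-k approximation
    inside that subspace. A is accessed only through matrix-vector
    (here, matrix-block) products, which is the cost measure the
    O(1/eps^{1/2}) and O(1/eps^{1/3}) bounds count."""
    n, d = A.shape
    G = rng.standard_normal((d, k))       # random starting block
    blocks = [A @ G]
    for _ in range(q):                    # each step: one product with A^T, one with A
        blocks.append(A @ (A.T @ blocks[-1]))
    K = np.hstack(blocks)                 # Krylov basis, n x (q+1)k
    Q, _ = np.linalg.qr(K)                # orthonormalize
    B = Q.T @ A                           # project A onto span(Q)
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 30))
k, q = 5, 10
A_approx = block_krylov_lowrank(A, k, q, rng)

# Compare against the exact SVD truncation: the Krylov answer
# should be close to optimal in Frobenius norm.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_opt = (U[:, :k] * s[:k]) @ Vt[:k, :]
err = np.linalg.norm(A - A_approx, "fro")
opt = np.linalg.norm(A - A_opt, "fro")
print(err <= 1.5 * opt)
```

More Krylov steps q buy accuracy at the price of more products with A; the talk's subject is precisely how few such products suffice for (1+\epsilon)-accuracy.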