Date:
Location:
Speaker: Sitan Chen (UC Berkeley)
Title: Learning Polynomial Transformations
Abstract: Generative models like variational auto-encoders, generative adversarial networks, and flow-based models have exploded in popularity as extraordinarily effective ways of modeling real-world data. At their heart, these models attempt to learn a parametric transformation of a simple, low-dimensional distribution into a complex, high-dimensional one. Yet despite their immense practical impact, very little is known about the learnability of such distributions from a theoretical perspective.
This talk concerns arguably the most natural incarnation of this problem: given samples from the pushforward of the Gaussian under an unknown polynomial p: ℝʳ → ℝᵈ, can we approximately recover p (up to trivial symmetries)? I'll present the first polynomial-time algorithms for this task. These results leverage the sum-of-squares hierarchy, which has emerged from the theoretical computer science community in recent years as a powerful algorithmic tool for solving a number of high-dimensional statistical problems. Along the way, I will also highlight an intriguing connection to tensor ring decomposition, a popular variant of the matrix product state ansatz.
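To make the setup concrete, here is a minimal sketch of how samples from such a pushforward distribution are generated: draw a latent vector z ~ N(0, I_r) and output p(z) ∈ ℝᵈ. The polynomial map below (a random quadratic), the dimensions, and all coefficient choices are illustrative assumptions, not the construction from the talk; the learner would see only the samples X, not the coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: latent dim r, ambient dim d, sample count n.
r, d, n = 2, 5, 1000

# Hypothetical degree-2 polynomial map p: R^r -> R^d with random coefficients.
# Coordinate-wise: p_i(z) = b_i + <L_i, z> + z^T Q_i z.
b = rng.normal(size=d)
L = rng.normal(size=(d, r))
Q = rng.normal(size=(d, r, r))

def p(z):
    """Apply the polynomial map to a latent vector z in R^r."""
    return b + L @ z + np.einsum("irs,r,s->i", Q, z, z)

# Samples from the pushforward: z ~ N(0, I_r), observe x = p(z) in R^d.
Z = rng.normal(size=(n, r))
X = np.vstack([p(z) for z in Z])
print(X.shape)  # (1000, 5)
```

The "trivial symmetries" in the abstract correspond to latent reparametrizations that leave the output distribution unchanged: for instance, composing p with any rotation of the latent Gaussian yields the same pushforward, so p is only identifiable up to such transformations.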
Based on joint work with Jerry Li, Yuanzhi Li, and Anru Zhang.
Attachment: 2022-05-10_sitan_chen_seminar.pdf (719 KB)