My research interests include theoretical machine learning, signal processing, sparse recovery, and numerical linear algebra, especially applications in which some subset of these areas intersect. I enjoy developing provably robust, efficient algorithms for inverse problems, sometimes in imaging applications.
Here are a few questions I've been thinking about recently, along with some older ones that are not forgotten but on the back burner:
- Is there a clear theoretical justification for the empirical success of equivariance as an inductive bias in neural architectures? What's the right way to formulate this?
- Given data, can one learn an underlying dictionary if the corresponding coefficients are not sparse, but rather come from a known generative model?
- When does equivariance with respect to the wreath product group arise in deep learning applications?
- Is it possible to characterize the class of generative models under which Fourier phase retrieval is well-conditioned?
- What's the fastest way to rotationally align two spherical functions?
- What generalizations of (1) the restricted isometry property and (2) leverage score sampling might be useful for off-grid sparse recovery?
Women in Learning Theory Mentor, Spring 2020
Applied Math Departmental Student Advisory Committee, Spring 2019
Dean's Committee on Science and Quantitative Reasoning, Fall 2018
Undergraduate Learning Assistant, CS 365 (Design and Analysis of Algorithms), Spring 2018
Undergraduate Learning Assistant, CS 201 (Introduction to Computer Science), Fall 2017
Undergraduate Learning Assistant, CS 223 (Data Structures and Algorithms), Spring 2017