Hannah Lawrence

I am a first-year PhD student in computer science at MIT, studying machine learning theory. Previously, I was a research analyst at the Flatiron Institute's Center for Computational Mathematics in New York, where I developed algorithms at the interface of signal processing and deep learning, with the ultimate goal of cryo-EM image reconstruction. Broadly, I enjoy working on mathematical algorithms for data science.

I spent summer 2019 at Microsoft Research, where I was lucky to be mentored by Cameron Musco (now at UMass Amherst). I've also spent productive summers at Reservoir Labs and the Center for Computational Biology. As an undergraduate at Yale in applied math and computer science, I had the great fortune of being advised by Amin Karbasi and Dan Spielman.

Email  /  Github  /  LinkedIn  /  CV

Research

My research interests span theoretical machine learning, signal processing, sparse recovery, and numerical linear algebra, especially applications where some subset of these areas intersects. I like developing provably robust, efficient algorithms for inverse problems, often motivated by imaging.

Here are a few questions I've been thinking about recently:

  • Is there a clear theoretical justification for the empirical success of equivariance as an inductive bias in neural architectures? What's the right way to formulate this?
  • Given data, can one learn an underlying dictionary if the corresponding coefficients are not sparse, but rather come from a known generative model?
  • When does equivariance with respect to the wreath product group arise in deep learning applications?
  • Is it possible to characterize the class of generative models under which Fourier phase retrieval is well-conditioned?

And here are some older questions, not forgotten but on the back burner:
  • What's the fastest way to rotationally align two spherical functions?
  • What generalizations of (1) the restricted isometry property and (2) leverage score sampling might be useful for off-grid sparse recovery?

Phase Retrieval with Holography and Untrained Priors: Tackling the Challenges of Low-Photon Nanoscale Imaging
Hannah Lawrence*, David A. Barmherzig*, Henry Li, Michael Eickenberg, Marylou Gabrié
Under review, 2020

By coupling a maximum-likelihood objective with a deep decoder image prior, we achieve superior reconstructions for holographic phase retrieval, including under several challenging, realistic conditions. To our knowledge, this is the first dataset-free machine learning approach to holographic phase retrieval.
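
As a loose illustration of the untrained-prior idea (not the paper's pipeline), the PyTorch sketch below fits a small decoder network from scratch to a single simulated Fourier-magnitude measurement by gradient descent, so that the architecture itself acts as the image prior. The network shape, forward model, and Gaussian-style loss are assumed placeholders; in particular, the paper uses a holographic setup and a different likelihood.

```python
import torch
import torch.nn as nn

class TinyDecoder(nn.Module):
    """Illustrative deep-decoder-style network: upsampling 1x1-conv stages
    mapping a fixed random seed tensor to an image (sizes are placeholders)."""
    def __init__(self, channels=32):
        super().__init__()
        layers = []
        for _ in range(3):  # three upsampling stages: 8 -> 16 -> 32 -> 64
            layers += [nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                       nn.Conv2d(channels, channels, kernel_size=1),
                       nn.ReLU(),
                       nn.BatchNorm2d(channels)]
        layers += [nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)

torch.manual_seed(0)
true_image = torch.rand(1, 1, 64, 64)                 # stand-in for the unknown specimen
observed_mag = torch.abs(torch.fft.fft2(true_image))  # simplified, noiseless forward model

decoder = TinyDecoder()
seed = torch.randn(1, 32, 8, 8)                       # fixed input; only the weights are fit
opt = torch.optim.Adam(decoder.parameters(), lr=1e-2)

for step in range(500):
    opt.zero_grad()
    recon = decoder(seed)
    # Gaussian-likelihood surrogate on Fourier magnitudes; the paper's objective differs.
    loss = ((torch.abs(torch.fft.fft2(recon)) - observed_mag) ** 2).mean()
    loss.backward()
    opt.step()
```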

Minimax Regret of Switching-Constrained Online Convex Optimization: No Phase Transition
Lin Chen, Qian Yu, Hannah Lawrence, Amin Karbasi
Appearing at NeurIPS, 2020

We establish the minimax regret of switching-constrained online convex optimization, a realistic framework in which algorithms must act in real time to minimize cumulative loss, but may switch their decisions only a limited number of times.
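
For a concrete (and purely illustrative) picture of the constraint, the NumPy sketch below runs a blocked online gradient descent under an assumed switching budget: the learner plays each point for a whole block of rounds and updates, i.e. switches, only at block boundaries. The function names, parameters, and loss model are hypothetical, not taken from the paper.

```python
import numpy as np

def blocked_ogd(grad_fn, T=1000, K=10, dim=5, lr=0.1, radius=1.0):
    """Online gradient descent that switches its decision at most K times in T rounds.

    grad_fn(t, x) returns a (sub)gradient of the round-t loss at x. All names and
    parameters here are illustrative.
    """
    x = np.zeros(dim)
    block = int(np.ceil(T / K))
    decisions, accumulated = [], np.zeros(dim)
    for t in range(T):
        decisions.append(x.copy())
        accumulated += grad_fn(t, x)
        if (t + 1) % block == 0:            # update (switch) only at block boundaries
            x = x - lr * accumulated / block
            norm = np.linalg.norm(x)
            if norm > radius:               # project back onto the feasible ball
                x *= radius / norm
            accumulated = np.zeros(dim)
    return decisions

# Toy run: linear losses f_t(x) = <g_t, x> with random g_t.
rng = np.random.default_rng(0)
grads = rng.standard_normal((1000, 5))
plays = blocked_ogd(lambda t, x: grads[t], T=1000, K=10, dim=5)
```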

Low-Rank Toeplitz Matrix Estimation via Random Ultra-Sparse Rulers
Hannah Lawrence, Jerry Li, Cameron Musco, Christopher Musco
Appeared at ICASSP, 2020

By building new, randomized "ruler" sampling constructions, we show how to use sublinear-time sparse Fourier transform algorithms for sample-efficient, low-rank Toeplitz covariance estimation.
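
To convey the ruler idea concretely, here is a hypothetical NumPy sketch using a simple deterministic O(√n) ruler (not the randomized ultra-sparse constructions from the paper): because the ruler's pairwise differences cover every lag, each entry of a Toeplitz covariance can still be estimated from samples taken only at the ruler positions. All names and the toy covariance are illustrative.

```python
import numpy as np

def simple_sparse_ruler(n):
    """Indices R in {0,...,n-1} whose pairwise differences cover every lag 0..n-1.

    A basic O(sqrt(n))-size deterministic ruler, for illustration only."""
    s = int(np.ceil(np.sqrt(n)))
    ruler = set(range(s)) | {k * s for k in range(1, n) if k * s <= n - 1} | {n - 1}
    return sorted(ruler)

def estimate_toeplitz_column(snapshots, ruler, n):
    """Estimate the first column of a Toeplitz covariance from ruler-sampled snapshots.

    snapshots has shape (num_snapshots, len(ruler)): each row is one realization of
    the length-n process observed only at the ruler positions."""
    est = np.zeros(n)
    for lag in range(n):
        pairs = [(a, b) for a, x in enumerate(ruler)
                        for b, y in enumerate(ruler) if x - y == lag]
        est[lag] = np.mean([np.mean(snapshots[:, a] * snapshots[:, b]) for a, b in pairs])
    return est

# Toy check with a known Toeplitz covariance (AR(1)-style decay).
n, num_snapshots = 32, 5000
true_col = 0.9 ** np.arange(n)
cov = np.array([[true_col[abs(i - j)] for j in range(n)] for i in range(n)])
rng = np.random.default_rng(0)
full = rng.multivariate_normal(np.zeros(n), cov, size=num_snapshots)

ruler = simple_sparse_ruler(n)
assert set(range(n)) <= {x - y for x in ruler for y in ruler}   # every lag is covered
est_col = estimate_toeplitz_column(full[:, ruler], ruler, n)
print(np.max(np.abs(est_col - true_col)))                       # small for many snapshots
```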

Service
Women in Learning Theory Mentor, Spring 2020

Applied Math Departmental Student Advisory Committee, Spring 2019

Dean's Committee on Science and Quantitative Reasoning, Fall 2018

Undergraduate Learning Assistant, CS 365 (Design and Analysis of Algorithms), Spring 2018

Undergraduate Learning Assistant, CS 201 (Introduction to Computer Science), Fall 2017

Undergraduate Learning Assistant, CS 223 (Data Structures and Algorithms), Spring 2017

Website template credits.