Hannah Lawrence

I am a PhD student in machine learning at MIT, where I am fortunate to be advised by Ankur Moitra. I am also a member of the wonderful Atomic Architects, led by Tess Smidt. Previously, I was a summer research intern at the Open Catalyst Team at Meta FAIR, studying equivariance for chemistry applications. Before graduate school, I was a research analyst at the Center for Computational Mathematics of the Flatiron Institute in New York, where I worked on developing algorithms at the interface of equivariant deep learning and signal processing for cryoEM. Broadly, I enjoy developing theoretically principled tools for deep learning (often in scientific or image domains), with a focus on learning with symmetries.

I spent summer 2019 at Microsoft Research, where I was lucky to be mentored by Cameron Musco. I've also spent productive summers at Reservoir Labs and the Center for Computational Biology. I was an undergrad at Yale in applied math and computer science, where I had the good fortune of being advised by Amin Karbasi and Dan Spielman.

Finally, I co-founded the Boston Symmetry Group, which hosts a recurring workshop for researchers interested in symmetries in machine learning. Follow us on Twitter, shoot us an email, or join our mailing list if you're interested in attending!

Email  /  Github  /  LinkedIn  /  Twitter  /  Google Scholar   

Research

My primary research interests are symmetry-aware (equivariant) machine learning and its scientific applications. More broadly, I enjoy developing theoretically principled tools for deep learning, with applications ranging from vision to interpretability to PDEs.

Here are a few high-level questions I've been thinking about recently (or at least, as of the last time I updated this website):

  • What kinds of approximate symmetries arise in practice, e.g. in scientific applications? How should this structure inform our choice of architecture, and when is approximate symmetry still a powerful enough inductive bias to benefit learning?
  • What is the role of equivariance, e.g. to permutations, in large language models (LLMs)? To what extent is equivariance learned? To what extent should it be enforced?
  • How can we harness equivariance to learn useful representations, especially in applications with complicated symmetries (such as PDEs)?
  • How can we make canonicalization work, in theory and in practice, as an approach for enforcing symmetries?
  • Does equivariance have a role to play in NLP? In fairness?

Equivariant Frames and the Impossibility of Continuous Canonicalization
Nadav Dym*, Hannah Lawrence*, Jonathan Siegel*
Under review, 2023.

We demonstrate that, perhaps surprisingly, there is no continuous canonicalization (or even efficiently implementable frame) for many symmetry groups. We introduce a notion of weighted frames to circumvent this issue.
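
For intuition, here is a minimal, hypothetical sketch (my own toy example, not from the paper) of canonicalization for a symmetry group where it does work continuously: permutations of coordinates can be canonicalized by sorting, and composing any backbone network with this map yields an invariant model. The paper shows that for many other groups, no continuous canonicalization (or even efficiently implementable frame) exists, which is what weighted frames are designed to fix.

```python
# A minimal, hypothetical sketch of canonicalization for the permutation group:
# sorting picks one representative of each orbit, and does so continuously.
import torch

def canonicalize(x: torch.Tensor) -> torch.Tensor:
    # Canonical representative of x's orbit under permutations of the last axis.
    return torch.sort(x, dim=-1).values

def invariant_model(x: torch.Tensor, backbone) -> torch.Tensor:
    # backbone is an arbitrary (unconstrained) network; permutation invariance
    # comes entirely from evaluating it on the canonical representative.
    return backbone(canonicalize(x))
```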

Learning Polynomial Problems with SL(2,R) Equivariance
Hannah Lawrence*, Mitchell Harris*
ICLR 2024, to appear.

We propose machine learning approaches, equivariant with respect to the non-compact group SL(2,R) of area-preserving transformations, for learning to solve polynomial optimization problems.

On the hardness of learning under symmetries
Bobak T. Kiani*, Thien Le*, Hannah Lawrence*, Stefanie Jegelka, Melanie Weber
ICLR 2024, to appear.

We give statistical query lower bounds for learning symmetry-preserving neural networks and other invariant functions.

Self-Supervised Learning with Lie Symmetries for Partial Differential Equations
Grégoire Mialon*, Quentin Garrido*, Hannah Lawrence, Danyal Rehman, Bobak Kiani
ICLR, 2023.

We apply self-supervised learning to partial differential equations, using the equations' Lie point symmetries as augmentations.
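
As a toy illustration (assumptions mine, not the paper's code), here is what Lie point symmetry augmentations can look like for Burgers' equation, u_t + u u_x = nu * u_xx, whose point symmetries include spatial translations and Galilean boosts; two such transformed copies of a trajectory can serve as the "views" in a joint-embedding self-supervised objective.

```python
# A hypothetical sketch of Lie point symmetry augmentations for Burgers' equation
# on a periodic grid; u has shape (n_t, n_x), rows indexed by time.
import torch

def translate(u: torch.Tensor, shift: int) -> torch.Tensor:
    # Spatial translation symmetry: u(x, t) -> u(x - a, t).
    return torch.roll(u, shifts=shift, dims=-1)

def galilean_boost(u: torch.Tensor, c: float, dt: float, dx: float) -> torch.Tensor:
    # Galilean symmetry of Burgers: u(x, t) -> u(x - c*t, t) + c,
    # implemented with grid-quantized (integer) shifts per time step.
    out = torch.empty_like(u)
    for i in range(u.shape[0]):
        shift = int(round(c * i * dt / dx))
        out[i] = torch.roll(u[i], shifts=shift, dims=-1) + c
    return out

# e.g. two augmented views of the same trajectory u:
# view1, view2 = translate(u, shift=5), galilean_boost(u, c=0.3, dt=0.01, dx=0.02)
```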

Artificial Intelligence for Science in Quantum, Atomistic, and Continuum Systems
Xuan Zhang*, Limei Wang*, Jacob Helwig*, Youzhi Luo*, Cong Fu*, Yaochen Xie*, ..., Hannah Lawrence, ..., Shuiwang Ji
Under review, 2023.

A survey of machine learning methods for quantum, atomistic, and continuum systems.

Distilling Model Failures as Directions in Latent Space
Saachi Jain*, Hannah Lawrence*, Ankur Moitra, Aleksander Madry
ICLR (spotlight presentation), 2023. See also the blog post.

We present a framework for automatically identifying and captioning coherent patterns of errors made by any trained model. The key? Keeping it simple: linear classifiers in a shared vision-language embedding space.
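
Roughly, and as a hedged sketch rather than the released implementation: embed the evaluation images in a shared vision-language space, fit a linear classifier that separates the examples the model gets wrong from those it gets right, and use the resulting direction both to surface a coherent error cluster and to caption it via nearby text embeddings.

```python
# A hypothetical sketch: distill a failure mode as a direction in a shared
# vision-language embedding space (e.g., CLIP image features).
import numpy as np
from sklearn.svm import LinearSVC

def failure_direction(embeddings: np.ndarray, is_error: np.ndarray) -> np.ndarray:
    # embeddings: (n, d) image embeddings; is_error: (n,) True where the trained
    # model misclassifies. The SVM's weight vector points toward the failure mode.
    clf = LinearSVC(C=1.0).fit(embeddings, is_error.astype(int))
    w = clf.coef_[0]
    return w / np.linalg.norm(w)

# Captioning idea: score candidate text prompts by the cosine similarity of their
# text embeddings with this direction, and report the top-scoring prompts.
```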

GULP: a prediction-based metric between representations
Enric Boix-Adsera, Hannah Lawrence, George Stepaniants, Philippe Rigollet
NeurIPS (Oral Presentation), 2022

We define a family of distance pseudometrics for comparing learned data representations, directly inspired by transfer learning. In particular, the distance between two representations measures how differently they perform under ridge regression, in the worst case over all downstream, bounded linear prediction tasks.
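
As a toy illustration only (a Monte Carlo caricature over unit-norm linear tasks on the raw inputs, not the estimator developed in the paper), the comparison looks like this:

```python
# Hypothetical sketch: compare how differently ridge regressors trained on two
# representations A and B of the same inputs X behave, over random linear tasks.
import numpy as np
from sklearn.linear_model import Ridge

def ridge_prediction_gap(A, B, X, lam=1.0, n_tasks=200, seed=0):
    # A: (n, d_A), B: (n, d_B), X: (n, d) raw inputs; returns the worst observed
    # mean squared gap between the two ridge predictors across sampled tasks.
    rng = np.random.default_rng(seed)
    worst = 0.0
    for _ in range(n_tasks):
        beta = rng.normal(size=X.shape[1])
        beta /= np.linalg.norm(beta)            # unit-norm linear task
        y = X @ beta
        pred_A = Ridge(alpha=lam).fit(A, y).predict(A)
        pred_B = Ridge(alpha=lam).fit(B, y).predict(B)
        worst = max(worst, float(np.mean((pred_A - pred_B) ** 2)))
    return worst
```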

Barron's Theorem for Equivariant Networks
Hannah Lawrence
NeurIPS Workshop: Symmetry and Geometry in Neural Representations (Poster, to appear), 2022

We extend Barron’s Theorem for efficient approximation to invariant neural networks, in the cases of invariance to a permutation subgroup or the rotation group.

Toeplitz Low-Rank Approximation with Sublinear Query Complexity
Michael Kapralov, Hannah Lawrence, Mikhail Makarov, Cameron Musco, Kshiteej Sheth
Symposium on Discrete Algorithms (SODA), to appear, 2023

We prove that any nearly low-rank Toeplitz positive semidefinite matrix has a low-rank approximation that is itself Toeplitz, and give a sublinear query complexity algorithm for finding it.

Implicit Bias of Linear Equivariant Networks
Hannah Lawrence, Kristian Georgiev, Andrew Dienes, Bobak T. Kiani
Appeared at ICML, 2022

We characterize the implicit bias of linear group-convolutional networks trained by gradient descent. In particular, we show that the learned linear function is biased towards low-rank matrices in Fourier space.

Phase Retrieval with Holography and Untrained Priors: Tackling the Challenges of Low-Photon Nanoscale Imaging
Hannah Lawrence*, David A. Barmherzig*, Henry Li, Michael Eickenberg, Marylou Gabrié
Appeared at MSML, 2021

By using a maximum-likelihood objective coupled with a deep decoder prior for images, we achieve superior image reconstruction for holographic phase retrieval, including under several challenging realistic conditions. To our knowledge, this is the first dataset-free machine learning approach for holographic phase retrieval.

Minimax Regret of Switching-Constrained Online Convex Optimization: No Phase Transition
Lin Chen, Qian Yu, Hannah Lawrence, Amin Karbasi
Appeared at NeurIPS, 2020

We establish the minimax regret of switching-constrained online convex optimization, a realistic optimization framework in which algorithms must act in real time to minimize cumulative loss, but are limited in how often they may switch actions.

Low-Rank Toeplitz Matrix Estimation via Random Ultra-Sparse Rulers
Hannah Lawrence, Jerry Li, Cameron Musco, Christopher Musco
Appeared at ICASSP, 2020

By building new randomized "ruler" sampling constructions, we show how to use sublinear sparse Fourier transform algorithms for sample-efficient, low-rank Toeplitz covariance estimation.

Service
Organizer, Boston Symmetry Day, Fall and Spring 2023

Teaching Assistant, 6.S966 Symmetry and its Applications to Machine Learning, Spring 2023

Hertz Foundation Summer Workshop Committee, Fall 2021 and Spring 2022

Women in Learning Theory Mentor, Spring 2020

Applied Math Departmental Student Advisory Committee, Spring 2019

Dean's Committee on Science and Quantitative Reasoning, Fall 2018

Undergraduate Learning Assistant, CS 365 (Design and Analysis of Algorithms), Spring 2018

Undergraduate Learning Assistant, CS 201 (Introduction to Computer Science), Fall 2017

Undergraduate Learning Assistant, CS 223 (Data Structures and Algorithms), Spring 2017

Website template credits.