Publications

(*: co-first, +: co-last)
Neural Manifold Capacity Captures Representation Geometry, Correlations, and Task-Efficiency Across Species and Behaviors
Chi-Ning Chou, Luke Arend, Albert J. Wakhloo, Royoung Kim, Will Slatton, SueYeon Chung
We present effective Geometric measures from Correlated Manifold Capacity theory (GCMC) for probing the functional structure in neural representations. We utilize a statistical physics approach to establish analytical connections between neural covariability and downstream readout efficiency. These effective geometric measures capture both stimulus-driven and behavior-driven structures in neural population activities, while extracting computationally relevant information from neural data into intuitive and interpretable analysis descriptors. [pdf] bioRxiv, 2024

Neural Population Geometry and Optimal Coding of Tasks with Shared Latent Structure
Albert J. Wakhloo, Will Slatton, and SueYeon Chung
Humans and animals can recognize latent structures in their environment and apply this information to efficiently navigate the world. Several recent works argue that the brain supports these abilities by forming neural representations that encode such latent structures in flexible, generalizable ways. However, it remains unclear what aspects of neural population activity contribute to these computational capabilities. Here, we develop an analytical theory linking the mesoscopic statistics of a neural population’s activity to generalization performance on a multi-task learning problem. [pdf] arXiv, 2024

Probing Biological and Artificial Neural Networks with Task-dependent Neural Manifolds
Michael Kuoch*, Chi-Ning Chou*, Nikhil Parthasarathy, Joel Dapello, James J. DiCarlo, Haim Sompolinsky, SueYeon Chung
Recently, growth in our understanding of the computations performed in both biological and artificial neural networks has largely been driven by either low-level mechanistic studies or global normative approaches. However, concrete methodologies for bridging the gap between these levels of abstraction remain elusive. In this work, we investigate the internal mechanisms of neural networks through the lens of neural population geometry, aiming to provide understanding at an intermediate level of abstraction, as a way to bridge that gap. [pdf] [bib] Conference on Parsimony and Learning, 2024

Unsupervised learning on spontaneous retinal activity leads to efficient neural representation geometry
Andrew Ligeralde, Yilun Kuang, Thomas Edward Yerxa, Miah N. Pitcher, Marla Feller, SueYeon Chung
Prior to the onset of vision, neurons in the developing mammalian retina spontaneously fire in correlated activity patterns known as retinal waves. Experimental evidence suggests that retinal waves strongly influence the emergence of sensory representations before visual experience. We aim to model this early stage of functional development by using movies of neurally active developing retinas as pre-training data for neural networks. [pdf] UniReps workshop, NeurIPS, 2023

A Spectral Theory of Neural Prediction and Alignment
Abdulkadir Canatar*, Jenelle Feather*, Albert Wakhloo, SueYeon Chung
Many different state-of-the-art deep neural networks yield similar neural predictions, but it remains unclear how to differentiate among models that perform equally well at predicting neural responses. To gain insight into this, we use a recent theoretical framework that relates the generalization error from regression to the spectral bias of the model activations and the alignment of the neural responses onto the learnable subspace of the model. [pdf] [bib] [code] Spotlight paper, NeurIPS, 2023
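
The two ingredients the theory combines are straightforward to estimate from data. Below is a minimal sketch (illustrative only; `model_acts` and `neural_resp` are hypothetical stand-ins for real activations and recordings) that computes a model representation's eigenspectrum and the alignment of neural responses onto the model's principal axes.

```python
import numpy as np

# Hypothetical stand-ins: rows are stimuli, columns are units.
rng = np.random.default_rng(0)
model_acts = rng.standard_normal((500, 200))   # model activations
neural_resp = rng.standard_normal((500, 50))   # recorded responses

# Spectral bias: eigenspectrum of the (centered) model representation.
X = model_acts - model_acts.mean(0)
U, s, _ = np.linalg.svd(X, full_matrices=False)
eigvals = s**2 / X.shape[0]

# Alignment: fraction of neural response energy along each model axis.
Y = neural_resp - neural_resp.mean(0)
alignment = ((U.T @ Y) ** 2).sum(1) / (Y**2).sum()
print(eigvals[:5], alignment[:5])
```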

Learning Efficient Coding of Natural Images with Maximum Manifold Capacity Representations
Thomas Yerxa, Yilun Kuang, Eero Simoncelli, SueYeon Chung
Self-supervised learning (SSL) provides a strategy for constructing useful representations of images without relying on hand-assigned labels. Many such methods aim to map distinct views of the same scene or object to nearby points in the representation space, while employing some constraint to prevent representational collapse. Here we recast the problem in terms of efficient coding by adopting manifold capacity. [pdf] [bib] NeurIPS, 2023
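
As a rough illustration of the objective (our reading of the approach, not the paper's reference implementation): given L2-normalized embeddings of several augmented views per image, a batch can be scored by the nuclear norm of the matrix of per-image view centroids.

```python
import numpy as np

def mmcr_style_loss(z):
    """z: (images, views, dim) L2-normalized embeddings.
    Returns the negative nuclear norm of the view-centroid matrix,
    a capacity-inspired batch objective to minimize (sketch only)."""
    centroids = z.mean(axis=1)                      # (images, dim)
    return -np.linalg.norm(centroids, ord="nuc")

rng = np.random.default_rng(1)
z = rng.standard_normal((32, 4, 128))
z /= np.linalg.norm(z, axis=-1, keepdims=True)      # unit-norm embeddings
print(mmcr_style_loss(z))
```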

Social learning enhances stimulus representations in the auditory cortex
Nihaad Paraouty, Justin D. Yao, Léo Varnet, Chi-Ning Chou, SueYeon Chung, Dan H. Sanes
Social learning (SL) through experience with conspecifics can facilitate the acquisition of many behaviors. Thus, when Mongolian gerbils are exposed to a demonstrator performing an auditory discrimination task, their subsequent task acquisition is facilitated, even in the absence of visual cues. Here, we show that transient inactivation of auditory cortex (AC) during exposure caused a significant delay in task acquisition during the subsequent practice phase, suggesting that AC activity is necessary for SL. [pdf] [bib] Nature Communications, 2023

Linear Classification of Neural Manifolds with Correlated Variability
Albert J. Wakhloo, Tamara J. Sussman, and SueYeon Chung
In this letter, we calculate how correlations between object representations affect the capacity, a measure of linear separability. We show that for spherical object manifolds, introducing correlations between centroids effectively pushes the spheres closer together, while introducing correlations between the spheres’ axes effectively shrinks their radii, revealing a duality between neural correlations and geometry. [pdf] [bib] [code] Physical Review Letters, 2023
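
The duality is easy to see in simulation. The sketch below (an empirical illustration, not the letter's analytical calculation) measures how often random dichotomies of spherical manifolds remain linearly separable as centroid correlations are dialed up.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
P, N, M, radius = 20, 100, 30, 0.5     # manifolds, dims, points per manifold

def separable_fraction(centroid_corr, trials=20):
    """Fraction of random dichotomies a linear classifier separates."""
    hits = 0
    for _ in range(trials):
        shared = rng.standard_normal(N)             # source of correlation
        centroids = (np.sqrt(1 - centroid_corr) * rng.standard_normal((P, N))
                     + np.sqrt(centroid_corr) * shared)
        points = centroids[:, None, :] + radius * rng.standard_normal((P, M, N))
        labels = rng.choice([0, 1], size=P)
        while labels.min() == labels.max():          # need both classes present
            labels = rng.choice([0, 1], size=P)
        X, y = points.reshape(P * M, N), np.repeat(labels, M)
        acc = LinearSVC(C=1e6, max_iter=20000).fit(X, y).score(X, y)
        hits += acc == 1.0
    return hits / trials

for c in (0.0, 0.3, 0.6):
    print(c, separable_fraction(c))
```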

Unveiling the benefits of multitasking in disentangled representation formation
Jenelle Feather, SueYeon Chung
[pdf][bib] Trends in Cognitive Sciences, 2023

Transformation of acoustic information to sensory decision variables in the parietal cortex
Justin D. Yao*, Klavdia O. Zemlianova*, David L. Hocker, Cristina Savin, Christine M. Constantinople, SueYeon Chung, and Dan H. Sanes
[pdf] [bib] PNAS, 2023

The Implicit Bias of Gradient Descent on Generalized Gated Linear Networks
Samuel Lippl, L.F. Abbott, and SueYeon Chung
Understanding the asymptotic behavior of gradient-descent training of deep neural networks is essential for revealing inductive biases and improving network performance. We derive the infinite-time training limit of a mathematically tractable class of deep nonlinear neural networks, gated linear networks (GLNs), and generalize these results to gated networks described by general homogeneous polynomials. [pdf] [bib] arXiv, 2022
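
A compact way to see what "gated linear" means (our toy identity; the paper treats a general homogeneous class): a ReLU network is exactly a linear network whose weights are switched on and off by input-dependent gates.

```python
import numpy as np

rng = np.random.default_rng(3)
W1 = rng.standard_normal((16, 5))
w2 = rng.standard_normal(16)

def relu_net(x):
    return w2 @ np.maximum(W1 @ x, 0.0)

def gated_linear_view(x):
    # Identical computation viewed as a gated linear network: the gates
    # depend on x, but given the gates the output is linear in x.
    gates = (W1 @ x > 0).astype(float)
    return (w2 * gates) @ (W1 @ x)

x = rng.standard_normal(5)
assert np.isclose(relu_net(x), gated_linear_view(x))
```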

Divisive Feature Normalization Improves Image Recognition Performance in AlexNet
Michelle Miller, SueYeon Chung, Kenneth D. Miller
Local divisive normalization provides a phenomenological description of many nonlinear response properties of neurons across visual cortical areas. To gain insight into the utility of this operation, we studied the effects on AlexNet of a local divisive normalization between features, with learned parameters. [pdf] [bib] ICLR, 2022
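
In its generic form (a schematic with hypothetical parameter names; the paper learns these parameters), divisive normalization divides each channel's response by a weighted pool over channels at the same spatial location.

```python
import numpy as np

def divisive_norm(x, w, sigma=1.0, p=2.0):
    """x: (channels, H, W) feature maps; w: (channels, channels)
    cross-channel pooling weights. Generic textbook form, not the
    paper's exact learned parameterization."""
    pool = np.tensordot(w, np.abs(x) ** p, axes=(1, 0))   # (channels, H, W)
    return x / (sigma ** p + pool) ** (1.0 / p)

rng = np.random.default_rng(4)
x = rng.standard_normal((8, 14, 14))
w = np.full((8, 8), 1.0 / 8)                              # uniform pooling
y = divisive_norm(x, w)
```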

Neural population geometry: An approach for understanding biological and artificial neural networks
SueYeon Chung, Larry Abbott
We highlight recent studies of neural population geometry: untangling in perception, classification theory of manifolds, abstraction in cognitive systems, topology underlying cognitive maps, dynamic untangling in motor systems, and a dynamic approach to cognition. [pdf] [bib] Current Opinion in Neurobiology, 2021

Credit Assignment Through Broadcasting a Global Error Vector
David G. Clark, L.F. Abbott, SueYeon Chung
Here, we explore the extent to which a globally broadcast learning signal, coupled with local weight updates, enables training of DNNs. We present both a learning rule, called global error-vector broadcasting (GEVB), and a class of DNNs, called vectorized nonnegative networks (VNNs), in which this learning rule operates. [pdf] [code] [bib] NeurIPS, 2021
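
A toy flavor of the setting (this sketch is essentially direct feedback alignment, a close relative; see the paper for the exact GEVB rule and the vectorized nonnegative networks it operates in): every layer updates its weights from the same broadcast output error rather than a layer-by-layer backpropagated one.

```python
import numpy as np

rng = np.random.default_rng(5)
W1 = 0.1 * rng.standard_normal((20, 10))
W2 = 0.1 * rng.standard_normal((3, 20))
B = rng.standard_normal((20, 3))        # fixed feedback from error to hidden

def train_step(x, target, lr=0.01):
    global W1, W2
    h = np.maximum(W1 @ x, 0.0)
    e = W2 @ h - target                 # global error vector, broadcast everywhere
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer((B @ e) * (h > 0), x)   # local update from broadcast error

for _ in range(200):
    x = rng.standard_normal(10)
    train_step(x, np.tanh(x[:3]))       # arbitrary toy target
```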

Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception
Joel Dapello*, Jenelle Feather*, Hang Le*, Tiago Marques, David D. Cox, Josh H. McDermott, James J. DiCarlo, SueYeon Chung
Using recently developed geometrical techniques from computational neuroscience, we investigate how adversarial perturbations influence the internal representations of standard, adversarially trained, and biologically inspired stochastic networks. We find distinct geometric signatures for each type of network, revealing different mechanisms for achieving robust representations. [pdf] [code] [bib] NeurIPS, 2021

Syntactic Perturbations Reveal Representational Correlates of Hierarchical Phrase Structure in Pretrained Language Models
Matteo Alleman, Jonathan Mamou, Miguel A Del Rio, Hanlin Tang, Yoon Kim+, SueYeon Chung+
It is not entirely clear what aspects of sentence-level syntax are captured by vector-based language representations, nor how (if at all) they are built along the stacked layers of the network. In this paper, we aim to address such questions with a general class of interventional, input perturbation-based analyses of representations from pretrained language models. Importing from computational and cognitive neuroscience the notion of representational invariance, we perform a series of probes designed to test the sensitivity of these representations to several kinds of structure in sentences. [pdf] [bib] ACL Workshop, Representation Learning for NLP, 2021

Understanding the Logit Distributions of Adversarially-Trained Deep Neural Networks
Landan Seguin, Anthony Ndirango, Neeli Mishra, SueYeon Chung, Tyler Lee
Although adversarial training is successful at mitigating adversarial attacks, the behavioral differences between adversarially-trained (AT) models and standard models are still poorly understood. Motivated by a recent study on learning robustness without input perturbations by distilling an AT model, we explore what is learned during adversarial training by analyzing the distribution of logits in AT models. [pdf] [bib] arXiv, 2021

On the geometry of generalization and memorization in deep neural networks
Cory Stephenson*, Suchismita Padhy*, Abhinav Ganesh, Yue Hui, Hanlin Tang, and SueYeon Chung
To examine the structure of when and where memorization occurs in a deep network, we use a recently developed replica-based mean field theoretic geometric analysis method. We find that all layers preferentially learn from examples which share features, and link this behavior to generalization performance. [pdf] [bib] ICLR, 2021
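
The full analysis uses replica mean-field theory, but simple spectral proxies convey the flavor: the sketch below (illustrative descriptors, not the mean-field method itself) computes a per-class manifold radius and participation-ratio dimension from feature vectors.

```python
import numpy as np

def manifold_descriptors(feats):
    """feats: (samples, dim) features for one class. Returns simple
    proxies (spread, participation-ratio dimension) for the quantities
    the replica-based analysis computes exactly."""
    eigs = np.linalg.eigvalsh(np.cov((feats - feats.mean(0)).T))
    radius = np.sqrt(eigs.sum())                     # overall spread
    dim = eigs.sum() ** 2 / (eigs ** 2).sum()        # participation ratio
    return radius, dim

rng = np.random.default_rng(6)
print(manifold_descriptors(rng.standard_normal((100, 30))))
```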

Emergence of Separable Manifolds in Deep Language Representations
Jonathan Mamou*, Hang Le*, Miguel A Del Rio, Cory Stephenson, Hanlin Tang, Yoon Kim, SueYeon Chung
We utilize mean-field theoretic manifold analysis, a recent technique from computational neuroscience that connects the geometry of feature representations with linear separability of classes, to analyze language representations from large-scale contextual embedding models. We explore representations from different model families (BERT, RoBERTa, GPT, etc.) and find evidence for the emergence of linguistic manifolds across layer depth (e.g., manifolds for part-of-speech tags), especially in ambiguous data (i.e., words with multiple part-of-speech tags, or part-of-speech classes including many words). [pdf] [code] [bib] ICML, 2020

Separability and geometry of object manifolds in deep neural networks
Uri Cohen*, SueYeon Chung*, Daniel D. Lee, Haim Sompolinsky
We demonstrate that changes in the geometry of the associated object manifolds underlie the improved classification capacity across the deep hierarchy, and shed light on the functional roles different levels in the hierarchy play to achieve it, through orchestrated reduction of manifolds’ radius, dimensionality, and inter-manifold correlations. [pdf] [code] [bib] Nature Communications, 2020

On 1/n neural representation and robustness
Josue Nassar*, Piotr Aleksander Sokol*, SueYeon Chung, Kenneth D. Harris, Il Memming Park
In this work, we investigate the robustness to perturbations in neural networks by juxtaposing experimental results regarding the covariance spectrum of neural representations in mouse V1 (Stringer et al.) with artificial neural networks. We use adversarial robustness to probe Stringer et al.’s theory regarding the causal role of a 1/n covariance spectrum. [pdf] [bib] NeurIPS, 2020
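
The spectrum in question is easy to estimate: the sketch below (illustrative) fits a power-law exponent to a representation's covariance eigenspectrum on a log-log scale; an exponent near -1 corresponds to the 1/n spectrum reported by Stringer et al.

```python
import numpy as np

def spectrum_exponent(acts, lo=10, hi=100):
    """Slope of log eigenvalue vs. log rank over ranks [lo, hi)."""
    eigs = np.linalg.eigvalsh(np.cov((acts - acts.mean(0)).T))[::-1]
    ranks = np.arange(1, eigs.size + 1)
    slope, _ = np.polyfit(np.log(ranks[lo:hi]), np.log(eigs[lo:hi]), 1)
    return slope

rng = np.random.default_rng(7)
print(spectrum_exponent(rng.standard_normal((2000, 300))))
```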

Untangling in Invariant Speech Recognition
Cory Stephenson, Jenelle Feather, Suchismita Padhy, Oguz Elibol, Hanlin Tang, Josh McDermott, SueYeon Chung
Deep neural networks have achieved impressive performance in audio processing applications, both as sub-components of larger systems and as complete end-to-end systems by themselves. Despite their empirical successes, comparatively little is understood about how these audio models accomplish these tasks. In this work, we employ a recently developed statistical mechanical theory that connects geometric properties of network representations and the separability of classes to probe how information is untangled within neural networks trained to recognize speech. [pdf] [code] [bib] NeurIPS, 2019

Classification and Geometry of General Perceptual Manifolds
SueYeon Chung, Daniel D. Lee, Haim Sompolinsky
We develop a statistical mechanical theory for the linear classification of manifolds with arbitrary geometry, revealing a remarkable relation to the mathematics of conic decomposition. We show how special anchor points on the manifolds can be used to define novel geometrical measures of radius and dimension, which can explain the classification capacity for manifolds of various geometries. [pdf] [bib] Physical Review X, 2018
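
For orientation, the definition being generalized (standard background, not a new result of this paper): capacity is the critical load at which random dichotomies of manifolds stop being linearly separable, and for point-like manifolds at zero margin it recovers the classical perceptron value.

```latex
% P manifolds in R^N with random binary labels; capacity is the critical
% load at which a separating hyperplane stops existing (w.h.p.):
\[
  \alpha_c = \frac{P_c}{N}.
\]
% For zero-radius manifolds (points) at margin \kappa = 0, this recovers
% the classical Cover/Gardner result \(\alpha_0 = 2\); larger manifold
% radius and dimension reduce the capacity.
```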

Learning Data Manifolds with a Cutting Plane Method
SueYeon Chung, Uri Cohen, Daniel D. Lee, Haim Sompolinsky
[pdf] [bib] Neural Computation, 2018

Linear readout of object manifolds
SueYeon Chung, Daniel D. Lee, Haim Sompolinsky
We present a theory that characterizes the ability of a linear readout network, the perceptron, to classify objects from variable neural responses. We show how the readout perceptron capacity depends on the dimensionality, size, and shape of the object manifolds in its input neural representation. [pdf] [bib] Physical Review E, 2016