Geometry linked to untangling efficiency reveals structure and computation in neural populations
Chi-Ning Chou, Royoung Kim, Luke A Arend, Yao-Yuan Yang, Brett D Mensh, Won Mok Shim, Matthew G Perich, SueYeon Chung
We introduce GLUE (Geometry Linked to Untangling Efficiency), a data-driven analysis framework grounded in manifold capacity theory that links changes in the geometrical properties of neural activity patterns to representational untangling at the computational level. We applied GLUE to over seven neuroscience datasets—spanning multiple organisms, tasks, and recording techniques—and found that task-relevant representations untangle in many domains, including along the cortical hierarchy, through learning, and over the course of intrinsic neural dynamics. [
pdf] [
bib]
bioRxiv, 2025
Neural Population Geometry and Optimal Coding of Tasks with Shared Latent Structure
Albert J. Wakhloo, Will Slatton, and SueYeon Chung
Humans and animals can recognize latent structures in their environment and apply this information to efficiently navigate the world. Several recent works argue that the brain supports these abilities by forming neural representations that encode such latent structures in flexible, generalizable ways. However, it remains unclear what aspects of neural population activity are contributing to these computational capabilities. Here, we develop an analytical theory linking the mesoscopic statistics of a neural population’s activity to generalization performance on a multi-task learning problem. [
pdf]
arXiv, 2024
Linear Classification of Neural Manifolds with Correlated Variability
Albert J. Wakhloo, Tamara J. Sussman, and SueYeon Chung
In this letter, we calculate how correlations between object representations affect the capacity, a measure of linear separability. We show that for spherical object manifolds, introducing correlations between centroids effectively pushes the spheres closer together, while introducing correlations between the spheres’ axes effectively shrinks their radii, revealing a duality between neural correlations and geometry. [
pdf][
bib]
Physical Review Letters, 2023
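The separability notion in the entry above can also be probed by brute force. Below is a minimal, illustrative simulation (not the paper's mean-field theory): sample spherical manifolds in N dimensions, optionally correlate their centroids through a shared random direction, assign random binary labels, and check how often a linear SVM separates all sampled points. The manifold count, radius, point count, and the correlation construction are arbitrary illustrative choices; the paper's prediction is that centroid correlations act like reduced centroid separation and therefore lower the load at which separability is lost.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def separable_fraction(P, N, radius, centroid_corr=0.0, n_points=20, n_trials=10):
    """Fraction of random label assignments over P spherical manifolds that a
    linear classifier separates perfectly (every sampled point on the correct side)."""
    successes = 0
    for _ in range(n_trials):
        # Correlated centroids: mix a shared random direction into every centroid.
        shared = rng.standard_normal(N)
        centroids = (np.sqrt(1.0 - centroid_corr) * rng.standard_normal((P, N))
                     + np.sqrt(centroid_corr) * shared)
        X, y = [], []
        labels = rng.choice([-1, 1], size=P)
        for mu in range(P):
            # Points drawn uniformly on a sphere of the given radius around the centroid.
            noise = rng.standard_normal((n_points, N))
            noise *= radius / np.linalg.norm(noise, axis=1, keepdims=True)
            X.append(centroids[mu] + noise)
            y.append(np.full(n_points, labels[mu]))
        X, y = np.vstack(X), np.concatenate(y)
        clf = SVC(kernel="linear", C=1e4).fit(X, y)
        successes += int((clf.predict(X) == y).all())
    return successes / n_trials

N = 50
for corr in (0.0, 0.5):
    for P in (40, 60, 80):
        frac = separable_fraction(P, N, radius=1.0, centroid_corr=corr)
        print(f"centroid corr={corr:.1f}  load P/N={P / N:.1f}  separable fraction={frac:.2f}")
```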
Neural population geometry: An approach for understanding biological and artificial neural networks
SueYeon Chung, Larry Abbott
We highlight recent studies of neural population geometry: untangling in perception, classification theory of manifolds, abstraction in cognitive systems, topology underlying cognitive maps, dynamic untangling in motor systems, and a dynamic approach to cognition. [
pdf] [
bib]
Current Opinion in Neurobiology, 2021
Classification and Geometry of General Perceptual Manifolds
SueYeon Chung, Daniel D. Lee, Haim Sompolinsky
We develop a statistical mechanical theory for the linear classification of manifolds with arbitrary geometry, revealing a remarkable relation to the mathematics of conic decomposition. We show how special anchor points on the manifolds can be used to define novel geometrical measures of radius and dimension, which can explain the classification capacity for manifolds of various geometries. [
pdf] [
bib]
Physical Review X, 2018
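For orientation, the classical point-classification results that this manifold theory generalizes can be stated compactly. The formulas below are the standard Cover and Gardner results; the manifold radius R_M and dimension D_M refer to the paper's anchor-point-based measures, and the last line is only the rough small-manifold intuition, not the full theory.

```latex
% Cover's count of linearly separable dichotomies of P points in general position
% in R^N, giving a capacity of two points per input dimension:
C(P,N) = 2\sum_{k=0}^{N-1}\binom{P-1}{k}, \qquad
\alpha_c = \frac{P}{N}\Big|_{C(P,N)/2^{P} = 1/2} = 2 .

% Gardner's capacity for points classified with margin \kappa:
\alpha_0(\kappa)^{-1} = \int_{-\kappa}^{\infty}\frac{e^{-t^{2}/2}}{\sqrt{2\pi}}\,(t+\kappa)^{2}\,dt .

% Rough intuition from the manifold theory: small manifolds with radius R_M and
% dimension D_M behave approximately like points classified at an effective margin
% of order R_M\sqrt{D_M}:
\alpha_M \approx \alpha_0\!\left(R_M\sqrt{D_M}\right).
```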
Linear readout of object manifolds
SueYeon Chung, Daniel D. Lee, Haim Sompolinsky
We present a theory that characterizes the ability of a linear readout network, the perceptron, to classify objects from variable neural responses. We show how the readout perceptron capacity depends on the dimensionality, size, and shape of the object manifolds in its input neural representation. [
pdf] [
bib]
Physical Review E, 2016
Variations in neuronal selectivity create efficient representational geometries for perception
Sonica Saraf, J. Anthony Movshon, SueYeon Chung
Neurons exhibit a wide variety of selective response properties, but the reasons for this diversity are unknown. Here, we related the distribution of neuronal tuning properties to the information capacity of the population. Our results from theory, simulations, and analysis of recordings from macaque primary visual cortex (V1) reveal that diversity in amplitude and bandwidth drives complementary changes to the representational geometry of a population. Amplitude diversity pushes the centers of the representations further apart, whereas bandwidth heterogeneity decorrelates the center locations. These geometric changes separate out representations for distinct stimuli, creating more efficient encoding. [
pdf] [
bib]
bioRxiv, 2025
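The two geometric effects summarized in the entry above can be spelled out with a small simulation. The sketch below is only illustrative, not the paper's theory or data analysis: it builds circular-Gaussian tuning curves with either homogeneous or heterogeneous amplitudes and bandwidths (parameter ranges chosen arbitrarily, matched in mean) and summarizes the stimulus "centers" by their mean pairwise distance and correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_stim = 200, 16
theta = np.linspace(0, np.pi, n_stim, endpoint=False)   # stimulus orientations
prefs = rng.uniform(0, np.pi, n_neurons)                # preferred orientations

def responses(amps, widths):
    """n_stim x n_neurons matrix of circular-Gaussian tuning-curve responses."""
    d = np.angle(np.exp(2j * (theta[:, None] - prefs[None, :]))) / 2  # wrapped orientation difference
    return amps[None, :] * np.exp(-d ** 2 / (2 * widths[None, :] ** 2))

def center_stats(R):
    """Mean pairwise distance and correlation between the stimulus 'centers' (rows of R)."""
    dist = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1)
    corr = np.corrcoef(R)
    off = ~np.eye(n_stim, dtype=bool)
    return dist[off].mean(), corr[off].mean()

base_amp, base_width = 1.0, 0.3

print("amplitude diversity (bandwidth fixed):")
for label, amps in [("homogeneous  ", np.full(n_neurons, base_amp)),
                    ("heterogeneous", rng.lognormal(np.log(base_amp) - 0.5, 1.0, n_neurons))]:
    dist, _ = center_stats(responses(amps, np.full(n_neurons, base_width)))
    print(f"  {label}  mean center distance = {dist:.2f}")

print("bandwidth diversity (amplitude fixed):")
for label, widths in [("homogeneous  ", np.full(n_neurons, base_width)),
                      ("heterogeneous", rng.uniform(0.1, 0.5, n_neurons))]:
    _, corr = center_stats(responses(np.full(n_neurons, base_amp), widths))
    print(f"  {label}  mean center correlation = {corr:.2f}")
```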
The Geometry of Prompting: Unveiling Distinct Mechanisms of Task Adaptation in Language Models
Artem Kirsanov, Chi-Ning Chou, Kyunghyun Cho, SueYeon Chung
We investigate how different prompting methods affect the geometry of representations in language models. Employing a framework grounded in statistical physics, we reveal that various prompting techniques, while achieving similar performance, operate through distinct representational mechanisms for task adaptation. Our analysis highlights the critical role of input distribution samples and label semantics in few-shot in-context learning. [
pdf][
bib]
NAACL, 2025
Separability and geometry of object manifolds in deep neural networks
Uri Cohen*, SueYeon Chung*, Daniel D. Lee, Haim Sompolinsky (*:co-first)
We demonstrate that the improved classification capacity observed along the deep network hierarchy arises from changes in the geometry of the associated object manifolds, and shed light on the functional roles that different levels in the hierarchy play to achieve it, through orchestrated reduction of the manifolds’ radius, dimensionality, and inter-manifold correlations. [
pdf] [
code] [
bib]
Nature Communications, 2020
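As a companion to the entry above, here is a small, generic sketch of per-layer manifold probes: for each class of inputs, track an RMS radius, a participation-ratio dimension, and the correlations between class centroids across the layers of a network. The random ReLU network and Gaussian "object manifolds" are placeholders, and these PCA-style summaries are simple proxies, not the paper's mean-field-theoretic radius, dimension, and capacity.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_activations(X, weights):
    """Return activations of each ReLU layer for input matrix X (samples x dims)."""
    acts, h = [], X
    for W in weights:
        h = np.maximum(h @ W, 0.0)
        acts.append(h)
    return acts

def manifold_stats(acts_per_class):
    """Per-class RMS radius, participation-ratio dimension, and mean absolute
    correlation between class centroids."""
    radii, dims, centers = [], [], []
    for A in acts_per_class:
        c = A.mean(axis=0)
        centered = A - c
        lam = np.clip(np.linalg.eigvalsh(np.cov(centered, rowvar=False)), 0, None)
        radii.append(np.sqrt((centered ** 2).sum(axis=1).mean()))
        dims.append(lam.sum() ** 2 / (lam ** 2).sum())          # participation ratio
        centers.append(c)
    C = np.corrcoef(np.stack(centers))
    off = ~np.eye(len(centers), dtype=bool)
    return np.mean(radii), np.mean(dims), np.abs(C[off]).mean()

# Toy data: 10 "object manifolds" = Gaussian clouds in 100 dimensions.
n_class, n_per, dim = 10, 50, 100
data = [rng.standard_normal(dim) * 3 + rng.standard_normal((n_per, dim))
        for _ in range(n_class)]

# Placeholder "model": a random 4-layer ReLU network.
weights = [rng.standard_normal((dim, dim)) / np.sqrt(dim) for _ in range(4)]
for layer in range(len(weights)):
    per_class = [layer_activations(X, weights)[layer] for X in data]
    r, d, cc = manifold_stats(per_class)
    print(f"layer {layer + 1}: radius={r:.2f}  PR dim={d:.1f}  |center corr|={cc:.2f}")
```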
Untangling in Invariant Speech Recognition
Cory Stephenson, Jenelle Feather, Suchismita Padhy, Oguz Elibol, Hanlin Tang, Josh McDermott, SueYeon Chung
Deep neural networks have achieved impressive performance in audio processing applications, both as sub-components of larger systems and as complete end-to-end systems by themselves. Despite their empirical successes, comparatively little is understood about how these audio models accomplish these tasks. In this work, we employ a recently developed statistical mechanical theory that connects geometric properties of network representations and the separability of classes to probe how information is untangled within neural networks trained to recognize speech. [
pdf] [
code] [
bib]
NeurIPS, 2019
Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception
Joel Dapello*, Jenelle Feather*, Hang Le*, Tiago Marques, David D. Cox, Josh H. McDermott, James J. DiCarlo, SueYeon Chung (*:co-first)
Using recently developed geometrical techniques from computational neuroscience, we investigate how adversarial perturbations influence the internal representations of standard, adversarially trained, and biologically-inspired stochastic networks. We find distinct geometric signatures for each type of network, revealing different mechanisms for achieving robust representations. [
pdf] [
code] [
bib]
NeurIPS, 2021
Transformation of acoustic information to sensory decision variables in the parietal cortex.
Justin D. Yao*, Klavdia O. Zemlianova*, David L. Hocker, Cristina Savin, Christine M. Constantinople, SueYeon Chung, and Dan H. Sanes (*:co-first)
The process by which sensory evidence contributes to perceptual choices requires an understanding of its transformation into decision variables. Here, we address this issue by evaluating the neural representation of acoustic information in the auditory cortex-recipient parietal cortex while gerbils either performed a two-alternative forced-choice auditory discrimination task or passively listened to identical acoustic stimuli. Our findings demonstrate how parietal cortex neurons integrate and transform encoded auditory information to guide sound-driven perceptual decisions. [
pdf] [
bib]
PNAS, 2023
Contrastive-Equivariant Self-Supervised Learning Improves Alignment with Primate Visual Area IT
Thomas Yerxa, Jenelle Feather, Eero Simoncelli, SueYeon Chung
We introduce a novel framework for converting standard invariant SSL losses into “contrastive-equivariant” versions that encourage preservation of input transformations without supervised access to the transformation parameters. We demonstrate that our proposed method systematically increases the ability of models to predict responses in macaque inferior temporal cortex. [
pdf][
bib]
NeurIPS, 2024
A Spectral Theory of Neural Prediction and Alignment
Abdulkadir Canatar*, Jenelle Feather*, Albert Wakhloo, SueYeon Chung
Many different state-of-the-art deep neural networks yield similar neural predictions, but it remains unclear how to differentiate among models that perform equally well at predicting neural responses. To gain insight into this, we use a recent theoretical framework that relates the generalization error from regression to the spectral bias of the model activations and the alignment of the neural responses onto the learnable subspace of the model. [
pdf] [
bib] [
code]
Spotlight paper, NeurIPS, 2023
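A minimal sketch of the kind of spectral diagnostics described in the entry above: decompose model activations into principal components, measure how neural responses project onto ("align with") those components, and check ridge-regression prediction. The random "model features" and synthetic "neural responses" below are placeholders, and these quantities are generic proxies rather than the paper's exact estimators.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stim, n_feat, n_neuron = 500, 200, 50

# Placeholder model activations with a decaying eigenspectrum.
U = np.linalg.qr(rng.standard_normal((n_feat, n_feat)))[0]
spectrum = 1.0 / np.arange(1, n_feat + 1) ** 1.5
Phi = rng.standard_normal((n_stim, n_feat)) @ (U * np.sqrt(spectrum)) @ U.T

# Placeholder neural responses: mostly carried by the model's top components, plus noise.
w = rng.standard_normal((n_feat, n_neuron)) * (spectrum[:, None] ** 0.5)
Y = Phi @ U @ w + 0.1 * rng.standard_normal((n_stim, n_neuron))

# Spectral bias of the features and alignment of the responses with the feature PCs.
eigvals, eigvecs = np.linalg.eigh(np.cov(Phi, rowvar=False))
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
alignment = ((Phi @ eigvecs).T @ Y) ** 2          # response energy along each PC
alignment = alignment.sum(axis=1) / alignment.sum()

# Cross-validated ridge prediction of the responses from the features.
Xtr, Xte, Ytr, Yte = train_test_split(Phi, Y, test_size=0.3, random_state=0)
pred = Ridge(alpha=1.0).fit(Xtr, Ytr).predict(Xte)
r2 = 1 - ((Yte - pred) ** 2).sum() / ((Yte - Yte.mean(axis=0)) ** 2).sum()

print("top-10 PC share of feature variance:", (eigvals[:10].sum() / eigvals.sum()).round(3))
print("top-10 PC share of response energy:", alignment[:10].sum().round(3))
print("ridge generalization R^2:", r2.round(3))
```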
Learning Efficient Coding of Natural Images with Maximum Manifold Capacity Representations
Thomas Yerxa, Yilun Kuang, Eero Simoncelli, SueYeon Chung
Self-supervised Learning (SSL) provides a strategy for constructing useful representations of images without relying on hand-assigned labels. Many such methods aim to map distinct views of the same scene or object to nearby points in the representation space, while employing some constraint to prevent representational collapse. Here we recast the problem in terms of efficient coding by adopting manifold capacity. [
pdf][
bib]
NeurIPS, 2023
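Roughly, the objective sketched in the entry above can be written in a few lines. The PyTorch snippet below is a simplified sketch, not the paper's implementation: L2-normalize the embeddings of several augmented views of each image, average the views into per-image centroids, and maximize the nuclear norm of the centroid matrix as a manifold-capacity-motivated surrogate. The encoder, batch construction, and any additional loss terms are stand-ins.

```python
import torch

def mmcr_style_loss(z: torch.Tensor) -> torch.Tensor:
    """z: (batch, n_views, dim) embeddings of augmented views.
    Returns the negative nuclear norm of the matrix of view-averaged centroids."""
    z = torch.nn.functional.normalize(z, dim=-1)   # each view embedding on the unit sphere
    centroids = z.mean(dim=1)                      # (batch, dim) per-image centroids
    return -torch.linalg.matrix_norm(centroids, ord="nuc")

# Toy usage with a random MLP standing in for a real backbone.
encoder = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, 32))
views = torch.randn(16, 4, 128)                    # 16 images x 4 augmentations
z = encoder(views)                                 # Linear layers act on the last dimension
loss = mmcr_style_loss(z)
loss.backward()
print("loss:", float(loss))
```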
Credit Assignment Through Broadcasting a Global Error Vector
David G. Clark, L.F. Abbott, SueYeon Chung
Here, we explore the extent to which a globally broadcast learning signal, coupled with local weight updates, enables training of DNNs. We present both a learning rule, called global error-vector broadcasting (GEVB), and a class of DNNs, called vectorized nonnegative networks (VNNs), in which this learning rule operates. [
pdf] [
code] [
bib]
NeurIPS, 2021
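For orientation only, the toy below illustrates the general idea in the entry above, training with a globally broadcast error signal combined with purely local weight updates, implemented here as direct feedback alignment, a well-known related scheme in which the output error is sent to hidden layers through a fixed random matrix. The paper's GEVB rule and vectorized nonnegative networks (VNNs) differ in architecture and in how the broadcast error enters the update.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 20, 64, 5, 0.05

W1 = rng.standard_normal((n_in, n_hid)) * 0.1
W2 = rng.standard_normal((n_hid, n_out)) * 0.1
B1 = rng.standard_normal((n_out, n_hid)) * 0.1    # fixed random feedback (broadcast path)

# Toy regression task.
X = rng.standard_normal((512, n_in))
T = np.tanh(X @ rng.standard_normal((n_in, n_out)))

for step in range(2000):
    idx = rng.integers(0, len(X), size=32)
    x, t = X[idx], T[idx]
    h = np.tanh(x @ W1)
    y = h @ W2
    e = y - t                                      # global error vector, broadcast to all layers
    W2 -= lr * h.T @ e / len(x)                    # local update from pre/post activity and e
    W1 -= lr * x.T @ ((e @ B1) * (1 - h ** 2)) / len(x)
    if step % 500 == 0:
        print(f"step {step:4d}  mse={np.mean(e ** 2):.4f}")
```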