Research Highlights


Theory of Neural Manifolds

Neural Manifold Capacity Captures Representation Geometry, Correlations, and Task-Efficiency Across Species and Behaviors
Chi-Ning Chou, Luke Arend, Albert J. Wakhloo, Royoung Kim, Will Slatton, SueYeon Chung
We present effective Geometric measures from Correlated Manifold Capacity theory (GCMC) for probing the functional structure of neural representations. Using a statistical physics approach, we establish analytical connections between neural co-variability and downstream readout efficiency. These effective geometric measures capture both stimulus-driven and behavior-driven structure in neural population activity, distilling computationally relevant information from neural data into intuitive, interpretable descriptors. [pdf] bioRxiv, 2024
Linear Classification of Neural Manifolds with Correlated Variability
Albert J. Wakhloo, Tamara J. Sussman, and SueYeon Chung
In this letter, we calculate how correlations between object representations affect the capacity, a measure of linear separability. We show that for spherical object manifolds, introducing correlations between centroids effectively pushes the spheres closer together, while introducing correlations between the spheres’ axes effectively shrinks their radii, revealing a duality between neural correlations and geometry. [pdf] [bib] Physical Review Letters, 2023
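A minimal empirical illustration of capacity as linear separability (our sketch, not the paper’s replica-theory calculation; the `separable` helper and its parameter choices are hypothetical):

```python
# Estimate whether P spherical manifolds in N dimensions are linearly
# separable under random +/-1 labels, with optional correlations between
# manifold centroids induced by a shared direction.
import numpy as np
from sklearn.svm import LinearSVC

def separable(P, N, R=0.5, D=5, n_points=50, centroid_corr=0.0, seed=0):
    rng = np.random.default_rng(seed)
    shared = rng.standard_normal(N)            # common direction -> centroid correlations
    labels = rng.choice([-1, 1], size=P)
    X, y = [], []
    for mu in range(P):
        c = rng.standard_normal(N)
        c = np.sqrt(1 - centroid_corr) * c + np.sqrt(centroid_corr) * shared
        axes = rng.standard_normal((D, N)) / np.sqrt(D)    # manifold subspace
        s = rng.standard_normal((n_points, D))
        s *= R / np.linalg.norm(s, axis=1, keepdims=True)  # points on a radius-R sphere
        X.append(c + s @ axes)
        y.append(np.full(n_points, labels[mu]))
    X, y = np.vstack(X), np.concatenate(y)
    clf = LinearSVC(C=1e6, max_iter=100000).fit(X, y)      # ~hard-margin linear readout
    return clf.score(X, y) == 1.0

# Compare uncorrelated vs. correlated centroids at fixed P and N.
frac0 = np.mean([separable(P=40, N=100, centroid_corr=0.0, seed=s) for s in range(20)])
frac1 = np.mean([separable(P=40, N=100, centroid_corr=0.6, seed=s) for s in range(20)])
print(f"fraction separable: uncorrelated {frac0:.2f}, correlated {frac1:.2f}")
```

Sweeping `centroid_corr` at fixed P and N gives a quick empirical read on how centroid correlations degrade separability, consistent with the duality described above.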
Neural population geometry: An approach for understanding biological and artificial neural networks
SueYeon Chung, Larry Abbott
We highlight recent studies of neural population geometry: untangling in perception, classification theory of manifolds, abstraction in cognitive systems, topology underlying cognitive maps, dynamic untangling in motor systems, and a dynamic approach to cognition. [pdf] [bib] Current Opinion in Neurobiology, 2021
Classification and Geometry of General Perceptual Manifolds
SueYeon Chung, Daniel D. Lee, Haim Sompolinsky
We develop a statistical mechanical theory for the linear classification of manifolds with arbitrary geometry, revealing a remarkable relation to the mathematics of conic decomposition. We show how special anchor points on the manifolds can be used to define novel geometrical measures of radius and dimension, which can explain the classification capacity for manifolds of various geometries. [pdf] [bib] Physical Review X, 2018
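Schematically, in our notation (a paraphrase of the theory, not a quotation), the classical point capacity and its manifold generalization read:

```latex
\alpha_0(\kappa)^{-1} = \int_{-\kappa}^{\infty} \frac{e^{-t^2/2}}{\sqrt{2\pi}}\,(t+\kappa)^2\,dt,
\qquad
\alpha_M(\kappa) \approx \alpha_0\!\left(\frac{\kappa + R_M\sqrt{D_M}}{\sqrt{1+R_M^{2}}}\right),
```

where R_M and D_M are the anchor-point radius and dimension; the approximation holds in the large-D_M regime, where a manifold behaves like a point with an enlarged effective margin.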
Linear readout of object manifolds
SueYeon Chung, Daniel D. Lee, Haim Sompolinsky
We present a theory that characterizes the ability of a linear readout network, the perceptron, to classify objects from variable neural responses. We show how the readout perceptron capacity depends on the dimensionality, size, and shape of the object manifolds in its input neural representation. [pdf] [bib] Physical Review E, 2016
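As a quick numerical sanity check of the point-capacity formula quoted above (our sketch using standard SciPy quadrature, not code from the paper):

```python
# Evaluate alpha_0(kappa)^{-1} = \int_{-kappa}^{inf} Dt (t + kappa)^2,
# with Dt the standard Gaussian measure, and recover alpha_0(0) = 2.
import numpy as np
from scipy.integrate import quad

def alpha0(kappa):
    integrand = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi) * (t + kappa)**2
    inv, _ = quad(integrand, -kappa, np.inf)
    return 1.0 / inv

print(alpha0(0.0))  # -> 2.0, the classic perceptron capacity P/N = 2
```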


Manifold analysis in brain-inspired ANN models and neural data

Separability and geometry of object manifolds in deep neural networks
Uri Cohen*, SueYeon Chung*, Daniel D. Lee, Haim Sompolinsky (*:co-first)
Object manifolds become increasingly separable across the layers of deep networks trained for object recognition. We demonstrate that changes in the geometry of the associated object manifolds underlie this improved capacity, and shed light on the functional roles different levels of the hierarchy play in achieving it, through an orchestrated reduction of the manifolds’ radius, dimensionality, and inter-manifold correlations. [pdf] [code] [bib] Nature Communications, 2020
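A crude, model-agnostic diagnostic in the spirit of this analysis (our sketch; the paper’s mean-field measures, available via [code], are derived from the theory’s anchor points rather than raw variance spectra):

```python
# Per-class manifold size and dimension proxies from layer activations:
# RMS extent around the centroid, and the participation ratio of the
# variance spectrum. These are NOT the anchor-point measures of the theory.
import numpy as np

def manifold_geometry(acts):
    """acts: (n_samples, n_features) activations for one object class."""
    centered = acts - acts.mean(axis=0)
    lam = np.linalg.eigvalsh(np.cov(centered.T))   # variance spectrum
    lam = np.clip(lam, 0, None)
    dim = lam.sum()**2 / (lam**2).sum()            # participation ratio
    radius = np.sqrt(lam.sum())                    # RMS extent around centroid
    return radius, dim

# Usage: compare radius/dim of each class manifold across network layers;
# the paper reports orchestrated decreases along the hierarchy.
acts = np.random.randn(200, 512)                   # stand-in for real activations
print(manifold_geometry(acts))
```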
Untangling in Invariant Speech Recognition
Cory Stephenson, Jenelle Feather, Suchismita Padhy, Oguz Elibol, Hanlin Tang, Josh McDermott, SueYeon Chung
Deep neural networks have achieved impressive performance in audio processing, both as sub-components of larger systems and as complete end-to-end systems. Despite these empirical successes, comparatively little is understood about how such audio models accomplish their tasks. In this work, we employ a recently developed statistical mechanical theory connecting geometric properties of network representations to the separability of classes, and use it to probe how information is untangled within neural networks trained to recognize speech. [pdf] [code] [bib] NeurIPS, 2019
Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception
Joel Dapello*, Jenelle Feather*, Hang Le*, Tiago Marques, David D. Cox, Josh H. McDermott, James J. DiCarlo, SueYeon Chung (*:co-first)
Using recently developed geometrical techniques from computational neuroscience, we investigate how adversarial perturbations influence the internal representations of standard, adversarially trained, and biologically-inspired stochastic networks. We find distinct geometric signatures for each type of network, revealing different mechanisms for achieving robust representations. [pdf] [code] [bib] NeurIPS, 2021
Transformation of acoustic information to sensory decision variables in the parietal cortex
Justin D. Yao*, Klavdia O. Zemlianova*, David L. Hocker, Cristina Savin, Christine M. Constantinople, SueYeon Chung, and Dan H. Sanes (*:co-first)
The process by which sensory evidence contributes to perceptual choices requires an understanding of its transformation into decision variables. Here, we address this issue by evaluating the neural representation of acoustic information in the auditory cortex-recipient parietal cortex while gerbils either performed a two-alternative forced-choice auditory discrimination task or passively listened to identical acoustic stimuli. Our findings demonstrate how parietal cortex neurons integrate and transform encoded auditory information to guide sound-driven perceptual decisions. [pdf] [bib] PNAS, 2023


Bioplausible Learning

Learning Efficient Coding of Natural Images with Maximum Manifold Capacity Representations
Thomas Yerxa, Yilun Kuang, Eero Simoncelli, SueYeon Chung
[pdf] [bib] NeurIPS, 2023
Credit Assignment Through Broadcasting a Global Error Vector
David G. Clark, L.F. Abbott, SueYeon Chung
Here, we explore the extent to which a globally broadcast learning signal, coupled with local weight updates, enables the training of DNNs. We present both a learning rule, called global error-vector broadcasting (GEVB), and a class of DNNs, called vectorized nonnegative networks (VNNs), in which this learning rule operates. [pdf] [code] [bib] NeurIPS, 2021
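A minimal sketch of the broader family of global-error-broadcast rules this work belongs to (a direct-feedback-alignment-style toy of our own construction; GEVB itself operates on vectorized nonnegative networks and differs in detail):

```python
# Toy global-error-broadcast training step: the output error vector is
# broadcast to every layer, and each layer makes a purely local update.
# Hypothetical sizes and fixed random feedback matrices B; not the paper's rule.
import numpy as np

rng = np.random.default_rng(0)
sizes = [784, 256, 128, 10]
W = [rng.standard_normal((m, n)) / np.sqrt(n) for n, m in zip(sizes[:-1], sizes[1:])]
B = [rng.standard_normal((m, sizes[-1])) / np.sqrt(sizes[-1]) for m in sizes[1:-1]]
relu = lambda x: np.maximum(x, 0)

def step(x, target, lr=1e-2):
    h = [x]                                  # forward pass, caching activations
    for l, Wl in enumerate(W):
        z = Wl @ h[-1]
        h.append(relu(z) if l < len(W) - 1 else z)
    e = h[-1] - target                       # global error vector, broadcast everywhere
    for l, Wl in enumerate(W):
        # fixed random projection of the global error, gated by local activity
        delta = e if l == len(W) - 1 else (B[l] @ e) * (h[l + 1] > 0)
        Wl -= lr * np.outer(delta, h[l])     # local update: own inputs, broadcast error
    return 0.5 * np.sum(e**2)

print(step(rng.standard_normal(784), np.eye(10)[3]))
```

Note that no error signal is backpropagated: each layer sees only the globally broadcast error and its own activations.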