We continually interact with stimuli, such as images and sounds, and make inferences about a complex world. How the brain represents and processes this information internally is an intriguing and fundamental question at the interface of neuroscience and computation. We employ tools of computational and theoretical neuroscience to study systems from the neural level through to perception and behavior.

We develop computational models of sensory neural processing based on the hypothesis that images and sounds have predictable, quantifiable regularities to which the brain is sensitive. The models are constructed through interplay with physiological and psychophysical data, and posit functional roles for neural processing. Another critical way to make progress is to apply computational tools directly to experimental design and analysis. For example, we have worked extensively on spike-triggered approaches, which yield richer, nonlinear characterizations of neurons in the retina and cortex (a minimal sketch appears below).
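
To make the spike-triggered idea concrete, the sketch below implements the simplest such estimator, the spike-triggered average (STA), and checks it on a simulated linear-nonlinear-Poisson neuron. This is a minimal illustration under assumed conventions (the function name, array shapes, and parameters are invented here, not taken from the lab's code); richer nonlinear characterizations, such as spike-triggered covariance, build on the same spike-triggered ensemble.

    import numpy as np

    def spike_triggered_average(stimulus, spikes, window):
        """Spike-weighted average of the stimulus frames up to and
        including each spike bin.

        stimulus : (T, D) array of stimulus frames over time
        spikes   : (T,)   array of spike counts per time bin
        window   : number of frames in each spike-triggered snippet
        """
        T, D = stimulus.shape
        sta = np.zeros((window, D))
        n_spikes = 0
        # Use only spikes preceded by a full window of stimulus history.
        for t in range(window - 1, T):
            if spikes[t] > 0:
                sta += spikes[t] * stimulus[t - window + 1 : t + 1]
                n_spikes += spikes[t]
        return sta / max(n_spikes, 1)

    # Toy check: a simulated neuron with one instantaneous linear filter,
    # a half-wave-rectifying nonlinearity, and Poisson spiking.
    rng = np.random.default_rng(0)
    T, D, window = 50_000, 16, 8
    stim = rng.standard_normal((T, D))          # white-noise stimulus
    true_filter = rng.standard_normal(D)
    rate = np.maximum(stim @ true_filter, 0.0)  # rectified linear drive
    spikes = rng.poisson(0.1 * rate)
    sta = spike_triggered_average(stim, spikes, window)
    # The spike-time frame of the STA should align with the true filter.
    print(np.corrcoef(sta[-1], true_filter)[0, 1])

With a white-noise stimulus, the STA recovers the direction of a single linear filter; examining the covariance of the spike-triggered stimulus ensemble can expose additional filter dimensions, which is what makes spike-triggered approaches useful for nonlinear characterization.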

Current specific interests include: (1) how neurons and percepts are affected by contextual information, both spatially (what surrounds a given feature or object) and temporally (what we have observed in the past, i.e., adaptation); (2) how neurons represent information hierarchically from one level of neural processing to the next; (3) how populations of neurons work together to achieve perception and behavior; and (4) how we decide where to look next in images.

Select projects

  • Sensory adaptation
  • A Bayesian framework for tilt perception
  • Deciding where to look next in images (collaborating experimentally with Leanne Chukoskie and Rich Krauzlis)
  • Natural signal statistics and neural representation
  • Spike-triggered neural characterization (collaborating experimentally with Nicole Rust and Tony Movshon on cortex, and E.J. Chichilnisky on retina)
  • Spike-count distributions: beyond the mean firing rate (collaborating experimentally with Thomas Wachtler and Tom Albright; see the sketch after this list)
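
As a hypothetical illustration of what "beyond the mean firing rate" can involve, the sketch below compares a spike-count distribution's variance to its mean via the Fano factor; the counts are simulated for illustration and are not data or code from the project above.

    import numpy as np

    # Simulated spike counts across 200 repeated stimulus presentations.
    rng = np.random.default_rng(1)
    counts = rng.poisson(5.0, size=200)

    mean = counts.mean()
    fano = counts.var(ddof=1) / mean  # variance/mean; ~1 for Poisson counts
    print(f"mean count: {mean:.2f}, Fano factor: {fano:.2f}")

Two neurons with the same mean count can have very different count distributions; statistics such as the Fano factor capture that additional structure.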