1. Video game project
[Image: Video game with the subject's eye position superimposed as a red cross]
[Image: Alex in the eye tracker playing the game]
Project Overview
Little is known about how the brain processes visual motion information under natural conditions, or about how moving scenes engage eye movements.
When the head is still, gaze patterns are sequences of smooth eye movements that track visual motion, saccadic jumps that aim the retinal fovea at different elements of the scene, and
fixations that hold gaze steady on an object of interest. While we can study all of these behaviors in the laboratory using tasks designed to isolate each form of eye movement, we ultimately
want to know how all of these behaviors are coordinated during natural viewing. Specifically, we would like to predict the likely eye movement given the current retinal
inputs and the current state of the eye. So far, efforts to model gaze behavior during the viewing of Hollywood movies have been largely unsuccessful, most likely because the
cognitive state of the viewer was poorly controlled, which created large variation in gaze patterns. Our approach is to simplify the visual scene to the point where we can more easily identify
which objects are engaging visual attention. To that end, we have developed a Python-based video game environment inspired by Atari's Pong, one of the original video games.
Pong is a "tennis" game that, in our hands, involves a three-sided arena, a ball that bounces off the arena walls, and a paddle that the player moves with a controller.
Subjects play the game while sitting in an eye tracker that monitors the direction of gaze in the right eye. The tracker we use is a Dual-Purkinje Image infrared tracker (Ward Electronics)
that has arc-minute resolution and a very fast response time that allows us to follow both smooth pursuit eye movements and saccadic jumps in gaze position. We find that the game engages
the viewer's attention and creates across-subject similarities in patterns of gaze behavior. The game simplifies the task of relating gaze behavior to the moving scene, allowing us to develop
a model of target-eye interaction.
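To make the task structure concrete, here is a minimal sketch of a Pong-style environment of the kind described above: a three-sided arena, a ball that reflects off the closed walls, and a paddle that the player moves along the open edge. The arena dimensions, ball speed, paddle width, and reset rule are illustrative assumptions, not the lab's actual task code.

```python
from dataclasses import dataclass

@dataclass
class PongState:
    ball_x: float = 0.5
    ball_y: float = 0.5
    vel_x: float = 0.37     # arena widths per second (illustrative value)
    vel_y: float = 0.52
    paddle_x: float = 0.5

class PongEnv:
    """Three-sided arena: the ball bounces off the left, right, and top walls;
    the player moves a paddle along the open bottom edge."""
    WIDTH = HEIGHT = 1.0
    PADDLE_HALF_WIDTH = 0.08

    def __init__(self):
        self.state = PongState()

    def step(self, paddle_velocity, dt=1 / 60):
        s = self.state
        s.paddle_x = min(max(s.paddle_x + paddle_velocity * dt, 0.0), self.WIDTH)
        s.ball_x += s.vel_x * dt
        s.ball_y += s.vel_y * dt
        # Reflect off the three closed walls.
        if s.ball_x < 0.0 or s.ball_x > self.WIDTH:
            s.vel_x = -s.vel_x
        if s.ball_y > self.HEIGHT:
            s.vel_y = -s.vel_y
        # At the open edge, bounce off the paddle or reset after a miss.
        if s.ball_y < 0.0:
            if abs(s.ball_x - s.paddle_x) <= self.PADDLE_HALF_WIDTH:
                s.vel_y = -s.vel_y
            else:
                self.state = PongState()
        return self.state
```

Because the environment's physics are fully specified in software, properties such as ball speed and bounce behavior can be manipulated from trial to trial.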
What we find is that it isn't only what the ball just did, but also what we extrapolate the ball will do in the near future, that determines where we are going to look.
That makes sense - it takes the brain 100-300 ms to process the incoming visual information and generate commands to move the eyes. If we waited for that data before making a movement, our eyes
would always be behind the ball. To counteract sensory processing delays, the brain has to form predictions - to learn how the target is likely to move in order to extrapolate where it
will be in the near future. We can see evidence of this prediction in the movements of athletes. Tennis and baseball players begin their swings while the ball is in flight, anticipating where
and when they will make contact. In our lab, we can measure information about future target motion in the Pong player's eye movements. Because we control the physics of the game environment,
we are able to manipulate how far into the future the ball's movement can be predicted -- by adding virtual turbulence that might deflect the direction of ball motion, changing the ball speed,
or creating variations in how the ball bounces off the arena walls. Experiments are underway to explore predictive gaze behavior and whether it adapts based on how predictable target motion is.
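To make the extrapolation idea concrete, the sketch below predicts where the ball will be one sensory-processing delay into the future, folding the prediction back into the arena at the walls. The 150 ms delay (within the 100-300 ms range above) and the unit-square arena are illustrative assumptions.

```python
def extrapolate_ball(x, y, vx, vy, delay=0.150, width=1.0, height=1.0):
    """Predict the ball position `delay` seconds ahead, reflecting the
    prediction off the left, right, and top walls (the bottom edge is open)."""
    x_pred = x + vx * delay
    y_pred = y + vy * delay
    if x_pred < 0.0:
        x_pred = -x_pred                  # bounce off the left wall
    elif x_pred > width:
        x_pred = 2.0 * width - x_pred     # bounce off the right wall
    if y_pred > height:
        y_pred = 2.0 * height - y_pred    # bounce off the top wall
    return x_pred, y_pred

# Example: a ball heading toward the right wall is predicted to have already
# bounced back into the arena by the time the current frame has been processed.
print(extrapolate_ball(x=0.95, y=0.5, vx=0.6, vy=0.0))
```

A gaze target computed this way leads the ball rather than lagging it, which is the kind of predictive signature described above.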
This research may lead to better diagnoses and a better understanding of diseases that affect brain function by detecting impairment in a subject's ability to incorporate prediction into gaze
behavior. Neurodegenerative diseases, particularly those involving the basal ganglia, can impair how well recent experience is used to modify behavior. Learning rates in motor tasks
are slower in patients with Parkinson's disease, and patients adapt less completely to changes in sensory feedback than healthy subjects do. Eye movements are particularly sensitive to diseases
and conditions that affect brain function - in fact, changes in gaze patterns are being used to help diagnose and differentiate between neurodegenerative disorders in their early stages, before
large-scale movements are impaired. With a better understanding of healthy gaze behavior, we hope to expand our research to patient populations with the goal of developing clinical tools for early
detection of neurodegenerative disorders.
Citation
S.A. Lee, M. Battifarano and L.C. Osborne (2014). Target motion predictability determines the predictability of gaze decisions from retinal inputs.
Poster presented at the annual meeting of the Society for Neuroscience in Washington DC, Nov. 2014
Poster presented at the Brain Research Foundation's Neuroscience Day in Chicago IL, Jan 2015
Abstract
In order to stabilize a moving target’s retinal image, the brain must make continuous visual estimates of target motion and evaluate the
trade-off between smoothly modulating eye movement and initiating a saccade. Smooth pursuit eye movement is used for continuous
acquisition of visual information, whereas saccadic movement can reduce a large retinal error in a short period of time. Lefèvre
and colleagues (2002) introduced a decision rule between pursuit and saccades (the eye-crossing time) during one-dimensional visual
tracking of moving stimuli in humans. Our goal in this study is to expand this notion and investigate whether there exists a general oculomotor
computation for making eye movement decisions. To achieve this goal, three different experimental paradigms were carried out in
humans and monkeys: 1D and 2D visual tracking with double step-ramp stimuli, and a single-player version of the video game Pong. Interestingly,
we observed that in highly predictable situations, such as during the Pong game, saccadic eye movements are not captured by the same
rule. Here, we apply an information-theoretic analysis to quantify the interaction between target, gaze, and time.
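As a rough illustration of the information-theoretic analysis mentioned above, the sketch below estimates the mutual information between binned target and gaze positions as a function of time lag. The plug-in histogram estimator, bin count, and lag range are illustrative choices, not necessarily those used in this work.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in estimate of I(X; Y) in bits from two 1-D signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def info_vs_lag(target, gaze, max_lag=30):
    """I(target(t); gaze(t + lag)) for lags of 0..max_lag samples; the lag at
    which information peaks summarizes how gaze leads or lags the target."""
    return [mutual_information(target[:len(target) - lag], gaze[lag:])
            for lag in range(max_lag + 1)]
```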
2. Shared sensory estimates for human motion perception and pursuit eye movements
Citation
T. Mukherjee, M. Battifarano and L.C. Osborne (2015). Shared sensory estimates for human motion perception and pursuit eye movements
Poster presented at the Brain Research Foundation's Neuroscience Day in Chicago IL, Jan 2015
Abstract
Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways, or is centrally represented
sensory activity decoded independently to drive awareness and action? Questions of the brain's information flow pose a challenge because systems-level estimates
of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits
requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual
motion and provides the visual inputs for behaviors like smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory
information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well-studied,
sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion.
Here we analyze variability in visually-driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulate
the signal to noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise
sources in the perception and action pathways arising from a common sensory estimate. We find that conditions that produce poor, low-gain pursuit create a
discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile
much of the controversy on this topic.
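A minimal sketch of the variability comparison described above, under the simplifying assumption that the standard deviation of single-trial direction estimates can stand in for a precision measure: if perception and pursuit read out a common sensory estimate, the two measures should be comparable.

```python
import numpy as np

def direction_precision(estimates_deg):
    """Use the SD of single-trial direction estimates as a precision measure."""
    return float(np.std(estimates_deg, ddof=1))

# Synthetic single-trial estimates around a 0 deg target direction
# (illustrative numbers, not experimental data).
rng = np.random.default_rng(3)
perceptual_reports = rng.normal(0.0, 3.0, size=300)   # perceptual reports (deg)
pursuit_directions = rng.normal(0.0, 3.2, size=300)   # pursuit directions (deg)

# A ratio near 1 is consistent with a shared sensory estimate; a large ratio
# would point to extra noise added downstream in one of the two pathways.
print(direction_precision(pursuit_directions) / direction_precision(perceptual_reports))
```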
3. Constraints on decoding of visual motion parameters for smooth pursuit from cortical population activity
Citation
M. Macellaio and L.C. Osborne (2015). Constraints on decoding of visual motion parameters for smooth pursuit from cortical population activity.
Poster presented at the annual meeting of the Society for Neuroscience in Washington DC, Nov. 2014
Poster presented at the Brain Research Foundation's Neuroscience Day in Chicago IL, Jan 2015
Abstract
In pursuit, visual estimates of retinal image motion are translated to motor commands to smoothly counter-rotate the eye in order to stabilize the retinal image of a moving target.
The visual inputs for pursuit behavior arise in the middle temporal cortical area (MT) where neuronal responses are tuned for motion direction and speed.
Motion direction discrimination thresholds for pursuit are approximately a factor of ten lower than those of individual MT neurons, suggesting that visual motion estimates are derived from cortical
population responses. In previous work, we have shown that variability in the initiation of pursuit arises from variability in motion estimation, as if visual estimates are decoded from cortical
inputs with errors, but are then loyally translated to movement by the oculomotor system. Here we show that while a variety of population decoding models can reproduce the accuracy of
pursuit eye speed and direction, reproducing the trial-to-trial eye movement variation places strong constraints on the coordinate frame and number of free parameters in the decoding model.
By extracting the principal components of the deviations from trial-averaged mean eye velocity for each trial of pursuit behavior, we estimate the trial error in speed, direction,
and movement onset time. We find that direction and speed errors in pursuit were significantly more highly correlated at oblique directions than at cardinal directions, a pattern
replicated by adding noise in motor coordinates aligned with the pulling direction of the eye muscles (i.e. horizontal, vertical). Initial results indicate that there is no such
anisotropy in direction-speed error correlations in perception, as would be expected with an effect due to added motor noise. However, using both correlated and uncorrelated Poisson
model neurons coupled to experimental measurements of actual pursuit behavior, we show that the most parsimonious decoding models operate in visual coordinates (direction, speed)
rather than in motor coordinates.
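To illustrate the style of population decoding model discussed above, the sketch below decodes motion direction from a bank of direction-tuned, Poisson-spiking model neurons with a vector average in visual (direction) coordinates. The neuron count, tuning width, peak rate, and counting window are illustrative assumptions, not fits to MT data.

```python
import numpy as np

rng = np.random.default_rng(0)
N_NEURONS = 64
PREFERRED = np.linspace(0.0, 2.0 * np.pi, N_NEURONS, endpoint=False)
PEAK_RATE = 60.0                      # spikes/s at the preferred direction
TUNING_WIDTH = np.deg2rad(40.0)       # width of the direction tuning curve

def population_response(direction_rad, window_s=0.1):
    """Poisson spike counts of the model population for one motion direction."""
    rates = PEAK_RATE * np.exp((np.cos(direction_rad - PREFERRED) - 1.0) / TUNING_WIDTH**2)
    return rng.poisson(rates * window_s)

def vector_average_decode(counts):
    """Decode direction as the circular mean of preferred directions,
    weighted by the spike counts (a decoder in visual coordinates)."""
    return float(np.arctan2(np.sum(counts * np.sin(PREFERRED)),
                            np.sum(counts * np.cos(PREFERRED))))

# Trial-to-trial scatter of the decoded direction stands in for the sensory
# variability that pursuit would inherit from its cortical inputs.
estimates = [vector_average_decode(population_response(np.deg2rad(30.0)))
             for _ in range(200)]
print(np.rad2deg(np.std(estimates)))
```

Adding noise after the decoder in motor coordinates (horizontal/vertical), rather than in visual coordinates, is the kind of model variant that the direction-speed error correlations described above can distinguish.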
4. Efficient coding of visual motion signals in area MT and smooth pursuit
Citation
B. Liu and L.C. Osborne (2015). Efficient coding of visual motion signals in area MT and smooth pursuit.
Poster presented at the annual meeting of the Society for Neuroscience in Washington DC, Nov. 2014
Poster presented at the Brain Research Foundation's Neuroscience Day in Chicago IL, Jan 2015
Abstract
Performance in sensory-motor behaviors guides our understanding of many of the key computational functions of the brain: the representation of
sensory information, the translation of sensory signals to commands for movement, and the production of behavior. Eye movement behaviors
have become a valuable testing ground for theories of neural computation because the neural circuitry has been well characterized and
eye movements can be tightly coupled to cortical activity (Osborne et al., 2005-9). Here we show that smooth pursuit eye movements,
and the cortical sensory signals that mediate them, demonstrate the hallmarks of efficient sensory coding. Barlow (1961) proposed that
neurons should adapt their sensitivity as stimulus conditions change in order to maintain efficient coding of sensory inputs.
Evidence for efficient coding of temporal fluctuations in visual contrast has been observed in the retina, lateral geniculate
nucleus, and visual cortex (Wark et al., 2009; Sharpee et al., 2006). We asked whether adaptation to stimulus variance generalizes to
higher cortical areas whose neurons respond to features of visual signals that do not drive adaptation in the sensory periphery, and
whether such adaptation impacts performance of visually-driven behavior. Specifically, we have studied the impact of dynamic
fluctuations in motion direction on the gain of smooth pursuit and found neural correlates of pursuit adaptation in cortical area MT.
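As a toy illustration of the variance-adaptation idea, the sketch below divides an encoder's input by a running estimate of its own standard deviation, so that the output keeps roughly the same dynamic range when the variance of the direction fluctuations changes. The time constant and the simple divisive gain rule are assumptions for illustration only.

```python
import numpy as np

def adaptive_gain_response(stimulus, tau=50.0, eps=1e-6):
    """Rescale the input by a leaky running estimate of its own SD."""
    var_estimate = 1.0
    response = []
    for s in stimulus:
        var_estimate += (s**2 - var_estimate) / tau   # leaky variance tracker
        response.append(s / np.sqrt(var_estimate + eps))
    return np.array(response)

# Direction noise whose SD steps from 5 deg to 20 deg half-way through:
# after the adaptation transient, the output SD settles back toward ~1.
rng = np.random.default_rng(1)
stim = np.concatenate([5.0 * rng.standard_normal(2000),
                       20.0 * rng.standard_normal(2000)])
out = adaptive_gain_response(stim)
print(out[:2000].std(), out[2000:].std())
```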
5. Spatial integration of visual motion signals for smooth pursuit eye movements
Citation
C. Simoncini, T. Mukherjee and L.C. Osborne (2015). Spatial integration of visual motion signals for smooth pursuit eye movements.
Poster presented at the Brain Research Foundation's Neuroscience Day in Chicago IL, Jan 2015
Abstract
In order to make appropriate gaze decisions, the brain must integrate information across the visual field to identify and locate objects, estimate motion,
and allocate attention. We simplify and decompose this complex problem by asking how visual motion is integrated across the visual field to drive smooth pursuit eye movements.
The pursuit system has become a valuable testing ground for theories of sensory estimation and motor control because the neural pathways are well described and the
relationship between motion estimation and pursuit initiation is very precise. Eye direction and speed can be a loyal representation of internal estimates of target
motion in the absence of experience-based modulations of the oculomotor state. In past work, we have demonstrated that a linear model of motion integration across time
accounts for a high percentage of the variance in eye velocity and provides a good description of temporal filtering in the visual motion inputs for pursuit. Here we expand
upon that research to study both spatial and temporal motion integration of visual stimuli, asking how motion signals are weighted across the visual scene to drive smooth
eye movements. We have analyzed pursuit of visual stimuli with spatially and temporally distributed motion directions and speeds in humans. We find that a spatially
and temporally restricted linear filter captures the relationship between motion signals and pursuit eye movements, accounting for 58% of the variation in eye direction
during pursuit initiation. The variation not captured by the best linear model is Gaussian distributed, indicative of noise in visual-motor processing. We find that motion
within ~6° of the fixation point is integrated over ~100 ms to drive pursuit in humans. The linear filter generalizes across stimulus forms that differ in their amount
of motion energy. These results suggest that relatively simple estimation rules may underlie the processing of moving scenes in natural gaze behavior.
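To make the linear-filter analysis concrete, the sketch below fits a spatiotemporal filter relating local motion-direction signals to eye direction by least squares on synthetic data. The 3x3 patch grid, 10 ms time bins, and ground-truth filter are illustrative stand-ins for the ~6° / ~100 ms integration window reported above, not the experimental stimulus or analysis code.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_patches, n_lags = 500, 9, 10      # 3x3 spatial grid, 10 ms time bins

# Synthetic local motion directions (deg) per trial, spatial patch, and time bin.
motion = rng.normal(0.0, 10.0, size=(n_trials, n_patches, n_lags))

# Ground-truth filter: only the central patch and the most recent time bins
# contribute, mimicking a restricted spatial and temporal integration window.
true_filter = np.zeros((n_patches, n_lags))
true_filter[4, -5:] = np.linspace(0.05, 0.2, 5)

# Eye direction = filtered motion plus Gaussian visual-motor noise.
eye_direction = (motion.reshape(n_trials, -1) @ true_filter.ravel()
                 + rng.normal(0.0, 2.0, n_trials))

# Least-squares estimate of the spatiotemporal filter and its explained variance.
X = motion.reshape(n_trials, -1)
w_hat, *_ = np.linalg.lstsq(X, eye_direction, rcond=None)
prediction = X @ w_hat
r_squared = 1.0 - np.var(eye_direction - prediction) / np.var(eye_direction)
print(r_squared, w_hat.reshape(n_patches, n_lags)[4, -5:])
```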