jov.arvojournals.org/article.aspx?articleid=2121293
Top Highlights
We found that naïve subjects make predictive saccades on almost every trial when attempting to intercept a quickly moving target
Shortly before the bounce, subjects initiated a saccade ahead of the moving ball, to a location above the bounce point where the ball would arrive shortly after the bounce, a point in space that the ball would pass 300 to 400 ms later
The finding that subjects were able to compensate for variations in both ball speed and elasticity to form accurate predictions of where the ball would be after the bounce demonstrates that subjects in our task used information beyond what was available in the visual field to predict the future location of the ball.
The data also suggest that subjects' spatial strategies were closely related to the time at which the ball would arrive at the fixation location
This predictive scaling was based on the information that was available at least 250 ms before the bounce and at least 400 ms before the time the ball arrived at the predicted location. It is unclear, however, how the duration of prediction was affected by the 80 ms of visuomotor lag that existed between physical movement and the subsequent updating of the visual display.
Although interceptions do not require pursuit, it is likely to be a good strategy, since it has been shown that pursuit movements facilitate prediction of future locations
In other contexts, subjects have also been found to make anticipatory saccades to locations above the bounce point, consistent with our findings. When catching (M. Hayhoe et al., 2005), playing squash (M. M. Hayhoe et al., 2012), and playing table tennis (Land & Furneaux, 1997), subjects also target points beyond the bounce point
In the present context, movements are based on some combination of the visual information specific to a particular trajectory with a memory-based component
Although the current visual image has an important role to play, predictive eye movements reveal that the observer's best guess at the future state of the environment is based, in part, on representations that reflect learnt statistical properties of dynamic visual environments
Periods of pursuit were classified on the basis of two criteria. The first criterion required that the difference between the subject's gaze vector and the vector extending from the position of the eye-in-space to the ball be equal to or less than 5°
The second criterion concerned pursuit gain, defined as the ratio of the component of gaze-in-world velocity in line with the ball's movement vector to the ball's velocity. Pursuit was restricted to periods in which pursuit gain was between 0.3 and 1.2. Periods of pursuit separated by fewer than four frames were grouped together as a single pursuit, and isolated periods of pursuit less than 100 ms in duration were disregarded
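The two criteria and the grouping rules above can be sketched as follows. This is a minimal reconstruction, not the authors' code: the array layout, the assumption of a 60 Hz display (so ~6 frames ≈ 100 ms), and all function and parameter names are ours.

```python
import numpy as np

def _runs(mask):
    """(start, end) pairs, end exclusive, of consecutive True frames."""
    d = np.diff(np.concatenate(([0], mask.astype(np.int8), [0])))
    return list(zip(np.flatnonzero(d == 1), np.flatnonzero(d == -1)))

def classify_pursuit(gaze_vel, ball_vel, angle_err_deg,
                     angle_thresh=5.0, gain_lo=0.3, gain_hi=1.2,
                     max_gap=4, min_frames=6):
    """Two-criterion pursuit classifier (a sketch).

    gaze_vel      : (N, 2) gaze-in-world velocity per frame
    ball_vel      : (N, 2) ball velocity per frame, same units
    angle_err_deg : (N,) angle between gaze and the eye-to-ball vector
    """
    # Criterion 1: gaze within 5 deg of the eye-to-ball direction.
    on_target = angle_err_deg <= angle_thresh

    # Criterion 2: pursuit gain -- the component of gaze velocity along
    # the ball's direction of motion, divided by the ball's speed.
    speed = np.maximum(np.linalg.norm(ball_vel, axis=1), 1e-9)
    gain = np.einsum('ij,ij->i', gaze_vel, ball_vel / speed[:, None]) / speed
    pursuit = on_target & (gain >= gain_lo) & (gain <= gain_hi)

    # Bridge gaps shorter than max_gap frames; drop episodes shorter
    # than min_frames (~100 ms at 60 Hz).
    merged = []
    for s, e in _runs(pursuit):
        if merged and s - merged[-1][1] < max_gap:
            merged[-1] = (merged[-1][0], e)
        else:
            merged.append((s, e))
    out = np.zeros_like(pursuit)
    for s, e in merged:
        if e - s >= min_frames:
            out[s:e] = True
    return out

# Tiny synthetic demo: ball moves right at 10 deg/s; gaze tracks it on
# frames 5-24 with a 2-frame glitch, plus one too-short episode.
N = 40
ball_vel = np.tile([10.0, 0.0], (N, 1))
gaze_vel = np.zeros((N, 2))
gaze_vel[5:25] = [10.0, 0.0]
gaze_vel[30:33] = [10.0, 0.0]
angle_err = np.full(N, 30.0)
angle_err[5:25] = 0.0
angle_err[15:17] = 30.0   # brief glitch, bridged by the gap rule
angle_err[30:33] = 0.0    # 3 frames < min_frames, discarded
pursuit_mask = classify_pursuit(gaze_vel, ball_vel, angle_err)
```

The gap-bridging and minimum-duration steps mirror the fixation grouping described elsewhere in the methods; only the thresholds differ.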
On remaining trials the most common occurrence was a large saccade prior to the bounce, followed by one or more corrective post-bounce saccades that were presumably influenced by post-bounce information.
Saccades were identified using a second-order finite impulse response (FIR) filter, similar to the methods proposed by Duchowski et al. (2002), which involve filtering data with a kernel representative of a paradigmatic saccade, e.g., [0 1 2 3 2 1 0], producing a smoothed signal in which saccade amplitude and width are largely unaffected while erroneous signals are diminished. However, we chose to adopt a modified kernel, [−1 0 1 2 3 2 1 0 −1], which had the additional benefit of producing exaggerated valleys in the gaze-velocity signal just before and after the saccade, facilitating their subsequent identification. As a first step, the algorithm identified isolated peaks of gaze velocity greater than 40°/s. To identify the starts and ends of saccades, the filtered velocity signal was differenced to produce a measure of gaze acceleration. Saccades extended from the first frame prior to peak saccade velocity in which the acceleration signal rose above 20°/s² until the frame on which the acceleration signal dropped below 20°/s²
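A minimal sketch of this detector, assuming a 60 Hz gaze-speed signal in deg/s. The kernel and both thresholds come from the text; the normalization of the kernel, the peak-finding details, and the use of the acceleration magnitude when walking out from the peak are our interpretation.

```python
import numpy as np

# Modified saccade-shaped kernel from the text, normalized so a constant
# signal keeps its amplitude (the taps sum to 7).
KERNEL = np.array([-1, 0, 1, 2, 3, 2, 1, 0, -1], float)

def detect_saccades(gaze_speed, dt=1 / 60.0, vel_thresh=40.0, acc_thresh=20.0):
    """FIR-filtered saccade detector (a sketch, not the authors' code).

    gaze_speed : (N,) gaze speed in deg/s, one sample per frame.
    Returns (start, end) frame pairs, end exclusive.
    """
    # Smoothing: the negative taps exaggerate the valleys flanking
    # each saccade, which helps delimit it.
    v = np.convolve(gaze_speed, KERNEL / KERNEL.sum(), mode='same')
    acc = np.diff(v, prepend=v[:1]) / dt   # differenced "acceleration"

    # Isolated velocity peaks above threshold.
    peaks = [i for i in range(1, len(v) - 1)
             if v[i] >= vel_thresh and v[i] >= v[i - 1] and v[i] > v[i + 1]]

    saccades = []
    for p in peaks:
        s = p
        while s > 0 and abs(acc[s]) > acc_thresh:        # back to quiet signal
            s -= 1
        e = p
        while e < len(acc) - 1 and abs(acc[e + 1]) > acc_thresh:
            e += 1
        if not saccades or s > saccades[-1][1]:          # merge overlapping peaks
            saccades.append((s, e + 1))
        else:
            saccades[-1] = (saccades[-1][0], max(saccades[-1][1], e + 1))
    return saccades

# Demo: quiet gaze with one saccade-like burst peaking at 400 deg/s.
gaze_speed = np.full(60, 5.0)
gaze_speed[20:27] = [50, 150, 300, 400, 300, 150, 50]
sacs = detect_saccades(gaze_speed)
```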
Fixations were defined as periods in which gaze-velocity was below a threshold of 30°/s for a minimum of four frames (∼66 ms). To compensate for the influence of tracker noise, fixations that were separated temporally by less than three frames (∼50 ms) and spatially by less than 3° were grouped together as a single fixation and isolated periods of fixation of less than 100 ms in duration were disregarded.
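The fixation rules above, sketched under the same 60 Hz assumption; the spatial-merge test on mean gaze directions and all names are ours, not the authors'.

```python
import numpy as np

def detect_fixations(gaze_speed, gaze_dir, vel_thresh=30.0,
                     merge_frames=3, merge_deg=3.0, min_frames=6):
    """Velocity-threshold fixation classifier (a sketch).

    gaze_speed : (N,) gaze-in-world speed in deg/s
    gaze_dir   : (N, 3) unit gaze-in-world vectors per frame
    """
    slow = gaze_speed < vel_thresh
    d = np.diff(np.concatenate(([0], slow.astype(np.int8), [0])))
    runs = list(zip(np.flatnonzero(d == 1), np.flatnonzero(d == -1)))

    merged = []
    for s, e in runs:
        if merged:
            ps, pe = merged[-1]
            # Angular separation between the mean gaze of adjacent runs.
            a = gaze_dir[ps:pe].mean(axis=0)
            b = gaze_dir[s:e].mean(axis=0)
            cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            sep_deg = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
            if s - pe < merge_frames and sep_deg < merge_deg:
                merged[-1] = (ps, e)   # tracker-noise gap: same fixation
                continue
        merged.append((s, e))
    # Discard fixations shorter than ~100 ms (min_frames at 60 Hz).
    return [(s, e) for s, e in merged if e - s >= min_frames]

# Demo: two slow episodes split by a 2-frame noise gap, plus one
# isolated 4-frame episode that is too short to keep.
N = 40
gaze_speed = np.full(N, 100.0)
gaze_speed[0:10] = 5.0
gaze_speed[12:20] = 5.0
gaze_speed[30:34] = 5.0
gaze_dir = np.tile([0.0, 0.0, 1.0], (N, 1))
fixations = detect_fixations(gaze_speed, gaze_dir)
```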
Subjects did not appear to be targeting the neighborhood of the bounce point but instead directed saccades to a location well above the bounce point, as depicted in Figure 4a for a single subject
To further investigate the claim that subjects' fixations were based on a prediction of the ball's post-bounce location, Figure 5b shows the data from Figure 5a plotted as a function of the ball speed 170 ms after the bounce, which is the average time at which the ball reached its minimum distance from the fixation location
Thus, some time before the bounce, subjects are able to predict where the ball will be after the bounce on the basis of a combination of both the prebounce ball speed and elasticity, the former information deriving from the visual field and the latter from prior experience
In addition to high spatial accuracy, subjects also demonstrated temporal accuracy in prediction of the time of the ball's arrival at the fixation point
Subsequently, the point-of-regard data were combined with motion-capture data concerning eye position and head orientation to identify a single point along the subject's line of sight through the three-dimensional world. The unit vector extending from the subject's left eye towards this point defined the orientation of the subject's gaze within a world-centered reference frame and is hereafter referred to as gaze-in-world. Because tracking was monocular, vergence angle was not calculated.
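Computing gaze-in-world reduces to a normalized difference of world coordinates, and the same geometry yields the gaze-to-ball angle used by the 5° pursuit criterion. A sketch; the function and argument names are ours.

```python
import numpy as np

def gaze_in_world(eye_pos, point_of_regard):
    """Unit vector from the (left) eye position to the point-of-regard,
    both given in world coordinates."""
    v = np.asarray(point_of_regard, float) - np.asarray(eye_pos, float)
    return v / np.linalg.norm(v)

def angle_to_target_deg(gaze_vec, eye_pos, target_pos):
    """Angle in degrees between gaze-in-world and the eye-to-target
    vector, e.g. for the 5 deg pursuit criterion."""
    t = np.asarray(target_pos, float) - np.asarray(eye_pos, float)
    cosang = np.dot(gaze_vec, t) / (np.linalg.norm(gaze_vec) * np.linalg.norm(t))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Example: eye at the origin looking straight ahead; a target 45 deg up.
g = gaze_in_world([0.0, 0.0, 0.0], [0.0, 0.0, 2.0])
ang = angle_to_target_deg(g, [0.0, 0.0, 0.0], [0.0, 2.0, 2.0])
```

With monocular tracking only this single line of sight is available, which is why vergence cannot be computed here.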