
Mapping eye movements in 3D: Preferential fixation of surface curvature minima during object recognition in stereo viewing.

The recognition of 3D object shape is a fundamental issue in vision science. Although our knowledge has advanced considerably, most prior studies have been restricted to 2D stimulus presentation that ignores stereo disparity. In previous work we showed how analyses of eye movement patterns can elucidate the kinds of shape information that support the recognition of multi-part 3D objects (e.g., Davitt et al., JEP: HPP, 2014, 40, 451-456). Here we extend that work using a novel technique for the 3D mapping and analysis of eye movement patterns under conditions of stereo viewing. Eye movements were recorded while observers learned sets of surface-rendered multi-part novel objects, and during a subsequent recognition memory task in which they discriminated trained from untrained objects at different depth rotations. The tasks were performed binocularly with or without stereo disparity. Eye movements were mapped onto the underlying 3D object mesh using a ray tracing technique and a common reference frame between the eye tracker and the 3D modelling environment. This allowed us to extrapolate the recorded screen coordinates for fixations from the eye tracker onto the 3D structure of the stereo-viewed objects. For the analysis we computed models of the spatial distributions of 3D surface curvature convexity, concavity, and low-level image saliency, and then compared fixation data to model correspondences using ROC curves. Observers were faster and more accurate when viewing objects with stereo disparity. The spatial distributions of fixations were best accounted for by the 3D surface concavity model. The results support the hypotheses that stereo disparity facilitates recognition and that surface curvature minima play a key role in the recognition of 3D shape. More broadly, the novel techniques outlined for mapping eye movement patterns in 3D space should be of interest to vision researchers in a variety of domains. Meeting abstract presented at VSS 2015.
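The core of the mapping step described above (casting a recorded screen fixation as a ray into the 3D modelling environment and intersecting it with the object mesh) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the calibration values (screen resolution, physical screen size, viewing distance) and the single-eye, on-axis viewing geometry are assumptions, and the intersection test used here is the standard Möller-Trumbore ray/triangle algorithm.

```python
import numpy as np

def fixation_to_ray(fix_px, screen_res, screen_size_mm, screen_dist_mm):
    """Convert an eye-tracker fixation (pixels) into a viewing ray.

    Assumes a hypothetical calibration in which the eye sits on the axis
    through the screen centre, screen_dist_mm away.
    """
    # Pixel -> mm offset from screen centre (pixel y grows downward).
    x_mm = (fix_px[0] - screen_res[0] / 2) * screen_size_mm[0] / screen_res[0]
    y_mm = (screen_res[1] / 2 - fix_px[1]) * screen_size_mm[1] / screen_res[1]
    origin = np.zeros(3)  # eye position in the shared reference frame
    direction = np.array([x_mm, y_mm, -screen_dist_mm], dtype=float)
    return origin, direction / np.linalg.norm(direction)

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle test; returns hit distance or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:
        return None  # ray parallel to the triangle plane
    inv = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) * inv
    if v < 0 or u + v > 1:
        return None
    t = e2.dot(q) * inv
    return t if t > eps else None

def map_fixation_to_mesh(fix_px, vertices, faces,
                         screen_res, screen_size_mm, screen_dist_mm):
    """Return (face_index, distance) of the nearest face hit, or None."""
    origin, direction = fixation_to_ray(fix_px, screen_res,
                                        screen_size_mm, screen_dist_mm)
    best = None
    for i, (a, b, c) in enumerate(faces):
        t = ray_triangle(origin, direction,
                         vertices[a], vertices[b], vertices[c])
        if t is not None and (best is None or t < best[1]):
            best = (i, t)
    return best
```

In practice each fixation mapped this way yields a point on the mesh surface, and the resulting fixation distribution can then be compared against the surface convexity, concavity, and saliency model maps. A brute-force loop over faces suffices for small meshes; a production pipeline would use a spatial acceleration structure such as a BVH.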
