Read by QxMD

Journal of Vision

Rudolf Burggraaf, Jos N van der Geest, Maarten A Frens, Ignace T C Hooge
We studied changes in visual-search performance and behavior during adolescence. Search performance was analyzed in terms of reaction time and response accuracy. Search behavior was analyzed in terms of the objects fixated and the duration of these fixations. A large group of adolescents (N = 140; age: 12-19 years; 47% female, 53% male) participated in a visual-search experiment in which their eye movements were recorded with an eye tracker. The experiment consisted of 144 trials (50% with a target present), and participants had to decide whether a target was present...
May 1, 2018: Journal of Vision
Mark Vergeer, Juraj Mesik, Yihwa Baek, Kelton Wilmerding, Stephen A Engel
Exposure to oriented luminance contrast patterns causes a reduction in visual sensitivity specifically for the adapter orientation. This orientation selectivity is probably the most studied aspect of contrast adaptation, but it has rarely been measured with steady-state visually evoked potentials (SSVEPs), even though these have become one of the more popular methods in human neuroscience. Here, we measured orientation-selective adaptation by presenting a plaid stimulus whose horizontal and vertical gratings reversed contrast at different temporal frequencies, while recording EEG signals from occipital visual areas...
May 1, 2018: Journal of Vision
Khushbu Y Patel, Anudhi P Munasinghe, Richard F Murray
Lightness constancy is the ability to perceive surface reflectance correctly despite substantial changes in lighting intensity. A classic view is that lightness constancy is the result of a "discounting" of lighting intensity, and this continues to be a prominent view today. Logvinenko and Maloney (2006) have proposed an alternative approach to understanding lightness constancy, in which observers do not make explicit estimates of reflectance, and lightness constancy is instead based on a perceptual similarity metric that depends on both the reflectance and the illuminance of surfaces viewed under different lighting conditions...
May 1, 2018: Journal of Vision
Zijian Lu, Mathias Klinghammer, Katja Fiehler
In this study, we investigated the influence of gaze and prior knowledge about the reach target on the use of allocentric information for memory-guided reaching. Participants viewed a breakfast scene with five objects in the background and six objects on the table. Table objects served as potential reach targets. Participants first encoded the scene and, after a short delay, a test scene was presented with one table object missing and one, three, or five table objects horizontally shifted in the same direction...
April 1, 2018: Journal of Vision
Alexander Pastukhov, Christina Rita Zaus, Stepan Aleshin, Jochen Braun, Claus-Christian Carbon
When two bi-stable structure-from-motion (SFM) spheres are presented simultaneously, they tend to rotate in the same direction. This effect reflects a common state bias that is present for various multistable displays. However, it was also reported that when two spheres are positioned so that they touch each other, they tend to counterrotate instead. The latter effect is interpreted as a frictional interaction, indicating the influence of the embedded physics on our visual perception. Here, we examined the interplay between these two biases in two experiments using a wide range of conditions...
April 1, 2018: Journal of Vision
Thomas J McDougall, J Edwin Dickinson, David R Badcock
This study investigated contrast summation over area for moving targets applied to a fixed-size contrast pedestal, a technique originally developed by Meese and Summers (2007) to demonstrate strong spatial summation of contrast for static patterns at suprathreshold contrast levels. Target contrast increments (drifting gratings) were applied to either the entire 20% contrast pedestal (a full fixed-size drifting grating), or in the configuration of a checkerboard pattern in which the target increment was applied to every alternate check region...
April 1, 2018: Journal of Vision
Li Li, Long Ni, Markus Lappe, Diederick C Niehorster, Qi Sun
How do we judge the direction of self-motion (i.e., heading) in the presence of independent object motion? Previous studies that examined this question confounded the effects of a moving object's speed and its position on heading judgments, and did not examine whether the visual system uses salient nonmotion visual cues (such as color contrast and binocular disparity) to segment a moving object from global optic flow prior to heading estimation. The current study addressed these issues with both behavioral testing and computational modeling...
April 1, 2018: Journal of Vision
Jolande Fooken, Kathryn M Lalonde, Gurkiran K Mann, Miriam Spering
Eye and hand movements are closely linked when performing everyday actions. We conducted a perceptual-motor training study to investigate mutually beneficial effects of eye and hand movements, asking whether training in one modality benefits performance in the other. Observers had to predict the future trajectory of a briefly presented moving object, and intercept it at its assumed location as accurately as possible with their finger. Eye and hand movements were recorded simultaneously. Different training protocols either included eye movements or a combination of eye and hand movements with or without external performance feedback...
April 1, 2018: Journal of Vision
Inna Tsirlin, Linda Colpa, Herbert C Goltz, Agnes M F Wong
Amblyopia is a neurodevelopmental disorder defined as a reduction in visual acuity that cannot be corrected by optical means. It has been associated with low-level deficits. However, research has demonstrated a link between amblyopia and visual attention deficits in counting, tracking, and identifying objects. Visual search is a useful tool for assessing visual attention but has not been well studied in amblyopia. Here, we assessed the extent of visual search deficits in amblyopia using feature and conjunction search tasks...
April 1, 2018: Journal of Vision
Benjamin de Haas, D Samuel Schwarzkopf
Face perception is impaired for inverted images, and a prominent example of this is the Thatcher illusion: "Thatcherized" (i.e., rotated) eyes and mouths make a face look grotesque, but only if the whole face is seen upright rather than inverted. Inversion effects are often interpreted as evidence for configural face processing. However, recent findings have led to the alternative proposal that the Thatcher illusion rests on orientation sensitivity for isolated facial regions. Here, we tested whether the Thatcher effect depends not only on the orientation of facial regions but also on their visual-field location...
April 1, 2018: Journal of Vision
Brittney Hartle, Laurie M Wilcox, Richard F Murray
The shape of the illusory surface in stereoscopic Kanizsa figures is determined by the interpolation of depth from the luminance edges of adjacent inducing elements. Despite ambiguity in the position of illusory boundaries, observers reliably perceive a coherent three-dimensional (3-D) surface. However, this ambiguity may contribute additional uncertainty to the depth percept beyond what is expected from measurement noise alone. We evaluated the intrinsic ambiguity of illusory boundaries by using a cue-combination paradigm to measure the reliability of depth percepts elicited by stereoscopic illusory surfaces...
April 1, 2018: Journal of Vision
Enrico Chiovetto, Cristóbal Curio, Dominik Endres, Martin Giese
According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, reducing massively the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by the learning of low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles...
April 1, 2018: Journal of Vision
Brian C McCann, Mary M Hayhoe, Wilson S Geisler
Little is known about distance discrimination in real scenes, especially at long distances. This is not surprising given the logistical difficulties of making such measurements. To circumvent these difficulties, we collected 81 stereo images of outdoor scenes, together with precisely registered range images that provided the ground-truth distance at each pixel location. We then presented the stereo images in the correct viewing geometry and measured the ability of human subjects to discriminate the distance between locations in the scene, as a function of absolute distance (3 m to 30 m) and the angular spacing between the locations being compared (2°, 5°, and 10°)...
April 1, 2018: Journal of Vision
David Alais, Garry Kong, Colin Palmer, Colin Clifford
Recent work from several groups has shown that perception of various visual attributes in human observers at a given moment is biased towards what was recently seen. This positive serial dependency is a kind of temporal averaging which exploits short-term correlations in visual scenes to reduce noise and stabilize perception. Here we test for serial dependencies in perception of head and eye direction using a simple reproduction method to measure perceived head/eye gaze direction in rapid sequences of briefly presented face stimuli...
April 1, 2018: Journal of Vision
Mary M Hayhoe
The essentially active nature of vision has long been acknowledged but has been difficult to investigate because of limitations in the available instrumentation, both for measuring eye and body movements and for presenting realistic stimuli in the context of active behavior. These limitations have been substantially reduced in recent years, opening up a wider range of contexts where experimental control is possible. Given this, it is important to examine just what the benefits are for exploring natural vision, with its attendant disadvantages...
April 1, 2018: Journal of Vision
Michele Fornaciai, Paola Binda, Guido Marco Cicchini
Does visual processing start anew after each eye movement, or is information integrated across saccades? Here we test a strong prediction of the integration hypothesis: that information acquired after a saccade interferes with the perception of images acquired before the saccade. We investigate perception of a basic visual feature, grating orientation, and we take advantage of a delayed interference phenomenon: in human participants, the reported orientation of a target grating, briefly presented at an eccentric location, is strongly biased toward the orientation of flanker gratings that are flashed shortly after the target...
April 1, 2018: Journal of Vision
Laila Hugrass, Thomas Verhellen, Eleanore Morrall-Earney, Caitlin Mallon, David Philip Crewther
More than 50 years ago, Hubel and Wiesel identified a subpopulation of geniculate magnocellular (M) neurons that are suppressed by diffuse red light. Since then, many human psychophysical studies have used red and green backgrounds to study the effects of M suppression on visual task performance, as a means to better understand neurodevelopmental disorders such as dyslexia and schizophrenia. Few of these studies have explicitly assessed the relative effects of red backgrounds on the M and P (parvocellular) pathways...
April 1, 2018: Journal of Vision
Christian Quaia, Lance M Optican, Bruce G Cumming
Psychophysical studies and our own subjective experience suggest that, in natural viewing conditions (i.e., at medium to high contrasts), monocularly and binocularly viewed scenes appear very similar, with the exception of the improved depth perception provided by stereopsis. This phenomenon is usually described as a lack of binocular summation. We show here that there is an exception to this rule: Ocular following eye movements induced by the sudden motion of a large stimulus, which we recorded from three human subjects, are much larger when both eyes see the moving stimulus than when only one eye does...
April 1, 2018: Journal of Vision
Phillip Xin Cheng, Anina N Rich
In real-world searches such as airport baggage screening and radiological examinations, miss errors can be life threatening. Misses increase for additional targets after detecting an initial target, termed "subsequent search misses" (SSMs), and also when targets are more often absent than present, termed the low-prevalence effect. Real-world search tasks often contain more than one target, but the prevalence of these multitarget occasions varies. For example, a cancerous tumor sometimes coexists with a benign tumor and sometimes exists alone...
April 1, 2018: Journal of Vision