
Journal of Vision

Hiroshi Ueda, Kentaro Yamamoto, Katsumi Watanabe
To respond to movements of others and understand the intention of others' actions, it is important to accurately extract motion information from body movements. Here, using original and spatially scrambled point-light biological motions in upright and inverted orientations, we investigated the effect of global and local biological motion information on speed perception and sensitivity. The speed discrimination task revealed that speed sensitivity was higher for the original than for scrambled stimuli (Experiment 1) and higher for upright than for inverted stimuli (Experiment 2)...
March 1, 2018: Journal of Vision
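For the Ueda, Yamamoto, and Watanabe study above, spatially scrambled point-light stimuli are typically built by relocating each dot's trajectory to a random position, which preserves the local motion of each dot while destroying the global body configuration. The sketch below illustrates that general construction; the array layout, units, and jitter range are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def scramble_point_light(trajectories, rng=None, jitter=5.0):
    """Spatially scramble a point-light walker.

    trajectories : array of shape (n_dots, n_frames, 2) with x/y positions
                   in degrees of visual angle (hypothetical units).
    Each dot keeps its own local motion, but its mean position is replaced
    by a random offset, destroying the global body form.
    """
    rng = np.random.default_rng(rng)
    scrambled = trajectories.copy()
    for d in range(scrambled.shape[0]):
        local = scrambled[d] - scrambled[d].mean(axis=0)   # local motion only
        new_center = rng.uniform(-jitter, jitter, size=2)  # random new location
        scrambled[d] = local + new_center
    return scrambled
```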
Elise A Piazza, Rachel N Denison, Michael A Silver
Incoming sensory signals are often ambiguous and consistent with multiple perceptual interpretations. Information from one sensory modality can help to resolve ambiguity in another modality, but the mechanisms by which multisensory associations come to influence the contents of conscious perception are unclear. We asked whether and how novel statistical information about the coupling between sounds and images influences the early stages of awareness of visual stimuli. We exposed subjects to consistent, arbitrary pairings of sounds and images and then measured the impact of this recent passive statistical learning on subjects' initial conscious perception of a stimulus by employing binocular rivalry, a phenomenon in which incompatible images presented separately to the two eyes result in a perceptual alternation between the two images...
March 1, 2018: Journal of Vision
Andrew Stockman, G Bruce Henning, Sharif Anwar, Robert Starba, Andrew T Rider
Cone signals in the luminance or achromatic pathway were investigated by measuring how the perceptual timing of M- or L-cone-detected flicker depended on temporal frequency and chromatic adaptation. Relative timings were measured, as a function of temporal frequency, by superimposing M- or L-cone-isolating flicker on "equichromatic" flicker (flicker of the same wavelength as the background) and asking observers to vary contrast and phase to cancel the perception of flicker. Measurements were made in four observers on up to 35 different backgrounds varying in wavelength and radiance...
February 1, 2018: Journal of Vision
Jonathan C Flavell, Brendan T Barrett, John G Buckley, Julie M Harris, Andrew J Scally, Nathan B Beebe, Alice G Cruickshank, Simon J Bennett
An ability to predict the time-to-contact (TTC) of moving objects that become momentarily hidden is advantageous in everyday life and could be particularly so in fast-ball sports. Prediction motion (PM) experiments have sought to test this ability using tasks where a disappearing target moves toward a stationary destination. Here, we developed two novel versions of the PM task in which the destination either moved away from (Chase) or toward (Attract) the moving target. The target and destination moved with different speeds such that collision occurred 750, 1,000 or 1,250 ms after target occlusion...
February 1, 2018: Journal of Vision
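The prediction-motion conditions in the Flavell et al. study above come down to closing-speed arithmetic at the moment of occlusion: the remaining time to collision is the gap divided by the rate at which the gap shrinks. A minimal worked example, with purely hypothetical speeds and gap, is sketched below.

```python
# Worked example (hypothetical numbers): time-to-contact after occlusion
# when both the target and the destination move along one axis.
gap_at_occlusion = 10.0   # deg between target and destination at occlusion
target_speed = 12.0       # deg/s toward the destination
destination_speed = 4.0   # deg/s

# "Attract": destination moves toward the target -> the speeds add.
ttc_attract = gap_at_occlusion / (target_speed + destination_speed)  # 0.625 s

# "Chase": destination moves away from the target -> closing speed is the difference.
ttc_chase = gap_at_occlusion / (target_speed - destination_speed)    # 1.25 s

print(f"Attract TTC: {ttc_attract*1000:.0f} ms, Chase TTC: {ttc_chase*1000:.0f} ms")
```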
Martin Bossard, Daniel R Mestre
Humans and most animals are able to navigate in their environment, and this navigation generates sensory information of various kinds, such as proprioceptive cues and optic flow. Previous research focusing on the visual effects of walking (bob, sway, and lunge head motion) has shown that the perception of forward self-motion experienced by static observers can be modulated by adding simulated viewpoint oscillations to the radial flow. In three experimental studies, we examined the effects of several viewpoint oscillation frequencies on static observers' perception of the distance traveled, assuming the assessment of distance traveled to be part of the path integration process...
February 1, 2018: Journal of Vision
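A simulated viewpoint oscillation of the kind used by Bossard and Mestre above can be sketched as a sinusoidal displacement added to a steady forward camera translation. The snippet below only illustrates that idea; the amplitude, frequency, and frame rate are assumed values, not those of the experiments.

```python
import numpy as np

def camera_path(duration=10.0, dt=1/60, speed=1.0,
                osc_freq=2.0, osc_amp=0.05):
    """Forward self-motion with an added vertical viewpoint oscillation,
    roughly simulating the bob component of walking (illustrative values)."""
    t = np.arange(0.0, duration, dt)
    x = np.zeros_like(t)                              # no lateral motion
    y = osc_amp * np.sin(2 * np.pi * osc_freq * t)    # vertical bob
    z = speed * t                                     # steady forward translation
    return np.stack([x, y, z], axis=1)                # camera position per frame
```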
J Edwin Dickinson, Krystle Haley, Vanessa K Bowden, David R Badcock
Objects are often identified by the shape of their contours. In this study, visual search tasks were used to reveal a visual dimension critical to the analysis of the shape of a boundary-defined area. Points of maximum curvature on closed paths are important for shape coding and it was shown here that target patterns are readily identified among distractors if the angle subtended by adjacent curvature maxima at the target pattern's center differs from that created in the distractors. A search asymmetry, indicated by a difference in performance in the visual search task when the roles of target and distractor patterns are reversed, was found when the critical subtended angle was only present in one of the patterns...
February 1, 2018: Journal of Vision
Wilson S Geisler
This theoretical note describes a simple equation that closely approximates the psychometric functions of template-matching observers with arbitrary levels of position and orientation uncertainty. We show that the approximation is accurate for detection of targets in white noise, 1/f noise, and natural backgrounds. In its simplest form, this equation, which we call the uncertain normal integral (UNI) function, has two parameters: one that varies only with the level of uncertainty and one that varies only with the other properties of the stimuli...
February 1, 2018: Journal of Vision
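The Geisler abstract above does not give the closed form of the uncertain normal integral (UNI) function, so the sketch below instead fits a generic two-parameter psychometric function (a cumulative normal in log contrast) to hypothetical detection data, purely to show what a two-parameter approximation of a psychometric function looks like in practice. It should not be read as the UNI function itself.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Stand-in two-parameter psychometric function, NOT the authors' UNI function:
# 'alpha' is a threshold-like parameter, 'beta' a slope parameter.
def psychometric(contrast, alpha, beta):
    return 0.5 + 0.5 * norm.cdf(beta * (np.log(contrast) - np.log(alpha)))

# Hypothetical 2AFC detection data: proportion correct at each contrast.
contrasts = np.array([0.01, 0.02, 0.04, 0.08, 0.16])
p_correct = np.array([0.52, 0.60, 0.78, 0.93, 0.99])

(alpha_hat, beta_hat), _ = curve_fit(psychometric, contrasts, p_correct,
                                     p0=[0.04, 2.0])
print(f"threshold ~ {alpha_hat:.3f}, slope ~ {beta_hat:.2f}")
```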
Brad C Motter
Visual crowding is a fundamental constraint on our ability to identify peripheral objects in cluttered environments. This study proposes a descriptive model for understanding crowding based on the tuning selectivity for stimuli within the receptive field (RF) and examines potential neural correlates in cortical area V4. For V4 neurons, optimally sized, letter-like stimuli are much smaller than the RF. This permits stimulus conflation, the fusing of separate objects into a single identity, to occur within the RF of single neurons...
January 1, 2018: Journal of Vision
Alexandra C Schmid, Katja Doerschner
Research on the visual perception of materials has mostly focused on the surface qualities of rigid objects. The perception of substance-like materials is less explored. Here, we investigated the contribution of, and interaction between, surface optics and mechanical properties to the perception of nonrigid, breaking materials. We created novel animations of materials ranging from soft to hard bodies that broke apart differently when dropped. In Experiment 1, animations were rendered as point-light movies varying in dot density, as well as "full-cue" optical versions ranging from translucent glossy to opaque matte under a natural illumination field...
January 1, 2018: Journal of Vision
Selam W Habtegiorgis, Katharina Rifai, Siegfried Wahl
Spatially varying distortions in optical elements (for instance, prisms and progressive power lenses) modulate the visual world disparately in different visual areas. Saccadic eye movements in such a complexly distorted environment thereby continuously alter the retinal location of the distortions. Yet the visual system achieves perceptual constancy by compensating for distortions irrespective of their retinal relocations at different fixations. Here, we assessed whether the visual system retains its plasticity to distortions across saccades to attain stability...
January 1, 2018: Journal of Vision
Jan Drewes, Weina Zhu, David Melcher
The study of how visual processing functions in the absence of visual awareness has become a major research interest in the vision-science community. One of the main sources of evidence that stimuli that do not reach conscious awareness (and are thus "invisible") are still processed to some degree by the visual system comes from studies using continuous flash suppression (CFS). Why and how CFS works may provide more general insight into how stimuli access awareness. As spatial and temporal properties of stimuli are major determinants of visual perception, we hypothesized that these properties of the CFS masks would be of significant importance to the achieved suppression depth...
January 1, 2018: Journal of Vision
Alexandre Reynaud, Robert F Hess
Stereoscopic vision uses the disparity between the images received by the two eyes to derive three-dimensional estimates. Here, we were interested in providing a measure of the strength of binocular vision that is an alternative to disparity processing. In particular, we wanted to assess the spatial dependence of sensitivity to detect interocular correlation (IOC). Thus we designed dichoptic stimuli composed of bandpass textures whose IOC is sinusoidally modulated at different correlation frequencies and compared sensitivity to these stimuli to that of analogous stimuli modulated in disparity...
January 1, 2018: Journal of Vision
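For the Reynaud and Hess stimuli above, a noise pair whose interocular correlation (IOC) varies sinusoidally across space can be generated by mixing a shared and an independent noise field with a position-dependent weight, since for unit-variance independent sources the mixing weight equals the resulting correlation. The sketch below uses white noise for brevity (the study used bandpass textures); the image size, modulation frequency, and 0-to-1 correlation range are assumptions.

```python
import numpy as np

def dichoptic_ioc_pair(size=256, corr_freq=2.0, rng=None):
    """Dichoptic noise pair whose local IOC follows a sinusoidal profile
    across horizontal position (illustrative parameters)."""
    rng = np.random.default_rng(rng)
    left = rng.standard_normal((size, size))
    independent = rng.standard_normal((size, size))

    x = np.arange(size) / size                          # horizontal position, 0..1
    w = 0.5 + 0.5 * np.sin(2 * np.pi * corr_freq * x)   # local IOC, 0..1
    # corr(L, w*L + sqrt(1 - w**2)*N) = w for unit-variance independent L, N
    right = w * left + np.sqrt(1 - w**2) * independent
    return left, right
```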
Zahra Hussain, Andrew T Astle, Ben S Webb, Paul V McGraw
The misalignment of visual input in strabismus disrupts positional judgments. We measured positional accuracy in the extrafoveal visual field (1°-7° eccentricity) of a large group of strabismic subjects and a normal control group to identify positional distortions associated with the direction of strabismus. Subjects performed a free localization task in which targets were matched in opposite hemifields whilst fixating on a central cross. The constant horizontal error of each response was taken as a measure of accuracy, in addition to radial and angular error...
January 1, 2018: Journal of Vision
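The error measures named in the Hussain et al. abstract above (constant horizontal error, radial error, and angular error) can be computed from matched target and response coordinates as sketched below. The definitions used here are the standard ones; the coordinate convention and variable names are assumptions, not taken from the paper.

```python
import numpy as np

def localization_errors(targets, responses):
    """Positional error measures for a free-localization task.

    targets, responses : arrays of shape (n_trials, 2) holding x/y positions
    in degrees relative to central fixation (hypothetical convention).
    """
    err = responses - targets
    constant_horizontal_error = err[:, 0].mean()    # mean signed x error
    radial_error = np.linalg.norm(err, axis=1)      # Euclidean error per trial

    # Angular error: angle between target and response vectors from fixation.
    dot = np.sum(targets * responses, axis=1)
    norms = np.linalg.norm(targets, axis=1) * np.linalg.norm(responses, axis=1)
    angular_error = np.degrees(np.arccos(np.clip(dot / norms, -1.0, 1.0)))

    return constant_horizontal_error, radial_error, angular_error
```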
Jérôme Tagu, Karine Doré-Mazars, Judith Vergne, Christelle Lemoine-Lardennois, Dorine Vergilino-Perez
It is well known that the saccadic system presents multiple asymmetries. Notably, temporal (as opposed to nasal) saccades, centripetal (as opposed to centrifugal) saccades (i.e., the recentering bias) and saccades from the abducting eye (as opposed to the concomitant saccades from the adducting eye) exhibit higher peak velocities. However, these naso-temporal and centripetal-centrifugal asymmetries have always been studied separately. It is thus unknown which asymmetry prevails when there is a conflict between both asymmetries, i...
January 1, 2018: Journal of Vision
Maria J Barraza-Bernal, Katharina Rifai, Siegfried Wahl
Patients with central scotoma use a preferred retinal locus (PRL) of fixation to perform visual tasks. Some of the conditions that cause central scotoma are progressive, and as a consequence, the PRL needs to be adjusted throughout the progression. The present study investigates the peripheral locus of fixation in subjects under a simulation of progressive central scotoma. Five normally sighted subjects participated in the study. A foveally centered mask of varying size was presented to simulate the scotoma...
January 1, 2018: Journal of Vision
Aurélie Calabrèse, Long To, Yingchen He, Elizabeth Berkholtz, Paymon Rafian, Gordon E Legge
Our purpose was to compare reading performance measured with the MNREAD Acuity Chart and an iPad application (app) version of the same test for both normally sighted and low-vision participants. Our methods included 165 participants with normal vision and 43 participants with low vision tested on the standard printed MNREAD and on the iPad app version of the test. Maximum Reading Speed, Critical Print Size, Reading Acuity, and Reading Accessibility Index were compared using linear mixed-effects models to identify any potential differences in test performance between the printed chart and the iPad app...
January 1, 2018: Journal of Vision
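A linear mixed-effects comparison of the kind described by Calabrèse et al. above can be set up in Python with statsmodels, using a random intercept per participant and fixed effects for test version and vision group. The column and file names below are hypothetical; this is a sketch of the general approach, not the authors' analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x test version,
# with columns 'participant', 'version' ('print' or 'ipad'),
# 'group' ('normal' or 'low_vision'), and an outcome such as 'max_reading_speed'.
df = pd.read_csv("mnread_scores.csv")   # placeholder file name

# Random intercept per participant; fixed effects of test version and group.
model = smf.mixedlm("max_reading_speed ~ version * group", df,
                    groups=df["participant"])
result = model.fit()
print(result.summary())
```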
Jerrold Jeyachandra, Yoongoo Nam, YoungWook Kim, Gunnar Blohm, Aarlenne Z Khan
Transsaccadic memory is a process by which remembered object information is updated across a saccade. To date, studies on transsaccadic memory have used simple stimuli, that is, a single dot or feature of an object. It remains unknown how transsaccadic memory occurs for more realistic, complex objects with multiple features. An object's location is a central feature for transsaccadic updating, as it is spatially variant, but other features such as size are spatially invariant. How these spatially variant and invariant features of an object are remembered and updated across saccades is not well understood...
January 1, 2018: Journal of Vision
Yingchen He, MiYoung Kwon, Gordon E Legge
The visual span refers to the number of adjacent characters that can be recognized in a single glance. It is viewed as a sensory bottleneck in reading for both normal and clinical populations. In peripheral vision, the visual span for English characters can be enlarged after training with a letter-recognition task. Here, we examined the transfer of training from Korean to English characters for a group of bilingual Korean native speakers. In the pre- and posttests, we measured visual spans for Korean characters and English letters...
January 1, 2018: Journal of Vision
Julius Orlowski, Ohad Ben-Shahar, Hermann Wagner
How do we find what we are looking for? A target can be in plain view, but it may be detected only after extensive search. During a search we make directed attentional deployments like saccades to segment the scene until we detect the target. Depending on difficulty, the search may be fast, with few attentional deployments, or slow, with many shorter deployments. Here we study visual search in barn owls by tracking their overt attentional deployments (that is, their head movements) with a camera. We conducted a low-contrast feature search, a high-contrast orientation conjunction search, and a low-contrast orientation conjunction search, each with set sizes varying from 16 to 64 items...
January 1, 2018: Journal of Vision