Journal of Vision

Ryusuke Hayashi, Osamu Watanabe, Hiroki Yokoyama, Shin'ya Nishida
Characterization of the functional relationship between sensory inputs and neuronal or observers' perceptual responses is one of the fundamental goals of systems neuroscience and psychophysics. Conventional methods, such as reverse correlation and spike-triggered data analyses, are limited in their ability to resolve complex and inherently nonlinear neuronal/perceptual processes because these methods require input stimuli to be Gaussian with a zero mean. Recent studies have shown that analyses based on a generalized linear model (GLM) do not require such specific input characteristics and have advantages over conventional methods...
June 1, 2017: Journal of Vision
Anouk M van Loon, Katya Olmos-Solis, Christian N L Olivers
Visual search is thought to be guided by an active visual working memory (VWM) representation of the task-relevant features, referred to as the search template. In three experiments using a probe technique, we investigated which eye movement metrics reveal which search template is activated prior to the search, and distinguish it from future-relevant or no-longer-relevant VWM content. Participants memorized a target color for a subsequent search task while being instructed to keep central fixation. Before the search display appeared, we briefly presented two task-irrelevant colored probe stimuli to the left and right of fixation, one of which could match the current target template...
June 1, 2017: Journal of Vision
Anthony M Norcia, Francesca Pei, Peter J Kohler
The development of spatiotemporal interactions giving rise to classical receptive field properties has been well studied in animal models, but little is known about the development of putative nonclassical mechanisms in any species. Here we used visual evoked potentials to study the developmental status of spatiotemporal interactions for stimuli that were biased to engage long-range spatiotemporal integration mechanisms. We compared responses to widely spaced stimuli presented either in temporal succession or at the same time...
June 1, 2017: Journal of Vision
Jing Huang, Karl R Gegenfurtner, Alexander C Schütz, Jutta Billino
Saccadic eye movements provide an opportunity to study closely interwoven perceptual, motor, and cognitive changes during aging. Here, we investigated age effects on different mechanisms of saccadic plasticity. We compared age effects in two different adaptation paradigms that tap into low- and high-level adaptation processes. A total of 27 senior adults and 25 young adults participated in our experiments. In our first experiment, we elicited adaptation by a double-step paradigm, which is designed to trigger primarily low-level, gradual motor adaptation...
June 1, 2017: Journal of Vision
Maria Olkkonen, Geoffrey K Aguirre, Russell A Epstein
Neural responses to stimuli are often attenuated by repeated presentation. When observed in blood oxygen level-dependent signals, this attenuation is known as fMRI adaptation (fMRIa) or fMRI repetition suppression. According to a prominent account, fMRIa reflects the fulfillment of perceptual expectations during recognition of repeated items (Summerfield, Trittschuh, Monti, Mesulam, & Egner, 2008). Supporting this idea, expectation has been shown to modulate fMRIa under some circumstances; however, it is not currently known whether expectation similarly modulates recognition performance...
June 1, 2017: Journal of Vision
Jinfeng Huang, Ju Liang, Yifeng Zhou, Zili Liu
We investigated the controversy regarding double training in motion discrimination learning. We collected data from 43 participants in a motion direction discrimination learning task with either double training (i.e., training plus exposure) or single training (i.e., no exposure). By pooling these data with those in the literature, we had data in double training from 28 participants and in single training from 36 participants. We found that, in double training, the transfer along the exposed direction was less than that along the trained direction, indicating incomplete transfer...
June 1, 2017: Journal of Vision
Lukasz Grzeczkowski, Aline Cretenoud, Michael H Herzog, Fred W Mast
Perceptual learning is usually assumed to occur within sensory areas or when sensory evidence is mapped onto decisions. Subsequent procedural and motor processes, involved in most perceptual learning experiments, are thought to play no role in the learning process. Here, we show that this is not the case. Observers trained with a standard three-line bisection task and indicated the offset direction of the central line by pressing either a left or right push button. Before and after training, observers adjusted the central line of the same bisection stimulus using a computer mouse...
June 1, 2017: Journal of Vision
Matthew V Pachai, Allison B Sekuler, Patrick J Bennett, Philippe G Schyns, Meike Ramon
What makes identification of familiar faces seemingly effortless? Recent studies using unfamiliar face stimuli suggest that selective processing of information conveyed by horizontally oriented spatial frequency components supports accurate performance in a variety of tasks involving matching of facial identity. Here, we studied upright and inverted face discrimination using stimuli with which observers were either unfamiliar or personally familiar (i.e., friends and colleagues). Our results reveal increased sensitivity to horizontal spatial frequency structure in personally familiar faces, further implicating the selective processing of this information in the face processing expertise exhibited by human observers throughout their daily lives...
June 1, 2017: Journal of Vision
Hongjing Lu, Bosco S Tjan, Zili Liu
Using an "information meter" provided by ideal observer analysis, we measured the efficiency with which human observers processed different walking stimuli against luminance noise and spatial uncertainty to either detect the presence of a walker or to discriminate the walking direction. Human efficiency was examined across four renderings of a human walker: contour, point lights, silhouette, and skeleton. We replicated the previous finding of low discrimination efficiency in biological motion (Gold, Tadin, Cook, & Blake, 2008) and also found low detection efficiency for biological motion...
June 1, 2017: Journal of Vision
Dicle N Dövencioglu, Ohad Ben-Shahar, Pascal Barla, Katja Doerschner
Dynamic visual information facilitates three-dimensional shape recognition. It is still unclear, however, whether the motion information generated by moving specularities across a surface is congruent to that available from optic flow produced by a matte-textured shape. Whereas the latter is directly linked to the first-order properties of the shape and its motion relative to the observer, the specular flow, the image flow generated by a specular object, is less sensitive to the object's motion and is tightly related to second-order properties of the shape...
June 1, 2017: Journal of Vision
Chuan Hou, Yee-Joon Kim, Preeti Verghese
Vernier acuity determines the relative position of visual features with a precision better than the sampling resolution of cone receptors in the retina. Because Vernier displacement is thought to be mediated by orientation-tuned mechanisms, Vernier acuity is presumed to be processed in striate visual cortex (V1). However, there is considerable evidence suggesting that Vernier acuity is dependent not only on structures in V1 but also on processing in extrastriate cortical regions. Here we used functional magnetic resonance imaging-informed electroencephalogram source imaging to localize the cortical sources of Vernier acuity in observers with normal vision...
June 1, 2017: Journal of Vision
Ying Yang, Yang Xu, Carol A Jew, John A Pyles, Robert E Kass, Michael J Tarr
Humans are experts at face individuation. Although previous work has identified a network of face-sensitive regions and some of the temporal signatures of face processing, as yet, we do not have a clear understanding of how such face-sensitive regions support learning at different time points. To study the joint spatio-temporal neural basis of face learning, we trained subjects to categorize two groups of novel faces and recorded their neural responses using magnetoencephalography (MEG) throughout learning...
June 1, 2017: Journal of Vision
Taylor R Hayes, John M Henderson
From the earliest recordings of eye movements during active scene viewing to the present day, researchers have commonly reported individual differences in eye movement scan patterns under constant stimulus and task demands. These findings suggest viewer individual differences may be important for understanding gaze control during scene viewing. However, the relationship between scan patterns and viewer individual differences during scene viewing remains poorly understood because scan patterns are difficult to analyze...
May 1, 2017: Journal of Vision
Che-Chun Su, Lawrence K Cormack, Alan C Bovik
Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images...
May 1, 2017: Journal of Vision
Peter Vangorp, Pascal Barla, Roland W Fleming
Most previous work on gloss perception has examined the strength and sharpness of specular reflections in simple bidirectional reflectance distribution functions (BRDFs) having a single specular component. However, BRDFs can be substantially more complex and it is interesting to ask how many additional perceptual dimensions there could be in the visual representation of surface reflectance qualities. To address this, we tested materials with two specular components that elicit an impression of hazy gloss. Stimuli were renderings of irregularly shaped objects under environment illumination, with either a single Ward specular BRDF component (Ward, 1992), or two such components, with the same total specular reflectance but different sharpness parameters, yielding both sharp and blurry highlights simultaneously...
May 1, 2017: Journal of Vision
Christian Vater, Ralf Kredel, Ernst-Joachim Hossner
Previous studies of multiple-object tracking have shown that gaze behavior is affected by target collisions and target-distractor crowding. Therefore, in order to experimentally disentangle this collision-crowding confound, we examined events of target collisions with the bordering frame and crowding with distractors. We hypothesized that collisions are particularly demanding for covert attentional processing, whereas crowding particularly challenges peripheral vision. Results show that gaze is located closer to targets when they are crowded, as would be expected to reduce negative crowding effects by utilizing the higher spatial acuity of foveal vision...
May 1, 2017: Journal of Vision
Jirui Li, Amirsaman Sajad, Robert Marino, Xiaogang Yan, Saihong Sun, Hongying Wang, J Douglas Crawford
The relative contributions of egocentric versus allocentric cues on goal-directed behavior have been examined for reaches, but not saccades. Here, we used a cue conflict task to assess the effect of allocentric landmarks on gaze behavior. Two head-unrestrained macaques maintained central fixation while a target flashed in one of eight radial directions, set against a continuously present visual landmark (two horizontal/vertical lines spanning the visual field, intersecting at one of four oblique locations 11° from the target)...
May 1, 2017: Journal of Vision
Michael Papinutto, Junpeng Lao, Meike Ramon, Roberto Caldara, Sébastien Miellet
In reading, the perceptual span is a well-established concept that refers to the amount of information that can be read in a single fixation. Surprisingly, despite extensive empirical interest in determining the perceptual strategies deployed to process faces and an ongoing debate regarding the factors or mechanism(s) underlying efficient face processing, the perceptual span for faces, the Facespan, remains undetermined. To address this issue, we applied the gaze-contingent Spotlight technique implemented in an old-new face recognition paradigm...
May 1, 2017: Journal of Vision