Read by QxMD

Vision Research

Patricia Dore, Ardian Dumani, Geddes Wyatt, Alex J Shepherd
This study explored associations between local and global shape perception on coloured backgrounds, colour discrimination, and non-verbal IQ (NVIQ). Five background colours, tailored to the cone-opponent pathways early in the visual system, were chosen for the local and global shape tasks (cardinal colour directions: L-M, loosely reddish-greenish; and S-(L+M), or tritan colours, loosely blueish-yellowish; where L, M and S refer to the long-, middle- and short-wavelength-sensitive cones). Participants also completed the Farnsworth-Munsell 100-hue test (FM100) to determine whether performance on the local and global shape tasks correlated with colour discrimination overall, or with performance on the L-M and tritan subsets of the FM100 test...
March 9, 2018: Vision Research
Tom Foulsham, Emma Frost, Lilly Sage
When observers view an image, their initial eye movements are not equally distributed but instead are often biased to the left of the picture. This pattern has been linked to pseudoneglect, the spatial bias to the left that is observed in line bisection and a range of other perceptual and attentional tasks. Pseudoneglect is often explained according to the dominance of the right hemisphere in the neural control of attention, a view bolstered by differences between left- and right-handed participants in both line bisection and eye movements...
February 27, 2018: Vision Research
Stuart Anstis, Juno Kim
Reducing the amount of motion information can surprisingly make motion look faster (e.g., motion behind Venetian blinds). We found that a textured pattern moving to the right at speeds ranging from 0.34 to 5.5°/s appeared to move 50% faster when viewed through a short (0.5°) compared with a long (4.5°) horizontal slot. Perceived speed varied inversely with the log of the slot length. We varied the length of rectangular apertures over a tenfold range and manipulated their size, shape, and orientation. We attribute the field-size effect mostly to landmarks provided by the ends of the slots, but we also examined temporal and spatial frequency and lateral inhibition of motion...
February 27, 2018: Vision Research
Kun Guo, Yoshi Soornack, Rebecca Settle
Our capability of recognizing facial expressions of emotion under different viewing conditions implies the existence of an invariant expression representation. As natural visual signals are often distorted and our perceptual strategy changes with external noise level, it is essential to understand how expression perception is susceptible to face distortion and whether the same facial cues are used to process high- and low-quality face images. We systematically manipulated face image resolution (experiment 1) and blur (experiment 2), and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns...
February 26, 2018: Vision Research
Mark Vergeer, Naoki Kogo, Andrey R Nikolaev, Nihan Alp, Veerle Loozen, Brenda Schraepen, Johan Wagemans
Shape perception is intrinsically holistic: combinations of features give rise to configurations with emergent properties that are different from the sum of the parts. The current study investigated neural markers of holistic shape representations learned by means of categorization training. We used the EEG frequency tagging technique, where two parts of a shape stimulus were 'tagged' by modifying their contrast at different temporal frequencies. Signals from both parts are integrated and, as a result, emergent frequency components (so-called intermodulation responses, or IMs), caused by nonlinear interaction of the two frequency signals, are observed in the EEG spectrum...
February 20, 2018: Vision Research
Valentina Proietti, Sarah Laurence, Claire M Matthew, Xiaomei Zhou, Catherine J Mondloch
Adults' ability to recognize individual faces is shaped by experience. Young adults recognize own-age and own-race faces more accurately than other-age and other-race faces. The own-age and own-race biases have been attributed to differential perceptual experience and to differences in how in-group vs. out-group faces are processed, with in-group faces being processed at the individual level and out-group faces being processed at the categorical level. To examine this social categorization hypothesis, young adults studied young and older faces in Experiment 1 and own- and other-race faces in Experiment 2...
February 15, 2018: Vision Research
Sarah L Elliott, Steven K Shevell
The visual system must transform a point-by-point biological representation from the photoreceptors into neural representations of separate objects. Even a uniform circular patch of light that slowly modulates in luminance can be segmented into separate central and surrounding areas merely by introducing black lines to outline a central square. The black lines cause brightness induction in the center even though the light inside and outside the square is always identical, as predicted by spatial antagonism between the square central area and its surround...
February 15, 2018: Vision Research
Rebecca M Foerster
Before acting, humans saccade to a target object to extract relevant visual information. Even when acting on remembered objects, locations previously occupied by relevant objects are fixated during imagery and memory tasks - a phenomenon called "looking-at-nothing". While looking-at-nothing was robustly found in tasks encouraging declarative memory build-up, results are mixed in the case of procedural sensorimotor tasks. Eye-guidance to manual targets in complete darkness was observed in a task practiced for days beforehand, while investigations using only a single session did not find fixations to remembered action targets...
February 9, 2018: Vision Research
Lihui Wang, Sheng Li, Xiaolin Zhou, Jan Theeuwes
Mounting evidence has shown that a task-irrelevant, previously reward-associated stimulus can capture attention even when attending to this stimulus impairs the processing of the current target. Here we investigate whether a stimulus that merely signals the availability of reward could capture attention and interfere with target processing when it is located outside of attentional focus. In three experiments, a target was always presented at the bottom of the lower visual field to attract focal attention. A distractor signalling high or low reward availability for the current trial was presented around the target with a variable distance between them...
February 1, 2018: Vision Research
Ipek Oruc, Fakhri Shafai, Shyam Murthy, Paula Lages, Thais Ton
Experience plays a fundamental role in the development of visual function. Exposure to different types of faces is an important factor believed to shape face perception ability. The contents of infants' daily exposure to faces, i.e., their face-diet, have been documented in previous studies. While face perception involves a protracted development and continues to be malleable well into adulthood, an empirical study of the adult face-diet has been lacking. We collected first-person perspective footage from 30 adults during the course of their daily activities...
January 19, 2018: Vision Research
Naphtali Abudarham, Galit Yovel
Many studies have shown better recognition for faces we have greater experience with, relative to unfamiliar faces. However, it is still not clear if and how the representation of faces changes during the process of familiarization. In a previous study, we discovered a subset of facial features, for which we have high perceptual sensitivity (PS), that were critical for determining the identity of unfamiliar faces. This was done by assigning values to 20 different facial features based on perceptual rating, converting faces into feature-vectors, and measuring the correlations between face similarity ratings and distances between feature-vectors (i...
January 19, 2018: Vision Research
Andrew E Silva, Zili Liu
Locally paired dot stimuli that contain opposing motion signals at roughly the same spatial locations (counter-phase stimuli) have been reported to produce percepts devoid of global motion. Counter-phase stimuli are also thought to elicit a reduced neural response at motion processing brain area MT/V5, an effect known as motion opponency. The current study examines the effect of vertical counter-phase background motion on behavioral discrimination of horizontal target motion. We found that counter-phase backgrounds generally produced lower behavioral thresholds than locally unbalanced backgrounds, an effect consistent with the idea that counter-phase motion elicits opponency...
January 17, 2018: Vision Research
Peggy Gerardin, Michel Dojat, Kenneth Knoblauch, Frédéric Devinck
Conjoint measurement was used to investigate the joint influences of the luminance of the background and the inner contour on hue- and brightness filling-in for a stimulus configuration generating a water-color effect (WCE), i.e., a wiggly bi-chromatic contour enclosing a region with the lower luminance component on the exterior. Two stimuli with the background and inner contour luminances covarying independently were successively presented, and in separate experiments, the observer judged which of the pair's interior regions contained a stronger hue or was brighter...
January 17, 2018: Vision Research
Fakhri Shafai, Ipek Oruc
The other-race effect is the finding of diminished performance in recognition of other-race faces compared to those of own-race. It has been suggested that the other-race effect stems from specialized expert processes being tuned exclusively to own-race faces. In the present study, we measured recognition contrast thresholds for own- and other-race faces as well as houses for Caucasian observers. We have factored face recognition performance into two invariant aspects of visual function: efficiency, which is related to neural computations and processing demanded by the task, and equivalent input noise, related to signal degradation within the visual system...
December 30, 2017: Vision Research
Morgan E McIntyre, Derek H Arnold
When a moving surface alternates in colour and direction, perceptual couplings of colour and motion can differ from their physical correspondence. Periods of motion tend to be perceptually bound with physically delayed colours - a colour/motion perceptual asynchrony. This can be eliminated by motion transparency. Here we show that the colour/motion perceptual asynchrony is not invariably eliminated by motion transparency. Nor is it an inevitable consequence given a particular physical input. Instead, it can emerge when moving surfaces are perceived as alternating in direction, even if those surfaces seem transparent, and it is eliminated when surfaces are perceived as moving invariably...
December 28, 2017: Vision Research
Martin Rolfs, Nicholas Murray-Smith, Marisa Carrasco
Traditional perceptual learning protocols rely almost exclusively on long periods of uninterrupted fixation. Taking a first step towards understanding perceptual learning in natural vision, we had observers report the orientation of a briefly flashed stimulus (clockwise or counterclockwise from a reference orientation) presented strictly during saccade preparation at a location offset from the saccade target. For each observer, the saccade direction, stimulus location, and orientation remained the same throughout training...
December 22, 2017: Vision Research
Alexander Leube, Stephanie Kostial, Guy Alex Ochakovski, Arne Ohlendorf, Siegfried Wahl
The purpose of the study was to investigate the sign-dependent response to real and simulated spherical defocus on visual acuity under monochromatic light conditions. The investigation included 15 myopic participants with a mean spherical equivalent error of -2.98 ± 2.17 D. Visual acuity (VA) was tested with and without spherical defocus using the source method (simulated defocus) and the observer method (lens-induced defocus) in a range of ± 3.0 D in 1.0 D steps. VA was assessed using Landolt Cs, while the threshold was determined with an adaptive staircase procedure...
December 22, 2017: Vision Research
Maryam Ahmadi, Elizabeth A McDevitt, Michael A Silver, Sara C Mednick
Studies of visual cortical responses following visual perceptual learning (VPL) have produced diverse results, revealing neural changes in early and/or higher-level visual cortex as well as changes in regions responsible for higher cognitive processes such as attentional control. In this study, we investigated substrates of VPL in the human brain by recording visual evoked potentials with high-density electroencephalography (hdEEG) before (Session 1) and after (Session 2) training on a texture discrimination task (TDT), with two full nights of sleep between sessions...
December 22, 2017: Vision Research
Isabelle Bülthoff, Betty J Mohler, Ian M Thornton
Viewing faces in motion or attached to a body instead of isolated static faces improves their subsequent recognition. Here we enhanced the ecological validity of face encoding by having observers physically moving in a virtual room populated by life-size avatars. We compared the recognition performance of this active group to two control groups. The first control group watched a passive reenactment of the visual experience of the active group. The second control group saw static screenshots of the avatars. All groups performed the same old/new recognition task after learning...
December 21, 2017: Vision Research
Jessica Galliussi, Lukasz Grzeczkowski, Walter Gerbino, Michael H Herzog, Paolo Bernardis
Perceptual learning can occur for a feature irrelevant to the training task, when it is sub-threshold and outside of the focus of attention (task-irrelevant perceptual learning, TIPL); however, TIPL does not occur when the task-irrelevant feature is supra-threshold. Here, we asked the question whether TIPL occurs when the task-irrelevant feature is sub-threshold but within the focus of spatial attention. We tested participants in three different discrimination tasks performed on a 3-dot stimulus: a horizontal Vernier task and a vertical bisection task (during pre- and post-training sessions), and a luminance task (during training)...
December 20, 2017: Vision Research

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"
Combine operators

(heart OR cardiac OR cardio*) AND arrest -"American Heart Association"
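Taken together, the tips above describe a small Boolean query language. As an illustrative sketch only (this is not QxMD's actual search implementation; the helper names `term`, `phrase`, `AND`, `OR` and `NOT` are invented for this example), the operators can be modelled as composable predicates in Python:

```python
import re

def term(word):
    """Match a whole word, case-insensitively.
    A trailing * (e.g. 'neuro*') matches any word stem."""
    if word.endswith("*"):
        pattern = re.compile(r"\b" + re.escape(word[:-1]) + r"\w*", re.IGNORECASE)
    else:
        pattern = re.compile(r"\b" + re.escape(word) + r"\b", re.IGNORECASE)
    return lambda text: bool(pattern.search(text))

def phrase(words):
    """Match an exact phrase, as with double quotes in the search box."""
    pattern = re.compile(re.escape(words), re.IGNORECASE)
    return lambda text: bool(pattern.search(text))

def AND(*preds):
    # All sub-queries must match (the default between search terms).
    return lambda text: all(p(text) for p in preds)

def OR(*preds):
    # At least one sub-query must match.
    return lambda text: any(p(text) for p in preds)

def NOT(pred):
    # The 'minus' sign: exclude documents matching the predicate.
    return lambda text: not pred(text)

# (heart OR cardiac OR cardio*) AND arrest -"American Heart Association"
query = AND(
    OR(term("heart"), term("cardiac"), term("cardio*")),
    term("arrest"),
    NOT(phrase("American Heart Association")),
)

docs = [
    "Cardiopulmonary resuscitation after cardiac arrest",
    "American Heart Association guidelines for cardiac arrest",
    "Wrist fracture after a fall",
]
hits = [d for d in docs if query(d)]
# Only the first document survives: the second is excluded by the quoted
# phrase, and the third matches none of the required terms.
```

Parentheses fall out for free here, since grouping is expressed by nesting the combinators.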