Read by QxMD

Visual Cognition

Sven Panis, Katrien Torfs, Celine R Gillebert, Johan Wagemans, Glyn W Humphreys
Multiple accounts have been proposed to explain category-specific recognition impairments. Some suggest that category-specific deficits may be caused by a deficit in recurrent processing between the levels of a hierarchically organized visual object recognition system. Here, we tested predictions of interactive processing theories on the emergence of category-selective naming deficits in neurologically intact observers and in patient GA, a single case showing a category-specific impairment for natural objects after a herpes simplex encephalitis infection...
2017: Visual Cognition
Jordana S Wynn, Michael B Bone, Michelle C Dragan, Kari L Hoffman, Bradley R Buchsbaum, Jennifer D Ryan
Visual search efficiency improves with repetition of a search display, yet the mechanisms behind these processing gains remain unclear. According to Scanpath Theory, memory retrieval is mediated by repetition of the pattern of eye movements or "scanpath" elicited during stimulus encoding. Using this framework, we tested the prediction that scanpath recapitulation reflects relational memory guidance during repeated search events. Younger and older subjects were instructed to find changing targets within flickering naturalistic scenes...
January 2, 2016: Visual Cognition
Carl Erick Hagmann, Mary C Potter
Humans can detect target color pictures of scenes depicting concepts like picnic or harbor in sequences of six or twelve pictures presented as briefly as 13 ms, even when the target is named after the sequence (Potter, Wyble, Hagmann, & McCourt, 2014). Such rapid detection suggests that feedforward processing alone enabled detection without recurrent cortical feedback. There is debate about whether coarse, global, low spatial frequencies (LSFs) provide predictive information to high cortical levels through the rapid magnocellular (M) projection of the visual path, enabling top-down prediction of possible object identities...
2016: Visual Cognition
Jianhong Shen, Thomas J Palmeri
Recent years have seen growing interest in understanding, characterizing, and explaining individual differences in visual cognition. We focus here on individual differences in visual categorization. Categorization is the fundamental visual ability to group different objects together as the same kind of thing. Research on visual categorization and category learning has been significantly informed by computational modeling, so our review will focus both on how formal models of visual categorization have captured individual differences and on how individual differences have informed the development of formal models...
2016: Visual Cognition
Brett C Bays, Nicholas B Turk-Browne, Aaron R Seitz
Statistical learning refers to the extraction of probabilistic relationships between stimuli and is increasingly used as a method to understand learning processes. However, numerous cognitive processes are sensitive to the statistical relationships between stimuli, and any one measure of learning may conflate these processes; to date, little research has focused on differentiating them. To understand how multiple processes underlie statistical learning, here we compared, within the same study, operational measures of learning from different tasks that may be differentially sensitive to these processes...
2016: Visual Cognition
Alexander J Kirkham, Steven P Tipper
In spatial compatibility tasks, even when the spatial location of a stimulus is irrelevant, it nevertheless interferes when a response is required in a different spatial location. For example, a response with a left key-press is slowed when the stimulus is presented on the right as compared to the left side of a computer screen. However, in some conditions this interference effect is not detected in reaction time (RT) measures. It is typically assumed that the lack of an effect means the irrelevant spatial code was not analysed or that the information rapidly decayed before response...
September 14, 2015: Visual Cognition
Henryk Bukowski, Jari K Hietanen, Dana Samson
Two paradigms have shown that people automatically compute what or where another person is looking. In the visual perspective-taking paradigm, participants judge how many objects they see, whereas in the gaze cueing paradigm, participants identify a target. Unlike in the former task, in the latter task the influence of what or where the other person is looking is observed only when the other person is presented alone before the task-relevant objects. We show that this discrepancy across the two paradigms is not due to differences in visual settings (Experiment 1) or available time to extract the directional information (Experiment 2), but that it is caused by how attention is deployed in response to task instructions (Experiment 3)...
September 14, 2015: Visual Cognition
Andrew K Mackenzie, Julie M Harris
Differences in eye movement patterns are often found when comparing passive viewing paradigms to actively engaging in everyday tasks. Arguably, investigations into visuomotor control should therefore be most useful when conducted in settings that incorporate the intrinsic link between vision and action. We present a study that compares oculomotor behaviour and hazard reaction times across a simulated driving task and a comparable, but passive, video-based hazard perception task. We found that participants scanned the road less during the active driving task and fixated closer to the front of the vehicle...
July 3, 2015: Visual Cognition
Francesco Marini, Berry van den Berg, Marty G Woldorff
When attending to impending visual stimuli, cognitive systems prepare to identify relevant information while ignoring irrelevant, potentially distracting input. Recent work (Marini et al., 2013) showed that a supramodal distracter-filtering mechanism is invoked in blocked designs involving expectation of possible distracter stimuli, although this entails a cost (distraction-filtering cost) on speeded performance when distracters are expected but not presented. Here we used an arrow-flanker task to study whether an analogous cost, potentially reflecting the recruitment of a specific distraction-filtering mechanism, occurs dynamically when potential distraction is cued trial-to-trial (cued distracter-expectation cost)...
February 1, 2015: Visual Cognition
Amandine Lassalle, Roxane J Itier
Recent gaze cueing studies using dynamic cue sequences have reported increased attention orienting by gaze with faces expressing fear, surprise or anger. Here, we investigated whether the type of dynamic cue sequence used impacted the magnitude of this effect. When the emotion was expressed before or concurrently with gaze shift, no modulation of gaze-oriented attention by emotion was seen. In contrast, when the face cue averted gaze before expressing an emotion (as if reacting to the object after first localizing it), the gaze orienting effect was clearly increased for fearful, surprised and angry faces compared to neutral faces...
January 1, 2015: Visual Cognition
Tashina Graves, Howard E Egeth
When participants search for a shape (e.g., a circle) among a set of homogeneous shapes (e.g., triangles), they are subject to distraction by color singletons that are more salient than the target. However, when participants search for a shape among heterogeneous shapes, the presence of a non-target color singleton does not slow responses to the target. Attempts have been made to explain these results from both bottom-up and top-down perspectives. What both accounts have in common is that they do not predict the occurrence of attentional capture in typical feature search displays...
2015: Visual Cognition
Daniel A Gajewski, Courtney P Wallin, John W Philbeck
Angular direction is a source of information about the distance to floor-level objects that can be extracted from brief glimpses (near one's threshold for detection). Age and set size are two factors known to impact the viewing time needed to directionally localize an object, and these were posited to similarly govern the extraction of distance. The question here was whether viewing durations sufficient to support object detection (controlled for age and set size) would also be sufficient to support well-constrained judgments of distance...
2015: Visual Cognition
David A Ross, Isabel Gauthier
Holistic processing is a hallmark of face processing. There is evidence that holistic processing is strongest for faces at identification distance, 2-10 meters from the observer. However, this evidence is based on tasks that have been little used in the literature and that are indirect measures of holistic processing. We use the composite task, a well-validated and frequently used paradigm, to measure the effect of viewing distance on holistic processing. In line with previous work, we find a congruency x alignment effect that is stronger for faces that are close (2 m equivalent distance) than for faces that are further away (24 m equivalent distance)...
2015: Visual Cognition
Luiz Pessoa
Visual processing is influenced by stimulus-driven and goal-driven factors. Recent interest has centered on understanding how reward might provide additional contributions to visual perception and unraveling the underlying neural mechanisms. In this review, I suggest that the impact of reward on vision is not unitary and depends on the type of experimental manipulation. With this in mind, I outline a possible classification of the main paradigms employed in the literature and discuss potential brain processes that operate during some of the experimental manipulations described...
2015: Visual Cognition
Patryk A Laurent, Michelle G Hall, Brian A Anderson, Steven Yantis
Visual attention has long been known to be drawn to stimuli that are physically salient or congruent with task-specific goals. Several recent studies have shown that attention is also captured by stimuli that are neither salient nor task-relevant, but that are rendered in a color that has previously been associated with reward. We investigated whether another feature dimension-orientation-can be associated with reward via learning and thereby elicit value-driven attentional capture. In a training phase, participants received a monetary reward for identifying the color of Gabor patches exhibiting one of two target orientations...
January 1, 2015: Visual Cognition
Brian A Anderson
When stimuli are associated with reward outcome, their visual features acquire high attentional priority such that stimuli possessing those features involuntarily capture attention. Whether a particular feature is predictive of reward, however, will vary with a number of contextual factors. One such factor is spatial location: for example, red berries are likely to be found in low-lying bushes, whereas yellow bananas are likely to be found on treetops. In the present study, I explore whether the attentional priority afforded to reward-associated features is modulated by such location-based contingencies...
2015: Visual Cognition
Dongho Kim, Aaron R Seitz, Takeo Watanabe
Visual perceptual learning (VPL) can occur as a result of repetitive stimulus-reward pairing in the absence of any task. This suggests that rules that guide conditioning, such as stimulus-reward contingency (e.g., that a stimulus predicts the likelihood of reward), may also guide the formation of VPL. To address this question, we trained subjects with an operant conditioning task in which there were contingencies between the response to one of three orientations and the presence of reward. Results showed that VPL occurred only for positive contingencies, not for neutral or negative contingencies...
January 2015: Visual Cognition
A Caglar Tas, Cathleen M Moore, Andrew Hollingworth
No abstract text is available yet for this article.
September 1, 2014: Visual Cognition
James R Miller, Mark W Becker, Taosheng Liu
We investigated the nature of the bandwidth limit in the consolidation of visual information into visual short-term memory. In the first two experiments, we examined whether previous results showing differential consolidation bandwidth for color and orientation resulted from methodological differences by testing the consolidation of color information with methods used in prior orientation experiments. We briefly presented two color patches with masks, either sequentially or simultaneously, followed by a location cue indicating the target...
August 1, 2014: Visual Cognition
Ashleigh M Maxcey, Geoffrey F Woodman
Retrieval-induced forgetting is a phenomenon in which groups of stimuli are initially learned, but a subset of those stimuli is subsequently strengthened via retrieval practice, causing forgetting of the other initially learned items. This phenomenon has almost exclusively been studied using linguistic stimuli. The goal of the present study was to determine whether our memory for simultaneously learned visual stimuli is subject to a similar type of memory impairment. Participants were shown real-world objects, then practiced recognizing a subset of these remembered objects, and finally their memory was tested for all learned objects...
July 2014: Visual Cognition