Read by QxMD


(no author information available yet)
[This corrects the article DOI: 10.1177/2041669517707769.]
May 2017: I-Perception
Richard Wiseman, Adrian M Owen
Past research shows that in drawn or photographic portraits, people are significantly more likely to be posed facing to their right than their left. We examined whether the same type of bias exists among sagittal images of the human brain. An exhaustive search of Google Images using the term 'brain sagittal view' yielded 425 images of a left- or right-facing brain. The direction of each image was coded and revealed that 80% of the brains were right-facing. This bias was present in images that did not contain any representation of a human head...
May 2017: I-Perception
Stella T T Cheng, Gary Y H Lam, Carol K S To
Enhanced low-level pitch perception has been universally reported in autism spectrum disorders (ASD). This study examined whether tone language speakers with ASD exhibit this advantage. The pitch perception skill of 20 Cantonese-speaking adults with ASD was compared with that of 20 neurotypical individuals. Participants discriminated pairs of real-syllable, pseudo-syllable (syllables that do not conform to the phonotactic rules or are accidental gaps), and non-speech (syllables with attenuated high-frequency segmental content) stimuli contrasting pitch levels...
May 2017: I-Perception
V S Ramachandran, Zeve Marcus
Synesthetes, who see printed black letters and numbers as being colored, are thought to have enhanced cross-activation between brain modules for color and form. Since the McCollough effect also results from oriented contours (i.e., form) evoking specific colors, we conjectured that synesthetes may experience an enhanced McCollough effect, and find that this is indeed true.
May 2017: I-Perception
Annabelle S Redfern, Christopher P Benton
We recognise familiar faces irrespective of their expression. This ability, crucial for social interactions, is a fundamental feature of face perception. We ask whether this constancy of facial identity may be compromised by changes in expression. This, in turn, addresses the issue of whether facial identity and expression are processed separately or interact. Using an identification task, participants learned the identities of two actors from naturalistic (so-called ambient) face images taken from movies. Training was either with neutral images or their expressive counterparts, perceived expressiveness having been determined experimentally...
May 2017: I-Perception
Daniel R Coates, Johan Wagemans, Bilge Sayim
Peripheral vision is strongly limited by crowding, the deleterious influence of neighboring stimuli on target perception. Many quantitative aspects of this phenomenon have been characterized, but the specific nature of the perceptual degradation remains elusive. We utilized a drawing technique to probe the phenomenology of peripheral vision, using the Rey-Osterrieth Complex Figure, a standard neuropsychological clinical instrument. The figure was presented at 12° or 6° in the right visual field, with eye tracking to ensure that the figure was only presented when observers maintained stable fixation...
May 2017: I-Perception
George Mather, Rob Lee
In January 2017, a large wind turbine blade was installed temporarily in a city square as a public artwork. At first sight, media photographs of the installation appeared to be fakes: the blade looked as though it could not really be part of the scene. Close inspection of the object shows that its paradoxical visual appearance can be attributed to unconscious assumptions about object shape and light source direction.
May 2017: I-Perception
Diederick C Niehorster, Li Li
How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., flow parsing gain) in various scenarios to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion...
May 2017: I-Perception
Diederick C Niehorster, Li Li, Markus Lappe
The advent of inexpensive consumer virtual reality equipment enables many more researchers to study perception with naturally moving observers. One such system, the HTC Vive, offers a large field-of-view, high-resolution head mounted display together with a room-scale tracking system for less than a thousand U.S. dollars. If the position and orientation tracking of this system is of sufficient accuracy and precision, it could be suitable for much research that is currently done with far more expensive systems...
May 2017: I-Perception
Takahiro Kawabe
In a cartoon, we often receive an animacy impression from a dynamic nonanimate object, such as a sponge or a flour sack, which does not have an animal-like shape. We hypothesize that the animacy impression of a nonanimal object could stem from dynamic patterns that are possibly fundamental for biological motion perception. Here we show that observers recognize the animacy of human jump actions from the combination of deformation and translation. We extracted vertical motion vectors from the uppermost and lowermost points in point-light jumper stimuli and assigned the vectors to a uniform rectangle...
May 2017: I-Perception
Sae Kaneko, Stuart Anstis
In simultaneous contrast of spatial frequency (SF), a test grating surrounded by a coarser inducing grating appears finer. We combined this effect with another visual illusion: the fact that flickering the inducing grating raises its apparent SF. We found that the inducer's apparent, not physical, spatial frequency drove the simultaneous contrast that it induced into a test grating. Thus, when the inducer was made to flicker, its SF appeared higher and, consequently, the test's SF appeared lower than before...
May 2017: I-Perception
Linda Bowns, William H A Beaudot
We describe a mobile app that measures early cortical visual processing suitable for use in clinics. The app is called Component Extraction and Motion Integration Test (CEMIT). Observers are asked to respond to the direction of translating plaids that move in one of two very different directions. The plaids have been selected so that the plaid components move in one of the directions and the plaid pattern moves in the other direction. In addition to correctly responding to the pattern motion, observers demonstrate their ability to correctly extract the movement (and therefore the orientation) of the underlying components at specific spatial frequencies...
May 2017: I-Perception
Hiroshi Ashida, Alan Ho, Akiyoshi Kitaoka, Stuart Anstis
The perceived speed of a ring of equally spaced dots moving around a circular path appears faster as the number of dots increases (Ho & Anstis, 2013, Best Illusion of the Year contest). We measured this "spinner" effect with radial sinusoidal gratings, using a 2AFC procedure in which participants selected the faster of two briefly presented gratings of different spatial frequencies (SFs) rotating at various angular speeds. Compared with the reference stimulus of 4 c/rev (0.64 c/rad), participants consistently overestimated the angular speed for test stimuli of higher radial SFs but underestimated it for test stimuli of lower radial SFs...
May 2017: I-Perception
Behrang Keshavarz, Martina Speck, Bruce Haycock, Stefan Berti
Illusory self-motion (vection) can be generated by visual stimulation. The purpose of the present study was to compare behavioral vection measures including intensity ratings, duration, and onset time across different visual display types. Participants were exposed to a pattern of alternating black-and-white horizontal or vertical bars that moved in a vertical or horizontal direction, respectively. Stimuli were presented on four types of displays in randomized order: (a) large field of view dome projection, (b) combination of three computer screens, (c) single computer screen, (d) large field of view flat projection screen...
May 2017: I-Perception
Kenri Kodaka, Ayaka Kanazawa
The paradigm of the rubber hand illusion was applied to a shadow to determine whether the body-shadow is a good candidate for illusory ownership as part of our body. Three kinds of shadows, cast by a physical hand, a hand-shaped cloth, and a rectangular cloth, were tested for this purpose. The questionnaire results showed that both anatomical similarity and visuo-proprioceptive correlation were effective in enhancing illusory ownership of the shadow. According to the proprioceptive drift measurement, whether the shadow purely originated from the physical body was a critical factor in yielding the significantly positive drift...
May 2017: I-Perception
Jan Koenderink, Andrea van Doorn
The "planispheric optic array" is a full-horizon Mercator projection of the optic array. Such pictures of the environment are coming into common use with the availability of cheap full-view cameras of reasonable quality. This raises the question of whether the public will actually profit from such pictorial information in terms of an understanding of the spatial layout of the depicted scene. Test images include four persons located at the corners of a square centered at the camera. The persons point at each other in various combinations...
May 2017: I-Perception
Jose A Ordoñana, Ana Laucirica
This work studies the way graduate music students segment a contemporary musical work, Itinerant, and seeks to understand the influence of musical features on segmentation. It tests the theory that saliences contribute to organising the musical surface. The 42 students listened to the work several times and, in real time, were asked to indicate the places on the score where they perceived structural boundaries. The work is characterised by its linearity, which could hinder identification of saliences and, thereby, the establishment of structural boundaries...
May 2017: I-Perception
Deborah Apthorp, Scott Griffiths, David Alais, John Cass
We examined the recently discovered phenomenon of Adaptation-Induced Blindness (AIB), in which highly visible gratings with gradual onset profiles become invisible after exposure to a rapidly flickering grating, even at very high contrasts. Using very similar stimuli to those in the original AIB experiment, we replicated the original effect across multiple contrast levels, with observers at chance in detecting the gradual onset stimuli at all contrasts. Then, using full-contrast target stimuli with either abrupt or gradual onsets, we tested both the orientation tuning and interocular transfer of AIB...
March 2017: I-Perception
Sethu Karthikeyan, Vijayachandra Ramachandra
The study examined third-party listeners' ability to detect the Hellos spoken to prevalidated happy, neutral, and sad facial expressions. The average detection accuracies from the happy and sad (HS), happy and neutral (HN), and sad and neutral (SN) listening tests followed the average vocal pitch differences between the two sets of Hellos in each of the tests; HS and HN detection accuracies were above chance reflecting the significant pitch differences between the respective Hellos. The SN detection accuracy was at chance reflecting the lack of pitch difference between sad and neutral Hellos...
March 2017: I-Perception
Matthew J Stainer, Kenneth C Scott-Brown, Benjamin W Tatler
Multiplex viewing of static or dynamic scenes is an increasingly common feature of screen media. Most existing multiplex experiments have examined detection across increasing scene numbers, but no systematic evaluation of the factors that might make multiplexes difficult to process currently exists. Across five experiments we provide such an evaluation. Experiment 1 characterises the difficulty in change detection when the number of scenes is increased. Experiment 2 reveals that the total amount of visual information accounts for differences in change detection times, regardless of whether this information is presented across multiple scenes or contained in one scene...
March 2017: I-Perception

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"
(heart OR cardiac OR cardio*) AND arrest -"American Heart Association"
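As a worked illustration, the rules above can also be composed programmatically. The helper names below (group, exclude, phrase, stem) are purely illustrative and not part of any QxMD API; this is a minimal sketch of building a query string that follows the syntax described in these tips.

```python
# Minimal sketch of composing a search query with the Boolean syntax
# described above. Helper names are illustrative only, not a QxMD API.

def group(*terms, op="OR"):
    """Join terms with a Boolean operator and wrap them in parentheses."""
    return "(" + f" {op} ".join(terms) + ")"

def exclude(term):
    """Prefix a term with the 'minus' sign to exclude it from results."""
    return "-" + term

def phrase(words):
    """Quote a phrase so it is matched exactly."""
    return f'"{words}"'

def stem(word):
    """Append an asterisk to match word stems (Neuro* -> Neurology, ...)."""
    return word + "*"

# Rebuild the combined example from the tips above:
query = " ".join([
    group("heart", "cardiac", stem("cardio")),
    "AND",
    "arrest",
    exclude(phrase("American Heart Association")),
])
print(query)  # (heart OR cardiac OR cardio*) AND arrest -"American Heart Association"
```

Each helper mirrors one tip: `group` covers parentheses with AND/OR, `exclude` the minus sign, `stem` the trailing asterisk, and `phrase` the exact-match quotes.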