Listening and spoken language

Zahra Polat, Erdoğan Bulut, Ahmet Ataş
BACKGROUND: Spoken word recognition and speech perception tests in quiet are used routinely to assess the benefit that child and adult cochlear implant (CI) users receive from their devices. CI users generally demonstrate high performance on these test materials, as they are able to achieve a high level of speech perception in quiet situations. Although these test materials provide valuable information about CI users' performance in optimal listening conditions, they do not give realistic information about performance in adverse listening conditions, which are the norm in the everyday environment...
September 2016: Balkan Medical Journal
Rebecca A Gilbert, Graham J Hitch, Tom Hartley
The capacity of serially ordered auditory-verbal short-term memory (AVSTM) is sensitive to the timing of the material to be stored, and both temporal processing and AVSTM capacity are implicated in the development of language. We developed a novel "rehearsal-probe" task to investigate the relationship between temporal precision and the capacity to remember serial order. Participants listened to a sub-span sequence of spoken digits and silently rehearsed the items and their timing during an unfilled retention interval...
October 19, 2016: Quarterly Journal of Experimental Psychology (QJEP)
Toni C Becker, Karla K McGregor
BACKGROUND: Increasing numbers of students with developmental language impairment (LI) are pursuing post-secondary education. OBJECTIVE: To determine whether college students with LI find spoken lectures to be a challenging learning context. METHOD: Study participants were college students, 34 with LI and 34 with normal language development (ND). Each took a baseline test of general topic knowledge, watched and listened to a 30-minute lecture, and took a posttest on specific information from the lecture...
September 29, 2016: Journal of Communication Disorders
Natasha Warner, Anne Cutler
BACKGROUND/AIMS: Evidence from spoken word recognition suggests that for English listeners, distinguishing full versus reduced vowels is important, but discerning stress differences involving the same full vowel (as in mu- from music or museum) is not. In Dutch, in contrast, the latter distinction is important. This difference arises from the relative frequency of unstressed full vowels in the two vocabularies. The goal of this paper is to determine how this difference in the lexicon influences the perception of stressed versus unstressed vowels...
October 7, 2016: Phonetica
Christine Yoshinaga-Itano, Mallene Wiggin
Hearing is essential for the development of speech, spoken language, and listening skills. In the past, children with hearing loss often went undiagnosed until they were 2.5 or 3 years of age. The auditory deprivation during this critical period of development significantly impacted long-term listening and spoken language outcomes. Due to the advent of universal newborn hearing screening, the average age of diagnosis has dropped to the first few months of life, which sets the stage for outcomes in which children's speech, spoken language, and auditory skills test in the normal range...
November 2016: Seminars in Speech and Language
Wan-Yun Yu, Jie-Li Tsai
Previous psycholinguistic studies have demonstrated that people tend to direct fixations toward the visual object to which spoken input refers during language comprehension. However, it is still unclear how the visual scene, especially the semantic consistency between object and background, affects the word-object mapping process during comprehension. Two visual world paradigm experiments were conducted to investigate how scene consistency dynamically influenced language-driven eye movements in a speech comprehension task and a scene comprehension task...
September 15, 2016: Acta Psychologica
Evi Jacobs, Margreet C Langereis, Johan H M Frijns, Rolien H Free, Andre Goedegebure, Cas Smits, Robert J Stokroos, Saskia A M Ariens-Meijer, Emmanuel A M Mylanus, Anneke M Vermeulen
BACKGROUND: Impaired auditory speech perception in deaf children with hearing aids severely compromised their verbal intelligence. The availability of unilateral cochlear implantation (CI) has enabled auditory speech perception and spoken vocabulary to reach near age-appropriate levels. This holds especially for children in spoken language environments. However, speech perception in complex listening situations and the acquisition of complex verbal skills remain difficult...
November 2016: Research in Developmental Disabilities
Alessandro Presacco, Jonathan Z Simon, Samira Anderson
The ability to understand speech is significantly degraded by aging, particularly in noisy environments. One way that older adults cope with this hearing difficulty is through the use of contextual cues. Several behavioral studies have shown that older adults are better at following a conversation when the target speech signal has high contextual content or when the background distractor is not meaningful. Specifically, older adults gain significant benefit in focusing on and understanding speech if the background is spoken by a talker in a language that is not comprehensible to them (i...
September 7, 2016: Journal of Neurophysiology
Jing Jiang, Kamila Borowiak, Luke Tudge, Carolin Otto, Katharina von Kriegstein
Eye contact occurs frequently and voluntarily during face-to-face verbal communication. However, the neural mechanisms underlying eye contact when it is accompanied by spoken language remain unexplored to date. Here we used a novel approach, fixation-based event-related (FIBER) functional magnetic resonance imaging (fMRI), to simulate the listener making eye contact with a speaker during verbal communication. Participants' eye movements and fMRI data were recorded simultaneously while they were freely viewing a pre-recorded speaker talking...
August 30, 2016: Social Cognitive and Affective Neuroscience
Diane Corcoran Nielsen, Barbara Luetke, Meigan McLean, Deborah Stryker
Research suggests that English-language proficiency is critical if students who are deaf or hard of hearing (D/HH) are to read as their hearing peers. One explanation for the traditionally reported reading achievement plateau when students are D/HH is the inability to hear insalient English morphology. Signing Exact English can provide visual access to these features. The authors investigated the English morphological and syntactic abilities and reading achievement of elementary and middle school students at a school using simultaneously spoken and signed Standard American English facilitated by intentional listening, speech, and language strategies...
2016: American Annals of the Deaf
Christof Karmonik, Anthony Brandt, Jeff Anderson, Forrest Brooks, Julie Lytle, Elliott Silverman, Jeff T Frazier
Listening to familiar music has recently been reported to be beneficial during recovery from stroke. A better understanding of changes in functional connectivity and information flow is warranted in order to further optimize and target this approach through music therapy. Twelve healthy volunteers listened to seven different auditory samples during an fMRI scanning session: a musical piece chosen by the volunteer that evokes a strong emotional response (referred to as: "self-selected emotional"), two unfamiliar music pieces (Invention #1 by J...
July 27, 2016: Brain Connectivity
Alexandra P Key, Dorita Jones, Sarika U Peters
Auditory processing is an important component of cognitive development, and names are among the most frequently occurring receptive language stimuli. Although own name processing has been examined in infants and adults, surprisingly little data exist on responses to own name in children. The present ERP study examined spoken name processing in 32 children (M = 7.85 years) using a passive listening paradigm. Our results demonstrated that children differentiate own and close other's names from unknown names, as reflected by the enhanced parietal P300 response...
September 2016: Biological Psychology
Christina Y Tzeng, Jessica E D Alexander, Sabrina K Sidaras, Lynne C Nygaard
Foreign-accented speech contains multiple sources of variation that listeners learn to accommodate. Extending previous findings showing that exposure to high-variation training facilitates perceptual learning of accented speech, the current study examines to what extent the structure of training materials affects learning. During training, native adult speakers of American English transcribed sentences spoken in English by native Spanish-speaking adults. In Experiment 1, training stimuli were blocked by speaker, sentence, or randomized with respect to speaker and sentence (Variable training)...
July 11, 2016: Journal of Experimental Psychology. Human Perception and Performance
Anthony Shook, Viorica Marian
When listening to speech in a second language, bilinguals' perception of acoustic-phonetic properties is often influenced by the features that are important in the native language of the bilingual. Furthermore, changes in the perception of segmental contrasts due to L1 experience can influence L2 lexical access during comprehension. The present study investigates whether the effect of L1 experience on L2 processing seen at the segmental level extends to suprasegmental processing. In an eye-tracking task, Mandarin-English bilinguals heard an auditorily presented English word and selected which of two visually presented Chinese characters represented the correct Mandarin translation...
June 2016: Journal of the Acoustical Society of America
Ulrike Lemke, Jana Besser
Listening effort has been recognized as an important dimension of everyday listening, especially with regard to the comprehension of spoken language. At constant levels of comprehension performance, the level of effort exerted and perceived during listening can differ considerably across listeners and situations. In this article, listening effort is used as an umbrella term for two different types of effort that can arise during listening. One of these types is processing effort, which is used to denote the utilization of "extra" mental processing resources in listening conditions that are adverse for an individual...
July 2016: Ear and Hearing
Arthur Wingfield
The goal of this article is to trace the evolution of models of working memory and cognitive resources from the early 20th century to today. Linear flow models of information processing common in the 1960s and 1970s centered on the transfer of verbal information from a limited-capacity short-term memory store to long-term memory through rehearsal. Current conceptions see working memory as a dynamic system that includes both maintaining and manipulating information through a series of interactive components that include executive control and attentional resources...
July 2016: Ear and Hearing
Thomas Lunner, Mary Rudner, Tove Rosenbom, Jessica Ågren, Elaine Hoi Ning Ng
In adaptive Speech Reception Threshold (SRT) tests used in the audiological clinic, speech is presented at signal-to-noise ratios (SNRs) that are lower than those generally encountered in real-life communication situations. At higher, ecologically valid SNRs, however, SRTs are insensitive to changes in hearing aid signal processing that may be of benefit to listeners who are hard of hearing. Previous studies conducted in Swedish using the Sentence-final Word Identification and Recall test (SWIR) have indicated that at such SNRs, the ability to recall spoken words may be a more informative measure...
July 2016: Ear and Hearing
J Birulés-Muntané, S Soto-Faraco
Watching English-spoken films with subtitles is becoming increasingly popular throughout the world. One reason for this trend is the assumption that perceptual learning of the sounds of a foreign language, English, will improve perception skills in non-English speakers. Yet, solid proof for this is scarce. In order to test the potential learning effects derived from watching subtitled media, a group of intermediate Spanish students of English as a foreign language watched a 1-hour episode of a TV drama in its original English version, with English, Spanish, or no subtitles overlaid...
2016: PloS One
Joey L Weidema, M P Roncaglia-Denissen, Henkjan Honing
Whether pitch in language and music is governed by domain-specific or domain-general cognitive mechanisms is contentiously debated. The aim of the present study was to investigate whether mechanisms governing pitch contour perception operate differently when pitch information is interpreted as either speech or music. By modulating listening mode, this study aspired to demonstrate that pitch contour perception relies on domain-specific cognitive mechanisms, which are regulated by top-down influences from language and music...
2016: Frontiers in Psychology
Jonathan E Peelle, Arthur Wingfield
During hearing, acoustic signals travel up the ascending auditory pathway from the cochlea to auditory cortex; efferent connections provide descending feedback. In human listeners, although auditory and cognitive processing have sometimes been viewed as separate domains, a growing body of work suggests they are intimately coupled. Here, we review the effects of hearing loss on neural systems supporting spoken language comprehension, beginning with age-related physiological decline. We suggest that listeners recruit domain general executive systems to maintain successful communication when the auditory signal is degraded, but that this compensatory processing has behavioral consequences: even relatively mild levels of hearing loss can lead to cascading cognitive effects that impact perception, comprehension, and memory, leading to increased listening effort during speech comprehension...
July 2016: Trends in Neurosciences

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"

Operators can be combined to build more specific queries, as in the example and the short sketch below

(heart OR cardiac OR cardio*) AND arrest -"American Heart Association"
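
For illustration only, here is a minimal Python sketch of how a query string could be assembled from these operators. The build_query helper is hypothetical and is not part of the QxMD search interface; it simply combines terms in the forms shown in the tips above.

def build_query(all_of=(), any_of=(), exclude=(), phrases=()):
    """Compose a query string from the operator types listed in the tips above."""
    parts = []
    if any_of:
        # Parentheses group the OR-terms so they bind before AND.
        parts.append("(" + " OR ".join(any_of) + ")")
    parts.extend(all_of)                          # remaining terms are joined with AND
    parts.extend('"' + p + '"' for p in phrases)  # quotes force an exact phrase
    query = " AND ".join(parts)
    for term in exclude:
        # The minus sign excludes a term; quote it when it is a multi-word phrase.
        query += " -" + ('"' + term + '"' if " " in term else term)
    return query

# Reproduces the combined example above; the asterisk in "cardio*" matches word stems.
print(build_query(any_of=["heart", "cardiac", "cardio*"],
                  all_of=["arrest"],
                  exclude=["American Heart Association"]))
# -> (heart OR cardiac OR cardio*) AND arrest -"American Heart Association"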