Infant speech perception

https://www.readbyqxmd.com/read/28467888/auditory-object-perception-a-neurobiological-model-and-prospective-review
#1
Julie A Brefczynski-Lewis, James W Lewis
Interaction with the world is a multisensory experience, but most of what is known about the neural correlates of perception comes from studying vision. Auditory inputs enter the cortex with their own set of unique qualities and support oral communication, speech, music, and the understanding of the emotional and intentional states of others, all of which are central to the human experience. To better understand how the auditory system develops, how it recovers after injury, and how its functions may have shifted over the course of hominin evolution, advances are needed in models of how the human brain is organized to process real-world natural sounds and "auditory objects"...
April 30, 2017: Neuropsychologia
https://www.readbyqxmd.com/read/28443053/perceptual-improvement-of-lexical-tones-in-infants-effects-of-tone-language-experience
#2
Feng-Ming Tsao
To learn words in a tonal language, learners must develop better abilities for perceiving not only consonants and vowels but also lexical tones. The divergent trend of enhanced sensitivity to native phonetic contrasts and reduced sensitivity to non-native contrasts is theoretically essential for evaluating the effects of listening to an ambient language on speech perception development. The loss of sensitivity in discriminating lexical tones among non-tonal-language-learning infants was apparent between 6 and 12 months of age, but only a few studies have examined how discrimination of native lexical tones develops in infancy...
2017: Frontiers in Psychology
https://www.readbyqxmd.com/read/28421015/music-and-its-inductive-power-a-psychobiological-and-evolutionary-approach-to-musical-emotions
#3
Mark Reybrouck, Tuomas Eerola
The aim of this contribution is to broaden the concept of musical meaning from an abstract and emotionally neutral cognitive representation to an emotion-integrating description that is related to the evolutionary approach to music. Starting from the dispositional machinery for dealing with music as a temporal and sounding phenomenon, musical emotions are considered as adaptive responses to be aroused in human beings as the product of neural structures that are specialized for their processing. A theoretical and empirical background is provided in order to bring together the findings of music and emotion studies and the evolutionary approach to musical meaning...
2017: Frontiers in Psychology
https://www.readbyqxmd.com/read/28395548/incorporating-ceiling-effects-during-analysis-of-speech-perception-data-from-a-paediatric-cochlear-implant-cohort
#4
Hanneke Bruijnzeel, Guido Cattani, Inge Stegeman, Vedat Topsakal, Wilko Grolman
OBJECTIVE: To compare speech perception between children implanted with a cochlear implant at different ages. DESIGN: We evaluated speech perception by comparing consonant-vowel-consonant (auditory) (CVC(A)) scores at five-year follow-up in children implanted between 1997 and 2010. The proportion of children from each age-at-implantation group reaching the 95% CI of CVC(A) ceiling scores (>95%) was calculated to identify speech perception differences masked by ceiling effects...
April 10, 2017: International Journal of Audiology
https://www.readbyqxmd.com/read/28367052/articulating-what-infants-attune-to-in-native-speech
#5
Catherine T Best, Louis M Goldstein, Hosung Nam, Michael D Tyler
To become language users, infants must embrace the integrality of speech perception and production. That they do so, and quite rapidly, is implied by the native-language attunement they achieve in each domain by 6-12 months. Yet research has most often addressed one or the other domain, rarely how they interrelate. Moreover, mainstream assumptions that perception relies on acoustic patterns whereas production involves motor patterns entail that the infant would have to translate incommensurable information to grasp the perception-production relationship...
October 1, 2016: Ecological Psychology: a Publication of the International Society for Ecological Psychology
https://www.readbyqxmd.com/read/28338496/infants-and-adults-use-of-temporal-cues-in-consonant-discrimination
#6
Laurianne Cabrera, Lynne Werner
OBJECTIVES: Adults can use slow temporal envelope cues, or amplitude modulation (AM), to identify speech sounds in quiet. Faster AM cues and the temporal fine structure, or frequency modulation (FM), play a more important role in noise. This study assessed whether fast and slow temporal modulation cues play a similar role in infants' speech perception by comparing the ability of normal-hearing 3-month-olds and adults to use slow temporal envelope cues in discriminating consonant contrasts...
March 23, 2017: Ear and Hearing
https://www.readbyqxmd.com/read/28337157/pitch-perception-in-the-first-year-of-life-a-comparison-of-lexical-tones-and-musical-pitch
#7
Ao Chen, Catherine J Stevens, René Kager
Pitch variation is pervasive in speech, regardless of the language to which infants are exposed. Perception of lexical tone is influenced by general sensitivity to pitch. We examined whether lexical tone perception develops in parallel with the perception of pitch in another cognitive domain, namely music. Using a visual fixation paradigm, 101 Dutch infants aged 4 and 12 months were tested on their discrimination of Chinese rising and dipping lexical tones as well as comparable three-note musical pitch contours...
2017: Frontiers in Psychology
https://www.readbyqxmd.com/read/28335558/modeling-the-development-of-audiovisual-cue-integration-in-speech-perception
#8
Laura M Getz, Elke R Nordeen, Sarah C Vrabic, Joseph C Toscano
Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories...
March 21, 2017: Brain Sciences
https://www.readbyqxmd.com/read/28291832/electrophysiological-and-hemodynamic-mismatch-responses-in-rats-listening-to-human-speech-syllables
#9
Mahdi Mahmoudzadeh, Ghislaine Dehaene-Lambertz, Fabrice Wallois
Speech is a complex auditory stimulus that is processed on several time scales. Whereas consonant discrimination requires resolving rapid acoustic events, voice perception relies on slower cues. Humans, from preterm ages onward, are particularly efficient at encoding temporal cues. To compare the capacities of preterm infants with those observed in other mammals, we tested anesthetized adult rats using exactly the same paradigm as that used with preterm neonates. We simultaneously recorded neural (using ECoG) and hemodynamic responses (using fNIRS) to series of human speech syllables and investigated the brain response to a change of consonant (ba vs...
2017: PloS One
https://www.readbyqxmd.com/read/28124795/the-role-of-auditory-and-visual-speech-in-word-learning-at-18%C3%A2-months-and-in-adulthood
#10
Mélanie Havy, Afra Foroud, Laurel Fais, Janet F Werker
Visual information influences speech perception in both infants and adults. It is still unknown whether lexical representations are multisensory. To address this question, we exposed 18-month-old infants (n = 32) and adults (n = 32) to new word-object pairings: Participants either heard the acoustic form of the words or saw the talking face in silence. They were then tested on recognition in the same or the other modality. Both 18-month-old infants and adults learned the lexical mappings when the words were presented auditorily and recognized the mapping at test when the word was presented in either modality, but only adults learned new words in a visual-only presentation...
January 26, 2017: Child Development
https://www.readbyqxmd.com/read/28087242/brains-for-birds-and-babies-neural-parallels-between-birdsong-and-speech-acquisition
#11
REVIEW
Jonathan Prather, Kazuo Okanoya, Johan J Bolhuis
Language as a computational cognitive mechanism appears to be unique to the human species. However, there are remarkable behavioral similarities between song learning in songbirds and speech acquisition in human infants that are absent in non-human primates. Here we review important neural parallels between birdsong and speech. In both cases there are separate but continually interacting neural networks that underlie vocal production, sensorimotor learning, and auditory perception and memory. As in the case of human speech, neural activity related to birdsong learning is lateralized, and mirror neurons linking perception and performance may contribute to sensorimotor learning...
January 10, 2017: Neuroscience and Biobehavioral Reviews
https://www.readbyqxmd.com/read/28060872/audio-visual-perception-of-gender-by-infants-emerges-earlier-for-adult-directed-speech
#12
Anne-Raphaëlle Richoz, Paul C Quinn, Anne Hillairet de Boisferon, Carole Berger, Hélène Loevenbruck, David J Lewkowicz, Kang Lee, Marjorie Dole, Roberto Caldara, Olivier Pascalis
Early multisensory perceptual experiences shape the abilities of infants to perform socially-relevant visual categorization, such as the extraction of gender, age, and emotion from faces. Here, we investigated whether multisensory perception of gender is influenced by infant-directed (IDS) or adult-directed (ADS) speech. Six-, 9-, and 12-month-old infants saw side-by-side silent video-clips of talking faces (a male and a female) and heard either a soundtrack of a female or a male voice telling a story in IDS or ADS...
2017: PloS One
https://www.readbyqxmd.com/read/27785865/mothers-speak-differently-to-infants-at-risk-for-dyslexia
#13
Marina Kalashnikova, Usha Goswami, Denis Burnham
Dyslexia is a neurodevelopmental disorder manifested in deficits in reading and spelling skills that is consistently associated with difficulties in phonological processing. Dyslexia is genetically transmitted, but its manifestation in a particular individual is thought to depend on the interaction of epigenetic and environmental factors. We adopt a novel interactional perspective on early linguistic environment and dyslexia by simultaneously studying two pre-existing factors, one maternal and one infant, that may contribute to these interactions; and two behaviours, one maternal and one infant, to index the effect of these factors...
October 27, 2016: Developmental Science
https://www.readbyqxmd.com/read/27774058/-it-don-t-mean-a-thing-if-it-ain-t-got-that-swing-an-alternative-concept-for-understanding-the-evolution-of-dance-and-music-in-human-beings
#14
Joachim Richter, Roya Ostovar
The functions of dance and music in human evolution are a mystery. Current research on the evolution of music has mainly focused on its melodic attribute, which would have evolved alongside (proto-)language. Instead, we propose an alternative conceptual framework which focuses on the co-evolution of rhythm and dance (R&D) as intertwined aspects of a multimodal phenomenon characterized by the unity of action and perception. Reviewing the current literature from this viewpoint, we propose the hypothesis that R&D co-evolved long before other musical attributes and (proto-)language...
2016: Frontiers in Human Neuroscience
https://www.readbyqxmd.com/read/27759424/infant-diet-related-changes-in-syllable-processing-between-4-and-5-months-implications-for-developing-native-language-sensitivity
#15
R T Pivik, Aline Andres, Shasha Bai, Mario A Cleves, Kevin B Tennal, Yuyuan Gu, Thomas M Badger
Since the maturational processes that trigger increased attunement to native-language features in early infancy are sensitive to dietary factors, diet-related differences in brain processing of native-language speech stimuli might indicate variations in the onset of this tuning process. We measured cortical responses (ERPs) to syllables in 4- and 5-month-old infants fed breast milk, milk formula, or soy formula and found that syllable discrimination (P350) and syntax-related functions (P600), but not syllable perception (P170), varied by diet but not by gender or background measures...
May 2016: Developmental Neuropsychology
https://www.readbyqxmd.com/read/27587921/clinical-experience-of-using-cortical-auditory-evoked-potentials-in-the-treatment-of-infant-hearing-loss-in-australia
#16
Simone Punch, Bram Van Dun, Alison King, Lyndal Carter, Wendy Pearce
This article presents the clinical protocol that is currently being used within Australian Hearing for infant hearing aid evaluation using cortical auditory evoked potentials (CAEPs). CAEP testing is performed in the free field at two stimulus levels (65 dB sound pressure level [SPL], followed by 55 or 75 dB SPL) using three brief frequency-distinct speech sounds /m/, /ɡ/, and /t/, within a standard audiological appointment of up to 90 minutes. CAEP results are used to check or guide modifications of hearing aid fittings or to confirm unaided hearing capability...
February 2016: Seminars in Hearing
https://www.readbyqxmd.com/read/27498221/audio-visual-speech-perception-in-infants-and-toddlers-with-down-syndrome-fragile-x-syndrome-and-williams-syndrome
#17
Dean D'Souza, Hana D'Souza, Mark H Johnson, Annette Karmiloff-Smith
Typically-developing (TD) infants can construct unified cross-modal percepts, such as a speaking face, by integrating auditory-visual (AV) information. This skill is a key building block upon which higher-level skills, such as word learning, are built. Because word learning is seriously delayed in most children with neurodevelopmental disorders, we assessed the hypothesis that this delay partly results from a deficit in integrating AV speech cues. AV speech integration has rarely been investigated in neurodevelopmental disorders, and never previously in infants...
August 2016: Infant Behavior & Development
https://www.readbyqxmd.com/read/27449816/learning-words-and-learning-sounds-advances-in-language-development
#18
REVIEW
Marilyn M Vihman
Phonological development is sometimes seen as a process of learning sounds, or forming phonological categories, and then combining sounds to build words, with the evidence taken largely from studies demonstrating 'perceptual narrowing' in infant speech perception over the first year of life. In contrast, studies of early word production have long provided evidence that holistic word learning may precede the formation of phonological categories. In that account, children begin by matching their existing vocal patterns to adult words, with knowledge of the phonological system emerging from the network of related word forms...
February 2017: British Journal of Psychology
https://www.readbyqxmd.com/read/27432002/emancipation-of-the-voice-vocal-complexity-as-a-fitness-indicator
#19
John L Locke
Although language is generally spoken, most evolutionary proposals say little about any changes that may have induced vocal control. Here I suggest that the interaction of two changes in our species-one in sociality, the other in life history-liberated the voice from its affective moorings, enabling it to serve as a fitness cue or signal. The modification of life history increased the helplessness of infants, thus their competition for care, pressuring them to emit, and parents (and others) to evaluate, new vocal cues in bids for attention...
February 2017: Psychonomic Bulletin & Review
https://www.readbyqxmd.com/read/27397111/fundamental-frequency-variation-in-crying-of-mandarin-and-german-neonates
#20
Kathleen Wermke, Yufang Ruan, Yun Feng, Daniela Dobnig, Sophia Stephan, Peter Wermke, Li Ma, Hongyu Chang, Youyi Liu, Volker Hesse, Hua Shu
OBJECTIVES: This study examined whether prenatal exposure to either a tonal or a nontonal maternal language affects fundamental frequency (fo) properties in neonatal crying. STUDY DESIGN: This is a prospective population study. PARTICIPANTS: A total of 102 neonates within the first week of life served as participants. METHODS: Spontaneously uttered cries (N = 6480) by Chinese (tonal language group) and German neonates (nontonal group) were quantitatively analyzed...
July 7, 2016: Journal of Voice: Official Journal of the Voice Foundation

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"
(heart or cardiac or cardio*) AND arrest -"American Heart Association"
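
These operators can be combined in a single query, as in the last example above. As an illustrative sketch only (the search URL and parameter name below are assumptions chosen for demonstration, not a documented QxMD API), the following Python snippet shows how such a Boolean query might be assembled and URL-encoded before being sent to a search page:

from urllib.parse import urlencode

# Boolean query using the operators described above: quotes for an exact
# phrase, parentheses for grouping, '*' for word stems, '-' to exclude a term.
query = '(heart OR cardiac OR cardio*) AND arrest -"American Heart Association"'

# Hypothetical search URL and parameter name, used only to illustrate
# URL-encoding of the query string.
base_url = "https://www.readbyqxmd.com/search"
url = base_url + "?" + urlencode({"q": query})

print(url)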