Read by QxMD

Infant speech perception

Mélanie Havy, Afra Foroud, Laurel Fais, Janet F Werker
Visual information influences speech perception in both infants and adults. It is still unknown whether lexical representations are multisensory. To address this question, we exposed 18-month-old infants (n = 32) and adults (n = 32) to new word-object pairings: Participants either heard the acoustic form of the words or saw the talking face in silence. They were then tested on recognition in the same or the other modality. Both 18-month-old infants and adults learned the lexical mappings when the words were presented auditorily and recognized the mapping at test when the word was presented in either modality, but only adults learned new words in a visual-only presentation...
January 26, 2017: Child Development
Jonathan Prather, Kazuo Okanoya, Johan J Bolhuis
Language as a computational cognitive mechanism appears to be unique to the human species. However, there are remarkable behavioral similarities between song learning in songbirds and speech acquisition in human infants that are absent in non-human primates. Here we review important neural parallels between birdsong and speech. In both cases there are separate but continually interacting neural networks that underlie vocal production, sensorimotor learning, and auditory perception and memory. As in the case of human speech, neural activity related to birdsong learning is lateralized, and mirror neurons linking perception and performance may contribute to sensorimotor learning...
January 10, 2017: Neuroscience and Biobehavioral Reviews
Anne-Raphaëlle Richoz, Paul C Quinn, Anne Hillairet de Boisferon, Carole Berger, Hélène Loevenbruck, David J Lewkowicz, Kang Lee, Marjorie Dole, Roberto Caldara, Olivier Pascalis
Early multisensory perceptual experiences shape the abilities of infants to perform socially-relevant visual categorization, such as the extraction of gender, age, and emotion from faces. Here, we investigated whether multisensory perception of gender is influenced by infant-directed (IDS) or adult-directed (ADS) speech. Six-, 9-, and 12-month-old infants saw side-by-side silent video-clips of talking faces (a male and a female) and heard either a soundtrack of a female or a male voice telling a story in IDS or ADS...
2017: PloS One
Marina Kalashnikova, Usha Goswami, Denis Burnham
Dyslexia is a neurodevelopmental disorder manifested in deficits in reading and spelling skills that is consistently associated with difficulties in phonological processing. Dyslexia is genetically transmitted, but its manifestation in a particular individual is thought to depend on the interaction of epigenetic and environmental factors. We adopt a novel interactional perspective on early linguistic environment and dyslexia by simultaneously studying two pre-existing factors, one maternal and one infant, that may contribute to these interactions; and two behaviours, one maternal and one infant, to index the effect of these factors...
October 27, 2016: Developmental Science
Joachim Richter, Roya Ostovar
The functions of dance and music in human evolution are a mystery. Current research on the evolution of music has mainly focused on its melodic attribute which would have evolved alongside (proto-)language. Instead, we propose an alternative conceptual framework which focuses on the co-evolution of rhythm and dance (R&D) as intertwined aspects of a multimodal phenomenon characterized by the unity of action and perception. Reviewing the current literature from this viewpoint we propose the hypothesis that R&D have co-evolved long before other musical attributes and (proto-)language...
2016: Frontiers in Human Neuroscience
R T Pivik, Aline Andres, Shasha Bai, Mario A Cleves, Kevin B Tennal, Yuyuan Gu, Thomas M Badger
Since maturational processes triggering increased attunement to native-language features in early infancy are sensitive to dietary factors, infant-diet-related differences in brain processing of native-language speech stimuli might indicate variations in the onset of this tuning process. We measured cortical responses (ERPs) to syllables in 4- and 5-month-old infants fed breast milk, milk formula, or soy formula and found that syllable discrimination (P350) and syntax-related functions (P600), but not syllable perception (P170), varied by diet, but not by gender or background measures...
May 2016: Developmental Neuropsychology
Simone Punch, Bram Van Dun, Alison King, Lyndal Carter, Wendy Pearce
This article presents the clinical protocol that is currently being used within Australian Hearing for infant hearing aid evaluation using cortical auditory evoked potentials (CAEPs). CAEP testing is performed in the free field at two stimulus levels (65 dB sound pressure level [SPL], followed by 55 or 75 dB SPL) using three brief frequency-distinct speech sounds /m/, /ɡ/, and /t/, within a standard audiological appointment of up to 90 minutes. CAEP results are used to check or guide modifications of hearing aid fittings or to confirm unaided hearing capability...
February 2016: Seminars in Hearing
Dean D'Souza, Hana D'Souza, Mark H Johnson, Annette Karmiloff-Smith
Typically-developing (TD) infants can construct unified cross-modal percepts, such as a speaking face, by integrating auditory-visual (AV) information. This skill is a key building block upon which higher-level skills, such as word learning, are built. Because word learning is seriously delayed in most children with neurodevelopmental disorders, we assessed the hypothesis that this delay partly results from a deficit in integrating AV speech cues. AV speech integration has rarely been investigated in neurodevelopmental disorders, and never previously in infants...
August 2016: Infant Behavior & Development
Marilyn M Vihman
Phonological development is sometimes seen as a process of learning sounds, or forming phonological categories, and then combining sounds to build words, with the evidence taken largely from studies demonstrating 'perceptual narrowing' in infant speech perception over the first year of life. In contrast, studies of early word production have long provided evidence that holistic word learning may precede the formation of phonological categories. In that account, children begin by matching their existing vocal patterns to adult words, with knowledge of the phonological system emerging from the network of related word forms...
February 2017: British Journal of Psychology
John L Locke
Although language is generally spoken, most evolutionary proposals say little about any changes that may have induced vocal control. Here I suggest that the interaction of two changes in our species, one in sociality and the other in life history, liberated the voice from its affective moorings, enabling it to serve as a fitness cue or signal. The modification of life history increased the helplessness of infants, and thus their competition for care, pressuring them to emit, and parents (and others) to evaluate, new vocal cues in bids for attention...
July 18, 2016: Psychonomic Bulletin & Review
Kathleen Wermke, Yufang Ruan, Yun Feng, Daniela Dobnig, Sophia Stephan, Peter Wermke, Li Ma, Hongyu Chang, Youyi Liu, Volker Hesse, Hua Shu
OBJECTIVES: This study examined whether prenatal exposure to either a tonal or a nontonal maternal language affects fundamental frequency (fo) properties in neonatal crying. STUDY DESIGN: This was a prospective population study. PARTICIPANTS: A total of 102 neonates within the first week of life participated. METHODS: Spontaneously uttered cries (N = 6480) by Chinese (tonal language group) and German (nontonal group) neonates were quantitatively analyzed...
July 7, 2016: Journal of Voice: Official Journal of the Voice Foundation
Nawal Abboub, Natalie Boll-Avetisyan, Anjali Bhatara, Barbara Höhle, Thierry Nazzi
Rhythm in music and speech can be characterized by a constellation of several acoustic cues. Individually, these cues have different effects on rhythmic perception: sequences of sounds alternating in duration are perceived as short-long pairs (weak-strong/iambic pattern), whereas sequences of sounds alternating in intensity or pitch are perceived as loud-soft or high-low pairs (strong-weak/trochaic pattern). This perceptual bias, called the Iambic-Trochaic Law (ITL), has been claimed to be a universal property of the auditory system that applies in both the music and the language domains...
2016: Frontiers in Human Neuroscience
Youngja Nam, Linda Polka
Previous research revealing universal biases in infant vowel perception forms the basis of the Natural Referent Vowel (NRV) framework (Polka & Bohn, 2011). To explore the feasibility of extending this framework to consonant manner perception, we investigated perception of the stop vs. fricative consonant contrast /b/-/v/ to test the hypothesis that young infants will display a perceptual bias grounded in the acoustic-phonetic properties of these sounds. We examined perception of stop-initial /bas/ and fricative-initial /vas/ syllables in English-learning and French-learning 5- to 6-month-olds...
October 2016: Cognition
Jennifer Phan, Derek M Houston, Chad Ruffin, Jonathan Ting, Rachael Frush Holt
BACKGROUND: To learn words and acquire language, children must be able to discriminate and correctly perceive phonemes. Although there has been much research on the general language outcomes of children with cochlear implants (CIs), little is known about the development of speech perception with regard to specific speech processes, such as speech discrimination. PURPOSE: The purpose of this study was to investigate the development of speech discrimination in infants with CIs and identify factors that might correlate with speech discrimination skills...
June 2016: Journal of the American Academy of Audiology
B Mikic, A Jotic, D Miric, M Nikolic, N Jankovic, N Arsovic
INTRODUCTION: The incidence of autism spectrum disorder (ASD) in children has been rising over the years, with an estimated 1 in 68 affected in the US in 2014. This incidence is also rising in the population of congenitally deaf children. A favorable outcome after early cochlear implantation is expected owing to the plasticity and reorganization capacity of the brain in infants and toddlers, but outcomes can be significantly modified in children with diagnosed ASD. Current methods of screening for autism have difficulty establishing a diagnosis in children who have both autism and other developmental delays, especially at such an early age...
June 2016: European Annals of Otorhinolaryngology, Head and Neck Diseases
Hélia Soares
AIM: To investigate the effect of nurses' implementation of the Touchpoints methodology on the following variables: quality of mother-infant interaction; infant development; maternal representations of child temperament; and mothers' perception of the quality of their relationship with nurses. METHODS: A quasi-experimental longitudinal study including 86 child-mother dyads distributed equally between a group with intervention (GI) (n=43) and a group without intervention (GWI) (n=43)...
May 9, 2016: Nursing Children and Young People
Daniel A Abrams, Tianwen Chen, Paola Odriozola, Katherine M Cheng, Amanda E Baker, Aarthi Padmanabhan, Srikanth Ryali, John Kochalka, Carl Feinstein, Vinod Menon
The human voice is a critical social cue, and listeners are extremely sensitive to the voices in their environment. One of the most salient voices in a child's life is mother's voice: Infants discriminate their mother's voice from the first days of life, and this stimulus is associated with guiding emotional and social function during development. Little is known regarding the functional circuits that are selectively engaged in children by biologically salient voices such as mother's voice or whether this brain activity is related to children's social communication abilities...
May 31, 2016: Proceedings of the National Academy of Sciences of the United States of America
Sophie Ter Schure, Caroline Junge, Paul Boersma
Infants' perception of speech sound contrasts is modulated by their language environment, for example by the statistical distributions of the speech sounds they hear. Infants learn to discriminate speech sounds better when their input contains a two-peaked frequency distribution of those speech sounds than when their input contains a one-peaked frequency distribution. Effects of frequency distributions on phonetic learning have been tested almost exclusively for auditory input. But auditory speech is usually accompanied by visual information, that is, by visible articulations...
2016: Frontiers in Psychology
Nayara Freitas Fernandes, Elisabete Honda Yamaguti, Marina Morettin, Orozimbo Alves Costa
PURPOSE: To analyze speech perception in children with pre-lingual hearing loss and auditory neuropathy spectrum disorder who use bilateral hearing aids. METHODS: This is a descriptive, exploratory study carried out at the Audiological Research Center (HRAC/USP). The study included four children aged between 8 years 3 months and 12 years 2 months. Lists of monosyllabic words, disyllabic words, nonsense words, and sentences; the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS) and the Meaningful Use of Speech Scale (MUSS); and hearing and language categories were used...
January 2016: CoDAS
Hanneke Bruijnzeel, Fuat Ziylan, Inge Stegeman, Vedat Topsakal, Wilko Grolman
OBJECTIVE: This review aimed to evaluate the additional benefit of pediatric cochlear implantation before 12 months of age considering improved speech and language development and auditory performance. MATERIALS AND METHODS: We conducted a search in PubMed, EMBASE and CINAHL databases and included studies comparing groups with different ages at implantation and assessing speech perception and speech production, receptive language and/or auditory performance. We included studies with a high directness of evidence (DoE)...
2016: Audiology & Neuro-otology

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"
Combine operators, exclusions, and wildcards as needed

(heart or cardiac or cardio*) AND arrest -"American Heart Association"
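The operators above can be understood as simple set logic over a document's terms. A minimal Python sketch, using an invented toy document collection (the helper names and titles are illustrative only, not part of QxMD's actual search engine):

```python
# Toy document collection; titles are hypothetical examples.
docs = {
    1: "diabetic foot ulcer management",
    2: "diabetes mellitus overview",
    3: "virchow triad and thrombosis",
    4: "neurology of speech perception",
}

def terms(text):
    """Split a document into a set of lowercase terms."""
    return set(text.lower().split())

def match_and(doc, *words):           # diabetic AND foot
    return all(w in terms(doc) for w in words)

def match_or(doc, *words):            # diabetes OR diabetic
    return any(w in terms(doc) for w in words)

def match_exclude(doc, word, minus):  # virchow -triad
    t = terms(doc)
    return word in t and minus not in t

def match_stem(doc, stem):            # neuro* matches neurology, ...
    return any(w.startswith(stem) for w in terms(doc))

hits_and = [i for i, d in docs.items() if match_and(d, "diabetic", "foot")]
hits_or = [i for i, d in docs.items() if match_or(d, "diabetes", "diabetic")]
hits_minus = [i for i, d in docs.items() if match_exclude(d, "virchow", "triad")]
hits_stem = [i for i, d in docs.items() if match_stem(d, "neuro")]
```

Here `hits_and` matches only document 1, `hits_or` matches documents 1 and 2, `hits_minus` is empty (document 3 contains the excluded term), and `hits_stem` matches document 4 via the `neuro*` stem.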