acoustic phonetics

https://www.readbyqxmd.com/read/28717151/what-drives-sound-symbolism-different-acoustic-cues-underlie-sound-size-and-sound-shape-mappings
#1
Klemens Knoeferle, Jixing Li, Emanuela Maggioni, Charles Spence
Sound symbolism refers to the non-arbitrary mappings that exist between phonetic properties of speech sounds and their meaning. Despite there being an extensive literature on the topic, the acoustic features and psychological mechanisms that give rise to sound symbolism are not, as yet, altogether clear. The present study was designed to investigate whether different sets of acoustic cues predict size and shape symbolism, respectively. In two experiments, participants judged whether a given consonant-vowel speech sound was large or small, round or angular, using a size or shape scale...
July 17, 2017: Scientific Reports
https://www.readbyqxmd.com/read/28679267/acoustic-characteristics-of-punjabi-retroflex-and-dental-stops
#2
Qandeel Hussain, Michael Proctor, Mark Harvey, Katherine Demuth
The phonological category "retroflex" is found in many Indo-Aryan languages; however, it has not been clearly established which acoustic characteristics reliably differentiate retroflexes from other coronals. This study investigates the acoustic phonetic properties of Punjabi retroflex /ʈ/ and dental /t̪/ in word-medial and word-initial contexts across /i e a o u/, and in word-final context across /i a u/. Formant transitions, closure and release durations, and spectral moments of release bursts are compared in 2280 stop tokens produced by 30 speakers...
June 2017: Journal of the Acoustical Society of America
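The release-burst spectral moments mentioned in #2 can be computed directly from a windowed power spectrum. A minimal Python sketch (NumPy only), assuming `burst` is a mono array containing just the release burst sampled at `sr` Hz; this illustrates the general measure, not the authors' specific analysis settings:

```python
import numpy as np

def spectral_moments(burst, sr):
    """First four spectral moments (centroid, SD, skewness, kurtosis)
    of a stop-release burst, treating the normalized power spectrum
    as a probability distribution over frequency."""
    spectrum = np.abs(np.fft.rfft(burst * np.hanning(len(burst)))) ** 2
    freqs = np.fft.rfftfreq(len(burst), d=1.0 / sr)
    p = spectrum / spectrum.sum()            # normalize to a distribution
    centroid = np.sum(freqs * p)             # 1st moment (Hz)
    variance = np.sum(((freqs - centroid) ** 2) * p)
    sd = np.sqrt(variance)                   # 2nd moment (Hz)
    skew = np.sum(((freqs - centroid) ** 3) * p) / sd ** 3
    kurt = np.sum(((freqs - centroid) ** 4) * p) / sd ** 4 - 3
    return centroid, sd, skew, kurt
```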
https://www.readbyqxmd.com/read/28653556/supra-segmental-changes-in-speech-production-as-a-result-of-spectral-feedback-degradation-comparison-with-lombard-speech
#3
Elizabeth D Casserly, Yeling Wang, Nicholas Celestin, Lily Talesnick, David B Pisoni
Perturbations to acoustic speech feedback have typically either been localized to specific phonetic characteristics, for example, fundamental frequency (F0) or the first two formants (F1/F2), or have affected all aspects of the speech signal equally, for example, via the addition of background noise. This paper examines the consequences of a more selective global perturbation: real-time cochlear implant (CI) simulation of acoustic speech feedback. Specifically, we examine the potential similarity between speakers' responses to noise vocoding and the characteristics of Lombard speech...
June 1, 2017: Language and Speech
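The cochlear-implant simulation in #3 is a form of noise (channel) vocoding. The sketch below shows the offline version of the idea: split the signal into a few frequency bands, extract each band's amplitude envelope, and re-impose those envelopes on band-limited noise. Function and parameter names are illustrative assumptions, not details from the study, which applied the processing in real time:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, sr, n_channels=8, lo=100.0, hi=7000.0):
    """Offline noise vocoder: log-spaced band-pass filter bank,
    Hilbert amplitude envelopes, envelope-modulated band noise.
    Assumes sr is high enough that hi < sr / 2."""
    edges = np.geomspace(lo, hi, n_channels + 1)
    noise = np.random.randn(len(x))
    out = np.zeros_like(x)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=sr, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))            # amplitude envelope
        carrier = sosfiltfilt(sos, noise)      # band-limited noise
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-9)  # peak-normalize
```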
https://www.readbyqxmd.com/read/28639708/young-infants-word-comprehension-given-an-unfamiliar-talker-or-altered-pronunciations
#4
Elika Bergelson, Daniel Swingley
To understand spoken words, listeners must appropriately interpret co-occurring talker characteristics and speech sound content. This ability was tested in 6- to 14-month-olds by measuring their looking to named food and body part images. In the new talker condition (n = 90), pictures were named by an unfamiliar voice; in the mispronunciation condition (n = 98), infants' mothers "mispronounced" the words (e.g., nazz for nose). Six- to 7-month-olds fixated target images above chance across conditions, understanding novel talkers and mothers' phonologically deviant speech equally...
June 22, 2017: Child Development
https://www.readbyqxmd.com/read/28618810/enhancement-effects-of-clear-speech-and-word-initial-position-in-korean-glides
#5
Seung-Eun Chang
The current study investigated enhancement effects of clear speech and of word-initial position in Korean, using acoustic analyses of the glides /w/ and /j/. The results showed that the transitions of the glides /w/ and /j/ at onset were enhanced in clear speech, with an expanded vowel space. An expanded vowel space was also observed in the word-initial position, but the expansion was not statistically significant. However, the significant interaction between speaking style and word position revealed that the articulatory and global modifications in clear speech were noticeably greater at onset in the word-medial than in the word-initial position...
June 2017: Journal of the Acoustical Society of America
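The vowel-space expansion reported in #5 is commonly quantified as the area of the polygon spanned by the mean F1/F2 values of the measured vowels. A hedged sketch using the shoelace formula; the corner-vowel set and Hz units in the usage comment are assumptions for illustration, not details from the paper:

```python
import numpy as np

def vowel_space_area(f1, f2):
    """Area of the polygon spanned by mean (F2, F1) points of a set of
    vowels (shoelace formula). A larger area in clear speech than in
    plain speech indicates an expanded vowel space."""
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    return 0.5 * abs(np.dot(f2, np.roll(f1, -1)) - np.dot(f1, np.roll(f2, -1)))

# Illustrative /i a u/ means: vowel_space_area([300, 800, 300], [2300, 1400, 800])
```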
https://www.readbyqxmd.com/read/28601721/a-universal-bias-in-adult-vowel-perception-by-ear-or-by-eye
#6
Matthew Masapollo, Linda Polka, Lucie Ménard
Speech perceivers are universally biased toward "focal" vowels (i.e., vowels whose adjacent formants are close in frequency, which concentrates acoustic energy into a narrower spectral region). This bias is demonstrated in phonetic discrimination tasks as a directional asymmetry: a change from a relatively less to a relatively more focal vowel results in significantly better performance than a change in the reverse direction. We investigated whether the critical information for this directional effect is limited to the auditory modality, or whether visible articulatory information provided by the speaker's face also plays a role...
September 2017: Cognition
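The notion of a "focal" vowel in #6 hinges on how close adjacent formants are in frequency. One common operationalization (an assumption here, not necessarily the metric used by the authors) is the smallest distance between adjacent formants on an auditory scale such as Bark:

```python
import numpy as np

def bark(f_hz):
    """Traunmüller Hz-to-Bark conversion."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def focalization(f1, f2, f3, f4):
    """Smallest adjacent-formant distance in Bark: a smaller value means
    a more 'focal' vowel, i.e., acoustic energy concentrated into a
    narrower spectral region."""
    b = np.array([bark(f) for f in (f1, f2, f3, f4)])
    return float(np.min(np.diff(b)))
```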
https://www.readbyqxmd.com/read/28599541/prosodic-exaggeration-within-infant-directed-speech-consequences-for-vowel-learnability
#7
Frans Adriaans, Daniel Swingley
Perceptual experiments with infants show that they adapt their perception of speech sounds toward the categories of the native language. How do infants learn these categories? For the most part, acoustic analyses of natural infant-directed speech have suggested that phonetic categories are not presented to learners as separable clusters of sounds in acoustic space. As a step toward explaining how infants begin to solve this problem, the current study proposes that the exaggerated prosody characteristic of infant-directed speech may highlight for infants certain speech-sound tokens that collectively form more readily identifiable categories...
May 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28595176/individual-talker-and-token-covariation-in-the-production-of-multiple-cues-to-stop-voicing
#8
Meghan Clayards
BACKGROUND/AIMS: Previous research found that individual talkers have consistent differences in the production of segments impacting the perception of their speech by others. Speakers also produce multiple acoustic-phonetic cues to phonological contrasts. Less is known about how multiple cues covary within a phonetic category and across talkers. We examined differences in individual talkers across cues and whether token-by-token variability is a result of intrinsic factors or speaking style by examining within-category correlations...
June 9, 2017: Phonetica
https://www.readbyqxmd.com/read/28575731/empirical-test-of-the-performance-of-an-acoustic-phonetic-approach-to-forensic-voice-comparison-under-conditions-similar-to-those-of-a-real-case
#9
Ewald Enzinger, Geoffrey Stewart Morrison
In a 2012 case in New South Wales, Australia, the identity of a speaker on several audio recordings was in question. Forensic voice comparison testimony was presented based on an auditory-acoustic-phonetic-spectrographic analysis. No empirical demonstration of the validity and reliability of the analytical methodology was presented. Unlike the admissibility standards in some other jurisdictions (e.g., US Federal Rule of Evidence 702 and the Daubert criteria, or England & Wales Criminal Practice Directions 19A), Australia's Unified Evidence Acts do not require demonstration of the validity and reliability of analytical methods and their implementation before testimony based upon them is presented in court...
May 17, 2017: Forensic Science International
https://www.readbyqxmd.com/read/28483485/sensory-motor-relationships-in-speech-production-in-post-lingually-deaf-cochlear-implanted-adults-and-normal-hearing-seniors-evidence-from-phonetic-convergence-and-speech-imitation
#10
Lucie Scarbel, Denis Beautemps, Jean-Luc Schwartz, Marc Sato
Speech communication can be viewed as an interactive process involving a functional coupling between sensory and motor systems. One striking example comes from phonetic convergence, when speakers automatically tend to mimic their interlocutor's speech during communicative interaction. The goal of this study was to investigate sensory-motor linkage in speech production in postlingually deaf cochlear implanted participants and normal hearing elderly adults through phonetic convergence and imitation. To this aim, two vowel production tasks, with or without instruction to imitate an acoustic vowel, were proposed to three groups of young adults with normal hearing, elderly adults with normal hearing and post-lingually deaf cochlear-implanted patients...
May 5, 2017: Neuropsychologia
https://www.readbyqxmd.com/read/28464686/the-lombard-effect-observed-in-speech-produced-by-cochlear-implant-users-in-noisy-environments-a-naturalistic-study
#11
Jaewook Lee, Hussnain Ali, Ali Ziaei, Emily A Tobey, John H L Hansen
The Lombard effect is an involuntary response speakers experience in the presence of noise during voice communication. This phenomenon is known to cause changes in speech production, such as increased intensity and altered pitch structure and formant characteristics, for enhanced audibility in noisy environments. Although well studied for normal-hearing listeners, the Lombard effect has received little, if any, attention in the field of cochlear implants (CIs). The objective of this study is to analyze the speech production of CI users who are postlingually deafened adults with respect to environmental context...
April 2017: Journal of the Acoustical Society of America
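The intensity component of the Lombard effect in #11 can be approximated from frame-level RMS energy; comparing recordings made in quiet and in noise gives a first-pass estimate of the intensity shift. A minimal NumPy sketch, with variable names (`quiet`, `noisy`) that are illustrative rather than taken from the study:

```python
import numpy as np

def rms_db(x, sr, frame_ms=25, hop_ms=10):
    """Frame-level RMS intensity in dB for a mono signal x at rate sr."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    vals = []
    for start in range(0, len(x) - frame + 1, hop):
        seg = x[start:start + frame]
        vals.append(20 * np.log10(np.sqrt(np.mean(seg ** 2)) + 1e-12))
    return np.array(vals)

# First-pass Lombard intensity shift (dB), assuming matched recordings:
# shift = np.median(rms_db(noisy, sr)) - np.median(rms_db(quiet, sr))
```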
https://www.readbyqxmd.com/read/28464676/a-cross-dialectal-acoustic-study-of-saterland-frisian-vowels
#12
Heike E Schoormann, Wilbert J Heeringa, Jörg Peters
Previous investigations on Saterland Frisian report a large vowel inventory, including up to 20 monophthongs and 16 diphthongs in stressed position. Conducting a cross-dialectal acoustic study on Saterland Frisian vowels in Ramsloh, Scharrel, and Strücklingen, the objective is to provide a phonetic description of vowel category realization and to identify acoustic dimensions which may enhance the discrimination of neighboring categories within the crowded vowel space of the endangered minority language. All vowels were elicited in a /hVt/ frame...
April 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28464636/directional-asymmetries-reveal-a-universal-bias-in-adult-vowel-perception
#13
Matthew Masapollo, Linda Polka, Monika Molnar, Lucie Ménard
Research on cross-language vowel perception in both infants and adults has shown that for many vowel contrasts, discrimination is easier when the same pair of vowels is presented in one direction compared to the reverse direction. According to one account, these directional asymmetries reflect a universal bias favoring "focal" vowels (i.e., vowels whose adjacent formants are close in frequency, which concentrates acoustic energy into a narrower spectral region). An alternative, but not mutually exclusive, account is that such effects reflect an experience-dependent bias favoring prototypical instances of native-language vowel categories...
April 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28443053/perceptual-improvement-of-lexical-tones-in-infants-effects-of-tone-language-experience
#14
Feng-Ming Tsao
To learn words in a tonal language, tone-language learners should develop better abilities for perceiving not only consonants and vowels but also lexical tones. The divergent trend of enhanced sensitivity to native phonetic contrasts and reduced sensitivity to non-native phonetic contrasts is theoretically essential for evaluating the effects of listening to an ambient language on speech perception development. The loss of sensitivity in discriminating lexical tones among non-tonal-language-learning infants was apparent between 6 and 12 months of age, but only a few studies have examined trends in differentiating native lexical tones in infancy...
2017: Frontiers in Psychology
https://www.readbyqxmd.com/read/28439232/mapping-the-speech-code-cortical-responses-linking-the-perception-and-production-of-vowels
#15
William L Schuerman, Antje S Meyer, James M McQueen
The acoustic realization of speech is constrained by the physical mechanisms by which it is produced. Yet for speech perception, the degree to which listeners utilize experience derived from speech production has long been debated. In the present study, we examined how sensorimotor adaptation during production may affect perception, and how this relationship may be reflected in early vs. late electrophysiological responses. Participants first performed a baseline speech production task, followed by a vowel categorization task during which EEG responses were recorded...
2017: Frontiers in Human Neuroscience
https://www.readbyqxmd.com/read/28409649/lexical-stress-contrast-marking-in-fluent-and-non-fluent-aphasia-in-spanish-the-relationship-between-acoustic-cues-and-compensatory-strategies
#16
Lorraine Baqué
This study sought to investigate stress production in Spanish by patients with Broca's aphasia (BA) and conduction aphasia (CA) as compared to controls. Our objectives were to assess whether: a) there were many abnormal acoustic correlates of stress as produced by patients, b) these abnormalities had a phonetic component, and c) the ability for articulatory compensation in stress marking was preserved. The results showed abnormal acoustic values in both BA and CA productions, affecting not only duration but also F0 and intensity cues, and an interaction effect of stress pattern and duration on intensity cues in BA, but not in CA or controls...
April 14, 2017: Clinical Linguistics & Phonetics
https://www.readbyqxmd.com/read/28406683/evaluating-the-sources-and-functions-of-gradiency-in-phoneme-categorization-an-individual-differences-approach
#17
Efthymia C Kapnoula, Matthew B Winn, Eun Jong Kong, Jan Edwards, Bob McMurray
During spoken language comprehension listeners transform continuous acoustic cues into categories (e.g., /b/ and /p/). While long-standing research suggests that phonetic categories are activated in a gradient way, there are also clear individual differences in that more gradient categorization has been linked to various communication impairments such as dyslexia and specific language impairments (Joanisse, Manis, Keating, & Seidenberg, 2000; López-Zamora, Luque, Álvarez, & Cobos, 2012; Serniclaes, Van Heghe, Mousty, Carré, & Sprenger-Charolles, 2004; Werker & Tees, 1987)...
April 13, 2017: Journal of Experimental Psychology. Human Perception and Performance
https://www.readbyqxmd.com/read/28384645/perception-of-japanese-pitch-accent-without-f0
#18
Yukiko Sugiyama
Phonological contrasts are typically encoded with multiple acoustic correlates to ensure efficient communication. Studies have shown that such phonetic redundancy is found not only in segmental contrasts, but also in suprasegmental contrasts such as tone. In Japanese, fundamental frequency (F0) is the primary cue for pitch accent. However, little is known about its secondary cues. In the present study, a perception experiment was conducted to examine whether any secondary cues exist for Japanese accent. First, minimal pairs of final-accented and unaccented words were identified using a database, resulting in 14 pairs of words...
2017: Phonetica
https://www.readbyqxmd.com/read/28326994/effects-of-lexical-competition-and-dialect-exposure-on-phonological-priming
#19
Cynthia G Clopper, Abby Walker
A cross-modal lexical decision task was used to explore the effects of lexical competition and dialect exposure on phonological form priming. Relative to unrelated auditory primes, matching real word primes facilitated lexical decision for visual real word targets, whereas competing minimal pair primes inhibited lexical decision. These effects were robust across two English vowel pairs (mid-front and low-front) and for two listener groups (mono-dialectal and multi-dialectal). However, both the most robust facilitation and the most robust inhibition were observed for the mid-front vowel words with few phonological competitors for the mono-dialectal listener group...
March 2017: Language and Speech
https://www.readbyqxmd.com/read/28320669/regularized-speaker-adaptation-of-kl-hmm-for-dysarthric-speech-recognition
#20
Myungjong Kim, Younggwan Kim, Joohong Yoo, Jun Wang, Hoirin Kim
This paper addresses the problem of recognizing speech uttered by patients with dysarthria, a motor speech disorder impeding the physical production of speech. Patients with dysarthria have articulatory limitations and therefore often have trouble pronouncing certain sounds, resulting in undesirable phonetic variation. Modern automatic speech recognition systems designed for regular speakers are ineffective for dysarthric speakers due to this phonetic variation. To capture the phonetic variation, a Kullback-Leibler divergence-based hidden Markov model (KL-HMM) is adopted, in which the emission probability of each state is parametrized by a categorical distribution, using phoneme posterior probabilities obtained from a deep neural network-based acoustic model...
March 13, 2017: IEEE Transactions on Neural Systems and Rehabilitation Engineering
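In the KL-HMM of #20, each state holds a categorical distribution over phonemes, and the local (frame-level) score is a KL divergence between that distribution and the DNN's phoneme posterior vector for the frame. A sketch of one common variant of this score; the paper may use a different divergence direction or a symmetrized form:

```python
import numpy as np

def kl_local_score(state_dist, posterior, eps=1e-10):
    """KL(state || posterior) for one frame: divergence between the
    state's categorical distribution over phonemes and the DNN phoneme
    posterior vector (lower = better match to the state)."""
    y = np.clip(np.asarray(state_dist, float), eps, 1.0)
    z = np.clip(np.asarray(posterior, float), eps, 1.0)
    return float(np.sum(y * np.log(y / z)))
```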