acoustic phonetics
https://www.readbyqxmd.com/read/29874293/why-piss-is-ruder-than-pee-the-role-of-sound-in-affective-meaning-making
#1
Arash Aryani, Markus Conrad, David Schmidtke, Arthur Jacobs
Most language users agree that some words sound harsh (e.g. grotesque) whereas others sound soft and pleasing (e.g. lagoon). While this prominent feature of human language has always been creatively deployed in art and poetry, it is still largely unknown whether the sound of a word in itself makes any contribution to the word's meaning as perceived and interpreted by the listener. In a large-scale lexicon analysis, we focused on the affective substrates of words' meaning (i.e. affective meaning) and words' sound (i...
2018: PloS One
https://www.readbyqxmd.com/read/29861132/a-spatial-map-of-onset-and-sustained-responses-to-speech-in-the-human-superior-temporal-gyrus
#2
Liberty S Hamilton, Erik Edwards, Edward F Chang
To derive meaning from speech, we must extract multiple dimensions of concurrent information from incoming speech signals. That is, equally important to processing phonetic features is the detection of acoustic cues that give structure and context to the information we hear. How the brain organizes this information is unknown. Using data-driven computational methods on high-density intracranial recordings from 27 human participants, we reveal the functional distinction of neural responses to speech in the posterior superior temporal gyrus according to either onset or sustained response profiles...
May 25, 2018: Current Biology: CB
https://www.readbyqxmd.com/read/29860083/neural-representation-of-vowel-formants-in-tonotopic-auditory-cortex
#3
Julia M Fisher, Frederic K Dick, Deborah F Levy, Stephen M Wilson
Speech sounds are encoded by distributed patterns of activity in bilateral superior temporal cortex. However, it is unclear whether speech sounds are topographically represented in cortex, or which acoustic or phonetic dimensions might be spatially mapped. Here, using functional MRI, we investigated the potential spatial representation of vowels, which are largely distinguished from one another by the frequencies of their first and second formants, i.e. peaks in their frequency spectra. This allowed us to generate clear hypotheses about the representation of specific vowels in tonotopic regions of auditory cortex...
May 31, 2018: NeuroImage
https://www.readbyqxmd.com/read/29857746/effects-of-noise-and-talker-intelligibility-on-judgments-of-accentedness
#4
Sarah Gittleman, Kristin J Van Engen
The goal of this study was to determine how noise affects listeners' subjective judgments of foreign-accented speech and how those judgments relate to the intelligibility of foreign-accented talkers. Fifty native English listeners heard native Mandarin speakers and native English speakers producing English sentences in quiet and in three levels of noise. Participants judged the accent of each speaker on a scale from 1 (native-like) to 9 (foreign). The results show that foreign-accented talkers were rated as less accented in the presence of noise, and that, while lower talker intelligibility was generally associated with higher (more foreign) accent ratings, the presence of noise significantly attenuated this relationship...
May 2018: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/29852503/effects-of-word-position-on-the-acoustic-realization-of-vietnamese-final-consonants
#5
Thi Thuy Hien Tran, Nathalie Vallée, Lionel Granjon
A variety of studies have shown differences between phonetic features of consonants according to their prosodic and/or syllable (onset vs. coda) positions. However, differences are not always found, and interactions between the various factors involved are complex and not well understood. Our study compares acoustical characteristics of coda consonants in Vietnamese taking into account their position within words. Traditionally described as monosyllabic, Vietnamese is partially polysyllabic at the lexical level...
May 28, 2018: Phonetica
https://www.readbyqxmd.com/read/29792525/modelling-category-goodness-judgments-in-children-with-residual-sound-errors
#6
Sarah Hamilton Dugan, Noah Silbert, Tara McAllister, Jonathan L Preston, Carolyn Sotto, Suzanne E Boyce
This study investigates category goodness judgments of /r/ in adults and children with and without residual speech errors (RSEs) using natural speech stimuli. Thirty adults, 38 children with RSE (ages 7-16) and 35 age-matched typically developing (TD) children provided category goodness judgments on whole words, recorded from 27 child speakers, with /r/ in various phonetic environments. The salient acoustic property of /r/, the lowered third formant (F3), was normalized in two ways. A logistic mixed-effect model quantified the relationships between listeners' responses and the third formant frequency, vowel context and clinical group status...
May 24, 2018: Clinical Linguistics & Phonetics
https://www.readbyqxmd.com/read/29790122/talking-points-a-modulating-circle-reduces-listening-effort-without-improving-speech-recognition
#7
Julia F Strand, Violet A Brown, Dennis L Barbour
Speech recognition is improved when the acoustic input is accompanied by visual cues provided by a talking face (Erber in Journal of Speech and Hearing Research, 12(2), 423-425, 1969; Sumby & Pollack in The Journal of the Acoustical Society of America, 26(2), 212-215, 1954). One way that the visual signal facilitates speech recognition is by providing the listener with information about fine phonetic detail that complements information from the auditory signal. However, given that degraded face stimuli can still improve speech recognition accuracy (Munhall et al...
May 22, 2018: Psychonomic Bulletin & Review
https://www.readbyqxmd.com/read/29779940/encoding-of-articulatory-kinematic-trajectories-in-human-speech-sensorimotor-cortex
#8
Josh Chartier, Gopala K Anumanchipalli, Keith Johnson, Edward F Chang
When speaking, we dynamically coordinate movements of our jaw, tongue, lips, and larynx. To investigate the neural mechanisms underlying articulation, we used direct cortical recordings from human sensorimotor cortex while participants spoke natural sentences that included sounds spanning the entire English phonetic inventory. We used deep neural networks to infer speakers' articulator movements from produced speech acoustics. Individual electrodes encoded a diversity of articulatory kinematic trajectories (AKTs), each revealing coordinated articulator movements toward specific vocal tract shapes...
May 8, 2018: Neuron
https://www.readbyqxmd.com/read/29777983/young-infants-discrimination-of-subtle-phonetic-contrasts
#9
Megha Sundara, Céline Ngon, Katrin Skoruppa, Naomi H Feldman, Glenda Molina Onario, James L Morgan, Sharon Peperkamp
It is generally accepted that infants initially discriminate native and non-native contrasts and that perceptual reorganization within the first year of life results in decreased discrimination of non-native contrasts, and improved discrimination of native contrasts. However, recent findings from Narayan, Werker, and Beddor (2010) surprisingly suggested that some acoustically subtle native-language contrasts might not be discriminated until the end of the first year of life. We first provide countervailing evidence that young English-learning infants can discriminate the Filipino contrast tested by Narayan et al...
May 16, 2018: Cognition
https://www.readbyqxmd.com/read/29764295/acoustic-sources-of-accent-in-second-language-japanese-speech
#10
Kaori Idemaru, Peipei Wei, Lucy Gubbins
This study reports an exploratory analysis of the acoustic characteristics of second language (L2) speech which give rise to the perception of a foreign accent. Japanese speech samples were collected from American English and Mandarin Chinese speakers (n = 16 in each group) studying Japanese. The L2 participants and native speakers (n = 10) provided speech samples modeled after six short sentences. Segmental (vowels and stops) and prosodic features (rhythm, tone, and fluency) were examined. Native Japanese listeners (n = 10) rated the samples with regard to degrees of foreign accent...
May 1, 2018: Language and Speech
https://www.readbyqxmd.com/read/29745524/-an-acoustic-articulatory-study-of-the-nasal-finals-in-students-with-and-without-hearing-loss
#11
Qing Wang, Jing Bai, Peiyun Xue, Xueying Zhang, Pei Feng
The central aim of this experiment was to compare the articulatory and acoustic characteristics of students with normal hearing (NH) and school-aged children with hearing loss (HL), and to explore the articulatory-acoustic relations during the nasal finals. Fourteen HL students and 10 NH controls were enrolled in this study, and the data of 4 HL students were removed because of their high pronunciation error rate. Data were collected using electromagnetic articulography. The acoustic and kinematic data of the nasal finals were extracted with phonetics and data-processing software, and all data were analyzed by t test and correlation analysis...
April 1, 2018: Sheng Wu Yi Xue Gong Cheng Xue Za Zhi (Journal of Biomedical Engineering)
https://www.readbyqxmd.com/read/29742545/perceptual-discrimination-of-speaking-style-under-cochlear-implant-simulation
#12
Terrin N Tamati, Esther Janse, Deniz Başkent
OBJECTIVES: Real-life, adverse listening conditions involve a great deal of speech variability, including variability in speaking style. Depending on the speaking context, talkers may use a more casual, reduced speaking style or a more formal, careful speaking style. Attending to fine-grained acoustic-phonetic details characterizing different speaking styles facilitates the perception of the speaking style used by the talker. These acoustic-phonetic cues are poorly encoded in cochlear implants (CIs), potentially rendering the discrimination of speaking style difficult...
May 9, 2018: Ear and Hearing
https://www.readbyqxmd.com/read/29716257/assessing-the-importance-of-several-acoustic-properties-to-the-perception-of-spontaneous-speech
#13
Ryan G Podlubny, Terrance M Nearey, Grzegorz Kondrak, Benjamin V Tucker
Spoken language manifests itself as change over time in various acoustic dimensions. While it seems clear that acoustic-phonetic information in the speech signal is key to language processing, little is currently known about which specific types of acoustic information are relatively more informative to listeners. This problem is likely compounded when considering reduced speech: Which specific acoustic information do listeners rely on when encountering spoken forms that are highly variable, and often include altered or elided segments? This work explores contributions of spectral shape, f0 contour, target duration, and time varying intensity in the perception of reduced speech...
April 2018: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/29649804/regional-variation-in-fundamental-frequency-of-american-english-vowels
#14
Ewa Jacewicz, Robert Allen Fox
We examined whether the fundamental frequency (f0) of vowels is influenced by regional variation, aiming to (1) establish how the relationship between vowel height and f0 ("intrinsic f0") is utilized in regional vowel systems and (2) determine whether regional varieties differ in their implementation of the effects of phonetic context on f0 variations. An extended set of acoustic measures explored f0 in vowels in isolated tokens (experiment 1) and in connected speech (experiment 2) from 36 women representing 3 different varieties of American English...
April 11, 2018: Phonetica
https://www.readbyqxmd.com/read/29604687/the-role-of-gesture-delay-in-coda-r-weakening-an-articulatory-auditory-and-acoustic-study
#15
Eleanor Lawson, Jane Stuart-Smith, James M Scobbie
The cross-linguistic tendency of coda consonants to weaken, vocalize, or be deleted is shown to have a phonetic basis, resulting from gesture reduction, or variation in gesture timing. This study investigates the effects of the timing of the anterior tongue gesture for coda /r/ on acoustics and perceived strength of rhoticity, making use of two sociolects of Central Scotland (working- and middle-class) where coda /r/ is weakening and strengthening, respectively. Previous articulatory analysis revealed a strong tendency for these sociolects to use different coda /r/ tongue configurations: working- and middle-class speakers tend to use tip/front-raised and bunched variants, respectively; however, this finding does not explain working-class /r/ weakening...
March 2018: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/29589808/on-the-role-of-cognitive-abilities-in-second-language-vowel-learning
#16
Payam Ghaffarvand Mokari, Stefan Werner
This study investigated the role of different cognitive abilities-inhibitory control, attention control, phonological short-term memory (PSTM), and acoustic short-term memory (AM)-in second language (L2) vowel learning. The participants were 40 Azerbaijani learners of Standard Southern British English. Their perception of L2 vowels was tested through a perceptual discrimination task before and after five sessions of high-variability phonetic training. Inhibitory control was significantly correlated with gains from training in the discrimination of L2 vowel pairs...
March 1, 2018: Language and Speech
https://www.readbyqxmd.com/read/29582572/linking-cognitive-and-social-aspects-of-sound-change-using-agent-based-modeling
#17
Jonathan Harrington, Felicitas Kleber, Ulrich Reubold, Florian Schiel, Mary Stevens
The paper defines the core components of an interactive-phonetic (IP) sound change model. The starting point for the IP-model is that a phonological category is often skewed phonetically in a certain direction by the production and perception of speech. A prediction of the model is that sound change is likely to come about as a result of perceiving phonetic variants in the direction of the skew and at the probabilistic edge of the listener's phonological category. The results of agent-based computational simulations applied to the sound change in progress, /u/-fronting in Standard Southern British, were consistent with this hypothesis...
March 26, 2018: Topics in Cognitive Science
https://www.readbyqxmd.com/read/29556206/a-musical-approach-to-speech-melody
#18
Ivan Chow, Steven Brown
We present here a musical approach to speech melody, one that takes advantage of the intervallic precision made possible with musical notation. Current phonetic and phonological approaches to speech melody either assign localized pitch targets that impoverish the acoustic details of the pitch contours and/or merely highlight a few salient points of pitch change, ignoring all the rest of the syllables. We present here an alternative model using musical notation, which has the advantage of representing the pitch of all syllables in a sentence as well as permitting a specification of the intervallic excursions among syllables and the potential for group averaging of pitch use across speakers...
2018: Frontiers in Psychology
https://www.readbyqxmd.com/read/29497744/speech-adaptation-to-kinematic-recording-sensors-perceptual-and-acoustic-findings
#19
Christopher Dromey, Elise Hunter, Shawn L Nissen
Purpose: This study used perceptual and acoustic measures to examine the time course of speech adaptation after the attachment of electromagnetic sensor coils to the tongue, lips, and jaw. Method: Twenty native English speakers read aloud stimulus sentences before the attachment of the sensors, immediately after attachment, and again 5, 10, 15, and 20 min later. They read aloud continuously between recordings to encourage adaptation. Sentence recordings were perceptually evaluated by 20 native English listeners, who rated 150 stimuli (which included 31 samples that were repeated to assess rater reliability) using a visual analog scale with the end points labeled as "precise" and "imprecise...
March 15, 2018: Journal of Speech, Language, and Hearing Research: JSLHR
https://www.readbyqxmd.com/read/29471380/dysarthria-in-mandarin-speaking-children-with-cerebral-palsy-speech-subsystem-profiles
#20
Li-Mei Chen, Katherine C Hustad, Ray D Kent, Yu Ching Lin
Purpose: This study explored the speech characteristics of Mandarin-speaking children with cerebral palsy (CP) and typically developing (TD) children to determine (a) how children in the 2 groups may differ in their speech patterns and (b) the variables correlated with speech intelligibility for words and sentences. Method: Data from 6 children with CP and a clinical diagnosis of moderate dysarthria were compared with data from 9 TD children using a multiple speech subsystems approach...
March 15, 2018: Journal of Speech, Language, and Hearing Research: JSLHR