Read by QxMD

Journal of Phonetics

James W Dias, Theresa C Cook, Lawrence D Rosenblum
Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audiovisual phonetic information. We examined how selective adaptation to audio and visual adaptors shifts perception of speech along an audiovisual test continuum...
May 2016: Journal of Phonetics
Stefan A Frisch, Sylvie M Wodzinski
Velar-vowel coarticulation in English, resulting in so-called velar fronting in front vowel contexts, was studied using ultrasound imaging of the tongue during /k/ onsets of monosyllabic words with no coda or a labial coda. Ten native English speakers were recorded and analyzed. A variety of coarticulation patterns was found, often differing only slightly in typical closure location for similar vowels. An account of the coarticulation pattern is provided using a virtual target model of stop consonant production in which English has two /k/ allophones, one for front vowels and one for non-front vowels...
May 2016: Journal of Phonetics
Anne Pycha, Delphine Dahan
We investigate the hypothesis that duration and spectral differences in vowels before voiceless versus voiced codas originate from a single source, namely the reorganization of articulatory gestures relative to one another in time. As a test case, we examine the American English diphthong /aɪ/, in which the acoustic manifestations of the nucleus /a/ and offglide /ɪ/ gestures are relatively easy to identify, and we use the ratio of nucleus-to-offglide duration as an index of the temporal distance between these gestures...
May 2016: Journal of Phonetics
Andrew R Plummer, Mary E Beckman
Moulin-Frier et al. (2016) proffer a conceptual framework and computational modeling architecture for the investigation of the emergence of phonological universals for spoken languages. They validate the framework and architecture by testing to see whether universals such as the prevalence of triangular vowel systems that show adequate dispersion in the F1-F2-F3 space can fall out of simulations of referential communication between social agents, without building principles such as dispersion directly into the model...
November 1, 2015: Journal of Phonetics
Melissa A Redford
Speaking is an intentional activity. It is also a complex motor skill; one that exhibits protracted development and the fully automatic character of an overlearned behavior. Together these observations suggest an analogy with skilled behavior in the non-language domain. This analogy is used here to argue for a model of production that is grounded in the activity of speaking and structured during language acquisition. The focus is on the plan that controls the execution of fluent speech; specifically, on the units that are activated during the production of an intonational phrase...
November 1, 2015: Journal of Phonetics
Jessamyn Schertz, Taehong Cho, Andrew Lotto, Natasha Warner
The current work examines native Korean speakers' perception and production of stop contrasts in their native language (L1, Korean) and second language (L2, English), focusing on three acoustic dimensions that are all used, albeit to different extents, in both languages: voice onset time (VOT), f0 at vowel onset, and closure duration. Participants used all three cues to distinguish the L1 Korean three-way stop distinction in both production and perception. Speakers' productions of the L2 English contrasts were reliably distinguished using both VOT and f0 (even though f0 is only a very weak cue to the English contrast), and, to a lesser extent, closure duration...
September 1, 2015: Journal of Phonetics
Daniel Fogerty
The present study investigated how non-linguistic, indexical information about talker identity interacts with contributions to sentence intelligibility by the time-varying amplitude (temporal envelope) and fundamental frequency (F0). Young normal-hearing adults listened to sentences that preserved the original consonants but replaced the vowels with a single vowel production. This replacement vowel selectively preserved amplitude or F0 cues of the original vowel, but replaced cues to phonetic identity. Original vowel duration was always preserved...
September 2015: Journal of Phonetics
Erik C Tracy, Sierra A Bainter, Nicholas P Satariano
While numerous studies have demonstrated that a male speaker's sexual orientation can be identified from relatively long passages of speech, few studies have evaluated whether listeners can determine sexual orientation when presented with word-length stimuli. If listeners are able to distinguish between self-identified gay and heterosexual male speakers of American English, it is unclear whether they form their judgments based on a phoneme, such as a vowel or consonant, or multiple phonemes, such as a vowel and a consonant...
September 2015: Journal of Phonetics
Bozena Pajak, Roger Levy
The end-result of perceptual reorganization in infancy is currently viewed as a reconfigured perceptual space, "warped" around native-language phonetic categories, which then acts as a direct perceptual filter on any non-native sounds: naïve-listener discrimination of non-native sounds is determined by their mapping onto the native-language phonetic categories that are acoustically/articulatorily most similar. We report results that suggest another factor in non-native speech perception: some perceptual sensitivities cannot be attributed to listeners' warped perceptual space alone, but rather to enhanced general sensitivity along phonetic dimensions that the listeners' native language employs to distinguish between categories...
September 1, 2014: Journal of Phonetics
Eva Reinisch, David R Wozny, Holger Mitterer, Lori L Holt
Listeners use lexical or visual context information to recalibrate auditory speech perception. After hearing an ambiguous auditory stimulus between /aba/ and /ada/ coupled with a clear visual stimulus (e.g., lip closure in /aba/), an ambiguous auditory-only stimulus is perceived in line with the previously seen visual stimulus. What remains unclear, however, is what exactly listeners are recalibrating: phonemes, phone sequences, or acoustic cues. To address this question we tested generalization of visually-guided auditory recalibration to 1) the same phoneme contrast cued differently (i...
July 1, 2014: Journal of Phonetics
Argyro Katsika, Jelena Krivokapić, Christine Mooshammer, Mark Tiede, Louis Goldstein
This study investigates the coordination of boundary tones as a function of stress and pitch accent. Boundary tone coordination has not been experimentally investigated previously, and the effect of prominence on this coordination, and whether it is lexical (stress-driven) or phrasal (pitch accent-driven) in nature is unclear. We assess these issues using a variety of syntactic constructions to elicit different boundary tones in an Electromagnetic Articulography (EMA) study of Greek. The results indicate that the onset of boundary tones co-occurs with the articulatory target of the final vowel...
May 1, 2014: Journal of Phonetics
Benjamin Parrell, Louis Goldstein, Sungbok Lee, Dani Byrd
Much evidence has been found for pervasive links between the manual and speech motor systems, including evidence from infant development, deictic pointing, and repetitive tapping and speaking tasks. We expand on the last of these paradigms to look at intra- and cross-modal effects of emphatic stress, as well as the effects of coordination in the absence of explicit rhythm. In this study, subjects repeatedly tapped their finger and synchronously repeated a single spoken syllable. On each trial, subjects placed an emphatic stress on one finger tap or one spoken syllable...
January 2014: Journal of Phonetics
Hema Sirsa, Melissa A Redford
This study explored whether the sound structure of Indian English (IE) varies with the divergent native languages of its speakers or whether it is similar regardless of speakers' native languages. Native Hindi (Indo-Aryan) and Telugu (Dravidian) speakers produced comparable phrases in IE and in their native languages. Naïve and experienced IE listeners were then asked to judge whether different sentences had been spoken by speakers with the same or different native language backgrounds. The key finding was an interaction between listener experience and speaker background, such that only experienced listeners reliably distinguished IE sentences produced by speakers with different native language backgrounds...
November 2013: Journal of Phonetics
Jeff Berry, Gary Weismer
A locus equation describes a 1st order regression fit to a scatter of vowel steady-state frequency values predicting vowel onset frequency values. Locus equation coefficients are often interpreted as indices of coarticulation. Speaking rate variations with a constant consonant-vowel form are thought to induce changes in the degree of coarticulation. In the current work, the hypothesis that locus slope is a transparent index of coarticulation is examined through the analysis of acoustic samples of large-scale, nearly continuous variations in speaking rate...
November 2013: Journal of Phonetics
Eriko Atagi, Tessa Bent
Through experience with speech variability, listeners build categories of indexical speech characteristics, including categories for talker, gender, and dialect. The auditory free classification task, in which listeners freely group talkers based on audio samples, has been a useful tool for examining listeners' representations of some of these characteristics, including regional dialects and different languages. The free classification task was employed in the current study to examine the perceptual representation of nonnative speech...
November 1, 2013: Journal of Phonetics
Benjamin Parrell, Sungbok Lee, Dani Byrd
Prosodic structure has large effects on the temporal realization of speech via the shaping of articulatory events. It is important for speech scientists to be able to systematically quantify these prosodic effects on articulation in a way that is capable both of differentiating between the degree of prosodic lengthening associated with varying linguistic contexts and that is generalizable across speakers. The current paper presents a novel method to automatically quantify boundary strength from articulatory speech data based on functional data analysis (FDA)...
November 2013: Journal of Phonetics
Xin Xie, Carol A Fowler
This study examined the intelligibility of native and Mandarin-accented English speech for native English and native Mandarin listeners. In the latter group, it also examined the role of the language environment and English proficiency. Three groups of listeners were tested: native English listeners (NE), Mandarin-speaking Chinese listeners in the US (M-US) and Mandarin listeners in Beijing, China (M-BJ). As a group, M-US and M-BJ listeners were matched on English proficiency and age of acquisition. A nonword transcription task was used...
September 2013: Journal of Phonetics
Hyunjung Lee, Stephen Politzer-Ahles, Allard Jongman
The current study investigated the perception of the three-way distinction among Korean voiceless stops in non-tonal Seoul and tonal Kyungsang Korean. The question addressed is whether listeners from these two dialects differ in the way they perceive the three stops. Forty-two Korean listeners (21 each from Seoul and South Kyungsang) were tested in a perception experiment with stimuli in which VOT and F0 were systematically manipulated. Analyses of the perceptual identification functions show that VOT and F0 cues trade off against each other in the perception of the three stops...
March 2013: Journal of Phonetics
Hosung Nam, Louis M Goldstein, Sara Giulivi, Andrea G Levitt, D H Whalen
There is a tendency for spoken consonant-vowel (CV) syllables, in babbling in particular, to show preferred combinations: labial consonants with central vowels, alveolars with front, and velars with back. This pattern was first described by MacNeilage and Davis, who found the evidence compatible with their "frame-then-content" (F/C) model. F/C postulates that CV syllables in babbling are produced with no control of the tongue (and therefore effectively random tongue positions) but systematic oscillation of the jaw...
March 1, 2013: Journal of Phonetics
Eun Jong Kong, Mary E Beckman, Jan Edwards
The age at which children master adult-like voiced stops can generally be predicted by voice onset time (VOT): stops with optional short lag are early, those with obligatory lead are late. However, Japanese voiced stops are late despite having a short lag variant, whereas Greek voiced stops are early despite having consistent voicing lead. This cross-sectional study examines the acoustics of word-initial stops produced by English-, Japanese-, and Greek-speaking children aged 2 to 5, to investigate how these seemingly exceptional mastery patterns relate to use of other phonetic correlates...
November 2012: Journal of Phonetics