Journal of the Acoustical Society of America

https://www.readbyqxmd.com/read/28464693/extension-and-evaluation-of-a-near-end-listening-enhancement-algorithm-for-listeners-with-normal-and-impaired-hearing
#1
Jan Rennies, Jakob Drefs, David Hülsmeier, Henning Schepker, Simon Doclo
In many applications in which speech is played back via sound reinforcement systems, such as public address systems and mobile phones, speech intelligibility is degraded by additive environmental noise. A possible solution to maintain high intelligibility in noise is to pre-process the speech signal based on the estimated noise power at the position of the listener. The previously proposed AdaptDRC algorithm [Schepker, Rennies, and Doclo (2015). J. Acoust. Soc. Am. 138, 2692-2706] applies both frequency shaping and dynamic range compression under an equal-power constraint, where the processing is adaptively controlled by short-term estimates of the speech intelligibility index...
April 2017: Journal of the Acoustical Society of America
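The AdaptDRC entry above combines frequency shaping with dynamic range compression under an equal-power constraint. Below is a minimal Python sketch of just the compression-plus-power-constraint idea: a broadband static compressor followed by RMS renormalization so the output power matches the input power. The threshold and ratio values are illustrative assumptions, and the actual algorithm is frequency-dependent and controlled by short-term speech intelligibility index estimates, none of which is reproduced here.

```python
import numpy as np

def compress_equal_power(speech, fs, threshold_db=-25.0, ratio=4.0, eps=1e-12):
    """Broadband static compression followed by RMS renormalization.

    Illustrative sketch only: AdaptDRC applies frequency shaping and compression
    driven by short-term SII estimates; threshold/ratio here are assumptions.
    """
    speech = np.asarray(speech, dtype=float)
    rms_in = np.sqrt(np.mean(speech**2) + eps)

    # Short-term envelope: 10 ms frame RMS, expressed in dB relative to input RMS.
    frame = max(1, int(0.010 * fs))
    n_frames = len(speech) // frame
    env = np.sqrt(np.mean(speech[:n_frames * frame].reshape(n_frames, frame)**2, axis=1) + eps)
    env_db = 20.0 * np.log10(env / rms_in)

    # Static compression curve: attenuate frames that exceed the threshold.
    gain_db = np.where(env_db > threshold_db,
                       (threshold_db - env_db) * (1.0 - 1.0 / ratio),
                       0.0)
    gain = np.repeat(10.0**(gain_db / 20.0), frame)
    gain = np.concatenate([gain, np.full(len(speech) - len(gain),
                                         gain[-1] if len(gain) else 1.0)])

    out = speech * gain
    # Equal-power constraint: restore the original RMS of the input.
    out *= rms_in / (np.sqrt(np.mean(out**2)) + eps)
    return out
```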
https://www.readbyqxmd.com/read/28464692/effects-of-hearing-aid-dynamic-range-compression-on-spatial-perception-in-a-reverberant-environment
#2
Henrik Gert Hassager, Alan Wiinberg, Torsten Dau
This study investigated the effects of fast-acting hearing-aid compression on normal-hearing and hearing-impaired listeners' spatial perception in a reverberant environment. Three compression schemes were considered using virtualized speech and noise bursts: independent compression at each ear, linked compression between the two ears, and "spatially ideal" compression operating solely on the dry source signal. Listeners indicated the location and extent of their perceived sound images on the horizontal plane...
April 2017: Journal of the Acoustical Society of America
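To make the distinction between the first two compression schemes in the entry above concrete, the sketch below computes per-frame compression gains either independently per ear or linked across ears (one gain, derived here from the higher-level ear, applied to both). With the illustrative values below, a frame at -20 dB on the left and -35 dB on the right keeps its 15 dB interaural level difference under linked compression but has it reduced to about 8 dB under independent compression. The threshold, ratio, and the use of the higher-level ear for linking are assumptions, not the study's compressor settings.

```python
import numpy as np

def compression_gain_db(level_db, threshold_db=-30.0, ratio=3.0):
    """Static compression gain (dB) for a frame level in dB; values are illustrative."""
    level_db = np.asarray(level_db, dtype=float)
    return np.where(level_db > threshold_db,
                    (threshold_db - level_db) * (1.0 - 1.0 / ratio),
                    0.0)

def binaural_gains(left_db, right_db, linked=True):
    """Per-frame gains for the two ears.

    independent: each ear is compressed on its own level (distorts ILDs)
    linked:      a single gain, based on the higher-level ear, is applied to
                 both ears, so the interaural level difference is preserved
    """
    if linked:
        g = compression_gain_db(np.maximum(left_db, right_db))
        return g, g
    return compression_gain_db(left_db), compression_gain_db(right_db)
```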
https://www.readbyqxmd.com/read/28464691/estimating-the-spectral-tilt-of-the-glottal-source-from-telephone-speech-using-a-deep-neural-network
#3
Emma Jokinen, Paavo Alku
Estimation of the spectral tilt of the glottal source has several applications in speech analysis and modification. However, direct estimation of the tilt from telephone speech is challenging due to vocal tract resonances and distortion caused by speech compression. In this study, a deep neural network is used for the tilt estimation from telephone speech by training the network with tilt estimates computed by glottal inverse filtering. An objective evaluation shows that the proposed technique gives more accurate estimates for the spectral tilt than previously used techniques that estimate the tilt directly from telephone speech without glottal inverse filtering...
April 2017: Journal of the Acoustical Society of America
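The spectral-tilt entry above trains a deep neural network on tilt targets obtained through glottal inverse filtering; that pipeline is not reproduced here. As a point of reference, the sketch below shows one conventional direct estimate of spectral tilt: the slope of a straight line fitted to the log-magnitude spectrum of a frame. The analysis band and the dB-per-kHz slope unit are assumptions for illustration.

```python
import numpy as np

def spectral_tilt_db_per_khz(frame, fs, fmin=50.0, fmax=4000.0):
    """Slope of a line fitted to the log-magnitude spectrum of one frame (dB/kHz).

    Direct estimate for illustration only; the paper's reference tilts come from
    glottal inverse filtering, and a DNN is trained to predict them from
    telephone speech.
    """
    frame = np.asarray(frame, dtype=float)
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    mag_db = 20.0 * np.log10(spec[band] + 1e-12)
    slope, _ = np.polyfit(freqs[band] / 1000.0, mag_db, deg=1)
    return slope  # more negative = steeper fall-off toward high frequencies
```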
https://www.readbyqxmd.com/read/28464690/how-many-images-are-in-an-auditory-scene
#4
Xuan Zhong, William A Yost
If an auditory scene consists of many spatially separated sound sources, how many sound sources can be processed by the auditory system? Experiment I determined how many speech sources could be localized simultaneously on the azimuth plane. Different words were played from multiple loudspeakers, and listeners reported the total number of sound sources and their individual locations. In experiment II the accuracy of localizing one speech source in a mixture of multiple speech sources was determined. An extra sound source was added to an existing set of sound sources, and the task was to localize that extra source...
April 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28464689/analysis-of-human-scream-and-its-impact-on-text-independent-speaker-verification
#5
John H L Hansen, Mahesh Kumar Nandwana, Navid Shokouhi
A scream is defined as a sustained, high-energy vocalization that lacks phonological structure; this lack of phonological structure is what distinguishes screams from other forms of loud vocalization, such as a "yell." This study investigates the acoustic aspects of screams and addresses those that are known to prevent standard speaker identification systems from recognizing the identity of screaming speakers. It is well established that speaker variability due to changes in vocal effort and the Lombard effect contributes to degraded performance in automatic speech systems (i...
April 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28464688/variability-in-muscle-activation-of-simple-speech-motions-a-biomechanical-modeling-approach
#6
Negar M Harandi, Jonghye Woo, Maureen Stone, Rafeef Abugharbieh, Sidney Fels
Biomechanical models of the oropharynx facilitate the study of speech function by providing information that cannot be directly derived from imaging data, such as internal muscle forces and muscle activation patterns. Such models, when constructed and simulated based on anatomy and motion captured from individual speakers, enable the exploration of inter-subject variability of speech biomechanics. These models also allow one to answer questions, such as whether speakers produce similar sounds using essentially the same motor patterns with subtle differences, or vastly different motor equivalent patterns...
April 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28464687/streaming-and-sound-localization-with-a-preceding-distractor
#7
Norbert Kopčo, Gabriela Andrejková, Virginia Best, Barbara Shinn-Cunningham
Localization of a 2-ms click target was previously shown to be influenced by a preceding identical distractor for inter-click-intervals up to 400 ms [Kopčo, Best, and Shinn-Cunningham (2007). J. Acoust. Soc. Am. 121, 420-432]. Here, two experiments examined whether perceptual organization plays a role in this effect. In the experiments, the distractor was designed either to be grouped with the target (a single-click distractor) or to be processed in a separate stream (an 8-click train). The two distractors affected performance differently, both in terms of bias and variance, suggesting that grouping and streaming play a role in localization in multisource environments...
April 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28464686/the-lombard-effect-observed-in-speech-produced-by-cochlear-implant-users-in-noisy-environments-a-naturalistic-study
#8
Jaewook Lee, Hussnain Ali, Ali Ziaei, Emily A Tobey, John H L Hansen
The Lombard effect is an involuntary response that speakers experience in the presence of noise during voice communication. This phenomenon is known to cause changes in speech production, such as increased intensity and altered pitch structure and formant characteristics, for enhanced audibility in noisy environments. Although well studied for normal-hearing listeners, the Lombard effect has received little, if any, attention in the field of cochlear implants (CIs). The objective of this study is to analyze the speech production of CI users who are postlingually deafened adults with respect to environmental context...
April 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28464685/the-localization-of-non-individualized-virtual-sounds-by-hearing-impaired-listeners
#9
Douglas S Brungart, Julie I Cohen, Danielle Zion, Griffin Romigh
Although many studies have evaluated the performance of virtual audio displays with normal hearing listeners, very little information is available on the effect that hearing loss has on the localization of virtual sounds. In this study, normal hearing (NH) and hearing impaired (HI) listeners were asked to localize noise stimuli with short (250 ms), medium (1000 ms), and long (4000 ms) durations both in the free field and with a non-individualized head-tracked virtual audio display. The results show that the HI listeners localized sounds less accurately than the NH listeners, and that both groups consistently localized virtual sounds less accurately than free-field sounds...
April 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28464684/perceptions-and-effects-of-the-acoustic-environment-in-quiet-residential-areas
#10
Guillermo Rey Gozalo, Juan Miguel Barrigón Morillas
Many cities have historical areas clearly distinguished from the rest because of their architecture, urban planning, and functionality. In many cases, these characteristics make it possible to find a distinctive acoustic environment and to develop quiet areas. Through an examination of sound levels and surveys, the perception of residents and passers-by concerning the acoustic environment of the old town of Cáceres and its relation to the characteristics of the urban environment were analysed. In addition, the perception and the effects of low-intensity noise pollution were studied...
April 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28464683/source-sparsity-control-of-sound-field-reproduction-using-the-elastic-net-and-the-lasso-minimizers
#11
P-A Gauthier, P Lecomte, A Berry
Sound field reproduction is aimed at the reconstruction of a sound pressure field in an extended area using dense loudspeaker arrays. In some circumstances, sound field reproduction is targeted at the reproduction of a sound field captured using microphone arrays. Although methods and algorithms already exist to convert microphone array recordings to loudspeaker array signals, one remaining research question is how to control the spatial sparsity in the resulting loudspeaker array signals and what would be the resulting practical advantages...
April 2017: Journal of the Acoustical Society of America
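The sparsity-control entry above poses loudspeaker signal design as a regularized inverse problem solved with the lasso and elastic-net minimizers. The sketch below sets up a toy real-valued version with scikit-learn; the paper works with complex frequency-domain pressures and its own formulation, and the transfer matrix, target, alpha, and l1_ratio here are all placeholder assumptions. The point of the comparison is that the l1 term drives many loudspeaker weights exactly to zero (spatial sparsity), while the elastic net's added l2 term spreads energy over more loudspeakers.

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

# Toy reproduction problem: p ≈ G @ q, where G is a (real-valued, for simplicity)
# transfer matrix from L loudspeakers to M control microphones and p is the
# target pressure at the microphones.
rng = np.random.default_rng(0)
M, L = 64, 32
G = rng.standard_normal((M, L))
q_true = np.concatenate([rng.standard_normal(4), np.zeros(L - 4)])  # sparse "true" sources
p = G @ q_true

# Lasso (pure l1) promotes spatially sparse loudspeaker activation.
lasso = Lasso(alpha=0.05, fit_intercept=False, max_iter=10000).fit(G, p)

# Elastic net trades sparsity (l1) against energy spreading (l2).
enet = ElasticNet(alpha=0.05, l1_ratio=0.5, fit_intercept=False, max_iter=10000).fit(G, p)

print("active loudspeakers, lasso:      ", int(np.sum(np.abs(lasso.coef_) > 1e-6)))
print("active loudspeakers, elastic net:", int(np.sum(np.abs(enet.coef_) > 1e-6)))
```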
https://www.readbyqxmd.com/read/28464682/speech-recognition-in-one-and-two-talker-maskers-in-school-age-children-and-adults-development-of-perceptual-masking-and-glimpsing
#12
Emily Buss, Lori J Leibold, Heather L Porter, John H Grose
Children perform more poorly than adults on a wide range of masked speech perception paradigms, but this effect is particularly pronounced when the masker itself is also composed of speech. The present study evaluated two factors that might contribute to this effect: the ability to perceptually isolate the target from masker speech, and the ability to recognize target speech based on sparse cues (glimpsing). Speech reception thresholds (SRTs) were estimated for closed-set, disyllabic word recognition in children (5-16 years) and adults in a one- or two-talker masker...
April 2017: Journal of the Acoustical Society of America
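The abstract above does not state how the speech reception thresholds were tracked, so the sketch below shows only a generic 1-down/1-up adaptive staircase, one common way of converging on the SNR giving roughly 50% correct. The step size, stopping rule, and averaging of late reversals are illustrative assumptions, not the study's procedure.

```python
def estimate_srt(run_trial, start_snr_db=10.0, step_db=2.0, n_reversals=8):
    """Generic 1-down/1-up adaptive track converging near 50% correct.

    `run_trial(snr_db)` must present one masked word at the given SNR and return
    True for a correct response. Step size, number of reversals, and averaging
    rule are illustrative assumptions.
    """
    snr = start_snr_db
    last_correct = None
    reversal_snrs = []
    while len(reversal_snrs) < n_reversals:
        correct = run_trial(snr)
        if last_correct is not None and correct != last_correct:
            reversal_snrs.append(snr)            # track direction changed here
        last_correct = correct
        snr += -step_db if correct else step_db  # harder after a hit, easier after a miss
    return sum(reversal_snrs[-6:]) / len(reversal_snrs[-6:])  # SRT: mean of late reversals
```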
https://www.readbyqxmd.com/read/28464681/sensorimotor-adaptation-affects-perceptual-compensation-for-coarticulation
#13
William L Schuerman, Srikantan Nagarajan, James M McQueen, John Houde
A given speech sound will be realized differently depending on the context in which it is produced. Listeners have been found to compensate perceptually for these coarticulatory effects, yet it is unclear to what extent this compensation depends on actual production experience. This study investigates whether changes in motor-to-sound mappings induced by adaptation to altered auditory feedback can affect perceptual compensation for coarticulation. Specifically, it tests whether altering how the vowel [i] is produced can affect the categorization of a stimulus continuum between an alveolar and a palatal fricative whose interpretation depends on vocalic context...
April 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28464680/broadband-signal-response-of-thermo-acoustic-devices-and-its-applications
#14
L H Tong, S K Lai, C W Lim
Thermo-acoustic (TA) transducers are a class of loudspeakers without any mechanical vibration system, and they exhibit an extremely wide frequency response range. In this paper, the acoustic field responses to broadband input signals applied to both free-standing and nano-thin-film-substrate thermo-acoustic devices are derived theoretically using the Fourier transform. A series of signals, including a single-frequency signal, a square-root signal, a periodic triangle wave, and a periodic rectangular pulse signal, are applied to these TA devices in simulations, and the acoustic pressure responses are investigated...
April 2017: Journal of the Acoustical Society of America
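The thermo-acoustic entry above derives pressure responses to broadband inputs via the Fourier transform. The sketch below shows only the generic frequency-domain route (transform the input, multiply by a device frequency response, transform back); the first-order low-pass response used here is a placeholder assumption, not the paper's thermo-acoustic model, and the rectangular-wave input merely stands in for the broadband test signals listed in the abstract.

```python
import numpy as np

def response_via_fft(x, fs, transfer_fn):
    """Output of a linear device for a broadband input, computed in the frequency domain.

    `transfer_fn(f)` returns the complex frequency response at frequencies f in Hz;
    a placeholder response is used below where the paper's TA model would go.
    """
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return np.fft.irfft(X * transfer_fn(f), n=len(x))

# Example: 100 Hz periodic rectangular wave as broadband input,
# placeholder first-order low-pass device response (an assumption).
fs = 48000
t = np.arange(fs) / fs
x = (np.sin(2 * np.pi * 100 * t) > 0).astype(float)
h = lambda f: 1.0 / (1.0 + 1j * f / 8000.0)
y = response_via_fft(x, fs, h)
```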
https://www.readbyqxmd.com/read/28464679/an-extended-surface-target-for-high-frequency-multibeam-echo-sounder-calibration
#15
John L Heaton, Glen Rice, Thomas C Weber
An extended calibration target has been developed for calibrating the intensity output of a multibeam echo sounder (MBES). The target was constructed of chain links arranged like a curtain, providing an extended surface target with a mean scattering strength of -17.8 dB at 200 kHz. The target was used to calibrate a 200 kHz MBES, and the MBES was subsequently used to collect seafloor backscatter over sand and gravel seafloors. Field results were compared with calibrated split-beam echo sounder measurements at an incidence angle of 45°...
April 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28464678/prediction-of-sound-transmission-through-and-radiation-from-panels-using-a-wave-and-finite-element-method
#16
Yi Yang, Brian R Mace, Michael J Kingan
This paper describes the extension of a wave and finite element (WFE) method to the prediction of noise transmission through, and radiation from, infinite panels. The WFE method starts with a conventional finite element model of a small segment of the panel. For a given frequency, the mass and stiffness matrices of the segment are used to form the structural dynamic stiffness matrix. The acoustic responses of the fluids surrounding the structure are modelled analytically. The dynamic stiffness matrix of the segment is post-processed using periodic structure theory, and coupled with those of the fluids...
April 2017: Journal of the Acoustical Society of America
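One concrete step named in the WFE entry above is forming the structural dynamic stiffness matrix of the panel segment from its finite element mass and stiffness matrices at a given frequency. The sketch below shows only that step; the loss factor and viscous damping term are optional additions assumed here, and the periodic-structure post-processing and fluid coupling described in the abstract are not shown.

```python
import numpy as np

def dynamic_stiffness(K, M, omega, C=None, eta=0.0):
    """Dynamic stiffness matrix of an FE segment at radian frequency omega.

    D(omega) = K * (1 + i*eta) + i*omega*C - omega**2 * M
    K and M come from the FE model of the small panel segment; eta is a
    structural loss factor and C an optional viscous damping matrix (both
    assumptions here). WFE post-processing and fluid loading are not included.
    """
    D = np.asarray(K, dtype=complex) * (1.0 + 1j * eta) \
        - omega**2 * np.asarray(M, dtype=complex)
    if C is not None:
        D = D + 1j * omega * np.asarray(C, dtype=complex)
    return D
```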
https://www.readbyqxmd.com/read/28464677/bi-directional-audiovisual-influences-on-temporal-modulation-discrimination
#17
Leonard Varghese, Samuel R Mathias, Seth Bensussen, Kenny Chou, Hannah R Goldberg, Yile Sun, Robert Sekuler, Barbara G Shinn-Cunningham
Cross-modal interactions of auditory and visual temporal modulation were examined in a game-like experimental framework. Participants observed an audiovisual stimulus (an animated, sound-emitting fish) whose sound intensity and/or visual size oscillated sinusoidally at either 6 or 7 Hz. Participants made speeded judgments about the modulation rate in either the auditory or visual modality while doing their best to ignore information from the other modality. The modulation rate in the task-irrelevant modality either matched the rate in the task-relevant modality (congruent conditions) or was at the other rate (incongruent conditions), or the task-irrelevant modality was unmodulated (unmodulated conditions)...
April 2017: Journal of the Acoustical Society of America
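The audiovisual entry above uses stimuli whose sound intensity oscillates sinusoidally at 6 or 7 Hz. The sketch below generates such a sinusoidally amplitude-modulated carrier; the carrier frequency, modulation depth, and duration are illustrative assumptions, and the study's stimulus was an animated, sound-emitting fish rather than a bare tone.

```python
import numpy as np

def am_stimulus(mod_rate_hz, dur_s=1.0, fs=44100, carrier_hz=1000.0, depth=1.0):
    """Carrier whose amplitude is modulated sinusoidally at the given rate (e.g., 6 or 7 Hz).

    Carrier frequency, depth, and duration are illustrative assumptions.
    """
    t = np.arange(int(dur_s * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * mod_rate_hz * t)
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

audio_6hz = am_stimulus(6.0)  # task-relevant rate
audio_7hz = am_stimulus(7.0)  # the "other" rate used in incongruent conditions
```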
https://www.readbyqxmd.com/read/28464676/a-cross-dialectal-acoustic-study-of-saterland-frisian-vowels
#18
Heike E Schoormann, Wilbert J Heeringa, Jörg Peters
Previous investigations of Saterland Frisian report a large vowel inventory, including up to 20 monophthongs and 16 diphthongs in stressed position. This cross-dialectal acoustic study of Saterland Frisian vowels in Ramsloh, Scharrel, and Strücklingen aims to provide a phonetic description of vowel category realization and to identify acoustic dimensions that may enhance the discrimination of neighboring categories within the crowded vowel space of this endangered minority language. All vowels were elicited in a /hVt/ frame...
April 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28464675/simulation-of-acoustic-guided-wave-propagation-in-cortical-bone-using-a-semi-analytical-finite-element-method
#19
Daniel Pereira, Guillaume Haiat, Julio Fernandes, Pierre Belanger
Axial transmission techniques have been extensively studied for cortical bone quality assessment. However, modeling the propagation of ultrasonic guided waves in such a complex medium remains challenging. The aim of this paper is to develop a semi-analytical finite element method to simulate the propagation of guided waves in an irregular, multi-layer, and heterogeneous bone cross-section modeled with anisotropic and viscoelastic material properties. The accuracy of the simulations was verified against conventional time-domain three-dimensional finite element...
April 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28464674/robust-speaker-identification-via-fusion-of-subglottal-resonances-and-cepstral-features
#20
Jinxi Guo, Ruochen Yang, Harish Arsikere, Abeer Alwan
This letter investigates the use of subglottal resonances (SGRs) for noise-robust speaker identification (SID). It is motivated by the speaker specificity and stationarity of subglottal acoustics, and the development of noise-robust SGR estimation algorithms which are reliable at low signal-to-noise ratios for large datasets. A two-stage framework is proposed which combines the SGRs with different cepstral features. The cepstral features are used in the first stage to reduce the number of target speakers for a test utterance, and then SGRs are used as complementary second-stage features to conduct identification...
April 2017: Journal of the Acoustical Society of America
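The final entry describes a two-stage framework: cepstral features first shrink the list of candidate speakers, and subglottal resonances then rescore the shortlist. The sketch below mirrors only that control flow; scoring enrolled speakers by cosine similarity against per-speaker mean vectors, and the shortlist size, are assumptions for illustration rather than the paper's actual models or fusion scheme.

```python
import numpy as np

def cosine_score(a, b, eps=1e-12):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def identify(test_cep, test_sgr, cep_models, sgr_models, shortlist=5):
    """Two-stage speaker identification sketch.

    Stage 1: rank enrolled speakers by cepstral similarity and keep a shortlist.
    Stage 2: re-rank the shortlist with subglottal-resonance (SGR) features.
    `cep_models` / `sgr_models` map speaker id -> enrolled mean feature vector;
    cosine scoring and the shortlist size are illustrative assumptions.
    """
    stage1 = sorted(cep_models,
                    key=lambda s: cosine_score(test_cep, cep_models[s]),
                    reverse=True)[:shortlist]
    return max(stage1, key=lambda s: cosine_score(test_sgr, sgr_models[s]))
```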