Keyword: speech signal processing

https://www.readbyqxmd.com/read/28817812/the-mir-34a-bcl-2-pathway-contributes-to-auditory-cortex-neuron-apoptosis-in-age-related-hearing-loss
#1
Qiuhong Huang, Yongkang Ou, Hao Xiong, Haidi Yang, Zhigang Zhang, Suijun Chen, Yongyi Ye, Yiqing Zheng
HYPOTHESIS: The miR-34a/Bcl-2 signaling pathway may play a role in the mechanisms related to age-related hearing loss (AHL) in the auditory cortex. BACKGROUND: The auditory cortex plays a key role in the recognition and processing of complex sound. It is difficult to explain why patients with AHL have poor speech recognition, so an increasing number of studies have focused on central changes. Although microRNAs (miRNAs) in the central nervous system have increasingly been reported to be associated with age-related diseases, the molecular mechanisms of AHL in the auditory cortex are not fully understood...
August 18, 2017: Audiology & Neuro-otology
https://www.readbyqxmd.com/read/28816204/deep-band-modulated-phrase-perception-in-quiet-and-noise-in-individuals-with-auditory-neuropathy-spectrum-disorder-and-sensorineural-hearing-loss
#2
Hemanth Narayan Shetty, Vishal Kooknoor
CONTEXT: Deep band modulation (DBM) improves speech perception in individuals with learning disability and in older adults, both of whom show temporal processing impairment. However, it is unclear how DBM phrases are perceived in quiet and in noise by individuals with auditory neuropathy spectrum disorder (ANSD) and sensorineural hearing loss (SNHL), as these individuals also suffer from temporal impairment. AIM: The aim was to study the effect of DBM and noise on phrase perception in individuals with normal hearing, SNHL, and ANSD...
July 2017: Noise & Health
https://www.readbyqxmd.com/read/28813266/intuitive-parenting-understanding-the-neural-mechanisms-of-parents-adaptive-responses-to-infants
#3
REVIEW
Christine E Parsons, Katherine S Young, Alan Stein, Morten L Kringelbach
When interacting with an infant, parents intuitively enact a range of behaviours that support infant communicative development. These behaviours include altering speech, establishing eye contact and mirroring infant expressions and are argued to occur largely in the absence of conscious intent. Here, we describe studies investigating early, pre-conscious neural responses to infant cues, which we suggest support aspects of parental intuitive behaviour towards infants. This work has provided converging evidence for rapid differentiation of infant cues from other salient social signals in the adult brain...
June 2017: Current Opinion in Psychology
https://www.readbyqxmd.com/read/28793847/effects-of-electrode-deactivation-on-speech-recognition-in-multichannel-cochlear-implant-recipients
#4
Kara C Schvartz-Leyzac, Teresa A Zwolan, Bryan E Pfingst
OBJECTIVES: The objective of the current study is to evaluate how speech recognition performance is affected by the number of electrodes that are deactivated (turned off) in multichannel cochlear implants. Several recent studies have demonstrated positive effects of deactivating stimulation sites, selected using an objective measure, in cochlear implant processing strategies. Previous studies using an analysis of variance have shown that, on average, cochlear implant listeners' performance does not improve beyond eight active electrodes...
August 10, 2017: Cochlear Implants International
https://www.readbyqxmd.com/read/28783570/speech-reception-with-different-bilateral-directional-processing-schemes-influence-of-binaural-hearing-audiometric-asymmetry-and-acoustic-scenario
#5
Tobias Neher, Kirsten C Wagener, Matthias Latzel
Hearing aid (HA) users can differ markedly in their benefit from directional processing (or beamforming) algorithms. The current study therefore investigated candidacy for different bilateral directional processing schemes. Groups of elderly listeners with symmetric (N = 20) or asymmetric (N = 19) hearing thresholds for frequencies below 2 kHz, a large spread in the binaural intelligibility level difference (BILD), and no difference in age, overall degree of hearing loss, or performance on a measure of selective attention took part...
July 29, 2017: Hearing Research
https://www.readbyqxmd.com/read/28776506/neural-decoding-of-attentional-selection-in-multi-speaker-environments-without-access-to-clean-sources
#6
James O'Sullivan, Zhuo Chen, Jose Herrero, Guy M McKhann, Sameer A Sheth, Ashesh D Mehta, Nima Mesgarani
OBJECTIVE: People who suffer from hearing impairments can find it difficult to follow a conversation in a multi-speaker environment. Current hearing aids can suppress background noise; however, there is little that can be done to help a user attend to a single conversation amongst many without knowing which speaker the user is attending to. Cognitively controlled hearing aids that use auditory attention decoding (AAD) methods are the next step in offering help. Translating the successes in AAD research to real-world applications poses a number of challenges, including the lack of access to the clean sound sources in the environment with which to compare with the neural signals...
August 4, 2017: Journal of Neural Engineering
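
For orientation on the decoding step that entry #6 builds on: conventional auditory attention decoding trains a linear ridge-regression decoder that reconstructs a speech envelope from time-lagged neural channels and then picks the candidate speaker whose envelope correlates best with the reconstruction. The minimal sketch below assumes clean reference envelopes are available, which is precisely the assumption the paper above aims to remove; array shapes, lag counts, and the ridge parameter are illustrative, not the authors' configuration.

    # Minimal correlation-based auditory attention decoding (AAD) sketch.
    # Assumes clean per-speaker envelopes as references; shapes/names are illustrative.
    import numpy as np

    def lagged(eeg, n_lags):
        """Stack time-lagged copies of EEG (time x channels) -> (time x channels*n_lags).
        Uses np.roll for brevity, so edge wrap-around is ignored in this sketch."""
        return np.hstack([np.roll(eeg, lag, axis=0) for lag in range(n_lags)])

    def train_decoder(eeg, attended_env, n_lags=16, ridge=1e2):
        """Ridge-regularized least-squares map from lagged EEG to the attended envelope."""
        X = lagged(eeg, n_lags)
        return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ attended_env)

    def decode_attention(eeg, w, env_a, env_b, n_lags=16):
        """Reconstruct an envelope from EEG and pick the speaker it correlates with most."""
        rec = lagged(eeg, n_lags) @ w
        r_a = np.corrcoef(rec, env_a)[0, 1]
        r_b = np.corrcoef(rec, env_b)[0, 1]
        return ("A" if r_a > r_b else "B"), (r_a, r_b)

    # Toy usage with random data standing in for real recordings.
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((5000, 64))          # time x channels
    env_a = rng.standard_normal(5000)              # attended speaker envelope
    env_b = rng.standard_normal(5000)              # ignored speaker envelope
    w = train_decoder(eeg, env_a)
    print(decode_attention(eeg, w, env_a, env_b))
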
https://www.readbyqxmd.com/read/28764460/recognition-of-asynchronous-auditory-visual-speech-by-younger-and-older-listeners-a-preliminary-study
#7
Sandra Gordon-Salant, Grace H Yeni-Komshian, Peter J Fitzgibbons, Hannah M Willison, Maya S Freund
This study examined the effects of age and hearing loss on recognition of speech presented when the auditory and visual speech information was misaligned in time (i.e., asynchronous). Prior research suggests that older listeners are less sensitive than younger listeners in detecting the presence of asynchronous speech for auditory-lead conditions, but recognition of speech in auditory-lead conditions has not yet been examined. Recognition performance was assessed for sentences and words presented in the auditory-visual modalities with varying degrees of auditory lead and lag...
July 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28764456/binaural-masking-release-in-symmetric-listening-conditions-with-spectro-temporally-modulated-maskers
#8
Stephan D Ewert, Wiebke Schubotz, Thomas Brand, Birger Kollmeier
Speech reception thresholds (SRTs) decrease as target and maskers are spatially separated (spatial release from masking, SRM). The current study systematically assessed how SRTs and SRM for a frontal target in a spatially symmetric masker configuration depend on spectro-temporal masker properties, the availability of short-time interaural level difference (ILD) and interaural time difference (ITD), and informational masking. Maskers ranged from stationary noise to single, interfering talkers and were modified by head-related transfer functions to provide: (i) different binaural cues (ILD, ITD, or both) and (ii) independent maskers in each ear ("infinite ILD")...
July 2017: Journal of the Acoustical Society of America
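
The short-time interaural cues manipulated in entry #8 can be illustrated with a small, generic computation: the interaural level difference (ILD) as the dB ratio of the RMS levels at the two ears, and the interaural time difference (ITD) as the lag that maximizes the interaural cross-correlation. This is a sketch of the cues themselves, not of the study's HRTF-based masker processing; the test signal and the 1 ms lag limit are arbitrary choices.

    # Broadband ILD/ITD estimation sketch for a two-ear (left/right) signal pair.
    import numpy as np

    def ild_db(left, right):
        """Interaural level difference in dB (positive = left ear more intense)."""
        rms = lambda x: np.sqrt(np.mean(x ** 2))
        return 20 * np.log10(rms(left) / rms(right))

    def itd_seconds(left, right, fs, max_itd=1e-3):
        """Interaural time difference: lag of the cross-correlation peak within +/- max_itd."""
        xc = np.correlate(left, right, mode="full")      # lags run -(N-1)..(N-1)
        lags = np.arange(-(len(right) - 1), len(left))
        keep = np.abs(lags) <= int(max_itd * fs)
        best = lags[keep][np.argmax(xc[keep])]
        return best / fs                                 # positive lag: left ear lags

    fs = 44100
    t = np.arange(0, 0.1, 1 / fs)
    tone = np.sin(2 * np.pi * 500 * t)
    left = np.concatenate([np.zeros(20), tone])          # delayed by 20 samples
    right = np.concatenate([tone, np.zeros(20)]) * 0.7   # attenuated by ~3 dB
    print(ild_db(left, right), itd_seconds(left, right, fs))
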
https://www.readbyqxmd.com/read/28764450/the-articulatory-dynamics-of-pre-velar-and-pre-nasal-%C3%A3-raising-in-english-an-ultrasound-study
#9
Jeff Mielke, Christopher Carignan, Erik R Thomas
Most dialects of North American English exhibit /æ/-raising in some phonological contexts. Both the conditioning environments and the temporal dynamics of the raising vary from region to region. To explore the articulatory basis of /æ/-raising across North American English dialects, acoustic and articulatory data were collected from a regionally diverse group of 24 English speakers from the United States, Canada, and the United Kingdom. A method for examining the temporal dynamics of speech directly from ultrasound video using the EigenTongues decomposition [Hueber, Aversano, Chollet, Denby, Dreyfus, Oussar, Roussel, and Stone (2007)...
July 2017: Journal of the Acoustical Society of America
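
The EigenTongues decomposition used in entry #9 is essentially principal component analysis applied to vectorized ultrasound frames, so each frame reduces to a handful of component scores whose trajectories over time can be analyzed. The sketch below shows that idea with random stand-in frames and an arbitrary frame size and component count; it is not the authors' pipeline.

    # EigenTongues-style decomposition sketch: PCA over vectorized ultrasound frames.
    import numpy as np
    from sklearn.decomposition import PCA

    # Stand-in data: 500 frames of 64x64-pixel ultrasound video (random values here).
    rng = np.random.default_rng(1)
    frames = rng.random((500, 64, 64))

    X = frames.reshape(len(frames), -1)      # each frame becomes one row vector
    pca = PCA(n_components=10)               # the "EigenTongues" are the components
    scores = pca.fit_transform(X)            # per-frame scores, usable as trajectories over time
    eigentongues = pca.components_.reshape(10, 64, 64)
    print(scores.shape, eigentongues.shape, pca.explained_variance_ratio_[:3])
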
https://www.readbyqxmd.com/read/28764441/effects-of-reverberation-background-talker-number-and-compression-release-time-on-signal-to-noise-ratio
#10
Paul Reinhart, Pavel Zahorik, Pamela E Souza
Wide dynamic range compression (WDRC) processing in hearing aids alters the signal-to-noise ratio (SNR) of a speech-in-noise signal. This effect depends on the modulations of the speech and noise, input SNR, and WDRC speed. The purpose of the present experiment was to examine the change in output SNR caused by the interaction between modulation characteristics and WDRC speed. Two modulation manipulations were examined: (1) reverberation and (2) variation in background talker number. Results indicated that fast-acting WDRC altered SNR more than slow-acting WDRC; however, reverberation reduced this difference...
July 2017: Journal of the Acoustical Society of America
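
The compression-speed effect studied in entry #10 hinges on how quickly the level detector inside a wide dynamic range compressor reacts: a fast release lets the gain rise during noise dips and speech pauses, which alters the output SNR. Below is a bare-bones, sample-by-sample WDRC sketch with separate attack and release time constants; the threshold, ratio, and time constants are arbitrary illustrations, not any hearing aid's actual fitting.

    # Bare-bones single-channel WDRC sketch: envelope tracking with separate
    # attack/release time constants, then a static compression rule above threshold.
    import numpy as np

    def wdrc(x, fs, threshold_db=-40.0, ratio=3.0, attack_ms=5.0, release_ms=50.0):
        att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
        rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
        level = 1e-6
        y = np.empty_like(x)
        for n, s in enumerate(x):
            mag = abs(s)
            coeff = att if mag > level else rel          # attack when rising, release when falling
            level = coeff * level + (1 - coeff) * mag
            level_db = 20 * np.log10(level + 1e-12)
            # Above threshold, output level grows at 1/ratio of the input level.
            gain_db = 0.0 if level_db < threshold_db else \
                (threshold_db + (level_db - threshold_db) / ratio) - level_db
            y[n] = s * 10 ** (gain_db / 20)
        return y

    fs = 16000
    t = np.arange(0, 0.5, 1 / fs)
    modulated = np.sin(2 * np.pi * 300 * t) * (0.1 + 0.9 * (np.sin(2 * np.pi * 4 * t) > 0))
    fast = wdrc(modulated, fs, attack_ms=1, release_ms=10)     # "fast-acting"
    slow = wdrc(modulated, fs, attack_ms=10, release_ms=500)   # "slow-acting"
    print(np.std(fast), np.std(slow))
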
https://www.readbyqxmd.com/read/28764418/localization-and-separation-of-acoustic-sources-by-using-a-2-5-dimensional-circular-microphone-array
#11
Mingsian R Bai, Chang-Sheng Lai, Po-Chen Wu
Circular microphone arrays (CMAs) are sufficient for many immersive audio applications because, in those applications, the azimuthal angles of sources are considered more important than the elevation angles. However, the fact that CMAs do not resolve the elevation angle well can be a limitation for applications that involve three-dimensional sound images. This paper proposes a 2.5-dimensional (2.5-D) CMA comprised of a CMA and a vertical logarithmic-spacing linear array (LLA) on the top. In the localization stage, two delay-and-sum beamformers are applied to the CMA and the LLA, respectively...
July 2017: Journal of the Acoustical Society of America
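
The localization stage in entry #11 uses delay-and-sum beamforming: each microphone signal is phase-shifted so that a wavefront from the steering direction adds coherently, and the steered output power is scanned over azimuth. A simplified far-field, frequency-domain sketch for a plain uniform circular array follows; the geometry, source, and scan grid are synthetic, and this is not the paper's 2.5-D array.

    # Far-field delay-and-sum beamforming sketch for a uniform circular array (UCA),
    # scanning azimuth and picking the direction of maximum steered power.
    import numpy as np

    c = 343.0                 # speed of sound, m/s
    fs = 16000
    n_mics, radius = 8, 0.05  # 8 mics on a 5 cm circle
    mic_angles = 2 * np.pi * np.arange(n_mics) / n_mics
    mic_xy = radius * np.c_[np.cos(mic_angles), np.sin(mic_angles)]

    def steering_delays(azimuth):
        """Relative arrival delays (s) at each mic for a far-field source at `azimuth`."""
        direction = np.array([np.cos(azimuth), np.sin(azimuth)])
        return -(mic_xy @ direction) / c

    def das_power(X, freqs, azimuth):
        """Power of the delay-and-sum output; X holds the mic spectra (n_mics x n_freqs)."""
        tau = steering_delays(azimuth)
        w = np.exp(2j * np.pi * np.outer(tau, freqs))    # phase shifts that undo the delays
        return np.sum(np.abs(np.mean(w * X, axis=0)) ** 2)

    # Synthesize one source at 60 degrees and scan the array response.
    true_az = np.deg2rad(60)
    t = np.arange(0, 0.1, 1 / fs)
    src = np.random.default_rng(2).standard_normal(t.size)
    mics = np.stack([np.interp(t - d, t, src) for d in steering_delays(true_az)])
    X = np.fft.rfft(mics, axis=1)
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    scan = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    est = scan[np.argmax([das_power(X, freqs, a) for a in scan])]
    print(np.rad2deg(est))
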
https://www.readbyqxmd.com/read/28750682/behavioural-and-neuroanatomical-correlates-of-auditory-speech-analysis-in-primary-progressive-aphasias
#12
Chris J D Hardy, Jennifer L Agustus, Charles R Marshall, Camilla N Clark, Lucy L Russell, Rebecca L Bond, Emilie V Brotherhood, David L Thomas, Sebastian J Crutch, Jonathan D Rohrer, Jason D Warren
BACKGROUND: Non-verbal auditory impairment is increasingly recognised in the primary progressive aphasias (PPAs) but its relationship to speech processing and brain substrates has not been defined. Here we addressed these issues in patients representing the non-fluent variant (nfvPPA) and semantic variant (svPPA) syndromes of PPA. METHODS: We studied 19 patients with PPA in relation to 19 healthy older individuals. We manipulated three key auditory parameters-temporal regularity, phonemic spectral structure and prosodic predictability (an index of fundamental information content, or entropy)-in sequences of spoken syllables...
July 27, 2017: Alzheimer's Research & Therapy
https://www.readbyqxmd.com/read/28748487/predictions-of-speech-chimaera-intelligibility-using-auditory-nerve-mean-rate-and-spike-timing-neural-cues
#13
Michael R Wirtzfeld, Rasha A Ibrahim, Ian C Bruce
Perceptual studies of speech intelligibility have shown that slow variations of the acoustic envelope (ENV) in a small set of frequency bands provide adequate information for good perceptual performance in quiet, whereas acoustic temporal fine-structure (TFS) cues play a supporting role in background noise. However, the implications for neural coding are prone to misinterpretation because the mean-rate neural representation can contain recovered ENV cues from cochlear filtering of TFS. We investigated ENV recovery and spike-time TFS coding using objective measures of simulated mean-rate and spike-timing neural representations of chimaeric speech, in which either the ENV or the TFS is replaced by another signal...
July 26, 2017: Journal of the Association for Research in Otolaryngology: JARO
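
The chimaeric speech in entry #13 is constructed by splitting two signals into matched frequency bands, taking the Hilbert envelope (ENV) of each band from one signal and the temporal fine structure (TFS) of the corresponding band from the other, and summing the recombined bands. A compact sketch of that band-wise ENV/TFS swap is shown below; the band count, band edges, and test signals are arbitrary, not those of the study.

    # Speech-chimaera sketch: combine the band-wise Hilbert ENV of signal `a`
    # with the band-wise TFS (cosine of the instantaneous phase) of signal `b`.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def make_chimaera(a, b, fs, n_bands=8, f_lo=80.0, f_hi=7000.0):
        edges = np.geomspace(f_lo, f_hi, n_bands + 1)    # log-spaced band edges
        out = np.zeros_like(a)
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band_a = sosfiltfilt(sos, a)
            band_b = sosfiltfilt(sos, b)
            env = np.abs(hilbert(band_a))                # ENV from signal a
            tfs = np.cos(np.angle(hilbert(band_b)))      # TFS from signal b
            out += env * tfs
        return out

    fs = 16000
    t = np.arange(0, 1.0, 1 / fs)
    speech_like = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
    noise = np.random.default_rng(3).standard_normal(t.size)
    chimaera = make_chimaera(speech_like, noise, fs)     # speech-like ENV on noise TFS
    print(chimaera.shape)
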
https://www.readbyqxmd.com/read/28737705/emotion-recognition-from-chinese-speech-for-smart-affective-services-using-a-combination-of-svm-and-dbn
#14
Lianzhang Zhu, Leiming Chen, Dehai Zhao, Jiehan Zhou, Weishan Zhang
Accurate emotion recognition from speech is important for applications like smart health care, smart entertainment, and other smart services. High accuracy emotion recognition from Chinese speech is challenging due to the complexities of the Chinese language. In this paper, we explore how to improve the accuracy of speech emotion recognition, including speech signal feature extraction and emotion classification methods. Five types of features are extracted from a speech sample: mel frequency cepstrum coefficient (MFCC), pitch, formant, short-term zero-crossing rate and short-term energy...
July 24, 2017: Sensors
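
The front end described in entry #14 (MFCC, pitch, formants, short-term zero-crossing rate, short-term energy) can be approximated with off-the-shelf tools. The sketch below uses librosa for MFCC, pitch, zero-crossing rate, and RMS energy, and leaves formant tracking (usually done via LPC) as a placeholder comment because librosa has no built-in formant estimator; frame sizes are arbitrary, and the paper's SVM/DBN classification stage is not shown.

    # Frame-level feature extraction sketch for speech emotion recognition:
    # MFCC, pitch (fundamental frequency), short-term zero-crossing rate and energy.
    import numpy as np
    import librosa

    def extract_features(y, sr, n_mfcc=13, frame=2048, hop=512):
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                    n_fft=frame, hop_length=hop)
        f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr,
                         frame_length=frame, hop_length=hop)     # pitch track
        zcr = librosa.feature.zero_crossing_rate(y, frame_length=frame, hop_length=hop)
        rms = librosa.feature.rms(y=y, frame_length=frame, hop_length=hop)
        n = min(mfcc.shape[1], len(f0), zcr.shape[1], rms.shape[1])
        # One feature matrix: rows are frames, columns are features.
        # (Formant estimates, e.g. from LPC root finding, would be appended here.)
        return np.column_stack([mfcc[:, :n].T, f0[:n], zcr[0, :n], rms[0, :n]])

    y, sr = librosa.load(librosa.example("trumpet"))   # stand-in clip (downloads a bundled example)
    feats = extract_features(y, sr)
    print(feats.shape)                                 # (n_frames, n_mfcc + 3)
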
https://www.readbyqxmd.com/read/28735495/neuroanatomical-and-resting-state-eeg-power-correlates-of-central-hearing-loss-in-older-adults
#15
Nathalie Giroud, Sarah Hirsiger, Raphaela Muri, Andrea Kegel, Norbert Dillier, Martin Meyer
To gain more insight into central hearing loss, we investigated the relationship between cortical thickness and surface area, speech-relevant resting state EEG power, and above-threshold auditory measures in older adults and younger controls. Twenty-three older adults and 13 younger controls were tested with an adaptive auditory test battery to measure not only traditional pure-tone thresholds, but also above individual thresholds of temporal and spectral processing. The participants' speech recognition in noise (SiN) was evaluated, and a T1-weighted MRI image obtained for each participant...
July 22, 2017: Brain Structure & Function
https://www.readbyqxmd.com/read/28729443/theta-and-beta-band-neural-activity-reflect-independent-syllable-tracking-and-comprehension-of-time-compressed-speech
#16
Maria Pefkou, Luc H Arnal, Lorenzo Fontolan, Anne-Lise Giraud
Recent psychophysics data suggest that speech perception is not limited by the capacity of the auditory system to encode fast acoustic variations through neural gamma activity, but rather by the time given to the brain to decode them. Whether the decoding process is bounded by the capacity of theta rhythm to follow speech syllabic rhythm, or constrained by a more endogenous top-down mechanism, e.g. involving beta activity, is unknown. We addressed the dynamics of auditory decoding in speech comprehension by challenging syllable tracking and speech decoding using comprehensible and incomprehensible time-compressed auditory sentences...
July 20, 2017: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
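
The time-compressed sentences in entry #16 are typically produced by shortening the signal without shifting its pitch, for example with a phase-vocoder time stretch. A brief illustration using librosa is shown below; the compression factor and synthetic test signal are arbitrary.

    # Time-compression sketch: shorten a signal without changing its pitch
    # using librosa's phase-vocoder-based time stretching.
    import numpy as np
    import librosa

    sr = 16000
    t = np.arange(0, 2.0, 1 / sr)
    speech_like = np.sin(2 * np.pi * 150 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
    compressed = librosa.effects.time_stretch(speech_like, rate=3.0)  # ~1/3 duration, same pitch
    print(len(speech_like) / sr, len(compressed) / sr)
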
https://www.readbyqxmd.com/read/28727776/language-related-differences-of-the-sustained-response-evoked-by-natural-speech-sounds
#17
Christina Siu-Dschu Fan, Xingyu Zhu, Hans Günter Dosch, Christiane von Stutterheim, André Rupp
In tonal languages, such as Mandarin Chinese, the pitch contour of vowels discriminates lexical meaning, which is not the case in non-tonal languages such as German. Recent data provide evidence that pitch processing is influenced by language experience. However, there are still many open questions concerning the representation of such phonological and language-related differences at the level of the auditory cortex (AC). Using magnetoencephalography (MEG), we recorded transient and sustained auditory evoked fields (AEF) in native Chinese and German speakers to investigate language related phonological and semantic aspects in the processing of acoustic stimuli...
2017: PloS One
https://www.readbyqxmd.com/read/28722648/perspectives-on-the-pure-tone-audiogram
#18
REVIEW
Frank E Musiek, Jennifer Shinn, Gail D Chermak, Doris-Eva Bamiou
BACKGROUND: The pure-tone audiogram, though fundamental to audiology, presents limitations, especially in the case of central auditory involvement. Advances in auditory neuroscience underscore the considerably larger role of the central auditory nervous system (CANS) in hearing and related disorders. Given the availability of behavioral audiological tests and electrophysiological procedures that can provide better insights as to the function of the various components of the auditory system, this perspective piece reviews the limitations of the pure-tone audiogram and notes some of the advantages of other tests and procedures used in tandem with the pure-tone threshold measurement...
July 2017: Journal of the American Academy of Audiology
https://www.readbyqxmd.com/read/28691782/a-novel-microduplication-of-arid1b-clinical-genetic-and-proteomic-findings
#19
Catarina M Seabra, Nicholas Szoko, Serkan Erdin, Ashok Ragavendran, Alexei Stortchevoi, Patrícia Maciel, Kathleen Lundberg, Daniela Schlatzer, Janice Smith, Michael E Talkowski, James F Gusella, Marvin R Natowicz
Genetic alterations of ARID1B have been recently recognized as one of the most common mendelian causes of intellectual disability and are associated with both syndromic and non-syndromic phenotypes. The ARID1B protein, a subunit of the chromatin remodeling complex SWI/SNF-A, is involved in the regulation of transcription and multiple downstream cellular processes. We report here the clinical, genetic, and proteomic phenotypes of an individual with a unique apparent de novo mutation of ARID1B due to an intragenic duplication...
September 2017: American Journal of Medical Genetics. Part A
https://www.readbyqxmd.com/read/28679277/masking-release-for-hearing-impaired-listeners-the-effect-of-increased-audibility-through-reduction-of-amplitude-variability
#20
Joseph G Desloge, Charlotte M Reed, Louis D Braida, Zachary D Perez, Laura A D'Aquila
The masking release (i.e., better speech recognition in fluctuating compared to continuous noise backgrounds) observed for normal-hearing (NH) listeners is generally reduced or absent in hearing-impaired (HI) listeners. One explanation for this lies in the effects of reduced audibility: elevated thresholds may prevent HI listeners from taking advantage of signals available to NH listeners during the dips of temporally fluctuating noise where the interference is relatively weak. This hypothesis was addressed through the development of a signal-processing technique designed to increase the audibility of speech during dips in interrupted noise...
June 2017: Journal of the Acoustical Society of America
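
The masking release discussed in entry #20 is measured by comparing speech recognition in continuous noise against noise that is periodically interrupted, so that "dips" with little interference occur. A small sketch of generating the two masker types follows; the gating rate and duty cycle are arbitrary choices, and the study's audibility-enhancing signal processing is not reproduced.

    # Masker-generation sketch for masking-release experiments: continuous noise
    # versus square-wave interrupted noise with the same on-segment level.
    import numpy as np

    def maskers(duration_s, fs, gate_hz=10.0, duty=0.5, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        n = int(duration_s * fs)
        noise = rng.standard_normal(n)
        t = np.arange(n) / fs
        gate = ((t * gate_hz) % 1.0) < duty      # on/off square wave
        return noise * gate, noise               # (interrupted, continuous)

    fs = 16000
    interrupted, continuous = maskers(2.0, fs)
    # Speech mixed with `interrupted` leaves periodic dips that a listener (or an
    # audibility-enhancing algorithm, as in the study above) can exploit.
    print(interrupted.std(), continuous.std())
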