speech signal processing

https://www.readbyqxmd.com/read/28439236/auditory-visual-and-audiovisual-speech-processing-streams-in-superior-temporal-sulcus
#1
Jonathan H Venezia, Kenneth I Vaden, Feng Rong, Dale Maddox, Kourosh Saberi, Gregory Hickok
The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli in a block fMRI design...
2017: Frontiers in Human Neuroscience
https://www.readbyqxmd.com/read/28400328/you-talkin-to-me-communicative-talker-gaze-activates-left-lateralized-superior-temporal-cortex-during-perception-of-degraded-speech
#2
Carolyn McGettigan, Kyle Jasmin, Frank Eisner, Zarinah K Agnew, Oliver J Josephs, Andrew J Calder, Rosemary Jessop, Rebecca P Lawson, Mona Spielmann, Sophie K Scott
Neuroimaging studies of speech perception have consistently indicated a left-hemisphere dominance in the temporal lobes' responses to intelligible auditory speech signals (McGettigan and Scott, 2012). However, there are important communicative cues that cannot be extracted from auditory signals alone, including the direction of the talker's gaze. Previous work has implicated the superior temporal cortices in processing gaze direction, with evidence for predominantly right-lateralized responses (Carlin & Calder, 2013)...
April 8, 2017: Neuropsychologia
https://www.readbyqxmd.com/read/28400265/convergence-of-semantics-and-emotional-expression-within-the-ifg-pars-orbitalis
#3
Michel Belyk, Steven Brown, Jessica Lim, Sonja A Kotz
Humans communicate through a combination of linguistic and emotional channels, including propositional speech, writing, sign language, and music, as well as prosodic, facial, and gestural expression. These channels can be interpreted separately or they can be integrated to multimodally convey complex meanings. Neural models of the perception of semantics and emotion include nodes for both functions in the inferior frontal gyrus pars orbitalis (IFGorb). However, it is not known whether this convergence involves a common functional zone or instead specialized subregions that process semantics and emotion separately...
April 8, 2017: NeuroImage
https://www.readbyqxmd.com/read/28399064/multisensory-integration-in-cochlear-implant-recipients
#4
Ryan A Stevenson, Sterling W Sheffield, Iliza M Butera, René H Gifford, Mark T Wallace
Speech perception is inherently a multisensory process involving integration of auditory and visual cues. Multisensory integration in cochlear implant (CI) recipients is a unique circumstance in that the integration occurs after auditory deprivation and the provision of hearing via the CI. Despite the clear importance of multisensory cues for perception in general, and for speech intelligibility specifically, the topic of multisensory perceptual benefits in CI users has only recently begun to emerge as an area of inquiry...
April 10, 2017: Ear and Hearing
https://www.readbyqxmd.com/read/28395319/speech-rate-normalization-and-phonemic-boundary-perception-in-cochlear-implant-users
#5
Brittany N Jaekel, Rochelle S Newman, Matthew J Goupell
Purpose: Normal-hearing (NH) listeners rate normalize, temporarily remapping phonemic category boundaries to account for a talker's speech rate. It is unknown whether adults who use auditory prostheses called cochlear implants (CIs) can rate normalize, as CIs transmit degraded speech signals to the auditory nerve. Ineffective adjustment to rate information could explain some of the variability in this population's speech perception outcomes. Method: Phonemes with manipulated voice-onset-time (VOT) durations were embedded in sentences with different speech rates...
April 10, 2017: Journal of Speech, Language, and Hearing Research: JSLHR
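Category boundaries in studies like this one are typically estimated by fitting a psychometric function to identification responses along the VOT continuum; the boundary is the 50% point, and a shift in that point across speech rates indicates rate normalization. A minimal sketch of the conventional approach (the function name and starting values below are illustrative assumptions, not taken from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def category_boundary(vot_ms, prop_voiceless):
    """Fit a logistic psychometric function to identification data and
    return the VOT (ms) at the 50% point, i.e., the phonemic boundary."""
    def logistic(x, x0, k):
        return 1.0 / (1.0 + np.exp(-k * (x - x0)))
    (x0, k), _ = curve_fit(logistic, vot_ms, prop_voiceless, p0=[30.0, 0.5])
    return x0
```

Comparing the boundary estimated from fast-rate sentences with the one from slow-rate sentences would then quantify the size of any normalization effect.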
https://www.readbyqxmd.com/read/28379608/neural-mechanisms-for-integrating-consecutive-and-interleaved-natural-events
#6
Juha M Lahnakoski, Iiro P Jääskeläinen, Mikko Sams, Lauri Nummenmaa
To understand temporally extended events, the human brain needs to accumulate information continuously across time. Interruptions that require switching of attention to other event sequences disrupt this process. To reveal neural mechanisms supporting integration of event information, we measured brain activity with functional magnetic resonance imaging (fMRI) from 18 participants while they viewed 6.5-minute excerpts from three movies (i) consecutively and (ii) as interleaved segments of approximately 50 s in duration...
April 5, 2017: Human Brain Mapping
https://www.readbyqxmd.com/read/28373850/the-contribution-of-brainstem-and-cerebellar-pathways-to-auditory-recognition
#7
REVIEW
Neil M McLachlan, Sarah J Wilson
The cerebellum has been known to play an important role in motor functions for many years. More recently its role has been expanded to include a range of cognitive and sensory-motor processes, and substantial neuroimaging and clinical evidence now points to cerebellar involvement in most auditory processing tasks. In particular, an increase in the size of the cerebellum over recent human evolution has been attributed in part to the development of speech. Despite this, the auditory cognition literature has largely overlooked afferent auditory connections to the cerebellum that have been implicated in acoustically conditioned reflexes in animals, and could subserve speech and other auditory processing in humans...
2017: Frontiers in Psychology
https://www.readbyqxmd.com/read/28372055/assessing-the-efficacy-of-hearing-aid-amplification-using-a-phoneme-test
#8
Christoph Scheidiger, Jont B Allen, Torsten Dau
Consonant-vowel (CV) perception experiments provide valuable insights into how humans process speech. Here, two CV identification experiments were conducted in a group of hearing-impaired (HI) listeners, using 14 consonants followed by the vowel /ɑ/. The CVs were presented in quiet and with added speech-shaped noise at signal-to-noise ratios of 0, 6, and 12 dB. The HI listeners were provided with two different amplification schemes for the CVs. In the first experiment, a frequency-independent amplification (flat-gain) was provided and the CVs were presented at the most-comfortable loudness level...
March 2017: Journal of the Acoustical Society of America
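Presenting stimuli "at signal-to-noise ratios of 0, 6, and 12 dB," as in the study above, means scaling the masker relative to the speech power before mixing. A minimal sketch of that scaling, assuming the speech-shaped noise waveform is already available (this is standard practice, not the authors' specific code):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that 10*log10(P_speech / P_noise) == snr_db,
    then return the speech-plus-noise mixture."""
    p_speech = np.mean(speech ** 2)                   # average speech power
    p_noise = np.mean(noise ** 2)                     # average noise power
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    return speech + noise * np.sqrt(target_p_noise / p_noise)
```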
https://www.readbyqxmd.com/read/28372048/pure-linguistic-interference-during-comprehension-of-competing-speech-signals
#9
Bohan Dai, James M McQueen, Peter Hagoort, Anne Kösem
Speech-in-speech perception can be challenging because the processing of competing acoustic and linguistic information leads to informational masking. Here, a method is proposed to isolate the linguistic component of informational masking while keeping the distractor's acoustic information unchanged. Participants performed a dichotic listening cocktail-party task before and after training on 4-band noise-vocoded sentences that became intelligible through the training. Distracting noise-vocoded speech interfered more with target speech comprehension after training (i...
March 2017: Journal of the Acoustical Society of America
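The "4-band noise-vocoded sentences" above refer to a standard manipulation: the speech signal is split into frequency bands, each band's amplitude envelope is extracted, and the envelopes modulate band-limited noise carriers, preserving temporal cues while discarding spectral fine structure. A minimal sketch of a generic noise vocoder (the band edges and filter order are illustrative assumptions, not the study's exact parameters):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_bands=4, lo=100.0, hi=6000.0):
    """Replace the spectral detail in x with envelope-modulated noise,
    band by band, and sum the bands."""
    edges = np.geomspace(lo, hi, n_bands + 1)         # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros(len(x))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))                   # Hilbert envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += env * carrier
    return out / np.max(np.abs(out))                  # peak-normalize
```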
https://www.readbyqxmd.com/read/28372042/the-influence-of-signal-type-on-perceived-reverberance
#10
Elizabeth Teret, M Torben Pastore, Jonas Braasch
Currently, architectural room acoustic metrics make no real distinction between a room impulse response and the auditory system's internal representation of a room. These metrics are generally based on impulse responses, and indirectly assume that the internal representation of the acoustic features of a room is independent of the sound source. However, while a room can be approximated as a linear, time-invariant system, auditory processing is highly non-linear and varies a great deal over time in response to different acoustic inputs...
March 2017: Journal of the Acoustical Society of America
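The impulse-response-based metrics this abstract refers to are typically computed by Schroeder backward integration of the room impulse response. As one example, a minimal sketch of reverberation time, with T60 extrapolated from the -5 to -35 dB decay range (a common convention, not necessarily the one used in this study):

```python
import numpy as np

def reverberation_time(rir, fs, decay_db=30.0):
    """Estimate T60 from a room impulse response: backward-integrate the
    squared RIR (Schroeder), fit the decay in dB, extrapolate to -60 dB."""
    edc = np.cumsum(rir[::-1] ** 2)[::-1]             # energy decay curve
    edc_db = 10 * np.log10(edc / edc[0])
    i0 = np.argmax(edc_db <= -5.0)                    # start of fit range
    i1 = np.argmax(edc_db <= -(5.0 + decay_db))       # end of fit range
    t = np.arange(len(rir)) / fs
    slope, _ = np.polyfit(t[i0:i1], edc_db[i0:i1], 1)
    return -60.0 / slope                              # seconds to decay 60 dB
```

The study's point is that any such metric is a fixed function of the room alone, whereas the auditory system's response to that room varies with the source signal.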
https://www.readbyqxmd.com/read/28360829/activation-and-functional-connectivity-of-the-left-inferior-temporal-gyrus-during-visual-speech-priming-in-healthy-listeners-and-listeners-with-schizophrenia
#11
Chao Wu, Yingjun Zheng, Juanhua Li, Bei Zhang, Ruikeng Li, Haibo Wu, Shenglin She, Sha Liu, Hongjun Peng, Yuping Ning, Liang Li
Under a "cocktail-party" listening condition with multiple-people talking, compared to healthy people, people with schizophrenia benefit less from the use of visual-speech (lipreading) priming (VSP) cues to improve speech recognition. The neural mechanisms underlying the unmasking effect of VSP remain unknown. This study investigated the brain substrates underlying the unmasking effect of VSP in healthy listeners and the schizophrenia-induced changes in the brain substrates. Using functional magnetic resonance imaging, brain activation and functional connectivity for the contrasts of the VSP listening condition vs...
2017: Frontiers in Neuroscience
https://www.readbyqxmd.com/read/28355951/output-signal-to-noise-ratio-and-speech-perception-in-noise-effects-of-algorithm
#12
Christi W Miller, Ruth A Bentler, Yu-Hsiang Wu, James Lewis, Kelly Tremblay
OBJECTIVE: The aims of this study were to: 1) quantify the amount of change in signal-to-noise ratio (SNR) as a result of compression and noise reduction (NR) processing in devices from three hearing aid (HA) manufacturers and 2) use the SNR changes to predict changes in speech perception. We hypothesised that the SNR change would differ across processing type and manufacturer, and that improvements in SNR would relate to improvements in performance. DESIGN: SNR at the output of the HAs was quantified using a phase-inversion technique...
March 30, 2017: International Journal of Audiology
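The phase-inversion technique named in the DESIGN section estimates the SNR at the hearing aid output from two recordings: one with speech plus noise, and one with the same speech but the noise polarity inverted. Assuming the recordings are time-aligned and the device processing is approximately unchanged across them, a minimal sketch:

```python
import numpy as np

def output_snr_db(rec_noise, rec_inverted_noise):
    """rec_noise ~ s + n and rec_inverted_noise ~ s - n at the device
    output. Half-sum isolates the speech; half-difference isolates
    the noise."""
    speech = 0.5 * (rec_noise + rec_inverted_noise)
    noise = 0.5 * (rec_noise - rec_inverted_noise)
    return 10 * np.log10(np.mean(speech ** 2) / np.mean(noise ** 2))
```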
https://www.readbyqxmd.com/read/28348400/faster-phonological-processing-and-right-occipito-temporal-coupling-in-deaf-adults-signal-poor-cochlear-implant-outcome
#13
Diane S Lazard, Anne-Lise Giraud
The outcome of adult cochlear implantation is predicted positively by the involvement of visual cortex in speech processing, and negatively by the cross-modal recruitment of the right temporal cortex during and after deafness. How these two neurofunctional predictors concur to modulate cochlear implant (CI) performance remains unclear. In this fMRI study, we explore the joint involvement of occipital and right hemisphere regions in a visual-based phonological task in post-lingual deafness. Intriguingly, we show that some deaf subjects perform faster than controls...
March 28, 2017: Nature Communications
https://www.readbyqxmd.com/read/28343629/biallelic-variants-in-otud6b-cause-an-intellectual-disability-syndrome-associated-with-seizures-and-dysmorphic-features
#14
Teresa Santiago-Sim, Lindsay C Burrage, Frédéric Ebstein, Mari J Tokita, Marcus Miller, Weimin Bi, Alicia A Braxton, Jill A Rosenfeld, Maher Shahrour, Andrea Lehmann, Benjamin Cogné, Sébastien Küry, Thomas Besnard, Bertrand Isidor, Stéphane Bézieau, Isabelle Hazart, Honey Nagakura, LaDonna L Immken, Rebecca O Littlejohn, Elizabeth Roeder, Bulent Kara, Katia Hardies, Sarah Weckhuysen, Patrick May, Johannes R Lemke, Orly Elpeleg, Bassam Abu-Libdeh, Kiely N James, Jennifer L Silhavy, Mahmoud Y Issa, Maha S Zaki, Joseph G Gleeson, John R Seavitt, Mary E Dickinson, M Cecilia Ljungberg, Sara Wells, Sara J Johnson, Lydia Teboul, Christine M Eng, Yaping Yang, Peter-Michael Kloetzel, Jason D Heaney, Magdalena A Walkiewicz
Ubiquitination is a posttranslational modification that regulates many cellular processes including protein degradation, intracellular trafficking, cell signaling, and protein-protein interactions. Deubiquitinating enzymes (DUBs), which reverse the process of ubiquitination, are important regulators of the ubiquitin system. OTUD6B encodes a member of the ovarian tumor domain (OTU)-containing subfamily of deubiquitinating enzymes. Herein, we report biallelic pathogenic variants in OTUD6B in 12 individuals from 6 independent families with an intellectual disability syndrome associated with seizures and dysmorphic features...
April 6, 2017: American Journal of Human Genetics
https://www.readbyqxmd.com/read/28335558/modeling-the-development-of-audiovisual-cue-integration-in-speech-perception
#15
Laura M Getz, Elke R Nordeen, Sarah C Vrabic, Joseph C Toscano
Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories...
March 21, 2017: Brain Sciences
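A common benchmark for mature audiovisual integration (not necessarily the developmental model proposed in this paper) is reliability-weighted cue combination, in which each modality's estimate is weighted by its inverse variance. A minimal sketch:

```python
def fuse_cues(mu_a, var_a, mu_v, var_v):
    """Maximum-likelihood fusion of an auditory estimate (mu_a, var_a)
    and a visual estimate (mu_v, var_v): weights are inversely
    proportional to each cue's variance."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    fused_mean = w_a * mu_a + (1 - w_a) * mu_v
    fused_var = 1.0 / (1 / var_a + 1 / var_v)
    return fused_mean, fused_var
```

On this view, infants' failure to benefit from combining modalities could be framed as weights not yet calibrated to each cue's reliability.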
https://www.readbyqxmd.com/read/28331007/advantages-of-comparative-studies-in-songbirds-to-understand-the-neural-basis-of-sensorimotor-integration
#16
Karagh Murphy, Logan S James, Jon T Sakata, Jonathan F Prather
Sensorimotor integration is the process through which the nervous system creates a link between motor commands and associated sensory feedback. This process allows for the acquisition and refinement of many behaviors, including learned communication behaviors like speech and birdsong. Consequently, it is important to understand fundamental mechanisms of sensorimotor integration, and comparative analyses of this process can provide vital insight. Songbirds offer a powerful comparative model system to study how the nervous system links motor and sensory information for learning and control...
March 22, 2017: Journal of Neurophysiology
https://www.readbyqxmd.com/read/28320627/maximal-ambient-noise-levels-and-type-of-voice-material-required-for-valid-use-of-smartphones-in-clinical-voice-research
#17
Jean Lebacq, Jean Schoentgen, Giovanna Cantarella, Franz Thomas Bruss, Claudia Manfredi, Philippe DeJonckere
PURPOSE: Smartphone technology provides new opportunities for recording standardized voice samples of patients and transmitting the audio files to the voice laboratory. This greatly facilitates the baseline designs used in research on the efficacy of voice treatments. However, the basic requirement is the suitability of smartphones for recording and digitizing pathologic voices (mainly characterized by period perturbations and noise) without significant distortion. In a previous article, this was tested using realistic synthesized deviant voice samples (/a:/) with three precisely known levels of jitter and of noise in all combinations...
March 17, 2017: Journal of Voice: Official Journal of the Voice Foundation
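The "precisely known levels of jitter" in the synthesized test stimuli refer to cycle-to-cycle period perturbation. Given a sequence of extracted glottal period durations, local jitter is conventionally computed as follows (a generic definition, not the authors' specific analysis pipeline):

```python
import numpy as np

def local_jitter_percent(periods_ms):
    """Local jitter: mean absolute difference between consecutive
    glottal periods, expressed as a percentage of the mean period."""
    p = np.asarray(periods_ms, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(p))) / np.mean(p)
```

A recording chain is then "suitable" in the study's sense if the jitter and noise estimates it returns stay close to the known synthesis values.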
https://www.readbyqxmd.com/read/28303412/auditory-enhancement-in-cochlear-implant-users-under-simultaneous-and-forward-masking
#18
Heather A Kreft, Andrew J Oxenham
Auditory enhancement is the phenomenon whereby the salience or detectability of a target sound within a masker is enhanced by the prior presentation of the masker alone. Enhancement has been demonstrated using both simultaneous and forward masking in normal-hearing listeners and may play an important role in auditory and speech perception within complex and time-varying acoustic environments. The few studies of enhancement in hearing-impaired listeners have reported reduced or absent enhancement effects under forward masking, suggesting a potentially peripheral locus of the effect...
March 16, 2017: Journal of the Association for Research in Otolaryngology: JARO
https://www.readbyqxmd.com/read/28284736/contributions-of-sensory-tuning-to-auditory-vocal-interactions-in-marmoset-auditory-cortex
#19
Steven J Eliades, Xiaoqin Wang
During speech, humans continuously listen to their own vocal output to ensure accurate communication. Such self-monitoring is thought to require the integration of vocal acoustic feedback with internal motor control signals. The neural mechanism of this auditory-vocal interaction remains largely unknown at the cellular level. Previous studies in naturally vocalizing marmosets have demonstrated diverse neural activities in auditory cortex during vocalization, dominated by a vocalization-induced suppression of neural firing...
May 2017: Hearing Research
https://www.readbyqxmd.com/read/28277212/the-effect-of-signal-to-noise-ratio-on-linguistic-processing-in-a-semantic-judgment-task-an-aging-study
#20
Nicholas Stanley, Tara Davis, Julie Estis
BACKGROUND: Aging effects on speech understanding in noise have primarily been assessed through speech recognition tasks. Recognition tasks, which focus on bottom-up, perceptual aspects of speech understanding, intentionally limit linguistic and cognitive factors by asking participants to only repeat what they have heard. On the other hand, linguistic processing tasks require bottom-up and top-down (linguistic, cognitive) processing skills and are, therefore, more reflective of speech understanding abilities used in everyday communication...
March 2017: Journal of the American Academy of Audiology