Read by QxMD

Search results for "speech signal processing"
https://www.readbyqxmd.com/read/29928594/speech-on-speech-masking-and-psychotic-symptoms-in-schizophrenia
#1
Chao Wu, Chuanyue Wang, Liang Li
People with schizophrenia have impairments of target-speech recognition (TSR) in noisy environments with multiple people talking. This study investigated whether the TSR impairment in schizophrenia is associated with their impaired auditory working memory or certain psychotic symptoms. Thirty participants with schizophrenia (mean age = 35.2 ± 12.7 years) and 30 demographics-matched healthy controls (mean age = 32.9 ± 10.9 years) were tested for their TSR against a two-talker-speech masker...
June 2018: Schizophrenia Research. Cognition
https://www.readbyqxmd.com/read/29916792/alpha-and-beta-oscillations-index-semantic-congruency-between-speech-and-gestures-in-clear-and-degraded-speech
#2
Linda Drijvers, Asli Özyürek, Ole Jensen
Previous work revealed that visual semantic information conveyed by gestures can enhance degraded speech comprehension, but the mechanisms underlying these integration processes under adverse listening conditions remain poorly understood. We used MEG to investigate how oscillatory dynamics support speech-gesture integration when integration load is manipulated by auditory (e.g., speech degradation) and visual semantic (e.g., gesture congruency) factors. Participants were presented with videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching (mixing gesture + "mixing") or mismatching (drinking gesture + "walking") gesture...
June 19, 2018: Journal of Cognitive Neuroscience
https://www.readbyqxmd.com/read/29911176/differences-in-hearing-acuity-among-normal-hearing-young-adults-modulate-the-neural-basis-for-speech-comprehension
#3
Yune S Lee, Arthur Wingfield, Nam-Eun Min, Ethan Kotloff, Murray Grossman, Jonathan E Peelle
In this paper, we investigate how subtle differences in hearing acuity affect the neural systems supporting speech processing in young adults. Auditory sentence comprehension requires perceiving a complex acoustic signal and performing linguistic operations to extract the correct meaning. We used functional MRI to monitor human brain activity while adults aged 18-41 years listened to spoken sentences. The sentences varied in their level of syntactic processing demands, containing either a subject-relative or object-relative center-embedded clause...
May 2018: ENeuro
https://www.readbyqxmd.com/read/29905670/behavioral-measures-of-listening-effort-in-school-age-children-examining-the-effects-of-signal-to-noise-ratio-hearing-loss-and-amplification
#4
Ronan McGarrigle, Samantha J Gustafson, Benjamin W Y Hornsby, Fred H Bess
OBJECTIVES: Increased listening effort in school-age children with hearing loss (CHL) could compromise learning and academic achievement. Identifying a sensitive behavioral measure of listening effort for this group could have both clinical and research value. This study examined the effects of signal-to-noise ratio (SNR), hearing loss, and personal amplification on 2 commonly used behavioral measures of listening effort: dual-task visual response times (visual RTs) and verbal response times (verbal RTs)...
June 13, 2018: Ear and Hearing
https://www.readbyqxmd.com/read/29900799/influence-of-multi-microphone-signal-enhancement-algorithms-on-the-acoustics-and-detectability-of-angular-and-radial-source-movements
#5
Micha Lundbeck, Laura Hartog, Giso Grimm, Volker Hohmann, Lars Bramsløw, Tobias Neher
Hearing-impaired listeners are known to have difficulties not only with understanding speech in noise but also with judging source distance and movement, and these deficits are related to perceived handicap. It is possible that the perception of spatially dynamic sounds can be improved with hearing aids (HAs), but so far this has not been investigated. In a previous study, older hearing-impaired listeners showed poorer detectability for virtual left-right (angular) and near-far (radial) source movements due to lateral interfering sounds and reverberation, respectively...
January 2018: Trends in Hearing
https://www.readbyqxmd.com/read/29891730/neural-prediction-errors-distinguish-perception-and-misperception-of-speech
#6
Helen Blank, Marlene Spangenberg, Matthew H Davis
Humans use prior expectations to improve perception, especially of sensory signals that are degraded or ambiguous. However, if sensory input deviates from prior expectations, correct perception depends on adjusting or rejecting those expectations. Failure to adjust or reject the prior leads to perceptual illusions, especially if there is partial overlap (and hence partial mismatch) between expectations and input. With speech, "slips of the ear" occur when expectations lead to misperception. For instance, an entomologist might be more susceptible to hearing "The ants are my friends" for "The answer, my friend" (in the Bob Dylan song "Blowin' in the Wind")...
June 11, 2018: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
https://www.readbyqxmd.com/read/29888819/electrocorticography-reveals-continuous-auditory-and-visual-speech-tracking-in-temporal-and-occipital-cortex
#7
C Micheli, I M Schepers, M Ozker, D Yoshor, M S Beauchamp, J W Rieger
During natural speech perception, humans must parse temporally continuous auditory and visual speech signals into sequences of words. However, most studies of speech perception present only single words or syllables. We used electrocorticography (subdural electrodes implanted on the brains of epileptic patients) to investigate the neural mechanisms for processing continuous audiovisual speech signals consisting of individual sentences. Using partial correlation analysis, we found that posterior superior temporal gyrus (pSTG) and medial occipital cortex tracked both the auditory and visual speech envelopes...
June 11, 2018: European Journal of Neuroscience
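The Micheli et al. abstract mentions partial correlation analysis, which asks how strongly a neural signal tracks one stimulus feature (e.g., the auditory envelope) after removing what is shared with another (e.g., the visual envelope). A minimal sketch of that idea, with synthetic signals standing in for real ECoG and envelope data (the function name and simulated data are illustrative, not the authors' code):

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after linearly regressing z out of both.

    x, y : 1-D signals (e.g., a neural response and a stimulus envelope)
    z    : 1-D confound (e.g., the other, correlated envelope)
    """
    zd = np.column_stack([z, np.ones_like(z)])           # regressor + intercept
    rx = x - zd @ np.linalg.lstsq(zd, x, rcond=None)[0]  # residual of x given z
    ry = y - zd @ np.linalg.lstsq(zd, y, rcond=None)[0]  # residual of y given z
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
aud = rng.standard_normal(5000)                 # stand-in auditory envelope
vis = 0.6 * aud + rng.standard_normal(5000)     # visual envelope, correlated with audio
neural = aud + 0.1 * rng.standard_normal(5000)  # response that tracks audio only

# The plain correlation with the visual envelope is inflated by the shared
# audio component; the partial correlation (controlling for aud) collapses.
r_plain = np.corrcoef(neural, vis)[0, 1]
r_partial = partial_corr(neural, vis, aud)
```

Because audiovisual speech envelopes are naturally correlated, this kind of control is what lets the study attribute tracking to one modality rather than the other.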
https://www.readbyqxmd.com/read/29867654/exploring-the-link-between-cognitive-abilities-and-speech-recognition-in-the-elderly-under-different-listening-conditions
#8
Theresa Nuesse, Rike Steenken, Tobias Neher, Inga Holube
Elderly listeners are known to differ considerably in their ability to understand speech in noise. Several studies have addressed the underlying factors that contribute to these differences. These factors include audibility and age-related changes in supra-threshold auditory processing abilities, and it has been suggested that differences in cognitive abilities may also be important. The objective of this study was to investigate associations between performance in cognitive tasks and speech recognition under different listening conditions in older adults with either age-appropriate hearing or hearing impairment...
2018: Frontiers in Psychology
https://www.readbyqxmd.com/read/29863462/mild-gain-hearing-aids-as-a-treatment-for-adults-with-self-reported-hearing-difficulties
#9
Christina M Roup, Emily Post, Jessica Lewis
BACKGROUND: There is a growing body of evidence demonstrating self-reported hearing difficulties (HD; i.e., substantial difficulty in understanding speech in complex listening situations) in adults with normal pure-tone sensitivity. Anecdotally, some audiologists have tried personal mild-gain amplification as a treatment option for adults with HD. In 2008, Kuk and colleagues reported positive results of a mild-gain hearing aid trial for children with auditory processing disorders. To date, however, there have been no studies investigating the benefit of mild-gain amplification to treat HD in adults with normal audiograms...
June 2018: Journal of the American Academy of Audiology
https://www.readbyqxmd.com/read/29862161/shifts-in-audiovisual-processing-in-healthy-aging
#10
Sarah H Baum, Ryan Stevenson
Purpose of Review: The integration of information across sensory modalities into unified percepts is a fundamental sensory process upon which a multitude of cognitive processes are based. We review the body of literature exploring aging-related changes in audiovisual integration published over the last five years. Specifically, we review the impact of changes in temporal processing, the influence of the effectiveness of sensory inputs, the role of working memory, and the newer studies of intra-individual variability during these processes...
September 2017: Current Behavioral Neuroscience Reports
https://www.readbyqxmd.com/read/29861132/a-spatial-map-of-onset-and-sustained-responses-to-speech-in-the-human-superior-temporal-gyrus
#11
Liberty S Hamilton, Erik Edwards, Edward F Chang
To derive meaning from speech, we must extract multiple dimensions of concurrent information from incoming speech signals. That is, equally important to processing phonetic features is the detection of acoustic cues that give structure and context to the information we hear. How the brain organizes this information is unknown. Using data-driven computational methods on high-density intracranial recordings from 27 human participants, we reveal the functional distinction of neural responses to speech in the posterior superior temporal gyrus according to either onset or sustained response profiles...
May 25, 2018: Current Biology: CB
https://www.readbyqxmd.com/read/29857755/a-flexible-high-directivity-beamformer-with-spherical-microphone-arrays
#12
Gongping Huang, Jingdong Chen, Jacob Benesty
The maximum directivity (MD) beamformer with spherical microphone arrays has many salient features in processing broadband acoustic and speech signals while suppressing noise and reverberation; but it is sensitive to sensors' self-noise and mismatch among these sensors. One effective way to deal with this sensitivity is by increasing the number of microphones, thereby improving the so-called white noise gain (WNG), but this increase may lead to many other design issues in terms of cost, array aperture, and possibly other performance degradation...
May 2018: Journal of the Acoustical Society of America
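The Huang et al. abstract turns on the white noise gain (WNG), a standard robustness measure for beamformers: the array gain against spatially white sensor self-noise, WNG = |wᴴd|² / (wᴴw) for weights w and steering vector d. A small sketch (illustrative, not the authors' spherical-array design) showing why adding microphones helps — for matched delay-and-sum weights the WNG equals the microphone count M:

```python
import numpy as np

def white_noise_gain(w, d):
    """WNG = |w^H d|^2 / (w^H w): gain against spatially white sensor
    self-noise for beamformer weights w and steering vector d."""
    return np.abs(np.vdot(w, d)) ** 2 / np.real(np.vdot(w, w))

rng = np.random.default_rng(1)
for m in (4, 8, 16, 32):
    # Unit-modulus steering vector; the phases would come from the array
    # geometry and frequency, but they cancel in the WNG for matched weights.
    d = np.exp(1j * 2 * np.pi * rng.random(m))
    w = d / m  # matched (delay-and-sum) weights
    # white_noise_gain(w, d) == m: doubling the array buys 3 dB of
    # robustness to sensor self-noise and mismatch.
```

Maximum-directivity weights trade this robustness for directivity, which is the sensitivity problem the paper addresses without simply adding more sensors.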
https://www.readbyqxmd.com/read/29857737/speech-intelligibility-in-rooms-disrupting-the-effect-of-prior-listening-exposure
#13
Eugene J Brandewie, Pavel Zahorik
It has been demonstrated that prior listening exposure to reverberant environments can improve speech understanding in that environment. Previous studies have shown that the buildup of this effect is brief (less than 1 s) and seems largely to be elicited by exposure to the temporal modulation characteristics of the room environment. Situations that might be expected to cause a disruption in this process have yet to be demonstrated. This study seeks to address this issue by showing what types of changes in the acoustic environment cause a breakdown of the room exposure phenomenon...
May 2018: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/29845159/flexible-hemispheric-microarrays-of-highly-pressure-sensitive-sensors-based-on-breath-figure-method
#14
Zhihui Wang, Ling Zhang, Jin Liu, Hao Jiang, Chunzhong Li
Recently, flexible pressure sensors featuring high sensitivity, broad sensing range and real-time detection have aroused great attention owing to their crucial role in the development of artificial-intelligence devices and healthcare systems. Herein, highly sensitive pressure sensors based on hemisphere-microarray flexible substrates are fabricated via inversely templating honeycomb structures derived from a facile and static breath-figure process. The interlocked and subtle microstructures greatly improve the sensing characteristics and compressibility of the as-prepared pressure sensor, endowing it with a sensitivity as high as 196 kPa⁻¹ and a wide pressure-sensing range (0-100 kPa), as well as other superior performance, including a lower detection limit of 0...
May 30, 2018: Nanoscale
https://www.readbyqxmd.com/read/29802881/taking-attention-away-from-the-auditory-modality-context-dependent-effects-on-early-sensory-representation-of-speech
#15
Zilong Xie, Rachel Reetzke, Bharath Chandrasekaran
Increasing visual perceptual load can reduce pre-attentive auditory cortical activity to sounds, a reflection of the limited and shared attentional resources for sensory processing across modalities. Here we demonstrate that modulating visual perceptual load can impact the early sensory representation of speech sounds, and that the impact of visual load is highly dependent on the predictability of the incoming speech stream. Participants (n = 20, 9 females) performed a visual search task of high (target similar to distractors) and low (target dissimilar to distractors) perceptual load, while early auditory electrophysiological responses were recorded to task-irrelevant native speech sounds...
May 24, 2018: Neuroscience
https://www.readbyqxmd.com/read/29801981/eyes-and-ears-using-eye-tracking-and-pupillometry-to-understand-challenges-to-speech-recognition
#16
REVIEW
Kristin J Van Engen, Drew J McLaughlin
Although human speech recognition is often experienced as relatively effortless, a number of common challenges can render the task more difficult. Such challenges may originate in talkers (e.g., unfamiliar accents, varying speech styles), the environment (e.g. noise), or in listeners themselves (e.g., hearing loss, aging, different native language backgrounds). Each of these challenges can reduce the intelligibility of spoken language, but even when intelligibility remains high, they can place greater processing demands on listeners...
May 4, 2018: Hearing Research
https://www.readbyqxmd.com/read/29790122/talking-points-a-modulating-circle-reduces-listening-effort-without-improving-speech-recognition
#17
Julia F Strand, Violet A Brown, Dennis L Barbour
Speech recognition is improved when the acoustic input is accompanied by visual cues provided by a talking face (Erber in Journal of Speech and Hearing Research, 12(2), 423-425, 1969; Sumby & Pollack in The Journal of the Acoustical Society of America, 26(2), 212-215, 1954). One way that the visual signal facilitates speech recognition is by providing the listener with information about fine phonetic detail that complements information from the auditory signal. However, given that degraded face stimuli can still improve speech recognition accuracy (Munhall et al...
May 22, 2018: Psychonomic Bulletin & Review
https://www.readbyqxmd.com/read/29779607/impact-of-snr-masker-type-and-noise-reduction-processing-on-sentence-recognition-performance-and-listening-effort-as-indicated-by-the-pupil-dilation-response
#18
Barbara Ohlenforst, Dorothea Wendt, Sophia E Kramer, Graham Naylor, Adriana A Zekveld, Thomas Lunner
Recent studies have shown that activating the noise reduction scheme in hearing aids results in a smaller peak pupil dilation (PPD), indicating reduced listening effort, at 50% and 95% correct sentence recognition with a 4-talker masker. The objective of this study was to measure the effect of the noise reduction scheme (on or off) on PPD and sentence recognition across a wide range of signal-to-noise ratios (SNRs) from +16 dB to -12 dB and two masker types (4-talker and stationary noise). Relatively low PPDs were observed at very low (-12 dB) and very high (+16 dB to +8 dB) SNRs, presumably due to 'giving up' and 'easy listening', respectively...
May 6, 2018: Hearing Research
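Studies like Ohlenforst et al. present stimuli at nominal SNRs (here +16 dB down to -12 dB), which is typically done by scaling the masker relative to the speech before mixing. A minimal sketch of that construction, using broadband RMS power and synthetic signals in place of real recordings (the function name and sample rate are illustrative):

```python
import numpy as np

def mix_at_snr(speech, masker, snr_db):
    """Scale the masker so that 10*log10(P_speech / P_masker) equals
    snr_db, then return the mixture (broadband RMS definition of SNR)."""
    p_s = np.mean(speech ** 2)
    p_m = np.mean(masker ** 2)
    gain = np.sqrt(p_s / (p_m * 10 ** (snr_db / 10.0)))
    return speech + gain * masker

rng = np.random.default_rng(2)
speech = rng.standard_normal(16000)  # 1 s of stand-in "speech" at 16 kHz
masker = rng.standard_normal(16000)  # stationary-noise masker
mix = mix_at_snr(speech, masker, -12.0)  # the study's hardest condition
```

With a fluctuating masker such as 4-talker babble, the same broadband scaling applies, but the moment-to-moment SNR varies, which is one reason masker type and nominal SNR interact in effort measures.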
https://www.readbyqxmd.com/read/29774624/what-does-it-take-to-stress-a-word-digital-manipulation-of-stress-markers-in-ataxic-dysarthria
#19
Anja Lowit, Tolulope Ijitona, Anja Kuschmann, Stephen Corson, John Soraghan
BACKGROUND: Stress production is important for effective communication, but this skill is frequently impaired in people with motor speech disorders. The literature reports successful treatment of these deficits in this population, thus highlighting the therapeutic potential of this area. However, no specific guidance is currently available to clinicians about whether any of the stress markers are more effective than others, to what degree they have to be manipulated, and whether strategies need to differ according to the underlying symptoms...
May 18, 2018: International Journal of Language & Communication Disorders
https://www.readbyqxmd.com/read/29771359/neural-bases-of-social-communicative-intentions-in-speech
#20
Nele Hellbernd, Daniela Sammler
Our ability to understand others' communicative intentions in speech is key to successful social interaction. Indeed, misunderstanding an "excuse me" as an apology when it was meant as criticism may have important consequences. Recent behavioural studies have provided evidence that prosody, i.e., vocal tone, is an important indicator of speakers' intentions. Using a novel audio-morphing paradigm, the present fMRI study examined the neurocognitive mechanisms that allow listeners to 'read' speakers' intents from vocal-prosodic patterns...
May 16, 2018: Social Cognitive and Affective Neuroscience
48,120 results
