Search results for "speech signal processing" (Read by QxMD)

https://www.readbyqxmd.com/read/28544871/signs-of-social-class-the-experience-of-economic-inequality-in-everyday-life
#1
Michael W Kraus, Jun Won Park, Jacinth J X Tan
By some accounts, global economic inequality is at its highest point on record. The pernicious effects of this broad societal trend are striking: Rising inequality is linked to poorer health and well-being across countries, continents, and cultures. The economic and psychological forces that perpetuate inequality continue to be studied, and in this theoretical review, we examine the role of daily experiences of economic inequality (the communication of social class signals between interaction partners) in this process...
May 2017: Perspectives on Psychological Science: a Journal of the Association for Psychological Science
https://www.readbyqxmd.com/read/28534732/evaluation-of-adaptive-noise-management-technologies-for-school-age-children-with-hearing-loss
#2
Jace Wolfe, Mila Duke, Erin Schafer, Christine Jones, Lori Rakita
BACKGROUND: Children with hearing loss experience significant difficulty understanding speech in noisy and reverberant situations. Adaptive noise management technologies, such as fully adaptive directional microphones and digital noise reduction, have the potential to improve communication in noise for children with hearing aids. However, there are no published studies evaluating the potential benefits children receive from the use of adaptive noise management technologies in simulated real-world environments as well as in daily situations...
May 2017: Journal of the American Academy of Audiology
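The abstract above names adaptive directional microphones without detailing how directionality is obtained. Below is a minimal sketch, assuming a simple first-order delay-and-subtract design rather than the proprietary adaptive hearing-aid algorithms actually evaluated in the study; the microphone spacing, sampling rate, and test tone are illustrative assumptions.

```python
# Hypothetical sketch (not the hearing aids' algorithms): a first-order
# delay-and-subtract differential microphone, the classic building block behind
# directional processing, built from two omnidirectional microphone signals.
import numpy as np

def cardioid_from_two_omnis(front, rear, mic_spacing_m=0.012, fs=16000, c=343.0):
    """Subtract a delayed rear-mic signal from the front mic to null sounds from behind."""
    delay_s = mic_spacing_m / c                       # acoustic travel time between the mics
    spectrum = np.fft.rfft(rear)
    freqs = np.fft.rfftfreq(len(rear), d=1.0 / fs)
    # Apply the delay as a frequency-domain phase shift (circular, fine for a sketch)
    delayed_rear = np.fft.irfft(spectrum * np.exp(-2j * np.pi * freqs * delay_s), n=len(rear))
    return front - delayed_rear

# A tone arriving from directly behind reaches the rear mic first and is cancelled.
fs = 16000
t = np.arange(fs) / fs
delay = 0.012 / 343.0
rear_arrival = np.sin(2 * np.pi * 500 * t)
front_arrival = np.sin(2 * np.pi * 500 * (t - delay))
print(np.std(cardioid_from_two_omnis(front_arrival, rear_arrival, fs=fs)))  # ~0: rear source nulled
```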
https://www.readbyqxmd.com/read/28525641/erp-correlates-of-motivating-voices-quality-of-motivation-and-time-course-matters
#3
Konstantina Zougkou, Netta Weinstein, Silke Paulmann
Here, we conducted the first study to explore how motivations expressed through speech are processed in real time. Participants listened to sentences spoken in two types of well-studied motivational tones (autonomy-supportive and controlling), or a neutral tone of voice. To examine this, listeners were presented with sentences that either signaled motivations through prosody (tone of voice) and words simultaneously (e.g., "You absolutely have to do it my way" spoken in a controlling tone of voice), or lacked motivationally biasing words (e...
May 19, 2017: Social Cognitive and Affective Neuroscience
https://www.readbyqxmd.com/read/28510618/auditory-processing-of-older-adults-with-probable-mild-cognitive-impairment
#4
Jerri D Edwards, Jennifer J Lister, Maya N Elias, Amber M Tetlow, Angela L Sardina, Nasreen A Sadeq, Amanda D Brandino, Aryn L Harrison Bush
Purpose: Studies suggest that deficits in auditory processing predict cognitive decline and dementia, but those studies included limited measures of auditory processing. The purpose of this study was to compare older adults with and without probable mild cognitive impairment (MCI) across two domains of auditory processing (auditory performance in competing acoustic signals and temporal aspects of audition). Method: The Montreal Cognitive Assessment (Nasreddine et al...
May 24, 2017: Journal of Speech, Language, and Hearing Research: JSLHR
https://www.readbyqxmd.com/read/28506442/boosting-syntax-training-with-temporally-regular-musical-primes-in-children-with-cochlear-implants
#5
N Bedoin, A-M Besombes, E Escande, A Dumont, P Lalitte, B Tillmann
OBJECTIVES: Previous research has suggested the use of rhythmic structures (implemented in musical material) to improve linguistic structure processing (i.e., syntax processing), in particular for populations showing deficits in syntax and temporal processing (e.g., children with developmental language disorders). The present study proposes a long-term training program to improve syntax processing in children with cochlear implants, a population showing syntax processing deficits in perception and production...
May 11, 2017: Annals of Physical and Rehabilitation Medicine
https://www.readbyqxmd.com/read/28499298/-cochlear-implant-state-of-the-art
#6
Thomas Lenarz
Cochlear implants are the treatment of choice for the auditory rehabilitation of patients with sensory deafness. They restore the missing function of inner hair cells by transforming the acoustic signal into electrical stimuli that activate auditory nerve fibers. Owing to rapid technological development, cochlear implants provide open-set speech understanding in the majority of patients, including use of the telephone. Children can achieve near-normal speech and language development provided their deafness is detected early after onset and implantation is performed soon thereafter...
April 2017: Laryngo- Rhino- Otologie
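The abstract summarizes the cochlear-implant signal path (acoustic signal transformed into electrical stimuli for auditory nerve fibers). The sketch below illustrates one common textbook view of that path, a CIS-style filter bank with envelope extraction; real processors add many device-specific stages (pre-emphasis, loudness mapping, pulse interleaving) that are omitted here, and all parameter values are assumptions for illustration.

```python
# Minimal, hypothetical sketch of a CIS-style cochlear-implant front end:
# band-split the audio and extract per-band envelopes as candidate stimulation levels.
import numpy as np
from scipy.signal import butter, sosfilt

def ci_envelopes(audio, fs, n_channels=8, f_lo=200.0, f_hi=7000.0, env_cutoff=200.0):
    """Split audio into log-spaced bands and extract each band's envelope."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)       # logarithmically spaced band edges
    env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfilt(band_sos, audio)
        env = sosfilt(env_sos, np.abs(band))                # rectify + low-pass = envelope
        envelopes.append(np.maximum(env, 0.0))
    return np.array(envelopes)                              # shape: (n_channels, n_samples)

# Example: envelopes of a synthetic, vowel-like amplitude-modulated tone.
fs = 16000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 150 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
print(ci_envelopes(signal, fs).shape)
```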
https://www.readbyqxmd.com/read/28499253/the-role-of-sodium-hydrosulfide-in-attenuating-the-aging-process-via-pi3k-akt-and-camkk%C3%AE-ampk-pathways
#7
Xubo Chen, Xueyan Zhao, Hua Cai, Haiying Sun, Yujuan Hu, Xiang Huang, Wen Kong, Weijia Kong
Age-related dysfunction of the central auditory system, known as central presbycusis, is characterized by defects in speech perception and sound localization. It is important to determine the pathogenesis of central presbycusis in order to explore a feasible and effective intervention method. Recent work has provided fascinating insight into the beneficial function of H2S on oxidative stress and stress-related disease. In this study, we investigated the pathogenesis of central presbycusis and tried to explore the mechanism of H2S action on different aspects of aging by utilizing a mimetic aging rat and senescent cellular model...
April 25, 2017: Redox Biology
https://www.readbyqxmd.com/read/28487827/an-automatic-prolongation-detection-approach-in-continuous-speech-with-robustness-against-speaking-rate-variations
#8
Iman Esmaili, Nader Jafarnia Dabanloo, Mansour Vali
In recent years, many methods have been introduced to support the diagnosis of stuttering by automatically detecting prolongations in the speech of people who stutter. However, less attention has been paid to treatment processes in which clients learn to speak more slowly. The aim of this study was to develop a method to help speech-language pathologists (SLPs) during diagnosis and treatment sessions. To this end, speech signals were initially parameterized into perceptual linear predictive (PLP) features...
January 2017: Journal of Medical Signals and Sensors
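The abstract describes parameterizing speech and automatically detecting prolongations. The following is a simplified illustration of the general idea, flagging long runs of nearly unchanging short-time spectra; it is not the authors' PLP-based method, and the thresholds and window sizes are assumptions.

```python
# Hedged sketch of prolongation candidate detection: a prolonged sound changes very
# little from frame to frame, so look for long runs of highly similar spectra.
import numpy as np

def detect_prolongations(audio, fs, frame_ms=25, hop_ms=10, sim_thresh=0.99, min_dur_s=0.3):
    frame, hop = int(fs * frame_ms / 1000), int(fs * hop_ms / 1000)
    frames = [audio[i:i + frame] * np.hanning(frame)
              for i in range(0, len(audio) - frame, hop)]
    spectra = [np.abs(np.fft.rfft(f)) + 1e-12 for f in frames]
    # Cosine similarity between consecutive frame spectra
    sims = [np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            for a, b in zip(spectra[:-1], spectra[1:])]
    min_frames = int(min_dur_s * 1000 / hop_ms)
    events, run_start = [], None
    for i, s in enumerate(sims):
        if s >= sim_thresh and run_start is None:
            run_start = i                               # a run of near-identical frames begins
        elif s < sim_thresh and run_start is not None:
            if i - run_start >= min_frames:
                events.append((run_start * hop / fs, i * hop / fs))  # start/end in seconds
            run_start = None                            # a trailing open run is ignored here
    return events  # list of (start_s, end_s) candidate prolongations
```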
https://www.readbyqxmd.com/read/28467888/auditory-object-perception-a-neurobiological-model-and-prospective-review
#9
Julie A Brefczynski-Lewis, James W Lewis
Interaction with the world is a multisensory experience, but most of what is known about the neural correlates of perception comes from studying vision. Auditory inputs enter the cortex with their own set of unique qualities and are put to use in oral communication, speech, music, and the understanding of the emotional and intentional states of others, all of which are central to the human experience. To better understand how the auditory system develops, recovers after injury, and how it may have transitioned in its functions over the course of hominin evolution, advances are needed in models of how the human brain is organized to process real-world natural sounds and "auditory objects"...
April 30, 2017: Neuropsychologia
https://www.readbyqxmd.com/read/28464693/extension-and-evaluation-of-a-near-end-listening-enhancement-algorithm-for-listeners-with-normal-and-impaired-hearing
#10
Jan Rennies, Jakob Drefs, David Hülsmeier, Henning Schepker, Simon Doclo
In many applications in which speech is played back via a sound reinforcement system such as public address systems and mobile phones, speech intelligibility is degraded by additive environmental noise. A possible solution to maintain high intelligibility in noise is to pre-process the speech signal based on the estimated noise power at the position of the listener. The previously proposed AdaptDRC algorithm [Schepker, Rennies, and Doclo (2015). J. Acoust. Soc. Am. 138, 2692-2706] applies both frequency shaping and dynamic range compression under an equal-power constraint, where the processing is adaptively controlled by short-term estimates of the speech intelligibility index...
April 2017: Journal of the Acoustical Society of America
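The abstract explains the near-end listening enhancement idea: pre-process the speech using an estimate of the noise at the listener's position, under an equal-power constraint. The sketch below implements a much simpler rule in that spirit (boost low-SNR bands, then renormalize total power); it is not the AdaptDRC algorithm, and the band count and boost limit are assumed values.

```python
# Hypothetical near-end enhancement rule: redistribute speech energy toward frequency
# bands with poor estimated SNR, then rescale so the total played-back power is unchanged.
import numpy as np

def enhance_for_noise(speech, noise_estimate, n_bands=16, max_boost_db=12.0):
    spec_s = np.fft.rfft(speech)
    psd_s = np.abs(spec_s) ** 2
    psd_n = np.abs(np.fft.rfft(noise_estimate, n=len(speech))) ** 2
    edges = np.linspace(0, len(psd_s), n_bands + 1, dtype=int)
    gains = np.ones_like(psd_s)
    for lo, hi in zip(edges[:-1], edges[1:]):
        snr_db = 10 * np.log10((psd_s[lo:hi].sum() + 1e-12) / (psd_n[lo:hi].sum() + 1e-12))
        boost_db = np.clip(-snr_db, 0.0, max_boost_db)   # boost only where SNR is poor
        gains[lo:hi] = 10 ** (boost_db / 20.0)
    enhanced = np.fft.irfft(spec_s * gains, n=len(speech))
    # Equal-power constraint: keep the overall output level equal to the input level
    enhanced *= np.sqrt(np.sum(speech ** 2) / (np.sum(enhanced ** 2) + 1e-12))
    return enhanced
```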
https://www.readbyqxmd.com/read/28464692/effects-of-hearing-aid-dynamic-range-compression-on-spatial-perception-in-a-reverberant-environment
#11
Henrik Gert Hassager, Alan Wiinberg, Torsten Dau
This study investigated the effects of fast-acting hearing-aid compression on normal-hearing and hearing-impaired listeners' spatial perception in a reverberant environment. Three compression schemes-independent compression at each ear, linked compression between the two ears, and "spatially ideal" compression operating solely on the dry source signal-were considered using virtualized speech and noise bursts. Listeners indicated the location and extent of their perceived sound images on the horizontal plane...
April 2017: Journal of the Acoustical Society of America
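To make the contrast between "independent" and "linked" compression concrete, here is a minimal sketch: the linked scheme computes one gain from the louder ear and applies it to both, preserving interaural level differences, whereas the independent scheme does not. The threshold, ratio, frame length, and dB calibration offset are hypothetical values, not those used in the study.

```python
# Sketch of independent vs. linked binaural dynamic range compression (illustrative only).
import numpy as np

def compressor_gain_db(level_db, threshold_db=50.0, ratio=3.0):
    """Static compression rule: above threshold, output grows by 1/ratio dB per input dB."""
    over = np.maximum(level_db - threshold_db, 0.0)
    return -over * (1.0 - 1.0 / ratio)

def compress(left, right, frame=160, linked=True):
    out_l, out_r = left.copy(), right.copy()
    for i in range(0, min(len(left), len(right)) - frame, frame):
        # Frame levels in dB; +94 is an arbitrary, assumed dB SPL calibration offset
        lev_l = 10 * np.log10(np.mean(left[i:i + frame] ** 2) + 1e-12) + 94.0
        lev_r = 10 * np.log10(np.mean(right[i:i + frame] ** 2) + 1e-12) + 94.0
        if linked:
            # One shared gain for both ears, driven by the louder ear
            gl = gr = 10 ** (compressor_gain_db(max(lev_l, lev_r)) / 20.0)
        else:
            gl = 10 ** (compressor_gain_db(lev_l) / 20.0)
            gr = 10 ** (compressor_gain_db(lev_r) / 20.0)
        out_l[i:i + frame] *= gl
        out_r[i:i + frame] *= gr
    return out_l, out_r
```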
https://www.readbyqxmd.com/read/28464690/how-many-images-are-in-an-auditory-scene
#12
Xuan Zhong, William A Yost
If an auditory scene consists of many spatially separated sound sources, how many sound sources can be processed by the auditory system? Experiment I determined how many speech sources could be localized simultaneously on the azimuth plane. Different words were played from multiple loudspeakers, and listeners reported the total number of sound sources and their individual locations. In experiment II the accuracy of localizing one speech source in a mixture of multiple speech sources was determined. An extra sound source was added to an existing set of sound sources, and the task was to localize that extra source...
April 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28450537/being-first-matters-topographical-representational-similarity-analysis-of-erp-signals-reveals-separate-networks-for-audiovisual-temporal-binding-depending-on-the-leading-sense
#13
Roberto Cecere, Joachim Gross, Ashleigh Willis, Gregor Thut
In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Inter-sensory timing is crucial in this process as only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window (TBW), revealing asymmetries in its size and plasticity depending on the leading input (auditory-visual, AV; visual-auditory, VA). We here tested whether separate neuronal mechanisms underlie this AV-VA dichotomy in humans...
April 27, 2017: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
https://www.readbyqxmd.com/read/28439236/auditory-visual-and-audiovisual-speech-processing-streams-in-superior-temporal-sulcus
#14
Jonathan H Venezia, Kenneth I Vaden, Feng Rong, Dale Maddox, Kourosh Saberi, Gregory Hickok
The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli in a block fMRI design...
2017: Frontiers in Human Neuroscience
https://www.readbyqxmd.com/read/28400328/you-talkin-to-me-communicative-talker-gaze-activates-left-lateralized-superior-temporal-cortex-during-perception-of-degraded-speech
#15
Carolyn McGettigan, Kyle Jasmin, Frank Eisner, Zarinah K Agnew, Oliver J Josephs, Andrew J Calder, Rosemary Jessop, Rebecca P Lawson, Mona Spielmann, Sophie K Scott
Neuroimaging studies of speech perception have consistently indicated a left-hemisphere dominance in the temporal lobes' responses to intelligible auditory speech signals (McGettigan and Scott, 2012). However, there are important communicative cues that cannot be extracted from auditory signals alone, including the direction of the talker's gaze. Previous work has implicated the superior temporal cortices in processing gaze direction, with evidence for predominantly right-lateralized responses (Carlin & Calder, 2013)...
April 8, 2017: Neuropsychologia
https://www.readbyqxmd.com/read/28400265/convergence-of-semantics-and-emotional-expression-within-the-ifg-pars-orbitalis
#16
Michel Belyk, Steven Brown, Jessica Lim, Sonja A Kotz
Humans communicate through a combination of linguistic and emotional channels, including propositional speech, writing, sign language, and music, as well as prosodic, facial, and gestural expression. These channels can be interpreted separately or they can be integrated to multimodally convey complex meanings. Neural models of the perception of semantics and emotion include nodes for both functions in the inferior frontal gyrus pars orbitalis (IFGorb). However, it is not known whether this convergence involves a common functional zone or instead specialized subregions that process semantics and emotion separately...
April 8, 2017: NeuroImage
https://www.readbyqxmd.com/read/28399064/multisensory-integration-in-cochlear-implant-recipients
#17
Ryan A Stevenson, Sterling W Sheffield, Iliza M Butera, René H Gifford, Mark T Wallace
Speech perception is inherently a multisensory process involving integration of auditory and visual cues. Multisensory integration in cochlear implant (CI) recipients is a unique circumstance in that the integration occurs after auditory deprivation and the provision of hearing via the CI. Despite the clear importance of multisensory cues for perception, in general, and for speech intelligibility, specifically, the topic of multisensory perceptual benefits in CI users has only recently begun to emerge as an area of inquiry...
April 10, 2017: Ear and Hearing
https://www.readbyqxmd.com/read/28395319/speech-rate-normalization-and-phonemic-boundary-perception-in-cochlear-implant-users
#18
Brittany N Jaekel, Rochelle S Newman, Matthew J Goupell
Purpose: Normal-hearing (NH) listeners rate normalize, temporarily remapping phonemic category boundaries to account for a talker's speech rate. It is unknown if adults who use auditory prostheses called cochlear implants (CI) can rate normalize, as CIs transmit degraded speech signals to the auditory nerve. Ineffective adjustment to rate information could explain some of the variability in this population's speech perception outcomes. Method: Phonemes with manipulated voice-onset-time (VOT) durations were embedded in sentences with different speech rates...
May 24, 2017: Journal of Speech, Language, and Hearing Research: JSLHR
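Rate normalization can be made concrete with a toy rule in which the /d/ versus /t/ voice-onset-time boundary scales with speaking rate; the numbers below are illustrative assumptions, not values from the study.

```python
# Toy illustration of rate normalization: the same voice-onset time (VOT) can be heard
# as /d/ in slow speech but /t/ in fast speech if the category boundary tracks rate.
def categorize_stop(vot_ms: float, syllables_per_s: float,
                    base_boundary_ms: float = 35.0, reference_rate: float = 4.0) -> str:
    """Return 'd' or 't' using a VOT boundary that shrinks as speech gets faster."""
    boundary_ms = base_boundary_ms * (reference_rate / syllables_per_s)
    return "t" if vot_ms > boundary_ms else "d"

# A 30 ms VOT flips category depending on the surrounding speech rate.
print(categorize_stop(30.0, syllables_per_s=3.0))  # slow context -> 'd'
print(categorize_stop(30.0, syllables_per_s=6.0))  # fast context -> 't'
```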
https://www.readbyqxmd.com/read/28379608/neural-mechanisms-for-integrating-consecutive-and-interleaved-natural-events
#19
Juha M Lahnakoski, Iiro P Jääskeläinen, Mikko Sams, Lauri Nummenmaa
To understand temporally extended events, the human brain needs to accumulate information continuously across time. Interruptions that require switching of attention to other event sequences disrupt this process. To reveal neural mechanisms supporting the integration of event information, we measured brain activity with functional magnetic resonance imaging (fMRI) from 18 participants while they viewed 6.5-minute excerpts from three movies (i) consecutively and (ii) as interleaved segments of approximately 50 s in duration...
April 5, 2017: Human Brain Mapping
https://www.readbyqxmd.com/read/28373850/the-contribution-of-brainstem-and-cerebellar-pathways-to-auditory-recognition
#20
REVIEW
Neil M McLachlan, Sarah J Wilson
The cerebellum has been known to play an important role in motor functions for many years. More recently its role has been expanded to include a range of cognitive and sensory-motor processes, and substantial neuroimaging and clinical evidence now points to cerebellar involvement in most auditory processing tasks. In particular, an increase in the size of the cerebellum over recent human evolution has been attributed in part to the development of speech. Despite this, the auditory cognition literature has largely overlooked afferent auditory connections to the cerebellum that have been implicated in acoustically conditioned reflexes in animals, and could subserve speech and other auditory processing in humans...
2017: Frontiers in Psychology

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"
Combine operators for more complex queries

(heart OR cardiac OR cardio*) AND arrest -"American Heart Association"