Read by QxMD

Speech understanding

https://www.readbyqxmd.com/read/27913315/speech-enhancement-based-on-neural-networks-improves-speech-intelligibility-in-noise-for-cochlear-implant-users
#1
Tobias Goehring, Federico Bolner, Jessica J M Monaghan, Bas van Dijk, Andrzej Zarowski, Stefan Bleeck
Speech understanding in noisy environments is still one of the major challenges for cochlear implant (CI) users in everyday life. We evaluated a speech enhancement algorithm based on neural networks (NNSE) for improving speech intelligibility in noise for CI users. The algorithm decomposes the noisy speech signal into time-frequency units, extracts a set of auditory-inspired features, and feeds them to a neural network to produce an estimate of which frequency channels contain more perceptually important information (higher signal-to-noise ratio, SNR)...
November 29, 2016: Hearing Research
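The processing chain described in this abstract — decompose the noisy signal into time-frequency units, extract features, let a network estimate which channels carry more speech — can be sketched roughly as follows. This is a minimal illustration with a toy log-magnitude feature and untrained placeholder weights, not the authors' NNSE implementation; every function name and parameter value here is an assumption.

```python
import numpy as np

def stft_mag(signal, frame_len=256, hop=128):
    """Decompose a signal into time-frequency magnitude units (simple STFT)."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (n_frames, n_bins)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def estimate_mask(features, w_hidden, w_out):
    """Tiny feedforward net: for each time-frequency unit, estimate how likely
    it is speech-dominated (high local SNR). The weights here are untrained
    placeholders; the real system learns them from noisy/clean speech pairs."""
    hidden = np.tanh(features @ w_hidden)
    return sigmoid(hidden @ w_out)  # values in (0, 1), one gain per unit

rng = np.random.default_rng(0)
noisy = rng.standard_normal(4096)   # stand-in for a noisy speech signal
mag = stft_mag(noisy)               # time-frequency decomposition
feats = np.log1p(mag)               # crude auditory-inspired feature
w1 = rng.standard_normal((mag.shape[1], 16)) * 0.1
w2 = rng.standard_normal((16, mag.shape[1])) * 0.1
mask = estimate_mask(feats, w1, w2)
enhanced = mag * mask               # attenuate noise-dominated units
```

In the real system the weights would be trained so that the mask approximates an ideal per-channel SNR-based gain before the signal is passed on to the CI processing chain.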
https://www.readbyqxmd.com/read/27911747/new-developments-in-understanding-the-complexity-of-human-speech-production
#2
Kristina Simonyan, Hermann Ackermann, Edward F Chang, Jeremy D Greenlee
Speech is one of the most unique features of human communication. Our ability to articulate our thoughts by means of speech production depends critically on the integrity of the motor cortex. The motor cortex was long thought to be a low-order brain region, but exciting work in recent years is overturning this notion. Here, we highlight some of the major experimental advances in speech motor control research and discuss the emerging findings about the complexity of speech motor cortical organization and its large-scale networks. This review summarizes the talks presented at a symposium at the Annual Meeting of the Society for Neuroscience; it does not represent a comprehensive review of contemporary literature in the broader field of speech motor control...
November 9, 2016: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
https://www.readbyqxmd.com/read/27911019/the-impact-and-measurement-of-social-dysfunction-in-late-life-depression-an-evaluation-of-current-methods-with-a-focus-on-wearable-technology
#3
REVIEW
Sophie Hodgetts, Peter Gallagher, Daniel Stow, I Nicol Ferrier, John T O'Brien
OBJECTIVE: Depression is known to negatively impact social functioning, with patients commonly reporting difficulties maintaining social relationships. Moreover, a large body of evidence suggests poor social functioning is not only present in depression but that social functioning is an important factor in illness course and outcome. In addition, good social relationships can play a protective role against the onset of depressive symptoms, particularly in late-life depression. However, the majority of research in this area has employed self-report measures of social function...
December 2, 2016: International Journal of Geriatric Psychiatry
https://www.readbyqxmd.com/read/27909888/off-the-ear-with-no-loss-in-speech-understanding-comparing-the-rondo-and-the-opus-2-cochlear-implant-audio-processors
#4
Stefan Dazert, Jan Peter Thomas, Andreas Büchner, Joachim Müller, John Martin Hempel, Hubert Löwenheim, Robert Mlynski
The RONDO is a single-unit cochlear implant audio processor that eliminates the need for a behind-the-ear (BTE) audio processor. The primary aim was to compare speech perception results in quiet and in noise with the RONDO and the OPUS 2, a BTE audio processor. Secondary aims were to determine subjects' self-assessed levels of sound quality and to gather subjective feedback on RONDO use. All speech perception tests were performed with both the RONDO and the OPUS 2 at three test intervals. Subjects were required to use the RONDO between test intervals...
December 1, 2016: European Archives of Oto-rhino-laryngology
https://www.readbyqxmd.com/read/27908075/the-relationship-between-perceptual-disturbances-in-dysarthric-speech-and-automatic-speech-recognition-performance
#5
Ming Tu, Alan Wisler, Visar Berisha, Julie M Liss
State-of-the-art automatic speech recognition (ASR) engines perform well on healthy speech; however, recent studies show that their performance on dysarthric speech is highly variable because of the acoustic variability associated with the different dysarthria subtypes. This paper aims to develop a better understanding of how perceptual disturbances in dysarthric speech relate to ASR performance. Accurate ratings of a representative set of 32 dysarthric speakers along different perceptual dimensions are obtained, and the performance of a representative ASR algorithm on the same set of speakers is analyzed...
November 2016: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/27908052/visual-tactile-integration-in-speech-perception-evidence-for-modality-neutral-speech-primitives
#6
Katie Bicevskis, Donald Derrick, Bryan Gick
Audio-visual [McGurk and MacDonald (1976). Nature 264, 746-748] and audio-tactile [Gick and Derrick (2009). Nature 462(7272), 502-504] speech stimuli enhance speech perception over audio stimuli alone. In addition, multimodal speech stimuli form an asymmetric window of integration that is consistent with the relative speeds of the various signals [Munhall, Gribble, Sacco, and Ward (1996). Percept. Psychophys. 58(3), 351-362; Gick, Ikegami, and Derrick (2010). J. Acoust. Soc. Am. 128(5), EL342-EL346]. In this experiment, participants were presented video of faces producing /pa/ and /ba/ syllables, both alone and with air puffs occurring synchronously and at different timings up to 300 ms before and after the stop release...
November 2016: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/27908048/age-equivalence-in-the-benefit-of-repetition-for-speech-understanding
#7
Karen S Helfer, Richard L Freyman
Although repetition is the most commonly used conversational repair strategy, little is known about its relative effectiveness among listeners spanning the adult age range. The purpose of this study was to identify differences in how younger, middle-aged, and older adults were able to use immediate repetition to improve speech recognition in the presence of different kinds of maskers. Results suggest that all groups received approximately the same amount of benefit from repetition. Repetition benefit was largest when the masker was fluctuating noise and smallest when it was competing speech...
November 2016: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/27908037/the-structure-of-hindi-stop-consonants
#8
Kushagra Singh, Nachiketa Tiwari
The pronunciation of stop consonants varies markedly with age, gender, accent, etc. Yet by extracting appropriate cues common to these varying pronunciations, it is possible to correctly identify the spoken consonant. In this paper, the structure underlying Hindi stop consonants is presented. This understanding may potentially be used as a "recipe" for their artificial synthesis. Hindi alphabet stops were analyzed for this purpose. This alphabet has an organized and comprehensive inventory of stop consonants, and its consonants invariably terminate with the neutral vowel schwa...
November 2016: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/27908030/influences-of-noise-interruption-and-information-bearing-acoustic-changes-on-understanding-simulated-electric-acoustic-speech
#9
Christian Stilp, Gail Donaldson, Soohee Oh, Ying-Yee Kong
In simulations of electrical-acoustic stimulation (EAS), vocoded speech intelligibility is aided by preservation of low-frequency acoustic cues. However, the speech signal is often interrupted in everyday listening conditions, and effects of interruption on hybrid speech intelligibility are poorly understood. Additionally, listeners rely on information-bearing acoustic changes to understand full-spectrum speech (as measured by cochlea-scaled entropy [CSE]) and vocoded speech (CSECI), but how listeners utilize these informational changes to understand EAS speech is unclear...
November 2016: Journal of the Acoustical Society of America
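The kind of electric-acoustic stimulation (EAS) simulation this abstract refers to can be caricatured in a few lines: keep the low-frequency acoustic portion intact and discard the fine structure of the higher bands while preserving their magnitude envelope. The FFT-domain shortcut, 500 Hz cutoff, and four channels below are illustrative assumptions — far cruder than an actual noise or sine vocoder — and none of these values come from the study.

```python
import numpy as np

def simulate_eas(signal, fs=16000, cutoff=500, n_channels=4):
    """Crude FFT-domain EAS caricature: bins below `cutoff` keep the original
    acoustic signal; bins above it keep only their magnitudes, with phases
    randomized, mimicking the loss of fine structure in vocoded channels."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    out = np.zeros_like(spec)
    out[freqs < cutoff] = spec[freqs < cutoff]        # acoustic portion kept
    edges = np.linspace(cutoff, fs / 2, n_channels + 1)
    rng = np.random.default_rng(0)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (freqs >= lo) & (freqs < hi)
        mags = np.abs(spec[band])                      # band magnitude envelope
        phases = rng.uniform(0, 2 * np.pi, band.sum())
        out[band] = mags * np.exp(1j * phases)         # fine structure destroyed
    return np.fft.irfft(out, n=len(signal))

fs = 16000
t = np.arange(1600) / fs
x = np.sin(2 * np.pi * 200 * t)   # 200 Hz tone: entirely in the acoustic band
y = simulate_eas(x, fs=fs)
```

Because the tone lies below the cutoff, it passes through unchanged, while any higher-frequency content would survive only as a noise-like envelope.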
https://www.readbyqxmd.com/read/27908027/aging-and-the-effect-of-target-masker-alignment
#10
Karen S Helfer, Gabrielle R Merchant, Richard L Freyman
Similarity between target and competing speech messages plays a large role in how easy or difficult it is to understand messages of interest. Much research on informational masking has used highly aligned target and masking utterances that are very similar semantically and syntactically. However, listeners rarely encounter situations in real life where they must understand one sentence in the presence of another (or more than one) highly aligned, syntactically similar competing sentence(s). The purpose of the present study was to examine the effect of syntactic/semantic similarity of target and masking speech in different spatial conditions among younger, middle-aged, and older adults...
November 2016: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/27907811/using-the-oases-a-to-illustrate-how-network-analysis-can-be-applied-to-understand-the-experience-of-stuttering
#11
Cynthia S Q Siew, Kristin M Pelczarski, J Scott Yaruss, Michael S Vitevitch
PURPOSE: Network science uses mathematical and computational techniques to examine how individual entities in a system, represented by nodes, interact, as represented by connections between nodes. This approach has been used by Cramer et al. (2010) to make "symptom networks" to examine various psychological disorders. In the present analysis we examined a network created from the items in the Overall Assessment of the Speaker's Experience of Stuttering-Adult (OASES-A), a commonly used measure for evaluating adverse impact in the lives of people who stutter...
November 21, 2016: Journal of Communication Disorders
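The node-and-connection idea in this abstract can be made concrete with a toy correlation network: questionnaire items are nodes, and a connection is drawn wherever two items correlate strongly across respondents. The simulated responses, six-item pool, and 0.3 threshold below are invented for illustration and do not reflect the actual OASES-A data or analysis.

```python
import numpy as np

# Hypothetical data: 50 respondents x 6 questionnaire items sharing one
# latent factor plus noise (the real OASES-A has many more items).
rng = np.random.default_rng(1)
latent = rng.standard_normal((50, 1))
responses = latent + 0.8 * rng.standard_normal((50, 6))

# Nodes are items; an edge connects items whose responses correlate
# strongly. The 0.3 cutoff is an arbitrary illustrative threshold.
corr = np.corrcoef(responses, rowvar=False)                  # item-by-item correlations
adjacency = (np.abs(corr) >= 0.3) & ~np.eye(6, dtype=bool)   # threshold, no self-loops
degree = adjacency.sum(axis=1)                               # per-node connectivity
```

Network measures such as node degree can then be compared across items to see which aspects of the stuttering experience are most interconnected.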
https://www.readbyqxmd.com/read/27904958/feasibility-of-an-implanted-microphone-for-cochlear-implant-listening
#12
Jean-Marc Gérard, Laurent Demanez, Caroline Salmon, Filiep Vanpoucke, Joris Walraevens, Anke Plasmans, Daniele De Siati, Philippe Lefèbvre
This study aimed at evaluating the feasibility of an implanted microphone for cochlear implants (CI) by comparison of hearing outcomes, sound quality and patient satisfaction of a subcutaneous microphone to a standard external microphone of a behind-the-ear sound processor. In this prospective feasibility study with a within-subject repeated measures design comparing the microphone modalities, ten experienced adult unilateral CI users received an implantable contralateral subcutaneous microphone attached to a percutaneous plug...
November 30, 2016: European Archives of Oto-rhino-laryngology
https://www.readbyqxmd.com/read/27900919/do-infants-discriminate-non-linguistic-vocal-expressions-of-positive-emotions
#13
Melanie Soderstrom, Melissa Reimchen, Disa Sauter, James L Morgan
Adults are highly proficient in understanding emotional signals from both facial and vocal cues, including when communicating across cultural boundaries. However, the developmental origin of this ability is poorly understood, and in particular, little is known about the ontogeny of differentiation of signals with the same valence. The studies reported here employed a habituation paradigm to test whether preverbal infants discriminate between non-linguistic vocal expressions of relief and triumph. Infants as young as 6 months who had habituated to relief or triumph showed significant discrimination of relief and triumph tokens at test (i...
February 2017: Cognition & Emotion
https://www.readbyqxmd.com/read/27897244/neural-oscillations-in-the-temporal-pole-for-a-temporally-congruent-audio-visual-speech-detection-task
#14
Takefumi Ohki, Atsuko Gunji, Yuichi Takei, Hidetoshi Takahashi, Yuu Kaneko, Yosuke Kita, Naruhito Hironaga, Shozo Tobimatsu, Yoko Kamio, Takashi Hanakawa, Masumi Inagaki, Kazuo Hiraki
Though recent studies have elucidated the earliest mechanisms of processing in multisensory integration, our understanding of how multisensory integration of more sustained and complicated stimuli is implemented in higher-level association cortices is lacking. In this study, we used magnetoencephalography (MEG) to determine how neural oscillations alter local and global connectivity during multisensory integration processing. We acquired MEG data from 15 healthy volunteers performing an audio-visual speech matching task...
November 29, 2016: Scientific Reports
https://www.readbyqxmd.com/read/27893274/perceptual-adaptation-of-vowels-generalizes-across-the-phonology-and-does-not-require-local-context
#15
Kateřina Chládková, Václav Jonáš Podlipský, Anastasia Chionidou
Listeners usually understand without difficulty even speech that sounds atypical. When they encounter noncanonical realizations of speech sounds, listeners can make short-term adjustments of their long-term representations of those sounds. Previous research, focusing mostly on adaptation in consonants, has suggested that for perceptual adaptation to take place some local cues (lexical, phonotactic, or visual) have to guide listeners' interpretation of the atypical sounds. In the present experiment we investigated perceptual adaptation in vowels...
November 28, 2016: Journal of Experimental Psychology. Human Perception and Performance
https://www.readbyqxmd.com/read/27888257/a-flexible-question-and-answer-task-for-measuring-speech-understanding
#16
Virginia Best, Timothy Streeter, Elin Roverud, Christine R Mason, Gerald Kidd
This report introduces a new speech task based on simple questions and answers. The task differs from a traditional sentence recall task in that it involves an element of comprehension and can be implemented in an ongoing fashion. It also contains two target items (the question and the answer) that may be associated with different voices and locations to create dynamic listening scenarios. A set of 227 questions was created, covering six broad categories (days of the week, months of the year, numbers, colors, opposites, and sizes)...
November 24, 2016: Trends in Hearing
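A trial in a task like this — the question and the answer as two target items, each potentially carried by a different voice from a different location — might be represented as follows. The item texts, talker labels, and azimuths are hypothetical stand-ins; the actual corpus contains 227 questions across the six categories listed.

```python
import random

# Hypothetical miniature item pool; the real corpus spans six categories
# (days of the week, months, numbers, colors, opposites, and sizes).
POOL = {
    "days":    [("What day comes after Monday?", "Tuesday")],
    "numbers": [("What number comes after six?", "seven")],
    "colors":  [("What color is a ripe banana?", "yellow")],
}
VOICES = ["talker_A", "talker_B"]
LOCATIONS = [-60, 0, 60]  # illustrative azimuths in degrees

def make_trial(rng):
    """Build one trial: question and answer are separate target items, each
    independently assigned a voice and location to create a dynamic scene."""
    category = rng.choice(sorted(POOL))
    question, answer = rng.choice(POOL[category])
    return {
        "category": category,
        "question": {"text": question,
                     "voice": rng.choice(VOICES),
                     "location": rng.choice(LOCATIONS)},
        "answer":   {"text": answer,
                     "voice": rng.choice(VOICES),
                     "location": rng.choice(LOCATIONS)},
    }

rng = random.Random(42)
trial = make_trial(rng)
```

Because voices and locations are drawn independently for the two items, the listener may have to follow a target that switches talker or position mid-trial — the "dynamic listening scenarios" the abstract describes.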
https://www.readbyqxmd.com/read/27885978/factors-affecting-daily-cochlear-implant-use-in-children-datalogging-evidence
#17
Vijayalakshmi Easwar, Joseph Sanfilippo, Blake Papsin, Karen Gordon
BACKGROUND: Children with profound hearing loss can gain access to sound through cochlear implants (CIs), but these devices must be worn consistently to promote auditory development. Although subjective parent reports have identified several factors limiting long-term CI use in children, it is also important to understand the day-to-day issues which may preclude consistent device use. In the present study, objective measures gathered through datalogging software were used to quantify the following in children: (1) number of hours of CI use per day, (2) practical concerns including repeated disconnections between the external transmission coil and the internal device (termed "coil-offs"), and (3) listening environments experienced during daily use...
November 2016: Journal of the American Academy of Audiology
https://www.readbyqxmd.com/read/27885976/negative-effect-of-acoustic-panels-on-listening-effort-in-a-classroom-environment
#18
Amyn M Amlani, Timothy A Russo
BACKGROUND: Acoustic panels are used to lessen the pervasive effects of noise and reverberation on speech understanding in a classroom environment. These panels, however, predominately absorb high-frequency energy important to speech understanding. Therefore, a classroom environment treated with acoustic panels might negatively influence the transmission of the target signal, resulting in an increase in listening effort exerted by the listener. PURPOSE: Acoustic panels were installed in a public school environment that did not meet the ANSI-recommended guidelines for classroom design...
November 2016: Journal of the American Academy of Audiology
https://www.readbyqxmd.com/read/27884735/elderly-listeners-with-low-intelligibility-scores-under-reverberation-show-degraded-subcortical-representation-of-reverberant-speech
#19
H Fujihira, K Shiraishi, G B Remijn
To elucidate why many elderly listeners have difficulty understanding speech under reverberation, we investigated the relationship between word intelligibility and auditory brainstem responses (ABRs) in 28 elderly listeners. We hypothesized that elderly listeners with low word intelligibility scores under reverberation would show degraded subcortical encoding of reverberant speech, as expressed in their ABRs to a reverberant /da/ syllable. The participants were divided into two groups (top and bottom performance groups) according to their word intelligibility scores for anechoic and reverberant words, and ABR characteristics were compared between groups...
November 21, 2016: Neuroscience Letters
https://www.readbyqxmd.com/read/27875590/spatio-temporal-progression-of-cortical-activity-related-to-continuous-overt-and-covert-speech-production-in-a-reading-task
#20
Jonathan S Brumberg, Dean J Krusienski, Shreya Chakrabarti, Aysegul Gunduz, Peter Brunner, Anthony L Ritaccio, Gerwin Schalk
How the human brain plans, executes, and monitors continuous and fluent speech has remained largely elusive. For example, previous research has defined the cortical locations most important for different aspects of speech function, but has not yet yielded a definition of the temporal progression of involvement of those locations as speech progresses either overtly or covertly. In this paper, we uncovered the spatio-temporal evolution of neuronal population-level activity related to continuous overt speech, and identified those locations that shared activity characteristics across overt and covert speech...
2016: PloS One