Motor speech neural network

https://www.readbyqxmd.com/read/28516196/coupling-dynamics-in-speech-gestures-amplitude-and-rate-influences
#1
Pascal H H M van Lieshout
Speech is a complex oral motor function that involves multiple articulators that need to be coordinated in space and time at relatively high movement speeds. How this is accomplished remains an important and largely unresolved empirical question. From a coordination dynamics perspective, coordination involves the assembly of coordinative units that are characterized by inherently stable coupling patterns that act as attractor states for task-specific actions. In the motor control literature, one particular model formulated by Haken et al...
May 17, 2017: Experimental Brain Research. Experimentelle Hirnforschung. Expérimentation Cérébrale
https://www.readbyqxmd.com/read/28483857/cortical-dynamics-of-disfluency-in-adults-who-stutter
#2
Ranit Sengupta, Shalin Shah, Torrey M J Loucks, Kristin Pelczarski, J Scott Yaruss, Katie Gore, Sazzad M Nasir
Stuttering is a disorder of speech production whose origins have been traced to the central nervous system. One of the factors that may underlie stuttering is aberrant neural miscommunication within the speech motor network. It is thus argued that disfluency (any interruption in the forward flow of speech) in adults who stutter (AWS) could be associated with anomalous cortical dynamics. Aberrant brain activity has been demonstrated in AWS in the absence of overt disfluency, but recording neural activity during disfluency is more challenging...
May 2017: Physiological Reports
https://www.readbyqxmd.com/read/28467888/auditory-object-perception-a-neurobiological-model-and-prospective-review
#3
Julie A Brefczynski-Lewis, James W Lewis
Interaction with the world is a multisensory experience, but most of what is known about the neural correlates of perception comes from studying vision. Auditory input enters the cortex with its own set of unique qualities and supports oral communication, speech, music, and the understanding of the emotional and intentional states of others, all of which are central to the human experience. To better understand how the auditory system develops, recovers after injury, and how it may have transitioned in its functions over the course of hominin evolution, advances are needed in models of how the human brain is organized to process real-world natural sounds and "auditory objects"...
April 30, 2017: Neuropsychologia
https://www.readbyqxmd.com/read/28447851/-understanding-the-role-of-speech-production-in-reading-evidence-for-a-print-to-speech-neural-network-using-graphical-analysis-correction-to-cummine-et-al-2016
#4
(no author information available yet)
Reports an error in "Understanding the role of speech production in reading: Evidence for a print-to-speech neural network using graphical analysis" by Jacqueline Cummine, Ivor Cribben, Connie Luu, Esther Kim, Reyhaneh Bahktiari, George Georgiou and Carol A. Boliek (Neuropsychology, 2016[May], Vol 30[4], 385-397). In the article, the fifth author's name [Bakhtiari] was misspelled. The online version of this article has been corrected. (The following abstract of the original article appeared in record 2015-49069-001...
May 2017: Neuropsychology
https://www.readbyqxmd.com/read/28400265/convergence-of-semantics-and-emotional-expression-within-the-ifg-pars-orbitalis
#5
Michel Belyk, Steven Brown, Jessica Lim, Sonja A Kotz
Humans communicate through a combination of linguistic and emotional channels, including propositional speech, writing, sign language, and music, as well as prosodic, facial, and gestural expression. These channels can be interpreted separately or they can be integrated to multimodally convey complex meanings. Neural models of the perception of semantics and emotion include nodes for both functions in the inferior frontal gyrus pars orbitalis (IFGorb). However, it is not known whether this convergence involves a common functional zone or instead specialized subregions that process semantics and emotion separately...
April 8, 2017: NeuroImage
https://www.readbyqxmd.com/read/28397076/neural-evidence-for-predictive-coding-in-auditory-cortex-during-speech-production
#6
Kayoko Okada, William Matchin, Gregory Hickok
Recent models of speech production suggest that motor commands generate forward predictions of the auditory consequences of those commands, that these forward predictions can be used to monitor and correct speech output, and that this system is hierarchically organized (Hickok, Houde, & Rong, Neuron, 69(3), 407-422, 2011; Pickering & Garrod, Behavioral and Brain Sciences, 36(4), 329-347, 2013). Recent psycholinguistic research has shown that internally generated speech (i.e., imagined speech) produces different types of errors than does overt speech (Oppenheim & Dell, Cognition, 106(1), 528-537, 2008; Oppenheim & Dell, Memory & Cognition, 38(8), 1147-1160, 2010)...
April 10, 2017: Psychonomic Bulletin & Review
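The forward-prediction loop this abstract describes can be illustrated with a toy computation; the linear "forward model" matrix and all numbers below are invented for illustration and are not taken from the cited work:

```python
import numpy as np

# Toy sketch of forward prediction in speech motor control: a motor command is
# mapped to a predicted auditory consequence, and the mismatch with actual
# feedback yields an error signal that can drive monitoring and correction.
rng = np.random.default_rng(2)
W = rng.standard_normal((3, 2))   # hypothetical forward model: command -> auditory features
command = np.array([0.5, -1.0])   # a motor command (2 illustrative dimensions)

predicted = W @ command                               # efference-copy prediction
actual = predicted + np.array([0.1, 0.0, -0.05])      # feedback with a perturbation
prediction_error = actual - predicted                 # signal available for correction
print(prediction_error)  # recovers the injected perturbation
```

Only the mismatch term matters here: when feedback equals the prediction, the error is zero and no correction is triggered.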
https://www.readbyqxmd.com/read/28349620/connectivity-patterns-during-music-listening-evidence-for-action-based-processing-in-musicians
#7
Vinoo Alluri, Petri Toiviainen, Iballa Burunat, Marina Kliuchko, Peter Vuust, Elvira Brattico
Musical expertise is evident in both the morphology and the functionality of the brain. Recent research indicates that functional integration between multi-sensory, somato-motor, default-mode (DMN), and salience (SN) networks of the brain differentiates musicians from non-musicians during resting state. Here, we aimed at determining whether brain networks differentially exchange information in musicians as opposed to non-musicians during naturalistic music listening. Whole-brain graph-theory analyses were performed on participants' fMRI responses...
June 2017: Human Brain Mapping
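A whole-brain graph-theory analysis of the kind mentioned above typically starts from a functional connectivity matrix. A minimal sketch, using synthetic time series and an arbitrary threshold standing in for the study's actual pipeline:

```python
import numpy as np

# Illustrative sketch of graph-theory connectivity analysis: correlate regional
# time series, threshold the matrix into a graph, and compute a node metric.
# Region count, threshold, and data are all made up for this example.
rng = np.random.default_rng(0)
n_regions, n_timepoints = 6, 200
ts = rng.standard_normal((n_regions, n_timepoints))  # regional fMRI time series

# Functional connectivity: pairwise Pearson correlation between regions.
fc = np.corrcoef(ts)

# Binarize at an (arbitrary) threshold, excluding self-connections, and compute
# node degree, a basic graph metric that can be compared across groups.
adj = (np.abs(fc) > 0.2) & ~np.eye(n_regions, dtype=bool)
degree = adj.sum(axis=1)
print(degree)
```

Group comparisons (e.g. musicians vs. non-musicians) would then test whether such metrics differ between the two sets of participants.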
https://www.readbyqxmd.com/read/28320669/regularized-speaker-adaptation-of-kl-hmm-for-dysarthric-speech-recognition
#8
Myungjong Kim, Younggwan Kim, Joohong Yoo, Jun Wang, Hoirin Kim
This paper addresses the problem of recognizing the speech of patients with dysarthria, a motor speech disorder that impedes the physical production of speech. Patients with dysarthria have articulatory limitations and therefore often have trouble pronouncing certain sounds, resulting in undesirable phonetic variation. Modern automatic speech recognition systems designed for typical speakers are ineffective for speakers with dysarthria because of this phonetic variation. To capture the phonetic variation, a Kullback-Leibler divergence based hidden Markov model (KL-HMM) is adopted, where the emission probability of each state is parametrized by a categorical distribution using phoneme posterior probabilities obtained from a deep neural network-based acoustic model...
March 13, 2017: IEEE Transactions on Neural Systems and Rehabilitation Engineering
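The KL-HMM scoring idea described above can be sketched in a few lines: each state holds a categorical distribution over phonemes, and a frame is scored by the KL divergence between that distribution and the DNN's posterior for the frame. The distributions below are made up for illustration:

```python
import numpy as np

def kl_div(p, q, eps=1e-12):
    """KL(p || q) between two categorical distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

state_dist = np.array([0.7, 0.2, 0.1])       # a state's categorical over 3 phonemes
frame_posterior = np.array([0.6, 0.3, 0.1])  # DNN posterior for a well-matching frame
other_posterior = np.array([0.1, 0.2, 0.7])  # DNN posterior for a poorly matching frame

# Lower divergence means a better match between state and observed frame,
# so it plays the role of a (negated) emission score in the KL-HMM.
print(kl_div(state_dist, frame_posterior) < kl_div(state_dist, other_posterior))  # True
```

In a full decoder these per-frame divergences are accumulated along HMM state sequences, but the local score is just this comparison of two categorical distributions.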
https://www.readbyqxmd.com/read/28268413/recent-machine-learning-advancements-in-sensor-based-mobility-analysis-deep-learning-for-parkinson-s-disease-assessment
#9
Bjoern M Eskofier, Sunghoon I Lee, Jean-Francois Daneault, Fatemeh N Golabchi, Gabriela Ferreira-Carvalho, Gloria Vergara-Diaz, Stefano Sapienza, Gianluca Costante, Jochen Klucken, Thomas Kautz, Paolo Bonato
The development of wearable sensors has opened the door for long-term assessment of movement disorders. However, there is still a need for developing methods suitable to monitor motor symptoms in and outside the clinic. The purpose of this paper was to investigate deep learning as a method for this monitoring. Deep learning recently broke records in speech and image classification, but it has not been fully investigated as a potential approach to analyze wearable sensor data. We collected data from ten patients with idiopathic Parkinson's disease using inertial measurement units...
August 2016: Conference Proceedings: Annual International Conference of the IEEE Engineering in Medicine and Biology Society
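One preprocessing step implied by this kind of wearable-sensor pipeline is segmenting the continuous inertial stream into fixed-length windows before feeding them to a classifier. A minimal sketch, with arbitrary window and stride values:

```python
import numpy as np

# Illustrative sliding-window segmentation of an inertial-sensor stream.
# Channel count, window length, and stride are example values, not the paper's.
def sliding_windows(signal, window, stride):
    """Return an array of overlapping windows from a (time, channels) signal."""
    n = (len(signal) - window) // stride + 1
    return np.stack([signal[i * stride : i * stride + window] for i in range(n)])

imu = np.zeros((1000, 6))  # e.g. a 6-channel accelerometer + gyroscope recording
batch = sliding_windows(imu, window=128, stride=64)
print(batch.shape)  # (14, 128, 6)
```

Each window then becomes one training example for a deep network, which learns features directly from the raw samples instead of hand-crafted statistics.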
https://www.readbyqxmd.com/read/28260167/the-role-of-the-supplementary-motor-region-in-overt-reading-evidence-for-differential-processing-in-sma-proper-and-pre-sma-as-a-function-of-task-demands
#10
Jacqueline Cummine, Wahab Hanif, Inna Dymouriak-Tymashov, Kavya Anchuri, Stephanie Chiu, Carol A Boliek
A differentiation in function between the pre-SMA (i.e., cognitive load) and the SMA-proper (i.e., motor execution) has been described (Zhang et al., Cereb Cortex 22:99-111, 2012). These differential SMA functions may be influential in overt reading tasks. The present study examined the relationships between various segments of the SMA and overt reading through the modulation of task demands in an effort to explore the complexity of the print-to-speech network. Skilled reading adults (N = 15) took part in five overt reading tasks: pure regular word reading, pure exception word reading, mixed regular word and exception word reading, go/no-go reading with nonword foils and go/no-go reading with pseudohomophone foils...
March 4, 2017: Brain Topography
https://www.readbyqxmd.com/read/28214015/anomalous-network-architecture-of-the-resting-brain-in-children-who-stutter
#12
Soo-Eun Chang, Michael Angstadt, Ho Ming Chow, Andrew C Etchell, Emily O Garnett, Ai Leen Choo, Daniel Kessler, Robert C Welsh, Chandra Sripada
PURPOSE: We combined a large longitudinal neuroimaging dataset that includes children who do and do not stutter and a whole-brain network analysis in order to examine the intra- and inter-network connectivity changes associated with stuttering. Additionally, we asked whether whole brain connectivity patterns observed at the initial year of scanning could predict persistent stuttering in later years. METHODS: A total of 224 high-quality resting state fMRI scans collected from 84 children (42 stuttering, 42 controls) were entered into an independent component analysis (ICA), yielding a number of distinct network connectivity maps ("components") as well as expression scores for each component that quantified the degree to which it is expressed for each child...
January 25, 2017: Journal of Fluency Disorders
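The component "expression scores" described above can be approximated, in the spirit of dual regression, by least-squares fitting a subject's map to the group-level components. A synthetic sketch, not the study's exact pipeline:

```python
import numpy as np

# Hypothetical sketch: given group ICA spatial maps, estimate how strongly each
# component is expressed in one subject's map via least squares. All data are
# synthetic; voxel and component counts are arbitrary.
rng = np.random.default_rng(1)
n_voxels, n_components = 500, 4
group_maps = rng.standard_normal((n_voxels, n_components))  # ICA components

# Synthesize one subject's map as a known mixture of components plus noise.
true_scores = np.array([1.5, 0.0, -0.8, 0.3])
subject_map = group_maps @ true_scores + 0.01 * rng.standard_normal(n_voxels)

# Expression scores: least-squares fit of the subject map onto the group maps.
scores, *_ = np.linalg.lstsq(group_maps, subject_map, rcond=None)
print(np.round(scores, 2))  # close to the true mixing weights
```

Per-subject scores obtained this way can then be entered into group statistics, e.g. comparing children who stutter with controls.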
https://www.readbyqxmd.com/read/28168061/right-hemisphere-remapping-of-naming-functions-depends-on-lesion-size-and-location-in-poststroke-aphasia
#13
Laura M Skipper-Kallal, Elizabeth H Lacey, Shihui Xing, Peter E Turkeltaub
The study of language network plasticity following left hemisphere stroke is foundational to the understanding of aphasia recovery and neural plasticity in general. Damage in different language nodes may influence whether local plasticity is possible and whether right hemisphere recruitment is beneficial. However, the relationships of both lesion size and location to patterns of remapping are poorly understood. In the context of a picture naming fMRI task, we tested whether lesion size and location relate to activity in surviving left hemisphere language nodes, as well as homotopic activity in the right hemisphere during covert name retrieval and overt name production...
2017: Neural Plasticity
https://www.readbyqxmd.com/read/28139959/inside-speech-multisensory-and-modality-specific-processing-of-tongue-and-lip-speech-actions
#14
Avril Treille, Coriandre Vilain, Thomas Hueber, Laurent Lamalle, Marc Sato
Action recognition has been found to rely not only on sensory brain areas but also partly on the observer's motor system. However, whether distinct auditory and visual experiences of an action modulate sensorimotor activity remains largely unknown. In the present sparse sampling fMRI study, we determined to which extent sensory and motor representations interact during the perception of tongue and lip speech actions. Tongue and lip speech actions were selected because an interlocutor's tongue movements are accessible via their impact on speech acoustics but are not visible, given the tongue's position inside the vocal tract, whereas lip movements are both "audible" and visible...
March 2017: Journal of Cognitive Neuroscience
https://www.readbyqxmd.com/read/28069925/cerebellar-tdcs-modulates-neural-circuits-during-semantic-prediction-a-combined-tdcs-fmri-study
#15
Anila M D'Mello, Peter E Turkeltaub, Catherine J Stoodley
It has been proposed that the cerebellum acquires internal models of mental processes that enable prediction, allowing for the optimization of behavior. In language, semantic prediction speeds speech production and comprehension. Right cerebellar lobules VI and VII (including Crus I/II) are engaged during a variety of language processes and are functionally connected with cerebral cortical language networks. Further, right posterolateral cerebellar neuromodulation modifies behavior during predictive language processing...
February 8, 2017: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
https://www.readbyqxmd.com/read/28024236/cerebral-blood-flow-and-its-connectivity-features-of-auditory-verbal-hallucinations-in-schizophrenia-a-perfusion-study
#16
Long-Biao Cui, Gang Chen, Zi-Liang Xu, Lin Liu, Hua-Ning Wang, Li Guo, Wen-Ming Liu, Ting-Ting Liu, Shun Qi, Kang Liu, Wei Qin, Jin-Bo Sun, Yi-Bin Xi, Hong Yin
The goal of the study was to investigate cerebral blood flow (CBF) and its connectivity (an across-subject covariance measure) patterns in schizophrenia (SZ) patients with auditory verbal hallucinations (AVHs). A total of 25 demographically matched SZ patients with AVHs, 25 without AVHs, and 25 healthy controls (HCs) underwent resting-state perfusion imaging using a pulsed arterial spin labeling sequence. CBF and its connectivity were analyzed, and CBF topological properties were then calculated. Patients with AVHs exhibited decreased CBF in the bilateral superior and middle frontal gyri and postcentral gyri, and the right supplementary motor area, compared with SZ patients without AVHs...
February 28, 2017: Psychiatry Research
https://www.readbyqxmd.com/read/27923733/cognitive-control-of-vocalizations-in-the-primate-ventrolateral-dorsomedial-frontal-vlf-dmf-brain-network
#17
REVIEW
Kep Kee Loh, Michael Petrides, William D Hopkins, Emmanuel Procyk, Céline Amiez
This review centers on the neural mechanisms underlying the primate cognitive control of vocalizations, i.e. the capacity to regulate vocal productions in a goal-directed manner. In both human and non-human primates (NHPs), two main frontal brain regions are associated with top-down vocal control: a ventrolateral frontal region (VLF), comprising the ventrolateral prefrontal cortex and ventral premotor region; and a dorsomedial frontal region (DMF), comprising the mid-cingulate cortex, pre-supplementary and supplementary motor areas...
December 5, 2016: Neuroscience and Biobehavioral Reviews
https://www.readbyqxmd.com/read/27884462/dual-neural-network-model-for-the-evolution-of-speech-and-language
#18
REVIEW
Steffen R Hage, Andreas Nieder
Explaining the evolution of speech and language poses one of the biggest challenges in biology. We propose a dual network model that posits a volitional articulatory motor network (VAMN) originating in the prefrontal cortex (PFC; including Broca's area) that cognitively controls vocal output of a phylogenetically conserved primary vocal motor network (PVMN) situated in subcortical structures. By comparing the connections between these two systems in human and nonhuman primate brains, we identify crucial biological preadaptations in monkeys for the emergence of a language system in humans...
December 2016: Trends in Neurosciences
https://www.readbyqxmd.com/read/27875590/spatio-temporal-progression-of-cortical-activity-related-to-continuous-overt-and-covert-speech-production-in-a-reading-task
#19
Jonathan S Brumberg, Dean J Krusienski, Shreya Chakrabarti, Aysegul Gunduz, Peter Brunner, Anthony L Ritaccio, Gerwin Schalk
How the human brain plans, executes, and monitors continuous and fluent speech has remained largely elusive. For example, previous research has defined the cortical locations most important for different aspects of speech function, but has not yet yielded a definition of the temporal progression of involvement of those locations as speech progresses either overtly or covertly. In this paper, we uncovered the spatio-temporal evolution of neuronal population-level activity related to continuous overt speech, and identified those locations that shared activity characteristics across overt and covert speech...
2016: PloS One
https://www.readbyqxmd.com/read/27833009/anomaly-in-neural-phase-coherence-accompanies-reduced-sensorimotor-integration-in-adults-who-stutter
#20
Ranit Sengupta, Shalin Shah, Katie Gore, Torrey Loucks, Sazzad M Nasir
Despite advances in our understanding of the human speech system, the neurophysiological basis of stuttering remains largely unknown. Here, it is hypothesized that the speech of adults who stutter (AWS) is susceptible to disruptions in sensorimotor integration caused by neural miscommunication within the speech motor system. Human speech unfolds over rapid timescales and relies on a distributed system of brain regions working in a parallel and synchronized manner, and a breakdown in neural communication between the putative brain regions could increase susceptibility to dysfluency...
December 2016: Neuropsychologia