recurrent neural network

https://www.readbyqxmd.com/read/28334612/computing-by-robust-transience-how-the-fronto-parietal-network-performs-sequential-category-based-decisions
#1
Warasinee Chaisangmongkon, Sruthi K Swaminathan, David J Freedman, Xiao-Jing Wang
Decision making involves dynamic interplay between internal judgements and external perception, which has been investigated in delayed match-to-category (DMC) experiments. Our analysis of neural recordings shows that, during DMC tasks, LIP and PFC neurons demonstrate mixed, time-varying, and heterogeneous selectivity, but previous theoretical work has not established the link between these neural characteristics and population-level computations. We trained a recurrent network model to perform DMC tasks and found that the model reproduces key features of neuronal selectivity at the single-neuron and population levels remarkably well...
March 22, 2017: Neuron
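The delayed match-to-category (DMC) paradigm behind this model is straightforward to emulate when training a recurrent network. Below is a minimal Python sketch of a DMC trial generator; the category boundary, stimulus encoding, and epoch durations are illustrative assumptions, not the protocol used by Chaisangmongkon et al.

```python
import numpy as np

def dmc_trial(rng, dt=0.02, sample_t=0.5, delay_t=1.0, test_t=0.5):
    """Generate one delayed match-to-category trial.

    A motion direction (0-360 deg) is encoded as a 2-D (cos, sin) input;
    the category is defined by an assumed boundary at 0/180 degrees.
    Returns the input time series and the desired match/non-match label.
    """
    sample_dir = rng.uniform(0.0, 360.0)
    test_dir = rng.uniform(0.0, 360.0)
    category = lambda d: int(np.sin(np.deg2rad(d)) > 0)   # assumed boundary

    def encode(direction, duration):
        steps = int(duration / dt)
        rad = np.deg2rad(direction)
        return np.tile([np.cos(rad), np.sin(rad)], (steps, 1))

    delay = np.zeros((int(delay_t / dt), 2))               # blank delay epoch
    inputs = np.vstack([encode(sample_dir, sample_t), delay,
                        encode(test_dir, test_t)])
    match = int(category(sample_dir) == category(test_dir))
    return inputs, match

rng = np.random.default_rng(0)
x, y = dmc_trial(rng)
print(x.shape, "match" if y else "non-match")
```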
https://www.readbyqxmd.com/read/28333646/recurrent-neural-networks-with-auxiliary-memory-units
#2
Jianyong Wang, Lei Zhang, Quan Guo, Zhang Yi
Memory is one of the most important mechanisms in recurrent neural network (RNN) learning. It plays a crucial role in practical applications such as sequence learning. With a good memory mechanism, long-term history can be fused with current information, which can improve RNN learning. Developing a suitable memory mechanism is therefore always desirable in the field of RNNs. This paper proposes a novel memory mechanism for RNNs. The main contributions of this paper are: 1) an auxiliary memory unit (AMU) is proposed, which results in a new special RNN model (AMU-RNN) that separates the memory and output explicitly, and 2) an efficient learning algorithm is developed by employing the technique of error-flow truncation...
March 21, 2017: IEEE Transactions on Neural Networks and Learning Systems
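One way to read the abstract's key idea, an auxiliary memory unit kept explicitly separate from the output pathway, is as a recurrent cell with an extra additive state. The sketch below is a hypothetical interpretation, not the authors' AMU-RNN equations or their error-flow-truncation learning algorithm.

```python
import numpy as np

class AMURNNCell:
    """Toy recurrent cell with an auxiliary memory unit (AMU).

    The hidden state h drives the output; a separate memory state m
    accumulates information additively, so long-term history is carried
    outside the output pathway (one possible reading of the abstract).
    """
    def __init__(self, n_in, n_hid, seed=0):
        rng = np.random.default_rng(seed)
        self.W_x = rng.normal(0, 0.1, (n_hid, n_in))
        self.W_h = rng.normal(0, 0.1, (n_hid, n_hid))
        self.W_m = rng.normal(0, 0.1, (n_hid, n_hid))

    def step(self, x, h, m):
        h_new = np.tanh(self.W_x @ x + self.W_h @ h + self.W_m @ m)
        m_new = m + h_new          # memory integrates; output reads h only
        return h_new, m_new

cell = AMURNNCell(n_in=3, n_hid=8)
h = m = np.zeros(8)
for x in np.random.default_rng(1).normal(size=(5, 3)):
    h, m = cell.step(x, h, m)
print(h.round(3))
```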
https://www.readbyqxmd.com/read/28329014/prediction-of-chronic-damage-in-systemic-lupus-erythematosus-by-using-machine-learning-models
#3
Fulvia Ceccarelli, Marco Sciandrone, Carlo Perricone, Giulio Galvan, Francesco Morelli, Luis Nunes Vicente, Ilaria Leccese, Laura Massaro, Enrica Cipriano, Francesca Romana Spinelli, Cristiano Alessandri, Guido Valesini, Fabrizio Conti
OBJECTIVE: The increased survival of Systemic Lupus Erythematosus (SLE) patients implies the development of chronic damage, which occurs in up to 50% of cases. Its prevention is a major goal in SLE management. We aimed to predict chronic damage in a large monocentric SLE cohort by using neural networks. METHODS: We enrolled 413 SLE patients (M/F 30/383; mean age ± SD 46.3 ± 11.9 years; mean disease duration ± SD 174.6 ± 112.1 months). Chronic damage was assessed by the SLICC/ACR Damage Index (SDI)...
2017: PloS One
https://www.readbyqxmd.com/read/28328515/solving-multiextremal-problems-by-using-recurrent-neural-networks
#4
Alaeddin Malek, Najmeh Hosseinipour-Mahani
In this paper, a neural network model for solving a class of multiextremal smooth nonconvex constrained optimization problems is proposed. The neural network is designed so that its equilibrium points coincide with the local and global optimal solutions of the corresponding optimization problem. Based on suitable underestimators for the Lagrangian of the problem, geometric criteria are given for an equilibrium point to be a global minimizer of the multiextremal constrained optimization problem, with or without bounds on the variables...
March 16, 2017: IEEE Transactions on Neural Networks and Learning Systems
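The underlying mechanism, a dynamical system whose equilibrium points coincide with optima of a constrained problem, can be illustrated with simple projected-gradient dynamics on a box-constrained multiextremal function. This is a generic sketch; the paper's underestimator-based global-optimality criteria are not reproduced.

```python
import numpy as np

def projected_gradient_flow(grad, lower, upper, x0, eta=0.01, steps=5000):
    """Euler-discretized neurodynamics: x <- clip(x - eta*grad(x), lower, upper).

    Fixed points of this map are exactly the stationary (KKT) points of
    min f(x) subject to lower <= x <= upper.
    """
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = np.clip(x - eta * grad(x), lower, upper)
    return x

# Multiextremal example: f(x) = sum(x^4 - 3x^2 + x) on the box [-2, 2]^2.
f_grad = lambda x: 4 * x**3 - 6 * x + 1
x_star = projected_gradient_flow(f_grad, -2.0, 2.0, x0=[1.5, -0.2])
print("equilibrium:", x_star.round(4))
```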
https://www.readbyqxmd.com/read/28318903/understanding-human-intention-by-connecting-perception-and-action-learning-in-artificial-agents
#5
Sangwook Kim, Zhibin Yu, Minho Lee
To develop an advanced human-robot interaction system, it is important to first understand how human beings learn to perceive, think, and act in an ever-changing world. In this paper, we propose an intention understanding system that uses an Object Augmented-Supervised Multiple Timescale Recurrent Neural Network (OA-SMTRNN) and demonstrate the effects of perception-action connected learning in an artificial agent, which is inspired by psychological and neurological phenomena in humans. We believe that action and perception are not isolated processes in human mental development, and argue that these psychological and neurological interactions can be replicated in a human-machine scenario...
February 11, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28297595/decreased-resting-state-activity-in-the-precuneus-is-associated-with-depressive-episodes-in-recurrent-depression
#6
Chun-Hong Liu, Xin Ma, Zhen Yuan, Lu-Ping Song, Bing Jing, Hong-Yu Lu, Li-Rong Tang, Jin Fan, Martin Walter, Cun-Zhi Liu, Lihong Wang, Chuan-Yue Wang
OBJECTIVE: To investigate alterations in resting-state spontaneous brain activity in patients with major depressive disorder (MDD) experiencing multiple episodes. METHODS: Between May 2007 and September 2014, 24 recurrent and 22 remitted patients diagnosed with MDD using the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I), together with 69 healthy controls matched for age, sex, and educational level, participated in this study. Among them, 1 healthy control was excluded due to excessive head motion...
March 14, 2017: Journal of Clinical Psychiatry
https://www.readbyqxmd.com/read/28290507/inhibitory-interneuron-circuits-at-cortical-and-spinal-levels-are-associated-with-individual-differences-in-corticomuscular-coherence-during-isometric-voluntary-contraction
#7
Ryosuke Matsuya, Junichi Ushiyama, Junichi Ushiba
Corticomuscular coherence (CMC) is an oscillatory synchronization at 15-35 Hz (β-band) between the electroencephalogram (EEG) of the sensorimotor cortex and the electromyogram of contracting muscles. Although we have reported that the magnitude of CMC varies among individuals, the physiological mechanisms underlying this variation are still unclear. Here, we aimed to investigate the associations between CMC and intracortical inhibition (ICI) in the primary motor cortex (M1) and recurrent inhibition (RI) in the spinal cord, which probably affect oscillatory neural activities...
March 14, 2017: Scientific Reports
https://www.readbyqxmd.com/read/28288158/stimulus-specific-adaptation-in-a-recurrent-network-model-of-primary-auditory-cortex
#8
Tohar S Yarden, Israel Nelken
Stimulus-specific adaptation (SSA) occurs when neurons decrease their responses to frequently-presented (standard) stimuli but not, or not as much, to other, rare (deviant) stimuli. SSA is present in all mammalian species in which it has been tested as well as in birds. SSA confers short-term memory to neuronal responses, and may lie upstream of the generation of mismatch negativity (MMN), an important human event-related potential. Previously published models of SSA mostly rely on synaptic depression of the feedforward, thalamocortical input...
March 13, 2017: PLoS Computational Biology
https://www.readbyqxmd.com/read/28287991/adaptive-sliding-mode-control-of-dynamic-systems-using-double-loop-recurrent-neural-network-structure
#9
Juntao Fei, Cheng Lu
In this paper, an adaptive sliding mode control system using a double-loop recurrent neural network (DLRNN) structure is proposed for a class of nonlinear dynamic systems. A new three-layer RNN is proposed to approximate unknown dynamics with two different kinds of feedback loops, in which the firing weights and the output signal calculated in the last step are stored and used as the feedback signals in each loop. Since the new structure combines the advantages of internal-feedback and external-feedback NNs, it can acquire internal state information while the output signal is also captured; thus the designed DLRNN achieves better approximation performance than regular NNs without feedback loops or regular RNNs with a single feedback loop...
March 6, 2017: IEEE Transactions on Neural Networks and Learning Systems
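The double-loop structure described above feeds back both the previous hidden (internal) state and the previous output (external) signal. A toy sketch of that wiring follows; the dimensions and initialization are arbitrary, and the sliding-mode controller built on top of the DLRNN is not shown.

```python
import numpy as np

class DoubleLoopRNN:
    """Toy RNN with both internal (hidden-state) and external (output) feedback."""
    def __init__(self, n_in, n_hid, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.2, (n_hid, n_in))
        self.W_hid = rng.normal(0, 0.2, (n_hid, n_hid))     # internal feedback loop
        self.W_out_fb = rng.normal(0, 0.2, (n_hid, n_out))  # external feedback loop
        self.W_out = rng.normal(0, 0.2, (n_out, n_hid))
        self.h = np.zeros(n_hid)
        self.y = np.zeros(n_out)

    def step(self, x):
        # Hidden update uses the input, the previous hidden state, and the
        # previous output signal (the two feedback loops of the abstract).
        self.h = np.tanh(self.W_in @ x + self.W_hid @ self.h
                         + self.W_out_fb @ self.y)
        self.y = self.W_out @ self.h
        return self.y

net = DoubleLoopRNN(n_in=2, n_hid=10, n_out=1)
for x in np.random.default_rng(1).normal(size=(4, 2)):
    print(net.step(x).round(3))
```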
https://www.readbyqxmd.com/read/28286600/doctor-ai-predicting-clinical-events-via-recurrent-neural-networks
#10
Edward Choi, Mohammad Taha Bahadori, Andy Schuetz, Walter F Stewart, Jimeng Sun
Leveraging large historical data in electronic health records (EHRs), we developed Doctor AI, a generic predictive model that covers observed medical conditions and medication uses. Doctor AI is a temporal model using recurrent neural networks (RNNs) and was developed and applied to longitudinal, time-stamped EHR data from 260K patients over 8 years. Encounter records (e.g., diagnosis codes, medication codes, or procedure codes) were input to the RNN to predict (all) the diagnosis and medication categories for a subsequent visit...
August 2016: JMLR Workshop and Conference Proceedings
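The prediction task framed here, mapping a patient's sequence of per-visit code sets to the code set of the next visit, can be sketched as a multi-hot recurrent model with a sigmoid multi-label output. The vocabulary size, the plain tanh recurrence, and the untrained weights below are illustrative assumptions; this is not the released Doctor AI model, and training is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_codes, n_hid = 50, 32            # illustrative vocabulary and state sizes

# Toy EHR: each patient is a sequence of visits, each visit a set of codes.
patients = [[rng.choice(n_codes, size=rng.integers(1, 6), replace=False)
             for _ in range(rng.integers(2, 8))] for _ in range(100)]

W_x = rng.normal(0, 0.1, (n_hid, n_codes))
W_h = rng.normal(0, 0.1, (n_hid, n_hid))
W_o = rng.normal(0, 0.1, (n_codes, n_hid))

def multi_hot(codes):
    v = np.zeros(n_codes)
    v[codes] = 1.0
    return v

def predict_next_visit(visits):
    """Run visits through a plain recurrent layer and return per-code
    probabilities for the following visit (multi-label sigmoid output)."""
    h = np.zeros(n_hid)
    for codes in visits:
        h = np.tanh(W_x @ multi_hot(codes) + W_h @ h)
    return 1.0 / (1.0 + np.exp(-W_o @ h))

probs = predict_next_visit(patients[0][:-1])
print("top predicted codes:", np.argsort(probs)[-5:][::-1])
```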
https://www.readbyqxmd.com/read/28282439/iterative-free-energy-optimization-for-recurrent-neural-networks-inferno
#11
Alexandre Pitti, Philippe Gaussier, Mathias Quoy
The intra-parietal lobe, coupled with the basal ganglia, forms a working memory that demonstrates strong planning capabilities for generating robust yet flexible neuronal sequences. Neurocomputational models, however, often fail to control long-range neural synchrony in recurrent spiking networks due to spontaneous activity. As a novel framework based on the free-energy principle, we propose to view the problem of spike synchrony as an optimization problem over the neurons' sub-threshold activity for the generation of long neuronal chains...
2017: PloS One
https://www.readbyqxmd.com/read/28282400/developing-a-benchmark-for-emotional-analysis-of-music
#12
Anna Aljanaki, Yi-Hsuan Yang, Mohammad Soleymani
The music emotion recognition (MER) field has expanded rapidly in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution)...
2017: PloS One
https://www.readbyqxmd.com/read/28281563/multiplex-visibility-graphs-to-investigate-recurrent-neural-network-dynamics
#13
Filippo Maria Bianchi, Lorenzo Livi, Cesare Alippi, Robert Jenssen
A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and is typically based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize the internal dynamics of a class of RNNs called echo state networks (ESNs). We design principled unsupervised methods to derive hyperparameter configurations yielding maximal ESN performance, expressed in terms of prediction error and memory capacity...
March 10, 2017: Scientific Reports
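Echo state networks, the RNN class analyzed here, keep the recurrent reservoir fixed and train only a linear readout, which is why hyperparameters such as the spectral radius matter so much. Below is a minimal ESN on a one-step-ahead prediction task; the graph-based tuning procedure from the paper is not reproduced, and all hyperparameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, spectral_radius, washout = 200, 0.9, 100   # key ESN hyperparameters

# Fixed random reservoir, rescaled to the chosen spectral radius.
W = rng.normal(size=(n_res, n_res))
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=n_res)

# Task: one-step-ahead prediction of a noisy sine wave.
u = np.sin(0.1 * np.arange(2000)) + 0.05 * rng.normal(size=2000)
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for t in range(len(u) - 1):
    x = np.tanh(W @ x + W_in * u[t])
    states[t + 1] = x                 # state driven by u[t] predicts u[t+1]

# Train only the linear readout (ridge regression) after a washout period.
X, y = states[washout:], u[washout:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print("train MSE:", round(float(np.mean((X @ W_out - y) ** 2)), 5))
```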
https://www.readbyqxmd.com/read/28276474/multiunit-activity-based-real-time-limb-state-estimation-from-dorsal-root-ganglion-recordings
#14
Sungmin Han, Jun-Uk Chu, Hyungmin Kim, Jong Woong Park, Inchan Youn
Proprioceptive afferent activities could be useful for providing sensory feedback signals for closed-loop control during functional electrical stimulation (FES). However, most previous studies have used the single-unit activity of individual neurons to extract sensory information from proprioceptive afferents. This study proposes a new decoding method to estimate ankle and knee joint angles using multiunit activity data. Proprioceptive afferent signals were recorded from a dorsal root ganglion with a single-shank microelectrode during passive movements of the ankle and knee joints, and joint angles were measured as kinematic data...
March 9, 2017: Scientific Reports
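Decoding joint angles from binned multiunit activity is, at its simplest, a lagged regression problem. The sketch below uses synthetic stand-in data and a plain least-squares decoder as a baseline; it is not the decoding method proposed in the paper, and all sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_bins, lag_bins = 16, 3000, 5   # illustrative sizes

# Synthetic stand-ins: two joint angles (deg) and binned multiunit counts.
angles = np.column_stack([20 * np.sin(0.01 * np.arange(n_bins)) + 90,       # ankle
                          30 * np.sin(0.01 * np.arange(n_bins) + 1) + 60])  # knee
rates = rng.poisson(lam=np.abs(angles @ rng.normal(size=(2, n_channels))) / 50.0)

def lagged_features(r, lags):
    """Stack the current bin and the previous lags-1 bins per channel."""
    return np.column_stack([r[lags - 1 - k: len(r) - k] for k in range(lags)])

X = lagged_features(rates, lag_bins)          # (n_bins - lags + 1, channels * lags)
Y = angles[lag_bins - 1:]
W = np.linalg.lstsq(X, Y, rcond=None)[0]      # linear decoder baseline
rmse = np.sqrt(np.mean((X @ W - Y) ** 2, axis=0))
print("RMSE (ankle, knee) in degrees:", rmse.round(2))
```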
https://www.readbyqxmd.com/read/28268448/predicting-local-field-potentials-with-recurrent-neural-networks
#15
Louis Kim, Jacob Harer, Akshay Rangamani, James Moran, Philip D Parks, Alik Widge, Emad Eskandar, Darin Dougherty, Sang Peter Chin
We present a recurrent neural network using long short-term memory (LSTM) that is capable of modeling and predicting local field potentials (LFPs). We train and test the network on real data recorded from epilepsy patients. We construct networks that predict multi-channel LFPs 1, 10, and 100 milliseconds forward in time. Our results show that prediction using the LSTM outperforms regression when predicting 10 and 100 milliseconds forward in time.
August 2016: Conference Proceedings: Annual International Conference of the IEEE Engineering in Medicine and Biology Society
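Forecasting multi-channel LFPs 1, 10, or 100 ms ahead reduces to supervised regression once the signal is windowed. The sketch below sets up that windowing with the linear-regression baseline the abstract compares against; the sampling rate, window length, and synthetic signal are assumptions, and the paper's LSTM would replace the linear map.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_channels, window_ms, horizon_ms = 1000, 8, 200, 10   # assumed setup

# Synthetic multi-channel "LFP": mixed oscillations plus noise.
t = np.arange(5 * fs) / fs
lfp = np.column_stack([np.sin(2 * np.pi * f * t)
                       for f in np.linspace(4, 30, n_channels)])
lfp += 0.1 * rng.normal(size=lfp.shape)

win, hor = int(window_ms * fs / 1000), int(horizon_ms * fs / 1000)
idx = np.arange(len(lfp) - win - hor)
X = np.stack([lfp[i:i + win].ravel() for i in idx])   # past window, all channels
Y = lfp[idx + win - 1 + hor]                           # value `hor` samples ahead

W = np.linalg.lstsq(X, Y, rcond=None)[0]               # regression baseline
mse = np.mean((X @ W - Y) ** 2)
print(f"{horizon_ms} ms-ahead linear baseline MSE: {mse:.5f}")
```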
https://www.readbyqxmd.com/read/28230528/biologically-plausible-learning-in-recurrent-neural-networks-reproduces-neural-dynamics-observed-during-cognitive-tasks
#16
Thomas Miconi
Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial...
February 23, 2017: ELife
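The ingredients of such a rule, exploratory perturbations, an eligibility trace, and a single reward delivered at the end of the trial, can be sketched with a generic reward-modulated (node-perturbation-style) update. This illustrates only the structure of the rule; it is not Miconi's exact plasticity rule and is not tuned to actually solve a task.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, steps, lr = 30, 300, 50, 0.01

W = rng.normal(0, 1.0 / np.sqrt(n), (n, n))
w_out = rng.normal(0, 0.1, n)
r_bar = 0.0                                   # running-average reward baseline

for trial in range(trials):
    target = 1.0 if trial % 2 == 0 else -1.0  # toy two-condition task
    x = np.zeros(n)
    x[0] = target                             # condition cue on one unit
    elig = np.zeros_like(W)
    for _ in range(steps):
        noise = 0.1 * rng.normal(size=n)      # exploratory perturbation
        x_new = np.tanh(W @ x) + noise
        elig += np.outer(noise, x)            # eligibility: perturbation x input
        x = x_new
    reward = -(w_out @ x - target) ** 2       # delayed, end-of-trial reward only
    W += lr * (reward - r_bar) * elig         # reward-modulated weight update
    r_bar += 0.05 * (reward - r_bar)

print("final reward estimate:", round(r_bar, 3))
```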
https://www.readbyqxmd.com/read/28226620/predicting-local-field-potentials-with-recurrent-neural-networks
#17
Louis Kim, Jacob Harer, Akshay Rangamani, James Moran, Philip D Parks, Alik Widge, Emad Eskandar, Darin Dougherty, Sang Peter Chin
We present a recurrent neural network using long short-term memory (LSTM) that is capable of modeling and predicting local field potentials (LFPs). We train and test the network on real data recorded from epilepsy patients. We construct networks that predict multi-channel LFPs 1, 10, and 100 milliseconds forward in time. Our results show that prediction using the LSTM outperforms regression when predicting 10 and 100 milliseconds forward in time.
August 2016: Conference Proceedings: Annual International Conference of the IEEE Engineering in Medicine and Biology Society
https://www.readbyqxmd.com/read/28221057/-understanding-the-neural-basis-of-cognitive-bias-modification-as-a-clinical-treatment-for-depression-correction-to-eguchi-et-al-2016
#18
(no author information available yet)
Reports an error in "Understanding the neural basis of cognitive bias modification as a clinical treatment for depression" by Akihiro Eguchi, Daniel Walters, Nele Peerenboom, Hannah Dury, Elaine Fox and Simon Stringer (Journal of Consulting and Clinical Psychology, Advanced Online Publication, Dec 19, 2016, np). In the article, there was an error in the Discussion section's first paragraph for Implications and Future Work. The in-text reference citation for Penton-Voak et al. (2013) was incorrectly listed as "Blumenfeld, Preminger, Sagi, and Tsodyks (2006)"...
March 2017: Journal of Consulting and Clinical Psychology
https://www.readbyqxmd.com/read/28219984/motor-neurons-tune-premotor-activity-in-a-vertebrate-central-pattern-generator
#19
Kristy J Lawton, Wick M Perry, Ayako Yamaguchi, Erik Zornik
Central pattern generators (CPGs) are neural circuits that drive rhythmic motor output without sensory feedback. Vertebrate CPGs are generally believed to operate in a top-down manner, in which premotor interneurons activate motor neurons that in turn drive muscles. In contrast, the frog (Xenopus laevis) vocal CPG contains a functionally unexplored neuronal projection from the motor nucleus to the premotor nucleus, indicating a recurrent pathway that may contribute to rhythm generation. In this study, we characterized the function of this bottom-up connection...
February 20, 2017: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
https://www.readbyqxmd.com/read/28212073/a-two-time-scale-neurodynamic-approach-to-constrained-minimax-optimization
#20
Xinyi Le, Jun Wang
This paper presents a two-time-scale neurodynamic approach to constrained minimax optimization using two coupled neural networks. One of the recurrent neural networks is used for minimizing the objective function and the other is used for maximization. It is shown that the coupled neurodynamic systems operating on two different time scales work well for minimax optimization. The effectiveness and characteristics of the proposed approach are illustrated using several examples. Furthermore, the proposed approach is applied to H∞ model predictive control...
March 2017: IEEE Transactions on Neural Networks and Learning Systems
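The two-time-scale idea, one network descending in the minimization variables while a coupled network ascends in the maximization variables at a different rate, can be seen in miniature with gradient descent-ascent on a simple saddle function. The step sizes and test function below are illustrative, not taken from the paper.

```python
import numpy as np

# Saddle problem: min over x, max over y of f(x, y) = x^2 - y^2 + 2*x*y.
fx = lambda x, y: 2 * x + 2 * y         # df/dx
fy = lambda x, y: -2 * y + 2 * x        # df/dy

x, y = 3.0, -2.0
eta_fast, eta_slow = 0.05, 0.005        # the two time scales of the coupled dynamics
for _ in range(20000):
    x -= eta_fast * fx(x, y)            # minimization network (fast)
    y += eta_slow * fy(x, y)            # maximization network (slow)

print(f"approximate saddle point: x={x:.4f}, y={y:.4f}")   # converges to (0, 0)
```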