Read by QxMD

recurrent neural network

Mitsuko Watabe-Uchida, Neir Eshel, Naoshige Uchida
Dopamine neurons facilitate learning by calculating reward prediction error, or the difference between expected and actual reward. Despite two decades of research, it remains unclear how dopamine neurons make this calculation. Here we review studies that tackle this problem from a diverse set of approaches, from anatomy to electrophysiology to computational modeling and behavior. Several patterns emerge from this synthesis: that dopamine neurons themselves calculate reward prediction error, rather than inherit it passively from upstream regions; that they combine multiple separate and redundant inputs, which are themselves interconnected in a dense recurrent network; and that despite the complexity of inputs, the output from dopamine neurons is remarkably homogeneous and robust...
April 24, 2017: Annual Review of Neuroscience
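The reward prediction error the review centres on has a standard computational form: the difference between received and expected reward, used to update the expectation. A minimal sketch (function name, learning rate, and loop are illustrative, not from the paper):

```python
# Minimal sketch of a reward prediction error (RPE) update, the quantity
# the review attributes to dopamine neurons. Names and the learning rate
# are illustrative choices, not taken from the paper.

def update_value(expected, actual_reward, learning_rate=0.1):
    """Return the RPE and the updated reward expectation."""
    rpe = actual_reward - expected          # positive: better than expected
    new_expected = expected + learning_rate * rpe
    return rpe, new_expected

# Repeated pairings of the same cue and reward shrink the RPE toward zero,
# mirroring the decline of dopamine responses to fully predicted rewards.
expected = 0.0
for _ in range(100):
    rpe, expected = update_value(expected, actual_reward=1.0)
```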
Tao Liu, Jie Huang
This paper presents a discrete-time recurrent neural network approach to solving systems of linear equations with two features. First, the system of linear equations may not have a unique solution. Second, the system matrix is not known precisely, but a sequence of matrices that converges to the unknown system matrix exponentially is known. The problem is motivated from solving the output regulation problem for linear systems. Thus, an application of our main result leads to an online solution to the output regulation problem for linear systems...
April 17, 2017: IEEE Transactions on Neural Networks and Learning Systems
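The setting can be illustrated with a generic discrete-time gradient iteration: at each step only an approximation A_k of the unknown system matrix is available, with A_k converging to A exponentially. This is a hedged stand-in for the idea, not the authors' network:

```python
import numpy as np

# Generic discrete-time gradient iteration for A x = b when only a
# sequence A_k -> A is known. Illustrative sketch, not the paper's
# recurrent network; step size and the example system are assumptions.

def solve_online(A_seq, b, x0, step=0.1):
    x = x0.astype(float)
    for A_k in A_seq:
        residual = A_k @ x - b
        x = x - step * A_k.T @ residual   # gradient step on ||A_k x - b||^2
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([4.0, 3.0])
# Exponentially converging approximations of the unknown matrix A.
A_seq = [A + (0.5 ** k) * np.eye(2) for k in range(200)]
x = solve_online(A_seq, b, x0=np.zeros(2))
```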
Xu-Yao Zhang, Fei Yin, Yan-Ming Zhang, Cheng-Lin Liu, Yoshua Bengio
Recent deep-learning-based approaches have achieved great success in handwriting recognition. The Chinese writing system is among the most widely used in the world. Previous research has mainly focused on recognizing handwritten Chinese characters. However, recognition is only one aspect of understanding a language; another challenging and interesting task is to teach a machine to automatically write (pictographic) Chinese characters. In this paper, we propose a framework that uses the recurrent neural network (RNN) both as a discriminative model for recognizing Chinese characters and as a generative model for drawing (generating) them...
April 18, 2017: IEEE Transactions on Pattern Analysis and Machine Intelligence
Rhys Heffernan, Yuedong Yang, Kuldip Paliwal, Yaoqi Zhou
Motivation: The accuracy of predicting protein local and global structural properties such as secondary structure and solvent-accessible surface area has been stagnant for many years because of the challenge of accounting for non-local interactions between amino acid residues that are close in three-dimensional structural space but far from each other in their sequence positions. All existing machine-learning techniques have relied on a sliding window of 10-20 amino acid residues to capture some "short to intermediate" non-local interactions...
April 18, 2017: Bioinformatics
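The sliding-window encoding the abstract says prior methods relied on is simple to sketch: each residue is represented by the concatenated features of the residues in a fixed window around it. Window width, padding value, and the toy features below are illustrative assumptions:

```python
# Sketch of a sliding-window encoding over a residue sequence: position i
# is represented by the features of the w residues centred on it, with
# zero padding at the ends. Window size and padding are assumptions.

def window_features(per_residue_features, w=7):
    """Return, for each position, the concatenated features of a
    w-wide window centred on it (zero-padded at the sequence ends)."""
    n = len(per_residue_features)
    dim = len(per_residue_features[0])
    pad = [0.0] * dim
    half = w // 2
    padded = [pad] * half + list(per_residue_features) + [pad] * half
    return [sum(padded[i:i + w], []) for i in range(n)]

# Toy sequence: 5 residues, 2 features each.
feats = [[float(i), 1.0] for i in range(5)]
windows = window_features(feats, w=3)
```

The limitation the paper targets is visible in the code: residues outside the window contribute nothing, however close they sit in three-dimensional space.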
Shuai Li, Huanqing Wang, Muhammad Usman Rafique
In this paper, we propose a novel recurrent neural network to resolve the redundancy of manipulators for efficient kinematic control in the presence of polynomial-type noise. Leveraging the high-order derivative properties of polynomial noise, a deliberately devised neural network is proposed to eliminate the impact of the noise and recover accurate tracking of desired trajectories in the workspace. Rigorous analysis shows that the proposed neural law stabilizes the system dynamics and that the position tracking error converges to zero in the presence of noise...
April 12, 2017: IEEE Transactions on Neural Networks and Learning Systems
Haytham M Fayek, Margaret Lech, Lawrence Cavedon
Speech Emotion Recognition (SER) can be regarded as either a static or a dynamic classification problem, which makes SER an excellent test bed for investigating and comparing various deep learning architectures. We describe a frame-based formulation of SER that relies on minimal speech processing and end-to-end deep learning to model intra-utterance dynamics. We use the proposed SER system to empirically explore feed-forward and recurrent neural network architectures and their variants. Our experiments illuminate the advantages and limitations of these architectures in paralinguistic speech recognition, and emotion recognition in particular...
March 21, 2017: Neural Networks: the Official Journal of the International Neural Network Society
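The frame-based formulation starts by slicing the utterance into short overlapping frames, each of which is then classified (the classifier itself is omitted here). Frame length and hop size below are common illustrative values, not the paper's settings:

```python
import numpy as np

# Sketch of the frame-based front end for SER: split a waveform into
# short overlapping frames for per-frame classification. The 25 ms frame
# and 10 ms hop at 16 kHz are conventional assumptions, not the paper's.

def frame_signal(signal, frame_len=400, hop=160):
    """Slice a 1-D signal into overlapping frames (tail samples that do
    not fill a whole frame are dropped)."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    return np.stack([signal[i * hop: i * hop + frame_len]
                     for i in range(n_frames)])

signal = np.arange(1600, dtype=float)   # 0.1 s of audio at 16 kHz
frames = frame_signal(signal)
```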
Xiaolei Ma, Zhuang Dai, Zhengbing He, Jihui Ma, Yong Wang, Yunpeng Wang
This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network...
April 10, 2017: Sensors
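The paper's first step, converting spatiotemporal traffic dynamics into an image via a two-dimensional time-space matrix, can be sketched as follows. The record format, segment ordering, and normalisation are illustrative choices, not the authors' exact pipeline:

```python
import numpy as np

# Sketch of arranging traffic speeds into a 2-D time-space matrix that a
# CNN can treat as an image: rows are time steps, columns are road
# segments. The normalising max speed is an assumed value.

def to_time_space_image(speed_records, num_segments, num_steps, max_speed=120.0):
    """speed_records: iterable of (time_step, segment_id, speed_kmh).
    Returns a num_steps x num_segments array scaled to [0, 1]."""
    image = np.zeros((num_steps, num_segments))
    for t, s, v in speed_records:
        image[t, s] = v / max_speed
    return image

records = [(0, 0, 60.0), (0, 1, 120.0), (1, 0, 30.0)]
img = to_time_space_image(records, num_segments=2, num_steps=2)
```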
Ahmadreza Ahmadi, Jun Tani
The current paper examines how a recurrent neural network (RNN) model using a dynamic predictive coding scheme can cope with fluctuations in temporal patterns through generalization in learning. The conjecture driving the present inquiry is that an RNN model with multiple timescales (MTRNN) learns by extracting patterns of change from observed temporal patterns, developing an internal dynamic structure such that variance in initial internal states accounts for modulations in the corresponding observed patterns. We trained an MTRNN with low-dimensional temporal patterns, and assessed performance on an imitation task employing these patterns...
March 21, 2017: Neural Networks: the Official Journal of the International Neural Network Society
Johannes Felsenberg, Oliver Barnstedt, Paola Cognigni, Suewei Lin, Scott Waddell
Animals constantly assess the reliability of learned information to optimize their behaviour. On retrieval, consolidated long-term memory can be neutralized by extinction if the learned prediction was inaccurate. Alternatively, retrieved memory can be maintained, following a period of reconsolidation during which it is labile. Although extinction and reconsolidation provide opportunities to alleviate problematic human memories, we lack a detailed mechanistic understanding of memory updating. Here we identify neural operations underpinning the re-evaluation of memory in Drosophila...
April 13, 2017: Nature
Hao Chen, Lingyun Wu, Qi Dou, Jing Qin, Shengli Li, Jie-Zhi Cheng, Dong Ni, Pheng-Ann Heng
Ultrasound (US) imaging is a widely used screening tool for obstetric examination and diagnosis. Accurate acquisition of fetal standard planes with key anatomical structures is crucial for reliable biometric measurement and diagnosis. However, standard plane acquisition is a labor-intensive task and requires an operator equipped with a thorough knowledge of fetal anatomy. Therefore, automatic approaches are in high demand in clinical practice to alleviate the workload and boost examination efficiency...
March 30, 2017: IEEE Transactions on Cybernetics
Kaihao Zhang, Yongzhen Huang, Yong Du, Liang Wang
One key challenge in facial expression recognition is capturing the dynamic variation of facial physical structure from videos. In this paper, we propose a Part-based Hierarchical Bidirectional Recurrent Neural Network (PHRNN) to analyze the facial expression information of temporal sequences. Our PHRNN models facial morphological variations and the dynamical evolution of expressions, which is effective for extracting "temporal features" based on facial landmarks (geometry information) from consecutive frames...
March 30, 2017: IEEE Transactions on Image Processing: a Publication of the IEEE Signal Processing Society
Mohammad Bagher Khodabakhshi, Mohammad Hassan Moradi
The dynamics of the respiratory system are highly significant for the detection of lung abnormalities, which highlights the importance of a reliable model for them. In this paper, we introduce a novel dynamic modelling method for the characterization of lung sounds (LS), based on the attractor recurrent neural network (ARNN). The ARNN structure allows the development of an effective LS model. Additionally, it can reproduce the distinctive features of lung sounds using its formed attractors...
March 23, 2017: Computers in Biology and Medicine
Qiang Xiao, Zhigang Zeng
Existing results on Lagrange stability and finite-time synchronization for memristive recurrent neural networks (MRNNs) are scale-free with respect to time evolution, which naturally introduces some restrictions. In this paper, two novel scale-limited comparison principles are established by means of inequality techniques and the induction principle on time scales. Results concerning Lagrange stability and global finite-time synchronization of MRNNs on time scales are then obtained. Scale-limited Lagrange stability criteria are derived in detail via nonsmooth analysis and the theory of time scales...
March 10, 2017: IEEE Transactions on Cybernetics
Xi-Lin Li
Stochastic gradient descent (SGD) is still the workhorse for many practical problems. However, it converges slowly and can be difficult to tune. It is possible to precondition SGD to accelerate its convergence remarkably, but many attempts in this direction either aim at solving specialized problems or result in methods significantly more complicated than SGD. This paper proposes a new method to adaptively estimate a preconditioner such that the amplitudes of perturbations of the preconditioned stochastic gradient match those of the perturbations of the parameters to be optimized, in a way comparable to Newton's method for deterministic optimization...
March 9, 2017: IEEE Transactions on Neural Networks and Learning Systems
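The general idea of preconditioning SGD, rescaling each gradient coordinate so that badly scaled problems behave well under a single step size, can be illustrated with a common diagonal scheme (accumulated squared gradients). This is a generic stand-in, not the perturbation-matching estimator the paper proposes:

```python
import numpy as np

# Diagonally preconditioned SGD sketch: each coordinate's step is scaled
# by the inverse root of its accumulated squared gradients. This common
# scheme is a stand-in for illustration, not the paper's estimator.

def precond_sgd(grad_fn, x0, steps=200, lr=0.5, eps=1e-8):
    x = x0.astype(float)
    accum = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x)
        accum += g * g
        x -= lr * g / (np.sqrt(accum) + eps)   # per-coordinate preconditioning
    return x

# Badly scaled quadratic: curvatures differ by four orders of magnitude,
# so plain SGD with a single step size handles at most one coordinate well.
grad = lambda x: np.array([100.0 * x[0], 0.01 * x[1]])
x = precond_sgd(grad, x0=np.array([1.0, 1.0]))
```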
Hai'e Gong, Haicang Zhang, Jianwei Zhu, Chao Wang, Shiwei Sun, Wei-Mou Zheng, Dongbo Bu
BACKGROUND: Residues in a protein might be buried inside or exposed to the solvent surrounding the protein. The buried residues usually form hydrophobic cores to maintain the structural integrity of proteins while the exposed residues are tightly related to protein functions. Thus, the accurate prediction of solvent accessibility of residues will greatly facilitate our understanding of both structure and functionalities of proteins. Most of the state-of-the-art prediction approaches consider the burial state of each residue independently, thus neglecting the correlations among residues...
March 14, 2017: BMC Bioinformatics
Jeroen Joukes, Yunguo Yu, Jonathan D Victor, Bart Krekelberg
To discriminate visual features such as corners and contours, the brain must be sensitive to spatial correlations between multiple points in an image. Consistent with this, macaque V2 neurons respond selectively to patterns with well-defined multipoint correlations. Here, we show that a standard feedforward model (a cascade of linear-non-linear filters) does not capture this multipoint selectivity. As an alternative, we developed an artificial neural network model with two hierarchical stages of processing and locally recurrent connectivity...
2017: Frontiers in Systems Neuroscience
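The "standard feedforward model" the study tests, a cascade of linear-non-linear filters, has a compact form: each stage applies a linear filter followed by a pointwise non-linearity. The filter values and the choice of rectifier below are illustrative assumptions:

```python
import numpy as np

# Sketch of a linear-non-linear (LN) cascade: each stage is a linear
# filter followed by pointwise rectification. Random filters stand in
# for fitted ones; this illustrates the model class, not the study's fit.

def ln_stage(stimulus, weights):
    return np.maximum(0.0, stimulus @ weights)   # linear filter, then rectification

def ln_cascade(stimulus, weight_list):
    x = stimulus
    for w in weight_list:
        x = ln_stage(x, w)
    return x

rng = np.random.default_rng(0)
stim = rng.standard_normal((4, 8))               # 4 image patches, 8 pixels each
w1 = rng.standard_normal((8, 5))                 # first-stage filters
w2 = rng.standard_normal((5, 1))                 # readout filter
out = ln_cascade(stim, [w1, w2])
```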
Francesca Fardo, Ryszard Auksztulewicz, Micah Allen, Martin J Dietz, Andreas Roepstorff, Karl J Friston
The neural processing and experience of pain are influenced by both expectations and attention. For example, the amplitude of event-related pain responses is enhanced by both novel and unexpected pain, and by moving the focus of attention towards a painful stimulus. Under predictive coding, this congruence can be explained by appeal to a precision-weighting mechanism, which mediates bottom-up and top-down attentional processes by modulating the influence of feedforward and feedback signals throughout the cortical hierarchy...
March 21, 2017: NeuroImage
Anne Cocos, Alexander G Fiks, Aaron J Masino
Objective: Social media is an important pharmacovigilance data source for adverse drug reaction (ADR) identification. Human review of social media data is infeasible due to data quantity, so natural language processing techniques are necessary. Social media includes informal vocabulary and irregular grammar, which challenge natural language processing methods. Our objective is to develop a scalable, deep-learning approach that exceeds state-of-the-art ADR detection performance in social media...
February 22, 2017: Journal of the American Medical Informatics Association: JAMIA
Eleonora Arena, Paolo Arena, Roland Strauss, Luca Patané
In nature, insects show impressive adaptation and learning capabilities. The proposed computational model takes inspiration from specific structures of the insect brain: after proposing key hypotheses on the direct involvement of the mushroom bodies (MBs) and on their neural organization, we developed a new architecture for motor learning to be applied in insect-like walking robots. The proposed model is a nonlinear control system based on spiking neurons. MBs are modeled as a nonlinear recurrent spiking neural network (SNN) with novel characteristics, able to memorize time evolutions of key parameters of the neural motor controller, so that existing motor primitives can be improved...
2017: Frontiers in Neurorobotics
Warasinee Chaisangmongkon, Sruthi K Swaminathan, David J Freedman, Xiao-Jing Wang
Decision making involves dynamic interplay between internal judgements and external perception, which has been investigated in delayed match-to-category (DMC) experiments. Our analysis of neural recordings shows that, during DMC tasks, LIP and PFC neurons demonstrate mixed, time-varying, and heterogeneous selectivity, but previous theoretical work has not established the link between these neural characteristics and population-level computations. We trained a recurrent network model to perform DMC tasks and found that the model can remarkably reproduce key features of neuronal selectivity at the single-neuron and population levels...
March 22, 2017: Neuron