Read by QxMD

recurrent neural network

Xiaoyu Zhang, Han Ju, Trevor B Penney, Antonius M J VanDongen
Humans instantly recognize a previously seen face as "familiar." To deepen our understanding of familiarity-novelty detection, we simulated biologically plausible neural network models of generic cortical microcircuits consisting of spiking neurons with random recurrent synaptic connections. NMDA receptor (NMDAR)-dependent synaptic plasticity was implemented to allow for unsupervised learning and bidirectional modifications. Network spiking activity evoked by sensory inputs consisting of face images altered synaptic efficacy, which resulted in the network responding more strongly to a previously seen face than a novel face...
May 2017: ENeuro
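The familiarity effect described above can be illustrated with a minimal rate-model sketch in plain NumPy. This is not the authors' spiking/NMDAR model: the network size, the Hebbian outer-product update standing in for NMDAR-dependent plasticity, and all constants are illustrative assumptions. The point is only that potentiating a random recurrent network on one input pattern makes that pattern evoke a stronger response than a novel one.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
W = rng.normal(0, 1 / np.sqrt(n), (n, n))   # random recurrent connectivity

def response(W, pattern, steps=10):
    """Total network activity evoked by a constant input pattern."""
    r = np.zeros(n)
    for _ in range(steps):
        r = np.tanh(W @ r + pattern)
    return np.linalg.norm(r)

seen = 0.5 * rng.choice([-1.0, 1.0], n)     # the "familiar" input
novel = 0.5 * rng.choice([-1.0, 1.0], n)    # an input the network never saw

base_seen, base_novel = response(W, seen), response(W, novel)

# Hebbian-style potentiation for the seen pattern -- a crude stand-in for
# the NMDAR-dependent bidirectional plasticity used in the paper.
W = W + np.outer(seen, seen) / n
post_seen, post_novel = response(W, seen), response(W, novel)
```

After potentiation the familiar pattern's evoked response grows noticeably more than the novel pattern's, which is the signature of familiarity detection the abstract describes.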
Ran Darshan, William E Wood, Susan Peters, Arthur Leblois, David Hansel
The ability to generate variable movements is essential for learning and adjusting complex behaviours. This variability has been linked to the temporal irregularity of neuronal activity in the central nervous system. However, how neuronal irregularity actually translates into behavioural variability is unclear. Here we combine modelling, electrophysiological and behavioural studies to address this issue. We demonstrate that a model circuit comprising topographically organized and strongly recurrent neural networks can autonomously generate irregular motor behaviours...
May 22, 2017: Nature Communications
Benjamin Scellier, Yoshua Bengio
We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well-defined objective function...
2017: Frontiers in Computational Neuroscience
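The two-phase scheme can be sketched on a tiny continuous Hopfield-style network. Everything below (network size, the hard-sigmoid nonlinearity, the nudging strength beta, the simplified dynamics that assume units stay in the nonlinearity's linear range) is an illustrative assumption, not the authors' code; it only shows the shape of the algorithm: relax freely, relax again with the output weakly nudged toward the target, then update weights from the contrast between the two equilibria.

```python
import numpy as np

rng = np.random.default_rng(0)

def rho(s):                      # hard-sigmoid activation
    return np.clip(s, 0.0, 1.0)

n = 5
W = rng.normal(0, 0.1, (n, n))
W = (W + W.T) / 2                # symmetric weights, as the framework requires
np.fill_diagonal(W, 0.0)

x = np.array([1.0, 0.0])         # clamped input units (indices 0 and 1)
target = np.array([1.0])         # desired output at unit 4
out = [4]

def relax(W, beta, steps=200, dt=0.5):
    """Settle to an equilibrium of the (simplified) energy dynamics."""
    s = np.zeros(n)
    s[:2] = x
    for _ in range(steps):
        grad = -s + W @ rho(s)               # -dE/ds, simplified
        if beta > 0.0:                       # nudged phase: pull output
            grad[out] += beta * (target - s[out])
        s += dt * grad
        s[:2] = x                            # inputs stay clamped
    return s

beta, lr = 0.5, 0.1
s_free = relax(W, beta=0.0)      # phase 1: free equilibrium (prediction)
s_nudge = relax(W, beta=beta)    # phase 2: output weakly nudged to target

# Weight update: contrast unit correlations at the two equilibria
dW = (np.outer(rho(s_nudge), rho(s_nudge))
      - np.outer(rho(s_free), rho(s_free))) / beta
np.fill_diagonal(dW, 0.0)
W = W + lr * dW
```

Both phases use the same neural computation, which is the framework's key property: no separate error-propagation circuit is needed.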
Ian Kaplan Christie, Paul Miller, Stephen D Van Hooser
The development of direction-selective cortical columns requires visual experience, but the neural circuits and plasticity mechanisms that are responsible for this developmental transition are unknown. To gain insight into the mechanisms that could underlie experience-dependent increases in selectivity, we explored families of cortical amplifier models that enhance weakly biased feed-forward signals. Here, we focused exclusively on possible contributions of cortico-cortical connections and took feed-forward input to be constant...
May 17, 2017: Journal of Neurophysiology
Sven Festag, Cord Spreckelsen
BACKGROUND: Tagging text data with codes representing biomedical concepts plays an important role in medical data management and analysis. A problem occurs if there are ambiguous words linked to several concepts. OBJECTIVES AND METHODS: This study aims at investigating word sense disambiguation based on word embedding and recurrent convolutional neural networks. The study focuses on terms mapped to multiple concepts of the Unified Medical Language System (UMLS)...
2017: Studies in Health Technology and Informatics
Jiaheng Xie, Xiao Liu, Daniel Dajun Zeng
Objective: Recent years have seen increased worldwide popularity of e-cigarette use. However, the risks of e-cigarettes are underexamined. Most e-cigarette adverse event studies have achieved low detection rates due to limited subject sample sizes in the experiments and surveys. Social media provides a large data repository of consumers' e-cigarette feedback and experiences, which are useful for e-cigarette safety surveillance. However, it is difficult to automatically interpret the informal and nontechnical consumer vocabulary about e-cigarettes in social media...
May 13, 2017: Journal of the American Medical Informatics Association: JAMIA
Ashley Prater
Reservoir computing is a recently introduced machine learning paradigm that has been shown to be well-suited for the processing of spatiotemporal data. Rather than training the network node connections and weights via backpropagation in traditional recurrent neural networks, reservoirs instead have fixed connections and weights among the 'hidden layer' nodes, and traditionally only the weights to the output layer of neurons are trained using linear regression. We claim that for signal classification tasks one may forgo the weight training step entirely and instead use a simple supervised clustering method based upon principal components of reservoir states...
April 24, 2017: Neural Networks: the Official Journal of the International Neural Network Society
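The training-free readout idea can be sketched as follows. This toy (signal classes, reservoir size, spectral-radius scaling, number of principal components) is an assumption for illustration, not Prater's exact method: a fixed random reservoir is driven by two classes of signals, the resulting states are projected onto principal components, and classification is by nearest class centroid, with no output-weight training at all.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed random reservoir (never trained), scaled to a modest spectral radius.
n_res = 100
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.normal(0, 1, n_res)

def final_state(u):
    """Drive the reservoir with scalar signal u and return the last state."""
    x = np.zeros(n_res)
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
    return x

t = np.linspace(0, 2 * np.pi, 100)
def sample(cls):                       # two toy signal classes: slow vs fast sine
    freq = 1.0 if cls == 0 else 3.0
    return np.sin(freq * t) + 0.05 * rng.normal(size=t.size)

# Project reservoir states onto principal components, classify by nearest
# class centroid -- no output-weight training step.
train = np.array([final_state(sample(c)) for c in [0]*10 + [1]*10])
labels = np.array([0]*10 + [1]*10)
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
proj = lambda X: (X - mean) @ Vt[:2].T          # top-2 principal components
centroids = np.array([proj(train)[labels == c].mean(axis=0) for c in (0, 1)])

test = np.array([final_state(sample(c)) for c in [0]*10 + [1]*10])
pred = np.argmin(
    np.linalg.norm(proj(test)[:, None, :] - centroids[None], axis=2), axis=1)
acc = (pred == labels).mean()
```

Because the reservoir separates the two signal classes in state space, even this crude centroid rule classifies well above chance.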
Rebecca B Price, Kathleen Gates, Thomas E Kraynak, Michael E Thase, Greg J Siegle
Depressed patients show abnormalities in brain connectivity at rest, including hyperconnectivity within the Default Mode Network (DMN). However, there is well-known heterogeneity in the clinical presentation of depression that is overlooked when averaging connectivity data. We used data-driven parsing of neural connectivity to reveal subgroups among 80 depressed patients completing resting state fMRI. Directed functional connectivity paths (e.g., region A influences region B) within a depression-relevant network were characterized using Group Iterative Multiple Model Estimation, a method shown to accurately recover the direction and presence of connectivity paths in individual participants...
May 12, 2017: Neuropsychopharmacology: Official Publication of the American College of Neuropsychopharmacology
Abdol-Hossein Vahabie, Mohammad-Reza A Dehaqani, Majid Nili Ahmadabadi, Babak Nadjar Araabi, Hossein Esteky
Neuronal networks of the brain adapt their information processing according to the history of stimuli. Whereas most studies have linked adaptation to repetition suppression, recurrent connections within a network and disinhibition due to adaptation predict more complex response patterns. The main questions of this study are as follows: what is the effect of the selectivity of neurons on suppression/enhancement of neural responses? What are the consequences of adaptation on information representation in neural population and the temporal structure of response patterns? We studied rapid face adaptation using spiking activities of neurons in the inferior-temporal (IT) cortex...
May 10, 2017: Scientific Reports
Yan Huang, Wei Wang, Liang Wang
Super resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently, ignoring the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often incurs high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN named bidirectional recurrent convolutional network for efficient multi-frame SR...
May 4, 2017: IEEE Transactions on Pattern Analysis and Machine Intelligence
Yanqing Chen
A major function of central nervous systems is to discriminate different categories or types of sensory input. Neuronal networks accomplish such tasks by learning different sensory maps at several stages of neural hierarchy, such that different neurons fire selectively to reflect different internal or external patterns and states. The exact mechanisms of such map formation processes in the brain are not completely understood. Here we study the mechanism by which a simple recurrent/reentrant neuronal network accomplishes group selection and discrimination among different inputs in order to generate sensory maps...
2017: Frontiers in Computational Neuroscience
Cesc Park, Youngjin Kim, Gunhee Kim
We propose an approach for retrieving a sequence of natural sentences for an image stream. Since general users often take a series of pictures of their experiences, much online visual information exists in the form of image streams, for which it is better to take the whole image stream into consideration when producing natural language descriptions. While almost all previous studies have dealt with the relation between a single image and a single natural sentence, our work extends both input and output dimension to a sequence of images and a sequence of sentences...
May 2, 2017: IEEE Transactions on Pattern Analysis and Machine Intelligence
Karlis Kanders, Tom Lorimer, Ruedi Stoop
There are indications that for optimizing neural computation, neural networks may operate at criticality. Previous approaches have used distinct fingerprints of criticality, leaving open the question whether the different notions would necessarily reflect different aspects of one and the same instance of criticality, or whether they could potentially refer to distinct instances of criticality. In this work, we choose avalanche criticality and edge-of-chaos criticality and demonstrate for a recurrent spiking neural network that avalanche criticality does not necessarily entrain dynamical edge-of-chaos criticality...
April 2017: Chaos
R Butler, P M Bernier, J Lefebvre, Guillaume Gilbert, K Whittingstall
Although Functional Magnetic Resonance imaging (fMRI) using the blood-oxygen-level-dependent (BOLD) contrast is widely used for non-invasively mapping hemodynamic brain activity in humans, its exact link to underlying neural processing is poorly understood. While some studies have reported that BOLD signals measured in visual cortex are tightly linked to neural activity in the narrow band gamma (NBG) range, others have found a weak correlation between the two. To elucidate the mechanisms behind these conflicting findings, we hypothesized that BOLD reflects the strength of synaptic inputs to cortex whereas NBG is more dependent on how well these inputs are correlated...
April 28, 2017: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
Andrea K Barreiro, Cheng Ly
A central question in neuroscience is to understand how noisy firing patterns are used to transmit information. Because neural spiking is noisy, spiking patterns are often quantified via pairwise correlations, or the probability that two cells will spike coincidentally, above and beyond their baseline firing rate. One observation frequently made in experiments is that correlations can increase systematically with firing rate. Theoretical studies have determined that stimulus-dependent correlations that increase with firing rate can have beneficial effects on information coding; however, we still have an incomplete understanding of what circuit mechanisms do, or do not, produce this correlation-firing rate relationship...
April 2017: PLoS Computational Biology
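The correlation-firing rate relationship the abstract refers to can be reproduced in a minimal threshold-crossing sketch (in the spirit of classic shared-input models, not the authors' circuit models): two cells receive a common Gaussian input plus independent noise and spike when their input exceeds a threshold. Lowering the threshold raises the firing rate and, simultaneously, the output spike correlation. The shared-input fraction and thresholds below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 20000
c = 0.5                                  # shared-input fraction

# Two cells receive a common Gaussian input plus independent noise,
# and emit a spike whenever their total input exceeds a threshold.
common = rng.normal(size=n_trials)
x1 = np.sqrt(c) * common + np.sqrt(1 - c) * rng.normal(size=n_trials)
x2 = np.sqrt(c) * common + np.sqrt(1 - c) * rng.normal(size=n_trials)

def spike_corr(theta):
    """Firing rate and spike-count correlation at threshold theta."""
    s1, s2 = (x1 > theta).astype(float), (x2 > theta).astype(float)
    rate = s1.mean()
    corr = np.corrcoef(s1, s2)[0, 1]
    return rate, corr

rate_hi, corr_hi = spike_corr(theta=2.0)   # high threshold -> low firing rate
rate_lo, corr_lo = spike_corr(theta=0.5)   # low threshold  -> high firing rate
```

Even though the input correlation is fixed at c, the output spike correlation grows as the rate grows, which is exactly the systematic relationship the abstract discusses.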
Danesh Shahnazian, Clay B Holroyd
Anterior cingulate cortex (ACC) has been the subject of intense debate over the past 2 decades, but its specific computational function remains controversial. Here we present a simple computational model of ACC that incorporates distributed representations across a network of interconnected processing units. Based on the proposal that ACC is concerned with the execution of extended, goal-directed action sequences, we trained a recurrent neural network to predict each successive step of several sequences associated with multiple tasks...
April 25, 2017: Psychonomic Bulletin & Review
Mitsuko Watabe-Uchida, Neir Eshel, Naoshige Uchida
Dopamine neurons facilitate learning by calculating reward prediction error, or the difference between expected and actual reward. Despite two decades of research, it remains unclear how dopamine neurons make this calculation. Here we review studies that tackle this problem from a diverse set of approaches, from anatomy to electrophysiology to computational modeling and behavior. Several patterns emerge from this synthesis: that dopamine neurons themselves calculate reward prediction error, rather than inherit it passively from upstream regions; that they combine multiple separate and redundant inputs, which are themselves interconnected in a dense recurrent network; and that despite the complexity of inputs, the output from dopamine neurons is remarkably homogeneous and robust...
April 24, 2017: Annual Review of Neuroscience
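The reward prediction error computation at the center of this review is the classic temporal-difference error, delta = r + gamma * V(s') - V(s). A tabular TD(0) sketch on a toy three-state cue-delay-reward chain (states, rates, and constants are illustrative, not from the review) shows the hallmark dopamine behavior: early in learning the error is large at reward delivery, and after learning it vanishes there because the reward is fully predicted.

```python
import numpy as np

# Tabular TD(0) on a 3-state cue -> delay -> reward chain.  delta is the
# reward prediction error: actual minus expected reward.
n_states, alpha, gamma = 3, 0.1, 1.0
V = np.zeros(n_states)

def run_episode(V):
    deltas = []
    for s in range(n_states):
        r = 1.0 if s == n_states - 1 else 0.0          # reward at the end
        v_next = V[s + 1] if s + 1 < n_states else 0.0
        delta = r + gamma * v_next - V[s]              # prediction error
        V[s] += alpha * delta
        deltas.append(delta)
    return deltas

first = run_episode(V)            # naive network: big surprise at reward time
for _ in range(300):
    last = run_episode(V)         # trained network: reward is expected
```

On the first episode the prediction error at reward time is 1.0 (fully unexpected reward); after several hundred episodes it collapses toward zero as the value estimates converge.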
Tao Liu, Jie Huang
This paper presents a discrete-time recurrent neural network approach to solving systems of linear equations with two features. First, the system of linear equations may not have a unique solution. Second, the system matrix is not known precisely, but a sequence of matrices that converges to the unknown system matrix exponentially is known. The problem is motivated from solving the output regulation problem for linear systems. Thus, an application of our main result leads to an online solution to the output regulation problem for linear systems...
April 17, 2017: IEEE Transactions on Neural Networks and Learning Systems
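The core iteration in this setting can be sketched as a discrete-time recurrent update that performs gradient descent on ||Ax - b||^2, using at step k only the current matrix estimate A_k. The specific matrices, the perturbation, and the step size below are illustrative assumptions, not the paper's construction; they show the two features the abstract names: the solver works from a sequence A_k converging exponentially to the true A.

```python
import numpy as np

# Recurrent iteration x_{k+1} = x_k - alpha * A_k^T (A_k x_k - b),
# i.e. gradient descent on ||A x - b||^2 with the system matrix known
# only through exponentially converging estimates A_k.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
b = np.array([2.0, 1.0])
E = np.array([[0.3, -0.2],
              [0.1, 0.4]])                 # initial matrix error

alpha = 0.2                                # step size < 2 / lambda_max(A^T A)
x = np.zeros(2)
for k in range(300):
    A_k = A + (0.5 ** k) * E               # estimates converging to A
    x = x - alpha * A_k.T @ (A_k @ x - b)

residual = np.linalg.norm(A @ x - b)
```

Because the estimate error decays geometrically, the early inexact steps do not prevent convergence: the iterate settles to a solution of the true system.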
Xu-Yao Zhang, Fei Yin, Yan-Ming Zhang, Cheng-Lin Liu, Yoshua Bengio
Recent deep learning based approaches have achieved great success on handwriting recognition. Chinese characters are among the most widely adopted writing systems in the world. Previous research has mainly focused on recognizing handwritten Chinese characters. However, recognition is only one aspect of understanding a language; another challenging and interesting task is to teach a machine to automatically write (pictographic) Chinese characters. In this paper, we propose a framework by using the recurrent neural network (RNN) as both a discriminative model for recognizing Chinese characters and a generative model for drawing (generating) Chinese characters...
April 18, 2017: IEEE Transactions on Pattern Analysis and Machine Intelligence
Rhys Heffernan, Yuedong Yang, Kuldip Paliwal, Yaoqi Zhou
Motivation: The accuracy of predicting protein local and global structural properties such as secondary structure and solvent accessible surface area has been stagnant for many years because of the challenge of accounting for non-local interactions between amino acid residues that are close in three-dimensional structural space but far from each other in their sequence positions. All existing machine-learning techniques relied on a sliding window of 10-20 amino acid residues to capture some "short to intermediate" non-local interactions...
April 18, 2017: Bioinformatics
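The "sliding window of 10-20 amino acid residues" that the abstract identifies as the bottleneck can be made concrete with a short sketch (the window width, the toy sequence, and the one-hot encoding are illustrative assumptions): each residue is encoded together with a fixed number of neighbors on either side, so any interaction outside that window is invisible to the model.

```python
import numpy as np

# Sliding-window encoding of a protein sequence: each residue is represented
# by a fixed window of neighbors, which is what limits such models to
# short-range context (the non-local-interaction problem the abstract raises).
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot(seq):
    X = np.zeros((len(seq), len(AMINO_ACIDS)))
    for i, aa in enumerate(seq):
        X[i, AA_INDEX[aa]] = 1.0
    return X

def windows(seq, half_width=7):
    """One flattened feature vector per residue, zero-padded at the ends."""
    X = one_hot(seq)
    pad = np.zeros((half_width, X.shape[1]))
    Xp = np.vstack([pad, X, pad])
    w = 2 * half_width + 1
    return np.array([Xp[i:i + w].ravel() for i in range(len(seq))])

feats = windows("MKTAYIAKQR")        # toy 10-residue sequence
```

With half_width=7 each residue sees a 15-residue window (300 features with a 20-letter alphabet); residues that are close in 3D space but more than 7 positions apart in sequence never appear in the same window, which is precisely the limitation the paper targets.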
