Read by QxMD

Neural Computation

Takashi Kanamaru
In this study, I considered quantifying the strength of chaos in the population firing rate of a pulse-coupled neural network. In particular, I considered the dynamics where the population firing rate is chaotic and the firing of each neuron is stochastic. I calculated a time histogram of firings to show the variation in the population firing rate over time. To smooth this histogram, I used Bayesian adaptive regression splines and a gaussian filter. The nonlinear prediction method, based on reconstruction, was applied to a sequence of interpeak intervals in the smoothed time histogram of firings...
December 8, 2017: Neural Computation
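As a minimal pure-Python sketch of the gaussian-filter step described above (the BARS smoothing and the authors' actual pipeline are not reproduced; the bin counts and kernel width are illustrative):

```python
import math

def gaussian_smooth(hist, sigma):
    """Smooth a firing-time histogram with a truncated gaussian kernel.

    hist  : list of spike counts per time bin
    sigma : kernel width in bins
    """
    radius = int(3 * sigma)
    kernel = [math.exp(-0.5 * (k / sigma) ** 2) for k in range(-radius, radius + 1)]
    norm = sum(kernel)
    kernel = [w / norm for w in kernel]
    out = []
    for i in range(len(hist)):
        acc = 0.0
        for k, w in enumerate(kernel, start=-radius):
            j = i + k
            if 0 <= j < len(hist):  # truncate the kernel at the histogram edges
                acc += w * hist[j]
        out.append(acc)
    return out

# A single sharp peak spreads out but remains the maximum after smoothing.
print(gaussian_smooth([0, 0, 4, 0, 0], sigma=1.0))
```

The interpeak intervals of the smoothed histogram would then feed the reconstruction-based nonlinear prediction step.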
Wiktor Młynarski, Josh H McDermott
Interaction with the world requires an organism to transform sensory signals into representations in which behaviorally meaningful properties of the environment are made explicit. These representations are derived through cascades of neuronal processing stages in which neurons at each stage recode the output of preceding stages. Explanations of sensory coding may thus involve understanding how low-level patterns are combined into more complex structures. To gain insight into such midlevel representations for sound, we designed a hierarchical generative model of natural sounds that learns combinations of spectrotemporal features from natural stimulus statistics...
December 8, 2017: Neural Computation
Melika Payvand, Luke Theogarajan
In this letter, we have implemented and compared two neural coding algorithms in networks of spiking neurons: winner-takes-all (WTA) and winners-share-all (WSA). WSA exploits the code space provided by the temporal code by training a different combination of [Formula: see text] out of [Formula: see text] neurons to fire together in response to different patterns, while WTA uses one-hot coding to respond to distinct patterns. Using WSA, the maximum value of [Formula: see text] that maximizes information capacity for [Formula: see text] output neurons was theoretically determined and utilized...
December 8, 2017: Neural Computation
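The capacity argument behind WSA can be checked directly: one-hot (WTA) coding over n output neurons yields n codewords, while k-of-n co-firing yields C(n, k). A small sketch, where n and k stand in for the abstract's elided formula symbols:

```python
from math import comb

def wta_capacity(n):
    # one-hot coding: each pattern maps to exactly one winning neuron
    return n

def wsa_capacity(n, k):
    # winners-share-all: any k-of-n subset of co-firing neurons is a codeword
    return comb(n, k)

def best_k(n):
    # the k that maximizes the number of distinct k-of-n codewords
    return max(range(1, n + 1), key=lambda k: comb(n, k))

print(wta_capacity(10), wsa_capacity(10, 5), best_k(10))  # 10 252 5
```

With 10 output neurons, sharing the win among 5 neurons already offers 252 codewords versus 10 for one-hot, which is the information-capacity advantage the letter exploits.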
Aaron R Voelker, Chris Eliasmith
Researchers building spiking neural networks face the challenge of improving the biological plausibility of their model networks while maintaining the ability to quantitatively characterize network behavior. In this work, we extend the theory behind the neural engineering framework (NEF), a method of building spiking dynamical networks, to permit the use of a broad class of synapse models while maintaining prescribed dynamics up to a given order. This theory improves our understanding of how low-level synaptic properties alter the accuracy of high-level computations in spiking dynamical networks...
December 8, 2017: Neural Computation
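The NEF result this letter generalizes can be illustrated in its simplest, first-order form: with an exponential synapse of time constant tau, desired scalar linear dynamics dx/dt = A*x + B*u are realized by driving the synapse with the transforms A' = tau*A + 1 and B' = tau*B. A sketch for a perfect integrator, assuming Euler integration and illustrative constants:

```python
def nef_integrator(tau=0.1, dt=0.001, steps=1000, u=1.0):
    """Sketch of the standard NEF mapping for a first-order exponential
    synapse: to realize dx/dt = A*x + B*u, use the recurrent transform
    A' = tau*A + 1 and input transform B' = tau*B. For a perfect
    integrator (A = 0, B = 1) this gives A' = 1, B' = tau."""
    s = 0.0  # synaptic state, representing x
    for _ in range(steps):
        drive = s + tau * u          # A'*s + B'*u
        s += dt * (drive - s) / tau  # exponential synapse: ds/dt = (drive - s)/tau
    return s

# Integrating u = 1 for 1 second of simulated time yields x close to 1.
print(nef_integrator())
```

The letter's contribution is extending this kind of mapping beyond the first-order synapse to a broad class of synapse models while preserving prescribed dynamics up to a given order.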
Aritra Bhaduri, Amitava Banerjee, Subhrajit Roy, Sougata Kar, Arindam Basu
We present a neuromorphic current mode implementation of a spiking neural classifier with lumped square law dendritic nonlinearity. It has been shown previously in software simulations that such a system with binary synapses can be trained with structural plasticity algorithms to achieve comparable classification accuracy with fewer synaptic resources than conventional algorithms. We show that even in real analog systems with manufacturing imperfections (CV of 23.5% and 14.4% for dendritic branch gains and leaks respectively), this network is able to produce comparable results with fewer synaptic resources...
December 8, 2017: Neural Computation
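A lumped square-law dendritic nonlinearity, as named above, can be sketched in a few lines: each branch sums its synaptic inputs linearly and contributes the square of that sum to the soma, so inputs clustered on one branch evoke a superlinear response. The branch layout and input patterns below are illustrative, not the paper's circuit:

```python
def dendritic_response(branches, x):
    """Lumped square-law dendrite: each branch sums its synaptic inputs
    linearly, then contributes the square of that sum to the soma.

    branches : list of lists of input indices wired to each branch
    x        : binary input pattern
    """
    return sum(sum(x[i] for i in idx) ** 2 for idx in branches)

# Two active inputs on one branch beat the same two spread across branches.
clustered = dendritic_response([[0, 1], [2, 3]], [1, 1, 0, 0])  # 2**2 = 4
dispersed = dendritic_response([[0, 1], [2, 3]], [1, 0, 1, 0])  # 1 + 1 = 2
print(clustered, dispersed)
```

Structural plasticity training then amounts to choosing which inputs are wired to which branch, rather than tuning analog weights.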
Jing Fang, Naima Rüther, Christian Bellebaum, Laurenz Wiskott, Sen Cheng
The experimental evidence on the interrelation between episodic memory and semantic memory is inconclusive. Are they independent systems, different aspects of a single system, or separate but strongly interacting systems? Here, we propose a computational role for the interaction between the semantic and episodic systems that might help resolve this debate. We hypothesize that episodic memories are represented as sequences of activation patterns. These patterns are the output of a semantic representational network that compresses the high-dimensional sensory input...
December 8, 2017: Neural Computation
Qiulei Dong, Hong Wang, Zhanyi Hu
Under the goal-driven paradigm, Yamins et al. (2014; Yamins & DiCarlo, 2016) have shown that by optimizing only the final eight-way categorization performance of a four-layer hierarchical network, not only can its top output layer quantitatively predict IT neuron responses but its penultimate layer can also automatically predict V4 neuron responses. Currently, deep neural networks (DNNs) in the field of computer vision have reached image object categorization performance comparable to that of human beings on ImageNet, a data set that contains 1...
November 21, 2017: Neural Computation
Jianguang Zhang, Jianmin Jiang
Existing logistic regression suffers from overfitting and often fails to consider structural information, so we propose a novel matrix-based logistic regression to overcome these weaknesses. In the proposed method, 2D matrices are used directly to learn two groups of parameter vectors, one along each dimension, without vectorization, which allows the proposed method to fully exploit the underlying structural information embedded inside the 2D matrices. Further, we add a joint [Formula: see text]-norm on the two parameter matrices, which are organized by aligning each group of parameter vectors in columns...
November 21, 2017: Neural Computation
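The core of a matrix-based (bilinear) logistic model can be sketched without any learning machinery: the score is sigma(u^T X v + b), so the 2D input X is never vectorized and each dimension keeps its own parameter vector. The names u and v and the toy matrix are illustrative assumptions; the paper's joint-norm regularizer and training procedure are omitted:

```python
import math

def bilinear_logit(X, u, v, bias):
    """Matrix-variate logistic score sigma(u^T X v + bias): u weights the
    rows of the 2D input X, v weights its columns, and X stays a matrix."""
    uXv = sum(u[i] * X[i][j] * v[j]
              for i in range(len(u)) for j in range(len(v)))
    return 1.0 / (1.0 + math.exp(-(uXv + bias)))

# With uniform weights, the score depends on the diagonal mass of X here.
print(bilinear_logit([[1, 0], [0, 1]], [1.0, 1.0], [1.0, 1.0], 0.0))
```

Compared with vectorized logistic regression, which would need len(u) * len(v) free parameters for the same input, the bilinear form needs only len(u) + len(v), which is one way structural information curbs overfitting.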
Noam Brezis, Zohar Z Bronfman, Marius Usher
Humans possess a remarkable ability to rapidly form coarse estimations of numerical averages. This ability is important for making decisions that are based on streams of numerical or value-based information, as well as for preference formation. Nonetheless, the mechanism underlying rapid approximate numerical averaging remains unknown, and several competing mechanisms may account for it. Here, we tested the hypothesis that approximate numerical averaging relies on perceptual-like processes, based on population coding...
November 21, 2017: Neural Computation
Xiaowei Zhao, Zhigang Ma, Zhi Li, Zhihui Li
In recent years, multilabel classification has attracted significant attention in multimedia annotation. However, most multilabel classification methods focus only on the inherent correlations existing among multiple labels and concepts and ignore the relevance between features and the target concepts. To obtain more robust multilabel classification results, we propose a new multilabel classification method that aims to capture the correlations among multiple concepts by leveraging hypergraphs, which have been proved beneficial for relational learning...
November 21, 2017: Neural Computation
Hiroaki Sasaki, Voot Tangkaratt, Gang Niu, Masashi Sugiyama
Sufficient dimension reduction (SDR) is aimed at obtaining the low-rank projection matrix in the input space such that information about output data is maximally preserved. Among various approaches to SDR, a promising method is based on the eigendecomposition of the outer product of the gradient of the conditional density of output given input. In this letter, we propose a novel estimator of the gradient of the logarithmic conditional density that directly fits a linear-in-parameter model to the true gradient under the squared loss...
November 21, 2017: Neural Computation
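The outer-product-of-gradients idea behind SDR can be demonstrated on a toy 2D problem: if the output depends on the input only through one direction, every conditional-density gradient is parallel to that direction, so the averaged outer product is rank 1 and its leading eigenvector recovers the projection. A sketch assuming the gradients are given (the letter's actual contribution, estimating that gradient, is not reproduced):

```python
import math
import random

def leading_eigvec(m):
    # closed-form leading eigenvector of a symmetric 2x2 matrix [[a,b],[b,c]]
    a, b, c = m[0][0], m[0][1], m[1][1]
    lam = 0.5 * (a + c + math.sqrt((a - c) ** 2 + 4 * b * b))
    v = (b, lam - a) if abs(b) > 1e-12 else ((1.0, 0.0) if a >= c else (0.0, 1.0))
    n = math.hypot(*v)
    return (v[0] / n, v[1] / n)

random.seed(0)
# If y depends on x only through s = x1 + x2, every conditional-density
# gradient lies along (1, 1); average their outer products.
M = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(100):
    scale = random.gauss(0, 1)
    g = (scale, scale)  # gradient sample along the (1, 1) direction
    for i in range(2):
        for j in range(2):
            M[i][j] += g[i] * g[j] / 100

direction = leading_eigvec(M)
print(direction)  # ≈ (0.707, 0.707), i.e. the sufficient direction (1, 1)
```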
Christoph Börgers, R Melody Takeuchi, Daniel T Rosebrock
We investigate rhythms in networks of neurons with recurrent excitation, that is, with excitatory cells exciting each other. Recurrent excitation can sustain activity even when the cells in the network are driven below threshold, too weak to fire on their own. This sort of "reverberating" activity is often thought to be the basis of working memory. Recurrent excitation can also lead to "runaway" transitions, sudden transitions to high-frequency firing; this may be related to epileptic seizures. Not all fundamental questions about these phenomena have been answered with clarity in the literature...
November 21, 2017: Neural Computation
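The two phenomena named above can be reproduced in the simplest possible setting, a single-population firing-rate model with a steep sigmoidal gain: with strong recurrent excitation and subthreshold drive, the network is bistable between silence and self-sustained "reverberating" activity. All constants here are illustrative, not taken from the paper:

```python
import math

def simulate(w, drive, r0, steps=400, dt=0.05):
    """Euler-integrate the rate model dr/dt = -r + f(w*r + drive),
    where f is a steep sigmoid with threshold 1.5 (illustrative)."""
    f = lambda u: 1.0 / (1.0 + math.exp(-8.0 * (u - 1.5)))
    r = r0
    for _ in range(steps):
        r += dt * (-r + f(w * r + drive))
    return r

# Same subthreshold drive, two different initial conditions:
low  = simulate(w=2.0, drive=0.3, r0=0.0)  # stays silent
high = simulate(w=2.0, drive=0.3, r0=1.0)  # reverberates at a high rate
print(low, high)
```

A brief kick of extra drive would carry the low state over the unstable middle fixed point into the high state, which is the "runaway" transition in miniature.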
Rasmus Troelsgaard, Lars Kai Hansen
Model-based classification of sequence data using a set of hidden Markov models is a well-known technique. The involved score function, which is often based on the class-conditional likelihood, can, however, be computationally demanding, especially for long data sequences. Inspired by recent theoretical advances in spectral learning of hidden Markov models, we propose a score function based on third-order moments. In particular, we propose to use the Kullback-Leibler divergence between theoretical and empirical third-order moments for classification of sequence data with discrete observations...
November 21, 2017: Neural Computation
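The empirical side of the proposed score can be sketched for a binary alphabet: collect the joint distribution of consecutive observation triples (a third-order moment for discrete data) and compare two such distributions with the KL divergence. The smoothing constant is an assumption to keep the divergence finite; the paper's theoretical moments, derived from a learned HMM, are not reproduced:

```python
from collections import Counter
from math import log

def third_order_moments(seq, alphabet):
    """Empirical joint distribution of consecutive observation triples."""
    counts = Counter(zip(seq, seq[1:], seq[2:]))
    total = sum(counts.values())
    eps = 1e-9  # additive smoothing so the KL divergence stays finite
    denom = total + eps * len(alphabet) ** 3
    return {(a, b, c): (counts[(a, b, c)] + eps) / denom
            for a in alphabet for b in alphabet for c in alphabet}

def kl(p, q):
    """Kullback-Leibler divergence between two triple distributions."""
    return sum(p[k] * log(p[k] / q[k]) for k in p)

p = third_order_moments("abababababab", "ab")
q = third_order_moments("aabbaabbaabb", "ab")
print(kl(p, p), kl(p, q))  # zero for identical sequences, large otherwise
```

Classification then assigns a test sequence to the class whose (model-derived) moments are closest in this divergence.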
Dylan R Muir
Recurrent neural network architectures can have useful computational properties, with complex temporal dynamics and input-sensitive attractor states. However, evaluation of recurrent dynamic architectures requires solving systems of differential equations, and the number of evaluations required to determine their response to a given input can vary with the input or can be indeterminate altogether in the case of oscillations or instability. In feedforward networks, by contrast, only a single pass through the network is needed to determine the response to a given input...
November 21, 2017: Neural Computation
N F Hardy, Dean V Buonomano
Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing...
November 21, 2017: Neural Computation
Zhe Li, Pietro Mazzoni, Sen Song, Ning Qian
It has been debated whether kinematic features, such as the number of peaks or decomposed submovements in a velocity profile, indicate the number of discrete motor impulses or result from a continuous control process. The debate is particularly relevant for tasks involving target perturbation, which can alter movement kinematics. To simulate such tasks, finite-horizon models require two preset movement durations to compute two control policies before and after the perturbation. Another model employs infinite- and finite-horizon formulations to determine, respectively, movement durations and control policies, which are updated every time step...
November 21, 2017: Neural Computation
Sizhen Du, Guojie Song, Lei Han, Haikun Hong
Accurate causal inference among time series helps to better understand the interactive scheme behind the temporal variables. For time series analysis, an unavoidable issue is the existence of time lag among different temporal variables. That is, past evidence would take some time to cause a future effect instead of an immediate response. To model this process, existing approaches commonly adopt a prefixed time window to define the lag. However, in many real-world applications, this parameter may vary among different time series, and it is hard to be predefined with a fixed value...
October 24, 2017: Neural Computation
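A fixed-window baseline for the lag problem described above is the cross-correlogram: slide one series against the other and pick the lag with the largest overlap. This is exactly the kind of prefixed-lag approach the paper improves upon, shown here only to make the problem concrete; the series are toy data:

```python
def estimate_lag(cause, effect, max_lag):
    """Pick the lag that maximizes the (unnormalized) cross-correlation
    between a candidate cause series and its delayed effect."""
    def xcorr(lag):
        n = len(cause) - lag
        return sum(cause[t] * effect[t + lag] for t in range(n))
    return max(range(max_lag + 1), key=xcorr)

cause  = [0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
effect = [0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0]  # cause shifted by 2 steps
print(estimate_lag(cause, effect, max_lag=4))  # 2
```

The limitation the paper addresses is visible here: a single fixed lag (or lag window) must be chosen in advance and shared across all variable pairs.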
Osamu Hoshino, Meihong Zheng, Kazuo Watanabe
Learning of sensory cues is believed to rely on synchronous pre- and postsynaptic neuronal firing. Evidence is mounting that such synchronicity is not merely caused by properties of the underlying neuronal network but could also depend on the integrity of the gap junctions that connect neurons and astrocytes in networks. In this perspective, we set out to investigate the effect of astrocytic gap junctions on perceptual learning, introducing a model for coupled neuron-astrocyte networks. In particular, we focus on the fact that astrocytes are rich in GABA transporters (GATs), which can either take up or release GABA depending on the astrocyte membrane potential, itself a function of local neural activity...
October 24, 2017: Neural Computation
Minkyu Choi, Jun Tani
This letter proposes a novel predictive coding type neural network model, the predictive multiple spatiotemporal scales recurrent neural network (P-MSTRNN). The P-MSTRNN learns to predict visually perceived human whole-body cyclic movement patterns by exploiting multiscale spatiotemporal constraints imposed on network dynamics by using differently sized receptive fields as well as different time constant values for each layer. After learning, the network can imitate target movement patterns by inferring or recognizing corresponding intentions by means of the regression of prediction error...
October 24, 2017: Neural Computation
Mohammadjavad Faraji, Kerstin Preuschoff, Wulfram Gerstner
Surprise describes a range of phenomena from unexpected events to behavioral responses. We propose a novel measure of surprise and use it for surprise-driven learning. Our surprise measure takes into account data likelihood as well as the degree of commitment to a belief via the entropy of the belief distribution. We find that surprise-minimizing learning dynamically adjusts the balance between new and old information without the need of knowledge about the temporal statistics of the environment. We apply our framework to a dynamic decision-making task and a maze exploration task...
October 24, 2017: Neural Computation
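One illustrative way to combine the two ingredients named above, data likelihood and the degree of commitment via belief entropy, is to scale the negative log likelihood of an observation by an inverse-entropy commitment term. This is a toy stand-in to make the idea concrete, not the authors' actual measure:

```python
from math import log

def entropy(belief):
    """Shannon entropy (nats) of a belief distribution over hypotheses."""
    return -sum(p * log(p) for p in belief.values() if p > 0)

def surprise(belief, likelihood, observation):
    """Toy surprise: negative log marginal likelihood of the observation,
    weighted by commitment (low belief entropy -> high commitment).
    Illustrative stand-in for the paper's measure."""
    p_obs = sum(belief[h] * likelihood[h][observation] for h in belief)
    commitment = 1.0 / (1.0 + entropy(belief))
    return -log(p_obs) * commitment

likelihood = {'A': {'x': 0.9, 'y': 0.1}, 'B': {'x': 0.1, 'y': 0.9}}
committed = surprise({'A': 0.99, 'B': 0.01}, likelihood, 'y')
uncertain = surprise({'A': 0.5, 'B': 0.5}, likelihood, 'y')
print(committed > uncertain)  # the same evidence surprises a committed believer more
```

Any measure with this shape makes unexpected data under a confident belief maximally surprising, which is the property surprise-driven learning exploits to decide how much weight new information should get.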