Neural Computation

https://www.readbyqxmd.com/read/29162010/statistics-of-visual-responses-to-image-object-stimuli-from-primate-ait-neurons-to-dnn-neurons
#1
Qiulei Dong, Hong Wang, Zhanyi Hu
Under the goal-driven paradigm, Yamins et al. (2014; Yamins & DiCarlo, 2016) have shown that by optimizing only the final eight-way categorization performance of a four-layer hierarchical network, not only can its top output layer quantitatively predict IT neuron responses but its penultimate layer can also automatically predict V4 neuron responses. Currently, deep neural networks (DNNs) in the field of computer vision have reached image object categorization performance comparable to that of human beings on ImageNet, a data set that contains 1...
November 21, 2017: Neural Computation
https://www.readbyqxmd.com/read/29162009/rank-optimized-logistic-matrix-regression-toward-improved-matrix-data-classification
#2
Jianguang Zhang, Jianmin Jiang
While existing logistic regression suffers from overfitting and often fails to consider structural information, we propose a novel matrix-based logistic regression to overcome these weaknesses. In the proposed method, 2D matrices are directly used to learn two groups of parameter vectors along each dimension without vectorization, which allows the proposed method to fully exploit the underlying structural information embedded inside the 2D matrices. Further, we add a joint [Formula: see text]-norm on two parameter matrices, which are organized by aligning each group of parameter vectors in columns...
November 21, 2017: Neural Computation
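The abstract above is truncated, but the core bilinear idea can be sketched. Below is a minimal rank-1 illustration under our own assumptions (the names and the plain gradient loop are ours, and the joint norm penalty the paper adds is omitted): the score of a matrix sample M is sigmoid(u·M·v + b), so M is never vectorized.

```python
import numpy as np

def bilinear_logistic_fit(X, y, lr=0.1, epochs=200, seed=0):
    """Rank-1 bilinear logistic regression on 2D inputs.

    X: (n, p, q) array of matrix-valued samples; y: (n,) labels in {0, 1}.
    The score of a sample M is sigmoid(u @ M @ v + b), so the matrix
    structure is used directly instead of vectorizing M.
    """
    rng = np.random.default_rng(seed)
    n, p, q = X.shape
    u = rng.normal(scale=0.1, size=p)
    v = rng.normal(scale=0.1, size=q)
    b = 0.0
    for _ in range(epochs):
        z = np.einsum('i,nij,j->n', u, X, v) + b
        prob = 1.0 / (1.0 + np.exp(-z))
        err = prob - y                      # gradient of the logistic loss
        u -= lr * np.einsum('n,nij,j->i', err, X, v) / n
        v -= lr * np.einsum('n,nij,i->j', err, X, u) / n
        b -= lr * err.mean()
    return u, v, b
```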
https://www.readbyqxmd.com/read/29162008/a-perceptual-like-population-coding-mechanism-of-approximate-numerical-averaging
#3
Noam Brezis, Zohar Z Bronfman, Marius Usher
Humans possess a remarkable ability to rapidly form coarse estimations of numerical averages. This ability is important for making decisions based on streams of numerical or value-based information, as well as for preference formation. Nonetheless, the mechanism underlying rapid approximate numerical averaging remains unknown, and several competing mechanisms may account for it. Here, we tested the hypothesis that approximate numerical averaging relies on perceptual-like processes, based on population coding...
November 21, 2017: Neural Computation
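A hypothetical population-coding sketch of rapid averaging, under our own assumptions (Gaussian tuning curves and a centroid read-out, not necessarily the mechanism the paper tests): summing noisy population responses across the stream and reading out the centroid tracks the mean of the presented numbers.

```python
import numpy as np

# Illustrative only: encode each number in a stream as noisy activity over
# Gaussian tuning curves, accumulate the activity, and read out the
# population's preferred-value centroid as the running average.
rng = np.random.default_rng(1)
preferred = np.linspace(0, 100, 64)          # preferred numerosities
sigma = 8.0                                   # tuning width

def encode(x):
    rate = np.exp(-0.5 * ((x - preferred) / sigma) ** 2)
    return rate + rng.normal(scale=0.05, size=rate.shape)  # noisy response

stream = rng.uniform(20, 80, size=12)         # rapid stream of numbers
pooled = sum(encode(x) for x in stream)       # accumulated population activity
estimate = pooled @ preferred / pooled.sum()  # centroid read-out
print(estimate, stream.mean())                # estimate tracks the true mean
```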
https://www.readbyqxmd.com/read/29162007/joint-concept-correlation-and-feature-concept-relevance-learning-for-multilabel-classification
#4
Xiaowei Zhao, Zhigang Ma, Zhi Li, Zhihui Li
In recent years, multilabel classification has attracted significant attention in multimedia annotation. However, most multilabel classification methods focus only on the inherent correlations among multiple labels and concepts and ignore the relevance between features and the target concepts. To obtain more robust multilabel classification results, we propose a new multilabel classification method that aims to capture the correlations among multiple concepts by leveraging hypergraphs, which have been shown to be beneficial for relational learning...
November 21, 2017: Neural Computation
https://www.readbyqxmd.com/read/29162006/sufficient-dimension-reduction-via-direct-estimation-of-the-gradients-of-logarithmic-conditional-densities
#5
Hiroaki Sasaki, Voot Tangkaratt, Gang Niu, Masashi Sugiyama
Sufficient dimension reduction (SDR) is aimed at obtaining the low-rank projection matrix in the input space such that information about output data is maximally preserved. Among various approaches to SDR, a promising method is based on the eigendecomposition of the outer product of the gradient of the conditional density of output given input. In this letter, we propose a novel estimator of the gradient of the logarithmic conditional density that directly fits a linear-in-parameter model to the true gradient under the squared loss...
November 21, 2017: Neural Computation
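The eigendecomposition step can be illustrated on synthetic data. In this sketch the gradient of the logarithmic conditional density is supplied analytically for a known model, standing in for the paper's direct estimator; the SDR step itself is just the eigendecomposition of the averaged outer product of those gradients.

```python
import numpy as np

# Synthetic model: y = tanh(w1 @ x) + noise, so the central subspace is span{w1}.
rng = np.random.default_rng(0)
d, n = 5, 2000
w1 = np.array([1.0, 2.0, 0.0, 0.0, 0.0]) / np.sqrt(5)
X = rng.normal(size=(n, d))
y = np.tanh(X @ w1) + 0.1 * rng.normal(size=n)

# Stand-in for a direct gradient estimator: analytic gradients of
# log p(y|x) under the known Gaussian-noise model,
# d/dx log p(y|x) = (y - tanh(w1@x)) / 0.1**2 * (1 - tanh(w1@x)**2) * w1.
t = np.tanh(X @ w1)
grads = ((y - t) / 0.01 * (1 - t ** 2))[:, None] * w1[None, :]

# SDR step: eigendecomposition of the averaged outer product of gradients.
M = grads.T @ grads / n
eigvals, eigvecs = np.linalg.eigh(M)
B = eigvecs[:, -1:]                     # top eigenvector(s) span the subspace
print(np.abs(B[:, 0] @ w1))             # ~1: recovered direction aligns with w1
```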
https://www.readbyqxmd.com/read/29162005/on-rhythms-in-neuronal-networks-with-recurrent-excitation
#6
Christoph Börgers, R Melody Takeuchi, Daniel T Rosebrock
We investigate rhythms in networks of neurons with recurrent excitation, that is, with excitatory cells exciting each other. Recurrent excitation can sustain activity even when the cells in the network are driven below threshold, too weak to fire on their own. This sort of "reverberating" activity is often thought to be the basis of working memory. Recurrent excitation can also lead to "runaway" transitions, sudden transitions to high-frequency firing; this may be related to epileptic seizures. Not all fundamental questions about these phenomena have been answered with clarity in the literature...
November 21, 2017: Neural Computation
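A minimal mean-field sketch of reverberation (our own toy model, not the spiking networks studied in the paper): with a sigmoidal gain and recurrent excitation w > 0, the same subthreshold drive supports both a quiescent state and a self-sustained high-activity state.

```python
import numpy as np

def rate_dynamics(I, w=8.0, dt=0.01, steps=5000, r0=0.0):
    """Mean-field rate model r' = -r + f(w*r + I) with recurrent
    excitation w > 0. For subthreshold drive I, activity can persist at a
    high fixed point once started ('reverberation')."""
    f = lambda x: 1 / (1 + np.exp(-(x - 5.0)))   # sigmoidal gain, threshold 5
    r = r0
    for _ in range(steps):
        r += dt * (-r + f(w * r + I))
    return r

print(rate_dynamics(I=1.0, r0=0.0))   # stays near 0: drive below threshold
print(rate_dynamics(I=1.0, r0=1.0))   # stays high: sustained by recurrence
```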
https://www.readbyqxmd.com/read/29162004/sequence-classification-using-third-order-moments
#7
Rasmus Troelsgaard, Lars Kai Hansen
Model-based classification of sequence data using a set of hidden Markov models is a well-known technique. The involved score function, which is often based on the class-conditional likelihood, can, however, be computationally demanding, especially for long data sequences. Inspired by recent theoretical advances in spectral learning of hidden Markov models, we propose a score function based on third-order moments. In particular, we propose to use the Kullback-Leibler divergence between theoretical and empirical third-order moments for classification of sequence data with discrete observations...
November 21, 2017: Neural Computation
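A sketch of the moment-based score under our own simplifying assumptions (consecutive observation triples, and KL from the empirical tensor of the test sequence to each class's tensor; the paper's exact construction may differ):

```python
import numpy as np

def third_moment(seqs, m):
    """Empirical third-order moment: joint distribution of consecutive
    observation triples (x_t, x_{t+1}, x_{t+2}) over an alphabet of size m."""
    T = np.zeros((m, m, m))
    for s in seqs:
        for a, b, c in zip(s, s[1:], s[2:]):
            T[a, b, c] += 1
    return T / T.sum()

def kl(p, q, eps=1e-12):
    p, q = p.ravel() + eps, q.ravel() + eps
    return float(np.sum(p * np.log(p / q)))

def classify(seq, class_moments, m):
    """Assign seq to the class whose moment tensor is closest in KL."""
    t = third_moment([seq], m)
    return min(class_moments, key=lambda c: kl(t, class_moments[c]))
```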
https://www.readbyqxmd.com/read/29162003/feedforward-approximations-to-dynamic-recurrent-network-architectures
#8
Dylan R Muir
Recurrent neural network architectures can have useful computational properties, with complex temporal dynamics and input-sensitive attractor states. However, evaluation of recurrent dynamic architectures requires solving systems of differential equations, and the number of evaluations required to determine their response to a given input can vary with the input or can be indeterminate altogether in the case of oscillations or instability. In feedforward networks, by contrast, only a single pass through the network is needed to determine the response to a given input...
November 21, 2017: Neural Computation
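The cost asymmetry described here can be made concrete. In this sketch (a generic rate network, not the paper's approximation method), the recurrent response needs an input-dependent and possibly unbounded number of Euler steps, while the unrolled feedforward version always costs a fixed number of passes.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 20
W = 0.9 * rng.normal(size=(N, N)) / np.sqrt(N)   # recurrent weights
phi = np.tanh

def recurrent_response(I, dt=0.1, tol=1e-6, max_steps=10_000):
    """Euler integration of dx/dt = -x + W phi(x) + I. The number of steps
    to (approximate) convergence depends on the input."""
    x = np.zeros(N)
    for step in range(max_steps):
        dx = -x + W @ phi(x) + I
        x = x + dt * dx
        if np.linalg.norm(dx) < tol:
            return x, step
    return x, max_steps          # may never settle (oscillation/instability)

def feedforward_response(I, depth=5):
    """Fixed-cost unrolling: depth passes through the same weights."""
    x = np.zeros(N)
    for _ in range(depth):
        x = W @ phi(x) + I       # one feedforward layer per unrolled step
    return x
```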
https://www.readbyqxmd.com/read/29162002/encoding-time-in-feedforward-trajectories-of-a-recurrent-neural-network-model
#9
N F Hardy, Dean V Buonomano
Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing...
November 21, 2017: Neural Computation
https://www.readbyqxmd.com/read/29162001/a-single-continuously-applied-control-policy-for-modeling-reaching-movements-with-and-without-perturbation
#10
Zhe Li, Pietro Mazzoni, Sen Song, Ning Qian
It has been debated whether kinematic features, such as the number of peaks or decomposed submovements in a velocity profile, indicate the number of discrete motor impulses or result from a continuous control process. The debate is particularly relevant for tasks involving target perturbation, which can alter movement kinematics. To simulate such tasks, finite-horizon models require two preset movement durations to compute two control policies before and after the perturbation. Another model employs infinite- and finite-horizon formulations to determine, respectively, movement durations and control policies, which are updated every time step...
November 21, 2017: Neural Computation
https://www.readbyqxmd.com/read/29064787/temporal-causal-inference-with-time-lag
#11
Sizhen Du, Guojie Song, Lei Han, Haikun Hong
Accurate causal inference among time series helps to better understand the interactive scheme behind the temporal variables. For time series analysis, an unavoidable issue is the existence of time lag among different temporal variables. That is, past evidence takes some time to cause a future effect rather than producing an immediate response. To model this process, existing approaches commonly adopt a prefixed time window to define the lag. However, in many real-world applications this parameter may vary among different time series, and it is hard to predefine with a fixed value...
October 24, 2017: Neural Computation
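A minimal sketch of the underlying issue, using a plain lagged least-squares score of our own choosing rather than the paper's method: the predictive lag is selected per variable pair instead of being fixed globally.

```python
import numpy as np

def best_lag(cause, effect, max_lag=10):
    """Pick the lag at which past values of `cause` best predict `effect`,
    instead of fixing one time window for every variable pair."""
    scores = []
    for lag in range(1, max_lag + 1):
        x, y = cause[:-lag], effect[lag:]
        x = np.column_stack([x, np.ones_like(x)])      # lagged regressor + bias
        resid = y - x @ np.linalg.lstsq(x, y, rcond=None)[0]
        scores.append(resid.var())
    return 1 + int(np.argmin(scores))     # lag minimizing residual variance
```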
https://www.readbyqxmd.com/read/29064786/improved-perceptual-learning-by-control-of-extracellular-gaba-concentration-by-astrocytic-gap-junctions
#12
Osamu Hoshino, Meihong Zheng, Kazuo Watanabe
Learning of sensory cues is believed to rely on synchronous pre- and postsynaptic neuronal firing. Evidence is mounting that such synchronicity is not merely caused by properties of the underlying neuronal network but could also depend on the integrity of gap junctions that connect neurons and astrocytes in networks. From this perspective, we set out to investigate the effect of astrocytic gap junctions on perceptual learning, introducing a model for coupled neuron-astrocyte networks. In particular, we focus on the fact that astrocytes are rich in GABA transporters (GATs), which can either take up or release GABA depending on the astrocyte membrane potential, which is itself a function of local neural activity...
October 24, 2017: Neural Computation
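An illustrative toy of the mechanism (the sign convention, constants, and the crude gap-junction averaging are all our assumptions, not the paper's model):

```python
import numpy as np

def gat_gaba_flux(v_ast, e_gat=-60.0, g=0.1):
    """Illustrative GABA transporter (GAT) flux as a function of astrocyte
    membrane potential: uptake when v_ast is below the transporter reversal
    point, release when above (constants hypothetical)."""
    return g * (v_ast - e_gat)   # > 0: release into extracellular space

# Gap-junction coupling lets neighboring astrocytes share potential, so
# local neural activity can shift extracellular GABA over a wider region.
v_neighbors = np.array([-62.0, -58.0, -55.0])   # hypothetical potentials (mV)
v_coupled = v_neighbors.mean()                  # crude gap-junction averaging
print(gat_gaba_flux(v_coupled))
```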
https://www.readbyqxmd.com/read/29064785/predictive-coding-for-dynamic-visual-processing-development-of-functional-hierarchy-in-a-multiple-spatiotemporal-scales-rnn-model
#13
Minkyu Choi, Jun Tani
This letter proposes a novel predictive coding type neural network model, the predictive multiple spatiotemporal scales recurrent neural network (P-MSTRNN). The P-MSTRNN learns to predict visually perceived human whole-body cyclic movement patterns by exploiting multiscale spatiotemporal constraints imposed on network dynamics by using differently sized receptive fields as well as different time constant values for each layer. After learning, the network can imitate target movement patterns by inferring or recognizing corresponding intentions by means of the regression of prediction error...
October 24, 2017: Neural Computation
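The timescale mechanism can be sketched as leaky-integrator updates with per-layer time constants. This omits the differently sized receptive fields of the actual P-MSTRNN and uses hypothetical weight names:

```python
import numpy as np

def mstrnn_step(u_fast, u_slow, x, p, taus=(2.0, 20.0)):
    """One update of a two-level multiple-timescale RNN (rate units).

    Each layer is a leaky integrator u <- (1 - 1/tau) u + (1/tau) (input);
    a small tau gives fast dynamics, a large tau slow 'intentional' dynamics.
    Weight matrices in `p` are assumed to have compatible shapes.
    """
    a_fast, a_slow = np.tanh(u_fast), np.tanh(u_slow)
    in_fast = p['Wxf'] @ x + p['Wff'] @ a_fast + p['Wsf'] @ a_slow
    in_slow = p['Wfs'] @ a_fast + p['Wss'] @ a_slow
    u_fast = (1 - 1 / taus[0]) * u_fast + in_fast / taus[0]
    u_slow = (1 - 1 / taus[1]) * u_slow + in_slow / taus[1]
    prediction = p['Wout'] @ np.tanh(u_fast)   # next-frame prediction
    return u_fast, u_slow, prediction
```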
https://www.readbyqxmd.com/read/29064784/balancing-new-against-old-information-the-role-of-puzzlement-surprise-in-learning
#14
Mohammadjavad Faraji, Kerstin Preuschoff, Wulfram Gerstner
Surprise describes a range of phenomena from unexpected events to behavioral responses. We propose a novel measure of surprise and use it for surprise-driven learning. Our surprise measure takes into account data likelihood as well as the degree of commitment to a belief via the entropy of the belief distribution. We find that surprise-minimizing learning dynamically adjusts the balance between new and old information without requiring knowledge of the temporal statistics of the environment. We apply our framework to a dynamic decision-making task and a maze exploration task...
October 24, 2017: Neural Computation
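As an illustration only (the paper defines its own measure), here is one way a surprise score can combine data likelihood with commitment, so that unlikely data is more surprising under a low-entropy belief:

```python
import numpy as np

def surprise(x, belief_probs, likelihood):
    """Illustrative surprise score, not the paper's exact definition:
    unlikely data under the current belief is more surprising when the
    belief is committed, i.e. when its entropy is low.

    belief_probs: probabilities over hypotheses; likelihood: function
    mapping (x, hypothesis index) -> p(x | hypothesis).
    """
    lik = np.array([likelihood(x, h) for h in range(len(belief_probs))])
    p_x = float(belief_probs @ lik)                  # marginal likelihood of x
    entropy = -np.sum(belief_probs * np.log(belief_probs + 1e-12))
    commitment = 1.0 - entropy / np.log(len(belief_probs))  # 0 vague, 1 committed
    return -np.log(p_x + 1e-12) * (1.0 + commitment)
```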
https://www.readbyqxmd.com/read/29064783/mechanism-based-and-input-output-modeling-of-the-key-neuronal-connections-and-signal-transformations-in-the-ca3-ca1-regions-of-the-hippocampus
#15
Kunling Geng, Dae C Shin, Dong Song, Robert E Hampson, Samuel A Deadwyler, Theodore W Berger, Vasilis Z Marmarelis
This letter examines the results of input-output (nonparametric) modeling based on the analysis of data generated by a mechanism-based (parametric) model of CA3-CA1 neuronal connections in the hippocampus. The motivation is to obtain biological insight into the interpretation of such input-output (Volterra-equivalent) models estimated from synthetic data. The insights obtained may subsequently be used to interpret input-output models extracted from actual experimental data. Specifically, we found that a simplified parametric model may serve as a useful tool to study the signal transformations in the hippocampal CA3-CA1 regions...
October 24, 2017: Neural Computation
https://www.readbyqxmd.com/read/29064782/capturing-spike-variability-in-noisy-izhikevich-neurons-using-point-process-generalized-linear-models
#16
Jacob Østergaard, Mark A Kramer, Uri T Eden
To understand neural activity, two broad categories of models exist: statistical and dynamical. While statistical models possess rigorous methods for parameter estimation and goodness-of-fit assessment, dynamical models provide mechanistic insight. In general, these two categories of models are separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input current...
October 24, 2017: Neural Computation
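The data-generating side of this setup is easy to reproduce. A sketch of a regular-spiking Izhikevich neuron with a noisy input current (parameters are the standard textbook values; the subsequent point-process GLM fit is omitted):

```python
import numpy as np

def izhikevich_spikes(T=1000.0, dt=0.25, a=0.02, b=0.2, c=-65.0, d=8.0,
                      I_mean=10.0, I_std=2.0, seed=0):
    """Simulate a regular-spiking Izhikevich neuron driven by a noisy input
    current, returning spike times in ms (the 'ground truth' spike train to
    which a point-process GLM could then be fit)."""
    rng = np.random.default_rng(seed)
    v, u = c, b * c
    spikes = []
    for step in range(int(T / dt)):
        I = I_mean + I_std * rng.normal()
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike: reset membrane variables
            spikes.append(step * dt)
            v, u = c, u + d
    return np.array(spikes)
```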
https://www.readbyqxmd.com/read/29064781/dynamics-of-learning-in-mlp-natural-gradient-and-singularity-revisited
#17
Shun-Ichi Amari, Tomoko Ozeki, Ryo Karakida, Yuki Yoshida, Masato Okada
The dynamics of supervised learning play a central role in deep learning, which takes place in the parameter space of a multilayer perceptron (MLP). We review the history of supervised stochastic gradient learning, focusing on its singular structure and natural gradient. The parameter space includes singular regions in which parameters are not identifiable. One of our results is a full exploration of the dynamical behaviors of stochastic gradient learning in an elementary singular network. The bad news is its pathological nature, in which part of the singular region becomes an attractor and another part a repeller at the same time, forming a Milnor attractor...
October 24, 2017: Neural Computation
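For reference, a one-step sketch of a damped natural-gradient update with the empirical Fisher matrix (the damping is our addition; near the singular regions discussed here, the Fisher matrix itself degenerates):

```python
import numpy as np

def natural_gradient_step(theta, grads_per_sample, lr=0.1, damping=1e-3):
    """One natural-gradient update using the empirical Fisher matrix
    F = E[g g^T], damped for invertibility near singular regions where
    parameters are not identifiable and F loses rank."""
    g = grads_per_sample.mean(axis=0)
    F = grads_per_sample.T @ grads_per_sample / len(grads_per_sample)
    F += damping * np.eye(len(theta))
    return theta - lr * np.linalg.solve(F, g)
```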
https://www.readbyqxmd.com/read/28957029/learning-simpler-language-models-with-the-differential-state-framework
#18
Alexander G Ororbia II, Tomas Mikolov, David Reitter
Learning useful information across long time lags is a critical and difficult problem for temporal neural models in tasks such as language modeling. Existing architectures that address the issue are often complex and costly to train. The differential state framework (DSF) is a simple and high-performing design that unifies previously introduced gated neural models. DSF models maintain longer-term memory by learning to interpolate between a fast-changing data-driven representation and a slowly changing, implicitly stable state...
September 28, 2017: Neural Computation
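A sketch of the interpolation idea in one simple form (the DSF unifies several gate and candidate variants; the specific choices below are illustrative):

```python
import numpy as np

def dsf_step(s_prev, x, p):
    """A simple gated state update in the spirit of the differential state
    framework: interpolate between a fast, data-driven candidate and the
    slowly changing previous state. The gate and candidate forms are one of
    several variants the framework unifies; matrix shapes in `p` are
    assumed compatible."""
    d = np.tanh(p['W'] @ x + p['U'] @ s_prev)          # data-driven candidate
    gate = 1 / (1 + np.exp(-(p['Wg'] @ x + p['bg'])))  # mixing gate in (0, 1)
    return (1 - gate) * s_prev + gate * d              # slow/fast interpolation
```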
https://www.readbyqxmd.com/read/28957028/first-passage-time-memory-lifetimes-for-simple-multistate-synapses
#19
Terry Elliott
Memory models based on synapses with discrete and bounded strengths store new memories by forgetting old ones. Memory lifetimes in such memory systems may be defined in a variety of ways. A mean first passage time (MFPT) definition overcomes much of the arbitrariness and many of the problems associated with the more usual signal-to-noise ratio (SNR) definition. We have previously computed MFPT lifetimes for simple, binary-strength synapses that lack internal, plasticity-related states. In simulation we have also seen that for multistate synapses, optimality conditions based on SNR lifetimes are absent with MFPT lifetimes, suggesting that such conditions may be artifactual...
September 28, 2017: Neural Computation
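The MFPT computation itself is a standard linear solve. A generic sketch for a discrete-state Markov chain (in the memory setting, 'absorption' would correspond to the tracked memory signal first crossing a threshold; that mapping is not shown here):

```python
import numpy as np

def mean_first_passage_times(P, absorbing):
    """Mean first-passage times to a set of absorbing states of a discrete
    Markov chain with transition matrix P (rows sum to 1). Solves the
    standard linear system (I - Q) tau = 1 on the transient states."""
    n = P.shape[0]
    transient = [i for i in range(n) if i not in set(absorbing)]
    Q = P[np.ix_(transient, transient)]
    tau = np.linalg.solve(np.eye(len(transient)) - Q, np.ones(len(transient)))
    out = np.zeros(n)
    out[transient] = tau
    return out        # absorbing states have MFPT 0
```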
https://www.readbyqxmd.com/read/28957023/dopamine-inference-and-uncertainty
#20
Samuel J Gershman
The hypothesis that the phasic dopamine response reports a reward prediction error has become deeply entrenched. However, dopamine neurons exhibit several notable deviations from this hypothesis. A coherent explanation for these deviations can be obtained by analyzing the dopamine response in terms of Bayesian reinforcement learning. The key idea is that prediction errors are modulated by probabilistic beliefs about the relationship between cues and outcomes, updated through Bayesian inference. This account can explain dopamine responses to inferred value in sensory preconditioning, the effects of cue preexposure (latent inhibition), and adaptive coding of prediction errors when rewards vary across orders of magnitude...
September 28, 2017: Neural Computation
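A minimal Kalman-filter sketch of the key idea, that the same prediction error is weighted differently depending on the current belief's uncertainty (a generic Kalman update over cue weights, not necessarily the paper's exact model):

```python
import numpy as np

def kalman_rl_update(w, S, x, r, noise_var=1.0, drift=0.01):
    """One Kalman-filter update of beliefs about cue weights: mean w and
    covariance S. The prediction error (r - w @ x) is scaled by a gain that
    depends on posterior uncertainty, so identical errors produce different
    updates under different beliefs."""
    S = S + drift * np.eye(len(w))          # beliefs diffuse between trials
    pred_err = r - w @ x                    # reward prediction error
    gain = S @ x / (x @ S @ x + noise_var)  # Kalman gain
    w = w + gain * pred_err
    S = S - np.outer(gain, x @ S)
    return w, S
```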