
Neural Computation

Xian Liu, Jing Gao, Guan Wang, Zhi-Wang Chen
The development of control technology for the brain is of potential significance to the prevention and treatment of neuropsychiatric disorders and the improvement of human mental health. A controllability analysis of the brain is necessary to ensure the feasibility of brain control. In this letter, we investigate the influence of dynamical parameters on controllability in the neural mass model, using controllability indices as quantitative indicators. The indices are obtained by computing Lie brackets and condition numbers of the system model...
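The letter's indices come from Lie brackets and condition numbers of the nonlinear neural mass model; as a rough illustration of the underlying idea only, the sketch below computes the Kalman controllability matrix and its condition number for a toy linear system (the matrices `A` and `B` are invented for the example, not taken from the letter).

```python
import numpy as np

def controllability_index(A, B):
    """Build the Kalman controllability matrix C = [B, AB, ..., A^(n-1)B].
    Full rank means the pair (A, B) is controllable; the condition number
    of C is one quantitative index of how easy the system is to steer."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    C = np.hstack(blocks)
    return C, np.linalg.matrix_rank(C), np.linalg.cond(C)

# Toy 2-state linear system (invented for illustration).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C, rank, cond = controllability_index(A, B)
```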
December 28, 2016: Neural Computation
Saket Navlakha
Networks have become instrumental in deciphering how information is processed and transferred within systems in almost every scientific field today. Nearly all network analyses, however, have relied on humans to devise structural features of networks believed to be most discriminative for an application. We present a framework for comparing and classifying networks without human-crafted features using deep learning. After training, autoencoders contain hidden units that encode a robust structural vocabulary for succinctly describing graphs...
December 28, 2016: Neural Computation
Ruibo Wang, Yu Wang, Jihong Li, Xingli Yang, Jing Yang
A cross-validation method based on m replications of two-fold cross validation is called an m × 2 cross validation. An m × 2 cross validation is used to estimate the generalization error and to compare algorithms' performance in machine learning. However, the variance of the estimator of the generalization error in m × 2 cross validation is easily affected by random partitions. Poor data partitioning may cause a large fluctuation in the number of overlapping samples between any two training (test) sets in m × 2 cross validation...
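A minimal sketch of the m × 2 procedure described above, assuming a generic train-then-score routine `fit_score`; the toy nearest-centroid classifier and the data are invented for illustration.

```python
import numpy as np

def m_by_2_cv(X, y, fit_score, m=3, seed=None):
    """m x 2 cross validation: repeat a random 2-fold split m times and
    average the held-out scores. Each repetition trains on one half and
    tests on the other, in both directions, giving 2m scores in total."""
    rng = np.random.default_rng(seed)
    n = len(y)
    scores = []
    for _ in range(m):
        perm = rng.permutation(n)
        half = n // 2
        f0, f1 = perm[:half], perm[half:]
        for tr, te in [(f0, f1), (f1, f0)]:
            scores.append(fit_score(X[tr], y[tr], X[te], y[te]))
    return float(np.mean(scores)), scores

# Toy nearest-centroid classifier on perfectly separable data.
def fit_score(Xtr, ytr, Xte, yte):
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.abs(Xte - c1) < np.abs(Xte - c0)).astype(int).ravel()
    return float((pred == yte).mean())

X = np.concatenate([np.zeros(20), np.ones(20)])[:, None]
y = np.concatenate([np.zeros(20, int), np.ones(20, int)])
acc, scores = m_by_2_cv(X, y, fit_score, m=3, seed=0)
```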
December 28, 2016: Neural Computation
Peng Liu, Zhigang Zeng, Jun Wang
This letter studies the multistability analysis of delayed recurrent neural networks with Mexican hat activation function. Some sufficient conditions are obtained to ensure that an n-dimensional recurrent neural network can have [Formula: see text] equilibrium points with [Formula: see text], and [Formula: see text] of them are locally exponentially stable. Furthermore, the attraction basins of these stable equilibrium points are estimated. We show that the attraction basins of these stable equilibrium points can be larger than their originally partitioned subsets...
December 28, 2016: Neural Computation
Marta Favali, Giovanna Citti, Alessandro Sarti
This letter presents a mathematical model of figure-ground articulation that takes into account both local and global gestalt laws and is compatible with the functional architecture of the primary visual cortex (V1). The local gestalt law of good continuation is described by means of suitable connectivity kernels that are derived from Lie group theory and quantitatively compared with long-range connectivity in V1. Global gestalt constraints are then introduced in terms of spectral analysis of a connectivity matrix derived from these kernels...
December 28, 2016: Neural Computation
Yu Wang, Eshwar Ghumare, Rik Vandenberghe, Patrick Dupont
Binary undirected graphs are well established, but when such graphs are constructed, a threshold is often applied to a parameter describing the connection between two nodes. The use of weighted graphs is therefore more appropriate, and in this work we focus on weighted undirected graphs. This implies that edge weights must be incorporated in the graph measures, which requires generalizations of common graph metrics. After reviewing existing generalizations of the clustering coefficient and the local efficiency, we propose new generalizations for these graph measures...
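For concreteness, the sketch below implements one well-known weighted generalization of the clustering coefficient, due to Onnela et al.; it is an example of the kind of measure being generalized, not the authors' new proposal.

```python
import numpy as np

def onnela_clustering(W):
    """Onnela et al. weighted clustering coefficient: for each node,
    the geometric mean of the (max-normalized) weights of every
    triangle through it, divided by the number of possible triangles
    k * (k - 1), where k is the node's degree."""
    Wn = W / W.max()
    cbrt = np.cbrt(Wn)
    num = np.diagonal(cbrt @ cbrt @ cbrt)  # triangle weight products
    k = (W > 0).sum(axis=1)                # node degrees
    denom = k * (k - 1)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(denom > 0, num / denom, 0.0)

# Fully connected weighted triangle: every node has clustering 1.
W = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], float)
C = onnela_clustering(W)
```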
November 21, 2016: Neural Computation
Yuriy Romanyshyn, Andriy Smerdov, Svitlana Petrytska
On the basis of the neurophysiological strength-duration (amplitude-duration) curve of neuron activation, which relates the threshold amplitude of a rectangular current pulse to the pulse duration, and using an activation-energy constraint (the threshold curve corresponds to the energy threshold of neuron activation by a rectangular current pulse), an energy model of neuron activation by a single current pulse has been constructed. The constructed activation model, which determines the activation's spectral properties, is a bandpass filter...
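A minimal sketch of the energy constraint mentioned above, under the simplifying assumption (mine, not necessarily the authors') that a rectangular pulse of amplitude I and duration T carries energy proportional to I²T, so the threshold amplitude falls as 1/√T:

```python
import numpy as np

def threshold_amplitude(duration, e_th=1.0):
    """Energy-constrained strength-duration curve: if pulse energy is
    proportional to I^2 * T, reaching the activation-energy threshold
    e_th requires amplitude I(T) = sqrt(e_th / T), so shorter pulses
    need larger amplitudes."""
    return np.sqrt(e_th / duration)

T = np.array([0.1, 0.4, 1.0])   # invented pulse durations
I = threshold_amplitude(T)
```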
November 21, 2016: Neural Computation
Samuel P Muscinelli, Wulfram Gerstner, Johanni Brea
We show that Hopfield neural networks with synchronous dynamics and asymmetric weights admit stable orbits that form sequences of maximal length. For [Formula: see text] units, these sequences have length [Formula: see text]; that is, they cover the full state-space. We present a mathematical proof that maximal-length orbits exist for all [Formula: see text], and we provide a method to construct both the sequence and the weight matrix that allow its production. The orbit is relatively robust to dynamical noise, and perturbations of the optimal weights reveal other periodic orbits that are not maximal but typically still very long...
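A tiny illustration of synchronous Hopfield dynamics with asymmetric weights: for this hand-picked 2-unit weight matrix (not the letter's general construction), the orbit already visits all 2² states before repeating.

```python
import numpy as np

def run_orbit(W, s0, steps):
    """Synchronous Hopfield update s(t+1) = sign(W s(t)), with sign(0)
    taken as +1. Asymmetric weights allow long periodic orbits."""
    s = s0.copy()
    orbit = [tuple(s)]
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
        orbit.append(tuple(s))
    return orbit

# Rotation-like asymmetric weights: the orbit covers the full
# state space {-1, +1}^2 and returns to the start after 4 steps.
W = np.array([[0, -1], [1, 0]])
orbit = run_orbit(W, np.array([1, 1]), 4)
```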
November 21, 2016: Neural Computation
Nils Kurzawa, Christopher Summerfield, Rafal Bogacz
Much experimental evidence suggests that during decision making, neural circuits accumulate evidence supporting alternative options. A computational model well describing this accumulation for choices between two options assumes that the brain integrates the log ratios of the likelihoods of the sensory inputs given the two options. Several models have been proposed for how neural circuits can learn these log-likelihood ratios from experience, but all of these models introduced novel and specially dedicated synaptic plasticity rules...
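A sketch of the accumulation scheme described above, assuming gaussian sensory likelihoods under the two options (the means and noise level are invented for the example):

```python
import numpy as np

def accumulate_llr(x, mu0=-0.5, mu1=0.5, sigma=1.0):
    """Accumulate the log-likelihood ratios
    log p(x | option 1) / p(x | option 0) of gaussian sensory samples.
    The sign of the running total is the decision variable."""
    llr = (x - mu0)**2 / (2 * sigma**2) - (x - mu1)**2 / (2 * sigma**2)
    return np.cumsum(llr)

rng = np.random.default_rng(0)
x = rng.normal(0.5, 1.0, size=200)   # samples drawn under option 1
trace = accumulate_llr(x)
decision = 1 if trace[-1] > 0 else 0
```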
November 21, 2016: Neural Computation
Takeru Matsuda, Fumiyasu Komaki
Many time series are naturally considered as a superposition of several oscillation components. For example, electroencephalogram (EEG) time series include oscillation components such as alpha, beta, and gamma. We propose a method for decomposing time series into such oscillation components using state-space models. Based on the concept of random frequency modulation, gaussian linear state-space models for oscillation components are developed. In this model, the frequency of an oscillator fluctuates by noise...
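A minimal sketch in the spirit of the model described above: a 2-d latent state is rotated by a mean angle and perturbed by gaussian noise, so the instantaneous frequency fluctuates (random frequency modulation). Parameter values are invented for illustration.

```python
import numpy as np

def simulate_oscillator(theta, a=0.99, q=0.05, n=500, seed=0):
    """One gaussian linear state-space oscillation component: rotate a
    2-d latent state by angle theta (the mean frequency), damp it by a,
    and add state noise each step. The first coordinate is the
    oscillation that would enter the observed time series."""
    rng = np.random.default_rng(seed)
    R = a * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
    z = np.array([1.0, 0.0])
    xs = np.empty(n)
    for t in range(n):
        z = R @ z + rng.normal(0.0, q, 2)
        xs[t] = z[0]
    return xs

x = simulate_oscillator(theta=2 * np.pi / 25)  # mean period ~25 samples
```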
November 21, 2016: Neural Computation
William Joseph MacInnes
Cuing a location in space produces a short-lived advantage in reaction time to targets at that location. This early advantage, however, switches to a reaction time cost and has been termed inhibition of return (IOR). IOR behaves differently for different response modalities, suggesting that it may not be a unified effect. This letter presents new data from two experiments testing the gradient of IOR with random, continuous cue-target Euclidean distance and cue-target onset asynchrony. These data were then used to train multiple diffusion models of saccadic and manual reaction time for these cuing experiments...
October 20, 2016: Neural Computation
Yongseok Yoo, Woori Kim
Neural systems are inherently noisy. One well-studied example of a noise reduction mechanism in the brain is the population code, where representing a variable with multiple neurons allows the encoded variable to be recovered with fewer errors. Studies have assumed ideal observer models for decoding population codes, and the manner in which information in the neural population can be retrieved remains elusive. This letter addresses a mechanism by which realistic neural circuits can recover encoded variables...
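For contrast with the realistic circuits the letter addresses, the sketch below shows the classic ideal population-vector readout; the tuning curves and noise level are invented for the example.

```python
import numpy as np

def population_vector_decode(rates, preferred):
    """Population vector readout: each neuron votes for its preferred
    direction with a weight given by its firing rate; the angle of the
    summed vote recovers the encoded variable despite single-neuron
    noise."""
    vx = np.sum(rates * np.cos(preferred))
    vy = np.sum(rates * np.sin(preferred))
    return np.arctan2(vy, vx)

rng = np.random.default_rng(0)
preferred = np.linspace(0, 2 * np.pi, 64, endpoint=False)
true_angle = 1.0
# Half-wave-rectified cosine tuning plus gaussian noise.
rates = np.cos(preferred - true_angle).clip(0) + rng.normal(0, 0.1, 64)
decoded = population_vector_decode(rates, preferred)
```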
October 20, 2016: Neural Computation
Thomas Burwick, Alexandros Bouras
The communication-through-coherence (CTC) hypothesis states that a sending group of neurons will have a particularly strong effect on a receiving group if both groups oscillate in a phase-locked ("coherent") manner (Fries, 2005, 2015). Here, we consider a situation with two visual stimuli, one in the focus of attention and the other distracting, resulting in two sites of excitation at an early cortical area that project to a common site in a next area. Taking a modeler's perspective, we confirm the workings of a mechanism that was proposed by Bosman et al...
October 20, 2016: Neural Computation
Jonathan Cannon
Mutual information is a commonly used measure of communication between neurons, but little theory exists describing the relationship between mutual information and the parameters of the underlying neuronal interaction. Such a theory could help us understand how specific physiological changes affect the capacity of neurons to synaptically communicate, and, in particular, it could help us characterize the mechanisms by which neuronal dynamics gate the flow of information in the brain. Here we study a pair of linear-nonlinear-Poisson neurons coupled by a weak synapse...
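A rough illustration of the quantity under study: a plug-in estimate of mutual information between two binary spike sequences coupled by a hypothetical synapse (the 0.9 transmission probability is invented, not from the letter).

```python
import numpy as np

def mutual_information(x, y, bins=2):
    """Plug-in mutual information estimate (in bits) between two
    discrete sequences, computed from their joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
pre = rng.integers(0, 2, 1000)                          # presynaptic spikes
post = np.where(rng.random(1000) < 0.9, pre, 1 - pre)   # strong coupling
mi_strong = mutual_information(pre, post)
mi_none = mutual_information(pre, rng.integers(0, 2, 1000))
```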
January 2017: Neural Computation
Karl Friston, Thomas FitzGerald, Francesco Rigoli, Philipp Schwartenbeck, Giovanni Pezzulo
This article describes a process theory based on active inference and belief propagation. Starting from the premise that all neuronal processing (and action selection) can be explained by maximizing Bayesian model evidence-or minimizing variational free energy-we ask whether neuronal responses can be described as a gradient descent on variational free energy. Using a standard (Markov decision process) generative model, we derive the neuronal dynamics implicit in this description and reproduce a remarkable range of well-characterized neuronal phenomena...
January 2017: Neural Computation
Rongchang Zhao, Min Wu, Xiyao Liu, Beiji Zou, Fangfang Li
Contour is a critical feature for image description and object recognition in many computer vision tasks. However, detection of object contour remains a challenging problem because of disturbances from texture edges. This letter proposes a scheme to handle texture edges by implementing contour integration. The proposed scheme integrates structural segments into contours while inhibiting texture edges with the help of the orientation histogram-based center-surround interaction model. In the model, local edges within surroundings exert a modulatory effect on central contour cues based on the co-occurrence statistics of local edges described by the divergence of orientation histograms in the local region...
January 2017: Neural Computation
Cian O'Donnell, J Tiago Gonçalves, Nick Whiteley, Carlos Portera-Cailliau, Terrence J Sejnowski
Our understanding of neural population coding has been limited by a lack of analysis methods to characterize spiking data from large populations. The biggest challenge comes from the fact that the number of possible network activity patterns scales exponentially with the number of neurons recorded ([Formula: see text]). Here we introduce a new statistical method for characterizing neural population activity that requires semi-independent fitting of only as many parameters as the square of the number of neurons, requiring drastically smaller data sets and minimal computation time...
January 2017: Neural Computation
Chao Zhang, Lei Du, Dacheng Tao
The techniques of random matrices have played an important role in many machine learning models. In this letter, we present a new method to study tail inequalities for sums of random matrices. In contrast to other work (Ahlswede & Winter, 2002; Tropp, 2012; Hsu, Kakade, & Zhang, 2012), our tail results are based on the largest singular value (LSV) and are independent of the matrix dimension. Since the LSV operation and the expectation are noncommutative, we introduce a diagonalization method to convert the LSV operation into the trace operation of an infinite-dimensional diagonal matrix...
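An empirical look at the quantity the tail bounds control, with invented gaussian random matrices: the LSV of a sum of independent random matrices concentrates far below the naive worst case, the sum of the individual LSVs.

```python
import numpy as np

def lsv(M):
    """Largest singular value (spectral norm) of a matrix."""
    return np.linalg.norm(M, 2)

rng = np.random.default_rng(0)
k, d = 200, 10
# k independent gaussian matrices, scaled so each has LSV of order 1.
mats = rng.normal(0.0, 1.0, (k, d, d)) / np.sqrt(d)
total = lsv(mats.sum(axis=0))           # LSV of the sum
worst = sum(lsv(M) for M in mats)       # triangle-inequality bound
```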
January 2017: Neural Computation
Min Wu, Ting Wan, Xiongbo Wan, Yuxiao Du, Jinhua She
This letter describes the improvement of two methods of detecting high-frequency oscillations (HFOs) and their use to localize epileptic seizure onset zones (SOZs). The wavelet transform (WT) method was improved by combining the complex Morlet WT with Shannon entropy to enhance the temporal-frequency resolution during HFO detection. The matching pursuit (MP) method was improved by combining it with an adaptive genetic algorithm to increase the speed and accuracy of the calculations for HFO detection. The HFOs detected by these two methods were used to localize SOZs in five patients...
January 2017: Neural Computation
Bruno Cessac, Arnaud Le Ny, Eva Löcherbach
We initiate a mathematical analysis of hidden effects induced by binning spike trains of neurons. Assuming that the original spike train has been generated by a discrete Markov process, we show that binning generates a stochastic process that is no longer Markov but is instead a variable-length Markov chain (VLMC) with unbounded memory. We also show that the law of the binned raster is a Gibbs measure in the DLR (Dobrushin-Lanford-Ruelle) sense coined in mathematical statistical mechanics. This allows the derivation of several important consequences on statistical properties of binned spike trains...
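A minimal sketch of the binning operation itself (bin width and spike times invented); the letter's point is that the resulting count sequence is generally no longer Markov.

```python
import numpy as np

def bin_spikes(spike_times, bin_width, t_max):
    """Bin a spike train into consecutive windows of width bin_width,
    returning the spike count per bin. Binning discards within-bin
    timing, which is what induces the hidden long-memory effects."""
    n_bins = int(round(t_max / bin_width))
    edges = np.linspace(0.0, t_max, n_bins + 1)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts

spikes = np.array([0.01, 0.05, 0.12, 0.31, 0.32, 0.33, 0.9])
counts = bin_spikes(spikes, bin_width=0.1, t_max=1.0)
```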
January 2017: Neural Computation

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"
(heart OR cardiac OR cardio*) AND arrest -"American Heart Association"