Read by QxMD

Neural Computation

William Joseph MacInnes
Cuing a location in space produces a short-lived advantage in reaction time to targets at that location. This early advantage, however, switches to a reaction time cost, an effect termed inhibition of return (IOR). IOR behaves differently for different response modalities, suggesting that it may not be a unified effect. This letter presents new data from two experiments testing the gradient of IOR with random, continuous cue-target Euclidean distance and cue-target onset asynchrony. These data were then used to train multiple diffusion models of saccadic and manual reaction time for these cuing experiments...
October 20, 2016: Neural Computation
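The diffusion models fit in the letter accumulate noisy evidence toward a decision bound. A minimal single-trial sketch of a drift-diffusion process follows; the parameter values are illustrative, not those estimated from the cuing experiments.

```python
import random

def ddm_rt(drift, threshold=1.0, noise=1.0, dt=0.001, max_t=5.0, rng=None):
    """Simulate one drift-diffusion trial: evidence accumulates with the
    given drift rate plus Gaussian noise until it crosses +/- threshold.
    Returns (reaction_time, hit_upper_boundary)."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while t < max_t:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
        if x >= threshold:
            return t, True
        if x <= -threshold:
            return t, False
    return max_t, x > 0  # timed-out trial

# A cued location might be modeled with a different drift rate than an uncued one.
rng = random.Random(0)
rts = [ddm_rt(drift=2.0, rng=rng)[0] for _ in range(200)]
mean_rt = sum(rts) / len(rts)
```

Fitting such a model to saccadic versus manual responses, as the letter does, amounts to asking which parameters (drift, threshold, noise) must differ between modalities.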
María da Fonseca, Inés Samengo
The accuracy with which humans detect chromatic differences varies throughout color space. For example, we are far more precise when discriminating two similar orange stimuli than two similar green stimuli. In order for two colors to be perceived as different, the neurons representing chromatic information must respond differently, and the difference must be larger than the trial-to-trial variability of the response to each separate color. Photoreceptors constitute the first stage in the processing of color information; many more stages are required before humans can consciously report whether two stimuli are perceived as chromatically distinguishable...
October 20, 2016: Neural Computation
Yongseok Yoo, Woori Kim
Neural systems are inherently noisy. One well-studied example of a noise reduction mechanism in the brain is the population code, where representing a variable with multiple neurons allows the encoded variable to be recovered with fewer errors. Studies have assumed ideal observer models for decoding population codes, yet the manner in which information in the neural population can actually be retrieved remains elusive. This letter addresses a mechanism by which realistic neural circuits can recover encoded variables...
October 20, 2016: Neural Computation
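The ideal-observer baseline against which realistic circuits are compared can be made concrete: maximum-likelihood decoding of Poisson spike counts from neurons with Gaussian tuning curves. This is a generic sketch with invented parameters, not the letter's circuit mechanism.

```python
import numpy as np

def tuning(prefs, s, gain=20.0, width=0.5):
    """Mean firing rate of neurons with Gaussian tuning curves plus a baseline."""
    return gain * np.exp(-0.5 * ((s - prefs) / width) ** 2) + 1.0

rng = np.random.default_rng(0)
prefs = np.linspace(-2.0, 2.0, 21)           # preferred stimuli of 21 neurons
true_s = 0.3
counts = rng.poisson(tuning(prefs, true_s))  # one noisy population response

# Maximum-likelihood decoding: maximize the Poisson log-likelihood over a grid.
grid = np.linspace(-2.0, 2.0, 401)
rates = tuning(prefs[None, :], grid[:, None])             # (grid, neurons)
loglik = (counts * np.log(rates)).sum(axis=1) - rates.sum(axis=1)
s_hat = float(grid[np.argmax(loglik)])
```

With enough neurons the ML estimate lands close to the true stimulus; the open question the letter addresses is how a biological circuit could implement anything like this computation.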
Chao Zhang, Lei Du, Dacheng Tao
The techniques of random matrices have played an important role in many machine learning models. In this letter, we present a new method to study the tail inequalities for sums of random matrices. Different from other work (Ahlswede & Winter, 2002; Tropp, 2012; Hsu, Kakade, & Zhang, 2012), our tail results are based on the largest singular value (LSV) and independent of the matrix dimension. Since the LSV operation and the expectation are noncommutative, we introduce a diagonalization method to convert the LSV operation into the trace operation of an infinite-dimensional diagonal matrix...
October 20, 2016: Neural Computation
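As a numeric illustration of the quantity involved (not of the letter's bounds), the largest singular value of a sum of independent random matrices is subadditive in the summands:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 10
mats = rng.normal(size=(n, d, d)) / np.sqrt(d)  # independent random matrices
total = mats.sum(axis=0)

# Largest singular value (LSV, i.e. the operator norm) of the sum...
lsv_of_sum = np.linalg.svd(total, compute_uv=False)[0]
# ...versus the sum of the individual LSVs.
sum_of_lsvs = sum(np.linalg.svd(M, compute_uv=False)[0] for M in mats)
# Subadditivity of the operator norm: ||sum M_i|| <= sum ||M_i||.
```

Tail inequalities of the kind studied in the letter bound how far `lsv_of_sum` can deviate from its expectation; the point of the LSV formulation is that such bounds need not grow with the dimension d.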
Zhuo Wang, Alan A Stocker, Daniel D Lee
The efficient coding hypothesis assumes that biological sensory systems use neural codes that are optimized to represent, as faithfully as possible, the stimuli that occur in their environment. Most common models use information-theoretic measures, whereas alternative formulations propose incorporating downstream decoding performance. Here we provide a systematic evaluation of different optimality criteria using a parametric formulation of the efficient coding problem based on the Lp reconstruction error of the maximum likelihood decoder...
October 20, 2016: Neural Computation
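The Lp reconstruction error that parameterizes the criterion can be illustrated with point estimates: the value minimizing the mean Lp error of a sample is the mean for p = 2 and the median for p = 1, so the choice of p changes which features of the stimulus distribution the code should protect. A small sketch with an invented, skewed stimulus distribution:

```python
import numpy as np

def lp_optimal_estimate(samples, p, grid):
    """Value c minimizing the mean Lp reconstruction error E|x - c|^p.
    For p = 2 this is the mean; for p = 1, the median."""
    errors = np.mean(np.abs(samples[:, None] - grid[None, :]) ** p, axis=0)
    return float(grid[np.argmin(errors)])

rng = np.random.default_rng(0)
x = rng.exponential(1.0, size=2000)       # skewed stimulus distribution
grid = np.linspace(0.0, 5.0, 2001)
c1 = lp_optimal_estimate(x, 1, grid)      # close to the sample median
c2 = lp_optimal_estimate(x, 2, grid)      # close to the sample mean
```

For a skewed distribution the two optima differ, which is exactly why different Lp criteria yield different "efficient" codes.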
Min Wu, Ting Wan, Xiongbo Wan, Yuxiao Du, Jinhua She
This letter describes the improvement of two methods of detecting high-frequency oscillations (HFOs) and their use to localize epileptic seizure onset zones (SOZs). The wavelet transform (WT) method was improved by combining the complex Morlet WT with Shannon entropy to enhance the time-frequency resolution during HFO detection. The matching pursuit (MP) method was improved by combining it with an adaptive genetic algorithm to increase the speed and accuracy of the calculations for HFO detection. The HFOs detected by these two methods were used to localize SOZs in five patients...
October 20, 2016: Neural Computation
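The complex Morlet WT at the heart of the improved detector can be sketched directly. The entropy weighting and all detection thresholds from the letter are omitted, and every parameter here (sampling rate, burst frequency, cycle count) is illustrative:

```python
import numpy as np

def morlet(freq, fs, n_cycles=7):
    """Complex Morlet wavelet centered on `freq` Hz, sampled at `fs` Hz."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t ** 2 / (2 * sigma_t ** 2))

fs = 2000.0
t = np.arange(0, 1, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t)                             # slow background rhythm
sig[800:900] += 0.5 * np.sin(2 * np.pi * 150 * t[800:900])   # brief 150 Hz event

# Band power at 150 Hz: convolution with the wavelet, then squared magnitude.
power = np.abs(np.convolve(sig, morlet(150.0, fs), mode="same")) ** 2
burst_power = power[820:880].mean()      # inside the event
baseline_power = power[200:300].mean()   # away from it
```

The transient high-frequency event stands out sharply in the wavelet power, which is the raw material an HFO detector thresholds.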
Bruno Cessac, Arnaud Le Ny, Eva Löcherbach
We initiate a mathematical analysis of hidden effects induced by binning spike trains of neurons. Assuming that the original spike train has been generated by a discrete Markov process, we show that binning generates a stochastic process that is no longer Markov but is instead a variable-length Markov chain (VLMC) with unbounded memory. We also show that the law of the binned raster is a Gibbs measure in the DLR (Dobrushin-Lanford-Ruelle) sense coined in mathematical statistical mechanics. This allows the derivation of several important consequences on statistical properties of binned spike trains...
October 20, 2016: Neural Computation
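The binning operation itself is simple; what the letter shows is that this many-to-one collapse destroys the Markov property of the underlying process. A minimal sketch:

```python
def bin_spike_train(spikes, bin_size):
    """Collapse a 0/1 spike train into bins: a bin is 1 if it contains at
    least one spike. This many-to-one map is what turns a Markov source
    into a variable-length Markov chain with unbounded memory."""
    return [int(any(spikes[i:i + bin_size]))
            for i in range(0, len(spikes), bin_size)]

raw = [0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0]
binned = bin_spike_train(raw, 3)
```

Several distinct raw windows (e.g. `[0,1,0]` and `[1,1,0]`) map to the same binned symbol, so the binned process must remember arbitrarily far back to recover the statistics the raw chain carried locally.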
Thomas Burwick, Alexandros Bouras
The communication-through-coherence (CTC) hypothesis states that a sending group of neurons will have a particularly strong effect on a receiving group if both groups oscillate in a phase-locked ("coherent") manner (Fries, 2005, 2015). Here, we consider a situation with two visual stimuli, one in the focus of attention and the other distracting, resulting in two sites of excitation at an early cortical area that project to a common site in a next area. Taking a modeler's perspective, we confirm the workings of a mechanism that was proposed by Bosman et al...
October 20, 2016: Neural Computation
Shashanka Ubaru, Abd-Krim Seghouane, Yousef Saad
This letter considers the problem of dictionary learning for sparse signal representation whose atoms have low mutual coherence. To learn such dictionaries, at each step we first update the dictionary using the method of optimal directions (MOD) and then apply a dictionary rank shrinkage step to decrease its mutual coherence. In the rank shrinkage step, we first compute a rank-1 decomposition of the column-normalized least squares estimate of the dictionary obtained from the MOD step. We then shrink the rank of this learned dictionary by transforming the problem of reducing the rank to a nonnegative garrotte estimation problem and solving it using a path-wise coordinate descent approach...
October 20, 2016: Neural Computation
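The two objects manipulated at each step, the MOD least-squares update and the mutual coherence being shrunk, can be sketched as follows; the rank-shrinkage step itself (via the nonnegative garrotte) is not reproduced here, and the data are random placeholders.

```python
import numpy as np

def mod_update(X, A):
    """Method-of-optimal-directions update: least-squares dictionary D = X A^+."""
    return X @ np.linalg.pinv(A)

def mutual_coherence(D):
    """Largest |inner product| between distinct column-normalized atoms."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return float(G.max())

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 200))   # training signals
A = rng.normal(size=(40, 200))   # codes (dense here, for illustration only)
D = mod_update(X, A)
mu = mutual_coherence(D)
```

Low mutual coherence (mu close to 0) is what makes sparse recovery with the dictionary well posed, which is why the letter interleaves a coherence-reducing step with the MOD update.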
Alejandro Agostini, Enric Celaya
Function approximation in online, incremental reinforcement learning needs to deal with two fundamental problems: biased sampling and nonstationarity. In this kind of task, biased sampling occurs because samples are obtained from specific trajectories dictated by the dynamics of the environment and are usually concentrated in particular convergence regions, which in the long term tend to dominate the approximation even in the less-sampled regions. The nonstationarity comes from the recursive nature of the estimations typical of temporal difference methods...
October 20, 2016: Neural Computation
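The recursive estimates referred to above are the hallmark of temporal-difference learning: the regression target is built from the current value estimate, so the target distribution shifts as learning proceeds. A minimal TD(0) sketch:

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) step. The target r + gamma * V[s_next] depends on the
    current estimate V, which is the source of the nonstationarity."""
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V

V = {"A": 0.0, "B": 0.0}
for _ in range(200):
    td0_update(V, "A", 1.0, "B")   # repeated A -> B transitions, reward 1
```

Here V["B"] never changes, so V["A"] converges to 1.0; in a real task V[s_next] is itself moving, and the approximator must track that drift while its samples remain biased toward visited trajectories.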
William Severa, Ojas Parekh, Conrad D James, James B Aimone
The dentate gyrus forms a critical link between the entorhinal cortex and CA3 by providing a sparse version of the signal. Concurrent with this increase in sparsity, a widely accepted theory suggests the dentate gyrus performs pattern separation: similar inputs yield decorrelated outputs. Although this is an active area of study and theory, few logically rigorous arguments detail the dentate gyrus's (DG) coding. We suggest a theoretically tractable, combinatorial model for this action. The model provides formal methods for a highly redundant, arbitrarily sparse, and decorrelated output signal...
October 20, 2016: Neural Computation
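A crude stand-in for the expansion-plus-sparsification operation (not the combinatorial construction of the letter) is a random projection followed by a k-winners-take-all step; the layer sizes and noise level here are invented:

```python
import numpy as np

def k_winners(x, k):
    """Binary code keeping only the k most active units."""
    out = np.zeros_like(x)
    out[np.argsort(x)[-k:]] = 1.0
    return out

rng = np.random.default_rng(0)
W = rng.normal(size=(500, 100))        # 100 inputs -> 500 "granule cells"
a = rng.normal(size=100)
b = a + 0.1 * rng.normal(size=100)     # a similar input pattern
sa, sb = k_winners(W @ a, 25), k_winners(W @ b, 25)
overlap = float((sa * sb).sum() / 25.0)  # fraction of shared active units
```

Because only 25 of 500 units fire, small input differences can flip which units make the cut, pushing the overlap of the output codes below the similarity of the inputs; the letter makes this separation claim rigorous.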
Terry Elliott
Integrate-and-express models of synaptic plasticity propose that synapses integrate plasticity induction signals before expressing synaptic plasticity. By discerning trends in their induction signals, synapses can control destabilizing fluctuations in synaptic strength. In a feedforward perceptron framework with binary-strength synapses for associative memory storage, we have previously shown that such a filter-based model outperforms other, nonintegrative, "cascade"-type models of memory storage in most regions of biologically relevant parameter space...
September 14, 2016: Neural Computation
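The filtering idea can be sketched with a single binary-strength synapse: induction signals accumulate in a counter, and a strength change is expressed only when the counter reaches a threshold, so balanced fluctuations are never expressed. The threshold value and the +1/-1 coding are illustrative:

```python
def filtered_synapse(inductions, theta=3):
    """Integrate-and-express sketch: +1/-1 plasticity induction signals
    accumulate in a filter; the binary strength flips only when the filter
    reaches +theta or -theta, suppressing fluctuation-driven changes."""
    strength, f = 0, 0          # strength in {0, 1}, filter state
    for s in inductions:
        f += s
        if f >= theta:
            strength, f = 1, 0  # express potentiation, reset filter
        elif f <= -theta:
            strength, f = 0, 0  # express depression, reset filter
    return strength

balanced = filtered_synapse([+1, -1] * 10)   # noise: never expressed
trending = filtered_synapse([+1, +1, +1])    # consistent trend: expressed
```

Alternating induction signals never move the filter to threshold, while a consistent trend does, which is precisely the trend-discerning behavior described above.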
Ken Takano, Hideitsu Hino, Shotaro Akaho, Noboru Murata
This study considers the common situation in data analysis when there are few observations of the distribution of interest or the target distribution, while abundant observations are available from auxiliary distributions. In this situation, it is natural to compensate for the lack of data from the target distribution by using data sets from these auxiliary distributions; in other words, approximating the target distribution in a subspace spanned by a set of auxiliary distributions. Mixture modeling is one of the simplest ways to integrate information from the target and auxiliary distributions in order to express the target distribution as accurately as possible...
September 14, 2016: Neural Computation
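Approximating the target by a point in the simplex spanned by the auxiliary distributions can be sketched with the generic EM multiplicative update for mixture weights on histograms; the letter's actual estimator may differ, and the toy histograms below are invented.

```python
import numpy as np

def fit_mixture_weights(target_hist, aux_hists, n_iter=200):
    """EM-style multiplicative updates for weights w minimizing
    KL(target || sum_k w_k * aux_k); inputs are normalized histograms."""
    A = np.array(aux_hists)               # (K components, bins)
    w = np.full(len(aux_hists), 1.0 / len(aux_hists))
    for _ in range(n_iter):
        mix = w @ A + 1e-12               # current mixture
        w = w * (A @ (target_hist / mix)) # responsibility-weighted update
        w /= w.sum()
    return w

aux = [np.array([0.8, 0.1, 0.1]), np.array([0.1, 0.1, 0.8])]
target = 0.3 * aux[0] + 0.7 * aux[1]      # target lies in the aux simplex
w = fit_mixture_weights(target, aux)
```

When the target lies exactly in the span of the auxiliaries, as constructed here, the recovered weights converge to the true mixing proportions; with scarce target data the interesting question is how well this works from noisy histograms.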
Baosheng Yu, Meng Fang, Dacheng Tao
Linear submodular bandits have been proven effective in solving the diversification and feature-based exploration problems in information retrieval systems. Considering that there is inevitably a budget constraint in many web-based applications, such as news article recommendation and online advertising, we study the problem of diversification under a budget constraint in a bandit setting. We first introduce a budget constraint to each exploration step of linear submodular bandits as a new problem, which we call per-round knapsack-constrained linear submodular bandits...
September 14, 2016: Neural Computation
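A standard baseline for this setting, which the bandit algorithm builds on rather than this exact code, is cost-benefit greedy selection: repeatedly add the affordable item with the largest marginal coverage gain per unit cost. A toy sketch with invented items, topic coverage, and costs:

```python
def greedy_under_budget(items, coverage, costs, budget):
    """Cost-benefit greedy for budgeted submodular coverage."""
    chosen, covered, spent = [], set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for it in items:
            if it in chosen or spent + costs[it] > budget:
                continue
            gain = len(coverage[it] - covered)   # marginal topic gain
            if gain / costs[it] > best_ratio:
                best, best_ratio = it, gain / costs[it]
        if best is None:
            return chosen, covered, spent
        chosen.append(best)
        covered |= coverage[best]
        spent += costs[best]

coverage = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}}
costs = {"a": 2.0, "b": 1.0, "c": 1.0}
chosen, covered, spent = greedy_under_budget(list(coverage), coverage,
                                             costs, budget=3.0)
```

The per-round knapsack constraint in the letter applies a budget like this at every exploration step, with the coverage function learned online rather than given.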
Subhrajit Roy, Arindam Basu
In this letter, we propose a novel neuro-inspired low-resolution online unsupervised learning rule to train the reservoir, or liquid, of liquid state machines. The liquid is a large, sparsely interconnected recurrent network of spiking neurons. The proposed learning rule is inspired by structural plasticity and trains the liquid by forming and eliminating synaptic connections. Hence, the learning involves rewiring of the reservoir connections similar to the structural plasticity observed in biological neural networks...
September 14, 2016: Neural Computation
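The formation/elimination cycle can be caricatured as: prune connections whose weights have fallen below a threshold, then form the same number of new connections at random vacant sites, so the liquid's sparsity stays fixed. This is a generic structural-plasticity sketch, not the letter's rule, and all numbers are illustrative:

```python
import random

def rewire_step(conns, weights, n, prune_thresh, rng):
    """One rewiring step on a set of (pre, post) connections."""
    weak = [c for c in conns if abs(weights[c]) < prune_thresh]
    for c in weak:                        # eliminate weak synapses
        conns.discard(c)
        del weights[c]
    formed = 0
    while formed < len(weak):             # form replacements at random sites
        c = (rng.randrange(n), rng.randrange(n))
        if c[0] != c[1] and c not in conns:
            conns.add(c)
            weights[c] = rng.uniform(-1.0, 1.0)
            formed += 1
    return conns

rng = random.Random(0)
conns = {(0, 1), (2, 3)}
weights = {(0, 1): 0.01, (2, 3): 0.9}
conns = rewire_step(conns, weights, n=10, prune_thresh=0.05, rng=rng)
```

Because eliminations and formations are paired, the connection count, and hence the reservoir's sparsity, is invariant under the step.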
Vitaly L Galinsky, Lawrence R Frank
We present a quantitative statistical analysis of pairwise crossings for all fibers obtained from whole brain tractography that confirms with high confidence that the brain grid theory (Wedeen et al., 2012a) is not supported by the evidence. The overall fiber tract structure appears to be more consistent with small-angle treelike branching of tracts rather than with near-orthogonal gridlike crossing of fiber sheets. The analysis uses our new method for high-resolution whole brain tractography that is capable of resolving fiber crossings of less than 10 degrees and of correctly following a continuous angular distribution of fibers even when the individual fiber directions are not resolved...
September 14, 2016: Neural Computation
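The pairwise crossing statistic at the core of the analysis reduces to the acute angle between two fiber tangent directions; the sign of the orientation is irrelevant for tracts, hence the absolute value:

```python
import numpy as np

def crossing_angle(u, v):
    """Acute angle (degrees) between two fiber directions."""
    c = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

# A gridlike crossing is near 90 degrees; treelike branching is small-angle.
grid_like = crossing_angle(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
tree_like = crossing_angle(np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.2, 0.0]))
```

The grid theory predicts the histogram of such angles should pile up near 90 degrees; the analysis finds it concentrated at small angles instead.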
Changjin Xu, Peiluan Li, Yicheng Pang
In this letter, we deal with a class of memristor-based neural networks with distributed leakage delays. By applying a new Lyapunov function method, we obtain some sufficient conditions that ensure the existence, uniqueness, and global exponential stability of almost periodic solutions of these neural networks. We then apply these results to prove the existence and stability of periodic solutions for this delayed neural network with periodic coefficients. Finally, we provide an example to illustrate the effectiveness of the theoretical results...
September 14, 2016: Neural Computation
Sou Nobukawa, Haruhiko Nishimura
It is well known that cerebellar motor control is fine-tuned by a learning process adjusted according to rich error signals from inferior olive (IO) neurons. Schweighofer and colleagues proposed that these signals can be produced by chaotic irregular firing in the IO neuron assembly; such chaotic resonance (CR) was replicated in their computer demonstration of a Hodgkin-Huxley (HH)-type compartment model. In this study, we examined the response of CR to a periodic signal in an IO neuron assembly composed of IO neuron models based on the Llinás approach...
September 14, 2016: Neural Computation
Yuwei Cui, Subutai Ahmad, Jeff Hawkins
The ability to recognize and predict temporal sequences of sensory inputs is vital for survival in natural environments. Based on many known properties of cortical neurons, hierarchical temporal memory (HTM) sequence memory has recently been proposed as a theoretical framework for sequence learning in the cortex. In this letter, we analyze properties of HTM sequence memory and apply it to sequence learning and prediction problems with streaming data. We show the model is able to continuously learn a large number of variable-order temporal sequences using an unsupervised Hebbian-like learning rule...
September 14, 2016: Neural Computation
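HTM itself is far more involved, but the "variable-order" property, the same element predicting differently depending on its high-order context, can be illustrated with a toy longest-context predictor. This is a stand-in, not the HTM algorithm:

```python
def train(seqs, max_order=3):
    """Count successors of every context up to max_order (toy model)."""
    model = {}
    for seq in seqs:
        for i in range(len(seq)):
            for k in range(1, max_order + 1):
                if i - k < 0:
                    break
                ctx = tuple(seq[i - k:i])
                model.setdefault(ctx, {}).setdefault(seq[i], 0)
                model[ctx][seq[i]] += 1
    return model

def predict(model, history, max_order=3):
    """Back off from the longest matching context to shorter ones."""
    for k in range(min(max_order, len(history)), 0, -1):
        ctx = tuple(history[-k:])
        if ctx in model:
            return max(model[ctx], key=model[ctx].get)
    return None

# Two overlapping sequences whose shared elements "B C" continue differently:
m = train([["A", "B", "C", "D"], ["X", "B", "C", "Y"]])
p1 = predict(m, ["A", "B", "C"])
p2 = predict(m, ["X", "B", "C"])
```

A first-order model would be ambiguous after "C"; using the longer context disambiguates the two continuations, which is the behavior HTM achieves with its cellular context representation.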
Hien G Nguyen, Luke R Lloyd Jones, Geoffrey J McLachlan
The mixture-of-experts (MoE) model is a popular neural network architecture for nonlinear regression and classification. The class of MoE mean functions is known to be uniformly convergent to any unknown target function, assuming that the target function belongs to a Sobolev space of sufficiently differentiable functions and that the domain of estimation is a compact unit hypercube. We provide an alternative result, which shows that the class of MoE mean functions is dense in the class of all continuous functions over arbitrary compact domains of estimation...
September 14, 2016: Neural Computation
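The MoE mean function whose density is being established is a softmax-gated combination of expert means. A one-dimensional sketch with linear experts and gates, all parameters invented for illustration:

```python
import math

def moe_mean(x, experts, gates):
    """Mixture-of-experts mean: softmax-gated sum of linear expert means.
    experts: list of (a, b) giving a*x + b; gates: (c, d) giving logits c*x + d."""
    logits = [c * x + d for c, d in gates]
    mx = max(logits)                          # stabilized softmax
    ws = [math.exp(l - mx) for l in logits]
    z = sum(ws)
    return sum(w / z * (a * x + b) for w, (a, b) in zip(ws, experts))

# Two experts (slope 0 on the left, slope 1 on the right) with a sharp gate
# at x = 0 give a smooth approximation of max(0, x).
experts = [(0.0, 0.0), (1.0, 0.0)]
gates = [(-10.0, 0.0), (10.0, 0.0)]
y_neg = moe_mean(-1.0, experts, gates)
y_pos = moe_mean(2.0, experts, gates)
```

Piecing together local experts under a soft gate like this is the intuition behind the density result: with enough experts, any continuous function on a compact domain can be matched arbitrarily well.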