Neural Computation

Ian H Stevenson
Generalized linear models (GLMs) have a wide range of applications in systems neuroscience, describing the encoding of stimulus and behavioral variables as well as the dynamics of single neurons. However, in any given experiment, many variables that affect neural activity are not observed or not modeled. Here we demonstrate, in both theory and practice, how these omitted variables can result in biased parameter estimates for the effects that are included. In three case studies, we estimate tuning functions for common experiments in motor cortex, hippocampus, and visual cortex...
October 12, 2018: Neural Computation
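The omitted-variable bias described in this abstract has a simple analogue in ordinary least squares; the following sketch (with illustrative values, not the paper's models or data) shows how an unmodeled covariate that is correlated with the modeled one inflates the estimated effect:

```python
import numpy as np

# True model: y depends on two covariates, x and z, which are correlated.
x = np.linspace(-1.0, 1.0, 200)
z = 0.5 * x                # unobserved covariate, correlated with x
y = 1.0 * x + 2.0 * z      # true coefficient on x is 1.0

# Fit y on x alone, omitting z (one-parameter least squares).
b_hat = np.sum(x * y) / np.sum(x * x)

# The omitted covariate inflates the estimate: 1 + 2 * 0.5 = 2.
print(b_hat)  # → 2.0, not the true 1.0
```

The same mechanism applies to GLM tuning-curve estimates: any unmodeled variable correlated with the included covariates leaks into their coefficients.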
Cunle Qian, Xuyun Sun, Shaomin Zhang, Dong Xing, Hongbao Li, Xiaoxiang Zheng, Gang Pan, Yiwen Wang
Neurons communicate nonlinearly through spike activities. Generalized linear models (GLMs) describe spike activities with a cascade of a linear combination across inputs, a static nonlinear function, and an inhomogeneous Bernoulli or Poisson process, or Cox process if a self-history term is considered. This structure considers the output nonlinearity in spike generation but excludes the nonlinear interaction among input neurons. Recent studies extend GLMs by modeling the interaction among input neurons with a quadratic function, which considers the interaction between every pair of input spikes...
October 12, 2018: Neural Computation
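The GLM cascade summarized above (linear combination of inputs, static nonlinearity, inhomogeneous Poisson spiking) can be sketched in a few lines; the filter weights and inputs here are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 input neurons observed over 100 time bins.
X = rng.normal(size=(100, 5))                # input covariates per bin
w = np.array([0.5, -0.3, 0.2, 0.1, -0.4])    # linear filter (illustrative)
b = -1.0                                     # baseline log-rate

# Cascade: linear combination -> static exponential nonlinearity
# -> inhomogeneous Poisson spike generation.
rate = np.exp(X @ w + b)                     # rate per time bin
spikes = rng.poisson(rate)                   # simulated spike counts

print(spikes.shape)  # (100,)
```

Note that this standard cascade treats inputs only through the linear term; the quadratic extension discussed in the abstract adds pairwise interaction terms to the argument of the nonlinearity.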
D J Strouse, David J Schwab
The information bottleneck (IB) approach to clustering takes a joint distribution [Formula: see text] and maps the data [Formula: see text] to cluster labels [Formula: see text], which retain maximal information about [Formula: see text] (Tishby, Pereira, & Bialek, 1999). This objective results in an algorithm that clusters data points based on the similarity of their conditional distributions [Formula: see text]. This is in contrast to classic geometric clustering algorithms such as [Formula: see text]-means and gaussian mixture models (GMMs), which take a set of observed data points [Formula: see text] and cluster them based on their geometric (typically Euclidean) distance from one another...
October 12, 2018: Neural Computation
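The contrast drawn here can be made concrete: IB-style clustering compares conditional distributions p(y|x) rather than geometric distances between data points. A minimal sketch, with a toy joint distribution and Jensen-Shannon divergence as an illustrative similarity measure (not the paper's exact objective):

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy joint distribution p(x, y): 4 values of x (rows), 3 values of y.
joint = np.array([[0.10, 0.05, 0.05],
                  [0.10, 0.05, 0.05],
                  [0.02, 0.08, 0.10],
                  [0.02, 0.08, 0.10]])
cond = joint / joint.sum(axis=1, keepdims=True)  # rows are p(y | x)

# x=0 and x=1 share a conditional distribution, so IB-style clustering
# would group them, regardless of where the x values sit geometrically.
print(js_divergence(cond[0], cond[1]))  # 0.0
print(js_divergence(cond[0], cond[2]))  # > 0
```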
Dmitry Krotov, John Hopfield
Deep neural networks (DNNs) trained in a supervised way suffer from two known problems. First, the minima of the objective function used in learning correspond to data points (also known as rubbish examples or fooling images) that lack semantic similarity with the training data. Second, a clean input can be changed by a small, and often imperceptible for human vision, perturbation so that the resulting deformed input is misclassified by the network. These findings emphasize the differences between the ways DNNs and humans classify patterns and raise a question of designing learning algorithms that more accurately mimic human perception compared to the existing methods...
October 12, 2018: Neural Computation
Jonathan Vacher, Andrew Isaac Meso, Laurent U Perrinet, Gabriel Peyré
A common practice to account for psychophysical biases in vision is to frame them as consequences of a dynamic process relying on optimal inference with respect to a generative model. The study presented here details the complete formulation of such a generative model intended to probe visual motion perception with a dynamic texture model. It is derived in a set of axiomatic steps constrained by biological plausibility. We extend previous contributions by detailing three equivalent formulations of this texture model...
October 12, 2018: Neural Computation
William T Adler, Wei Ji Ma
The Bayesian model of confidence posits that confidence reflects the observer's posterior probability that the decision is correct. Hangya, Sanders, and Kepecs (2016) have proposed that researchers can test the Bayesian model by deriving qualitative signatures of Bayesian confidence (i.e., patterns that one would expect to see if an observer were Bayesian) and looking for those signatures in human or animal data. We examine two proposed signatures, showing that their derivations contain hidden assumptions that limit their applicability and that they are neither necessary nor sufficient conditions for Bayesian confidence...
October 12, 2018: Neural Computation
Hong Zhu, Li-Zhi Liao, Michael K Ng
We study a multi-instance (MI) learning dimensionality-reduction algorithm through sparsity and orthogonality, which is especially useful for high-dimensional MI data sets. We develop a novel algorithm to handle both sparsity and orthogonality constraints that existing methods do not handle well simultaneously. Our main idea is to formulate an optimization problem where the sparse term appears in the objective function and the orthogonality term is formed as a constraint. The resulting optimization problem can be solved by using approximate augmented Lagrangian iterations as the outer loop and inertial proximal alternating linearized minimization (iPALM) iterations as the inner loop...
October 12, 2018: Neural Computation
Yoichi Hayashi
We describe a simple method to transfer from weights in deep neural networks (NNs) trained by a deep belief network (DBN) to weights in a backpropagation NN (BPNN) in the recursive-rule eXtraction (Re-RX) algorithm with J48graft (Re-RX with J48graft) and propose a new method to extract accurate and interpretable classification rules for rating category data sets. We apply this method to the Wisconsin Breast Cancer Data Set (WBCD), the Mammographic Mass Data Set, and the Dermatology Dataset, which are small, high-abstraction data sets with prior knowledge...
October 12, 2018: Neural Computation
Hongqiao Wang, Jinglai Li
We consider Bayesian inference problems with computationally intensive likelihood functions. We propose a Gaussian process (GP)-based method to approximate the joint distribution of the unknown parameters and the data, built on recent work (Kandasamy, Schneider, & Póczos, 2015). In particular, we write the joint density approximately as a product of an approximate posterior density and an exponentiated GP surrogate. We then provide an adaptive algorithm to construct such an approximation, where an active learning method is used to choose the design points...
September 14, 2018: Neural Computation
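The surrogate idea, approximating an expensive log-likelihood with a GP and exponentiating the result, can be sketched with plain RBF-kernel GP regression; this omits the paper's adaptive design and active-learning components, and all values are illustrative:

```python
import numpy as np

def rbf_kernel(A, B, length=1.0):
    """Squared-exponential kernel for 1-D inputs."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-d2 / (2 * length ** 2))

# Stand-in for a computationally intensive log-likelihood.
def log_likelihood(theta):
    return -0.5 * theta ** 2

# Design points where the expensive function has been evaluated.
X_train = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y_train = log_likelihood(X_train)

# GP posterior mean as a cheap surrogate for the log-likelihood.
K = rbf_kernel(X_train, X_train) + 1e-10 * np.eye(len(X_train))
alpha = np.linalg.solve(K, y_train)

def surrogate(theta):
    return rbf_kernel(np.atleast_1d(theta), X_train) @ alpha

# The surrogate interpolates the design points; exp(surrogate)
# then approximates the likelihood between them.
print(surrogate(1.0)[0])  # ≈ -0.5
```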
Yingyi Chen, Qianqian Cheng, Yanjun Cheng, Hao Yang, Huihui Yu
Analysis and forecasting of sequential data, key problems in various domains of engineering and science, have attracted the attention of many researchers from different communities. When predicting the future probability of events using time series, recurrent neural networks (RNNs) are an effective tool that have the learning ability of feedforward neural networks and expand their expression ability using dynamic equations. Moreover, RNNs are able to model several computational structures. Researchers have developed various RNNs with different architectures and topologies...
September 14, 2018: Neural Computation
Andrés Pomi, Eduardo Mizraji, Juan Lin
Human brains seem to represent categories of objects and actions as locations in a continuous semantic space across the cortical surface that reflects the similarity among categories. This vision of the semantic organization of information in the brain, suggested by recent experimental findings, is in harmony with the well-known topographically organized somatotopic, retinotopic, and tonotopic maps in the cerebral cortex. Here we show that these topographies can be operationally represented with context-dependent associative memories...
September 14, 2018: Neural Computation
Rongxin Bao, Xu Yuan, Zhikui Chen, Ruixin Ma
The success of CNNs is accompanied by deep models and heavy storage costs. For compressing CNNs, we propose an efficient and robust pruning approach, cross-entropy pruning (CEP). Given a trained CNN model, connections are divided into groups according to their corresponding output neurons. All connections with cross-entropy errors below a grouping threshold are then removed, yielding a sparse model with significantly fewer parameters than the baseline. This letter also presents a highest cross-entropy pruning (HCEP) method that keeps a small portion of weights with the highest CEP...
September 14, 2018: Neural Computation
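The group-wise thresholding step can be sketched as follows. One caveat: CEP scores each connection by its cross-entropy error, which requires training data; this sketch substitutes |weight| as a generic stand-in score to show only the grouping-and-thresholding mechanics:

```python
import numpy as np

rng = np.random.default_rng(1)

# Weight matrix of one layer: rows = output neurons, cols = connections.
W = rng.normal(size=(4, 8))

# Per-connection scores. CEP uses cross-entropy errors; |weight| is a
# stand-in here purely for illustration.
scores = np.abs(W)

# Group-wise pruning: within each output neuron's group of connections,
# remove those whose score falls below the group's threshold.
keep_fraction = 0.5
thresholds = np.quantile(scores, 1 - keep_fraction, axis=1, keepdims=True)
mask = scores >= thresholds
W_pruned = W * mask

print(mask.sum(axis=1))  # each output neuron keeps half its connections
```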
Sander W Keemink, Dharmesh V Tailor, Mark C W van Rossum
Throughout the nervous system, information is commonly coded in activity distributed over populations of neurons. In idealized situations where a single, continuous stimulus is encoded in a homogeneous population code, the value of the encoded stimulus can be read out without bias. However, in many situations, multiple stimuli are simultaneously present; for example, multiple motion patterns might overlap. Here we find that when multiple stimuli that overlap in their neural representation are simultaneously encoded in the population, biases in the read-out emerge...
September 14, 2018: Neural Computation
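The read-out bias can be demonstrated with a population-vector decoder: when two overlapping stimuli drive the same population but the decoder assumes a single stimulus, the estimate lands between them. A sketch under simple assumptions (von Mises-like tuning, evenly spaced preferred directions, equal-strength stimuli), not the paper's exact model:

```python
import numpy as np

# Population of neurons with evenly spaced preferred directions.
N = 64
preferred = np.linspace(0, 2 * np.pi, N, endpoint=False)

def tuning(theta):
    """Von Mises-like tuning curve response to a single stimulus."""
    return np.exp(np.cos(theta - preferred))

# Two overlapping stimuli encoded simultaneously in the same population.
theta1, theta2 = 0.0, np.pi / 2
response = tuning(theta1) + tuning(theta2)

# Population-vector readout that assumes a single stimulus is present.
decoded = np.angle(np.sum(response * np.exp(1j * preferred)))

# The readout lands midway between the stimuli: biased away from both.
print(decoded)  # ≈ pi/4, not 0 or pi/2
```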
David M Brandman, Michael C Burkhart, Jessica Kelemen, Brian Franco, Matthew T Harrison, Leigh R Hochberg
Intracortical brain-computer interfaces can enable individuals with paralysis to control external devices through voluntarily modulated brain activity. Decoding quality has previously been shown to degrade with signal nonstationarities, specifically changes in the statistics of the data between training and testing data sets. These include changes to the neural tuning profiles and baseline shifts in the firing rates of recorded neurons, as well as nonphysiological noise. While progress has been made toward providing long-term user control via decoder recalibration, relatively little work has been dedicated to making the decoding algorithm more resilient to signal nonstationarities...
September 14, 2018: Neural Computation
Yoram Baram
Experimental constraints have traditionally implied separate studies of different cortical functions, such as memory and sensory-motor control. Yet certain cortical modalities, while repeatedly observed and reported, have not been clearly identified with one cortical function or another. Specifically, while neuronal membrane and synapse polarities with respect to a certain potential value have been attracting considerable interest in recent years, the purposes of such polarities have largely remained a subject for speculation and debate...
September 14, 2018: Neural Computation
Beatrice Alexandra Golomb
IMPORTANCE: A mystery illness striking U.S. and Canadian diplomats to Cuba (and now China) "has confounded the FBI, the State Department and US intelligence agencies" (Lederman, Weissenstein, & Lee, 2017). Sonic explanations for the so-called health attacks have long dominated media reports, propelled by peculiar sounds heard and auditory symptoms experienced. Sonic mediation was justly rejected by experts. We assessed whether pulsed radiofrequency/microwave radiation (RF/MW) exposure can accommodate reported facts in diplomats, including unusual ones...
September 5, 2018: Neural Computation
Ulisse Ferrari, Stéphane Deny, Olivier Marre, Thierry Mora
Neural noise sets a limit to information transmission in sensory systems. In several areas, the spiking response (to a repeated stimulus) has shown a higher degree of regularity than predicted by a Poisson process. However, a simple model to explain this low variability is still lacking. Here we introduce a new model, with a correction to Poisson statistics, that can accurately predict the regularity of neural spike trains in response to a repeated stimulus. The model has only two parameters but can reproduce the observed variability in retinal recordings in various conditions...
August 27, 2018: Neural Computation
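The sub-Poisson regularity discussed here is commonly quantified by the Fano factor (variance over mean of spike counts across repeated trials), which equals 1 for a Poisson process. A sketch with illustrative counts, not the retinal recordings from the paper:

```python
import numpy as np

# Spike counts of one neuron across repeated presentations of the same
# stimulus (illustrative numbers, more regular than Poisson).
counts = np.array([5, 6, 5, 5, 6, 5, 6, 5, 5, 6])

mean = counts.mean()
var = counts.var(ddof=1)
fano = var / mean

# Poisson statistics predict fano == 1; sub-Poisson regularity gives < 1.
print(fano)
```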
Kishan Wimalawarne, Makoto Yamada, Hiroshi Mamitsuka
We propose a set of convex low-rank inducing norms for coupled matrices and tensors (hereafter referred to as coupled tensors), in which information is shared between the matrices and tensors through common modes. More specifically, we first propose a mixture of the overlapped trace norm and the latent norms with the matrix trace norm, and then, propose a completion model regularized using these norms to impute coupled tensors. A key advantage of the proposed norms is that they are convex and can be used to find a globally optimal solution, whereas existing methods for coupled learning are nonconvex...
August 27, 2018: Neural Computation
Vafa Andalibi, Henri Hokkanen, Simo Vanni
Simulation of the cerebral cortex requires a combination of extensive domain-specific knowledge and efficient software. However, when the complexity of the biological system is combined with that of the software, the likelihood of coding errors increases, which slows model adjustments. Moreover, few life scientists are familiar with software engineering and would benefit from simplicity in form of a high-level abstraction of the biological model. Our primary aim was to build a scalable cortical simulation framework for personal computers...
August 27, 2018: Neural Computation
Tingwei Gao, Yueting Chai
This study focuses on predicting stock closing prices using recurrent neural networks (RNNs). A long short-term memory (LSTM) model, a type of RNN, coupled with basic stock trading data and technical indicators, is introduced as a novel method to predict the closing price of the stock market. We reduce the dimensionality of the technical indicators by principal component analysis (PCA). To train the model, several optimization strategies are followed, including adaptive moment estimation (Adam) and Glorot uniform initialization...
October 2018: Neural Computation
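The PCA dimension-reduction step mentioned above can be sketched via SVD; the indicator matrix here is random and illustrative rather than market data, and the 95% variance cutoff is an assumed choice, not necessarily the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative matrix: 250 trading days x 10 technical indicators.
X = rng.normal(size=(250, 10))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=250)  # redundant indicator

# PCA via SVD on the centered data.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)

# Keep enough components to explain 95% of the variance.
k = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1
X_reduced = Xc @ Vt[:k].T

print(k, X_reduced.shape)  # fewer than 10 features feed the model
```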